• wonderingwanderer@sopuli.xyz · 6 hours ago

    AI bot swarms threaten to undermine democracy

    When AI Can Fake Majorities, Democracy Slips Away

    Full article

    A joint essay with Daniel Thilo Schroeder & Jonas R. Kunst, based on a new paper on swarms with 22 authors (including myself) that just appeared in Science. (A preprint version is here, and you can see WIRED’s coverage here.)

    Automated bots that purvey disinformation have been a problem since the early days of social media, and bad actors have been quick to jump on LLMs as a way of automating the generation of disinformation. But as we outline in the new article in Science, we foresee something worse: swarms of AI bots acting in concert.

    The unique danger of a swarm is that it acts less like a megaphone and more like a coordinated social organism. Earlier botnets were simple-minded, mostly just copying and pasting messages at scale—and in well-studied cases (including Russia’s 2016 IRA effort on Twitter), their direct persuasive effects were hard to detect. Today’s swarms, now emerging, can coordinate fleets of synthetic personas—sometimes with persistent identities—and move in ways that are hard to distinguish from real communities. This is not hypothetical: in July 2024, the U.S. Department of Justice said it disrupted a Russia-linked, AI-enhanced bot farm tied to 968 X accounts impersonating Americans. And bots already make up a measurable slice of public conversation: a 2025 peer-reviewed analysis of major events estimated roughly one in five accounts/posts in those conversations were automated. Swarms don’t just broadcast propaganda; they can infiltrate communities by mimicking local slang and tone, build credibility over time, and then adapt in real time to audience reactions—testing variations at machine speed to discover what persuades.

    Why is this dangerous for democracy? No democracy can guarantee perfect truth, but democratic deliberation depends on something more fragile: the independence of voices. The “wisdom of crowds” works only if the crowd is made of distinct individuals. When one operator can speak through thousands of masks, that independence collapses. We face the rise of synthetic consensus: swarms seeding narratives across disparate niches and amplifying them to create the illusion of grassroots agreement. Venture capital is already helping industrialize astroturfing: Doublespeed, backed by Andreessen Horowitz, advertises a way to “orchestrate actions on thousands of social accounts” and to mimic “natural user interaction” on physical devices so the activity appears human. Concrete signs of industrialization are already emerging: the Vanderbilt Institute of National Security released a cache of documents describing “GoLaxy” as an AI-driven influence machine built around data harvesting, profiling, and AI personas for large-scale operations.
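
    To make the independence point concrete, here is a toy simulation (a minimal Python sketch; the crowd size, the noise level, and the operator's target value of 0.9 are illustrative assumptions, not figures from the paper). When every voice is independent, the crowd's median lands near the truth; once a single operator speaks through enough puppet accounts, the "crowd" simply echoes the operator.

    ```python
    import random

    def crowd_estimate(n_voices=1000, puppet_fraction=0.0, truth=0.3, seed=0):
        """Median of a crowd's noisy estimates of some quantity `truth`.

        Independent voices each add their own noise. Puppet voices all repeat
        one operator's chosen line (0.9 here): one opinion wearing many masks.
        """
        random.seed(seed)
        n_puppets = int(n_voices * puppet_fraction)
        operator_line = 0.9
        voices = [truth + random.gauss(0, 0.2) for _ in range(n_voices - n_puppets)]
        voices += [operator_line] * n_puppets
        voices.sort()
        return voices[len(voices) // 2]  # the median "consensus"

    for frac in (0.0, 0.2, 0.6):
        print(f"puppets={frac:.0%}: crowd median = {crowd_estimate(puppet_fraction=frac):.2f}")
    ```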

    Because humans update their views partly based on social evidence (looking to peers to see what is “normal”), fabricated swarms can make fringe views look like majority opinions. If swarms flood the web with duplicative, crawler-targeted content, they can execute “LLM grooming,” poisoning the training data that future AI models (and citizens) rely on. Even so-called “thinking” AI models are vulnerable to this.
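
    The grooming mechanism itself is depressingly simple. The sketch below (illustrative Python; the claims and counts are invented) shows how flooding a crawled corpus with cheap near-duplicates flips any naive "what does the web say?" statistic that a model, or a person skimming search results, might lean on.

    ```python
    from collections import Counter

    def majority_claim(corpus: list[str]) -> str:
        """Naive 'what does the web say?' signal: the most frequent claim.
        A crude stand-in for any statistic learned from crawled text."""
        return Counter(corpus).most_common(1)[0][0]

    organic = ["mainstream view"] * 950 + ["fringe claim"] * 50
    print(majority_claim(organic))   # -> 'mainstream view'

    # LLM grooming: a swarm floods crawler-targeted sites with duplicates.
    groomed = organic + ["fringe claim"] * 2000
    print(majority_claim(groomed))   # -> 'fringe claim'
    ```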

    We cannot ban our way out of the threat of generative-AI-fueled swarms of misinformation bots, but we can change the economics of manipulation. We need five concrete shifts.

    First, social media platforms must move away from the “whack-a-mole” approach they currently use. Right now, companies rely on episodic takedowns: waiting until a disinformation campaign has already gone viral and done its damage before purging thousands of accounts in a single wave. This is too slow. Instead, we need continuous monitoring that looks for statistically unlikely coordination. Because AI can now generate unique text for every single post, looking for copy-pasted content no longer works. We must look at network behavior instead: a thousand users might be tweeting different things, but if they exhibit statistically improbable correlations in their semantic trajectories, or propagate narratives with a synchronized efficiency that defies organic human diffusion, that coordination is itself the signal to act on.
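
    As one concrete illustration of what "statistically improbable correlation" could mean in practice, here is a minimal Python sketch (my own illustration, not the detection method from the paper). It assumes each account's recent posts have already been embedded into vectors by some external model, and it flags account pairs whose semantic trajectories are more similar than anything seen in a reference sample of presumed-organic accounts; the 99.9th-percentile threshold is an arbitrary placeholder.

    ```python
    import numpy as np

    def trajectory_similarity(trajectories: np.ndarray) -> np.ndarray:
        """Pairwise cosine similarity between accounts' semantic trajectories.

        trajectories: shape (n_accounts, n_timesteps, embed_dim), e.g. one post
        embedding per account per hour. Accounts posting *different words* can
        still trace near-identical paths through embedding space if one
        operator is steering all of them.
        """
        n, t, d = trajectories.shape
        flat = trajectories.reshape(n, t * d)
        flat = flat / (np.linalg.norm(flat, axis=1, keepdims=True) + 1e-12)
        return flat @ flat.T  # (n_accounts, n_accounts) similarity matrix

    def flag_coordinated_pairs(sim: np.ndarray, organic_sims: np.ndarray,
                               percentile: float = 99.9) -> list[tuple[int, int]]:
        """Flag pairs whose similarity exceeds the chosen percentile of
        similarities observed among a reference sample of organic accounts."""
        threshold = np.percentile(organic_sims, percentile)
        i, j = np.where(np.triu(sim, k=1) > threshold)
        return list(zip(i.tolist(), j.tolist()))
    ```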

    Second, we need to stop waiting for attackers to invent new tactics before we build defenses. A defense that only reacts to yesterday’s tricks is destined to fail. We should instead proactively stress-test our defenses using agent-based simulations. Think of this like a digital fire drill or a vaccine trial: researchers can build a “synthetic” social network populated by AI agents, and then release their own test-swarms into that isolated environment. By watching how these test-bots try to manipulate the system, we can see which safeguards crumble and which hold up, allowing us to patch vulnerabilities before bad actors act on them in the real world.
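
    A toy version of such a fire drill might look like the following (a hedged sketch; the network model, the five-post feeds, and the detection rates are all made-up parameters). The point is only the workflow: release a synthetic swarm into an isolated population of simulated users, measure how far it drags opinion, and compare candidate safeguards before anything touches real people.

    ```python
    import random

    def run_drill(n_humans=500, n_bots=50, steps=200, detect_rate=0.0, seed=0):
        """Digital fire drill: release a test swarm of bot agents (all pushing
        belief +1.0) into a synthetic population and measure how far average
        human belief drifts from its starting point near 0.

        detect_rate models a candidate safeguard: the probability that a bot's
        message is filtered out of a human's feed at each step.
        """
        random.seed(seed)
        human_beliefs = [random.uniform(-1.0, 1.0) for _ in range(n_humans)]
        bot_belief = 1.0

        for _ in range(steps):
            for i in range(n_humans):
                # Each simulated human reads a small feed drawn from humans and bots.
                feed = []
                for _ in range(5):
                    if random.random() < n_bots / (n_humans + n_bots):
                        if random.random() >= detect_rate:  # bot slipped past the filter
                            feed.append(bot_belief)
                    else:
                        feed.append(random.choice(human_beliefs))
                if feed:
                    social_signal = sum(feed) / len(feed)
                    human_beliefs[i] += 0.1 * (social_signal - human_beliefs[i])

        return sum(human_beliefs) / n_humans

    for rate in (0.0, 0.5, 0.9):
        print(f"detect_rate={rate}: mean human belief = {run_drill(detect_rate=rate):+.3f}")
    ```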

    Third, we must make it expensive to be a fake person. Policymakers need to incentivize cryptographic attestations and reputation standards to strengthen provenance. This doesn’t mean forcing every user to hand over their ID card to a tech giant—that would be dangerous for whistleblowers and dissidents living under authoritarian regimes. Instead, we need “verified-yet-anonymous” credentialing. Imagine a digital stamp that proves you are a unique human being without revealing which human you are. If we require this kind of “proof-of-human” for high-reach interactions, we make it mathematically difficult and financially ruinous for one operator to secretly run ten thousand accounts.
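
    In code, the platform-side check could be as small as the sketch below (Python, using the widely available cryptography package; the issuer, the nullifier design, and the one-credential-one-account rule are simplifying assumptions on my part, since real anonymous-credential schemes rely on blind signatures or zero-knowledge proofs rather than a plain signature). The platform never learns who the person is; it only verifies that a trusted issuer vouched for a unique human and that the same credential is not quietly powering thousands of high-reach accounts.

    ```python
    # Minimal sketch of a "verified-yet-anonymous" gate for high-reach actions.
    # Assumes a hypothetical external issuer that, after checking offline that
    # the holder is a unique human, signs a random per-platform pseudonym
    # (the "nullifier"). The platform sees only the pseudonym and signature.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
    from cryptography.exceptions import InvalidSignature

    class ProofOfHumanGate:
        def __init__(self, issuer_public_key: Ed25519PublicKey):
            self.issuer_key = issuer_public_key
            self.nullifier_to_account: dict[bytes, str] = {}

        def allow_high_reach(self, account_id: str, nullifier: bytes,
                             issuer_signature: bytes) -> bool:
            # 1. The credential must really come from the trusted issuer.
            try:
                self.issuer_key.verify(issuer_signature, nullifier)
            except InvalidSignature:
                return False
            # 2. One human credential maps to at most one high-reach account,
            #    so ten thousand amplifier accounts would need ten thousand
            #    distinct humans, not one operator with a bot farm.
            owner = self.nullifier_to_account.setdefault(nullifier, account_id)
            return owner == account_id
    ```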

    Fourth, we need mandated transparency through free data access for researchers. We cannot defend society if the battlefield is hidden behind proprietary walls. Currently, platforms restrict access to the data needed to detect these swarms, leaving independent experts blind. Legislation must guarantee vetted academic and civil society researchers free, privacy-preserving access to platform data. Without a guaranteed “right to study,” we are forced to trust the self-reporting of the very corporations that profit from the engagement these swarms generate.

    Finally, we need to end the era of plausible deniability with an AI Influence Observatory. Crucially, this cannot be a government-run “Ministry of Truth.” Instead, it must be a distributed ecosystem of independent academic groups and NGOs. Their mandate is not to police content or decide who is right, but strictly to detect when the “public” is actually a coordinated swarm. By standardizing how evidence of bot-like networking is collected and publishing verified reports, this independent watchdog network would prevent the paralysis of “we can’t prove anything,” establishing a shared, factual record of when our public discourse is being engineered.

    None of this guarantees safety. But it does change the economics of large-scale manipulation.

    The point is not that AI makes democracy impossible. The point is that when it costs pennies to coordinate a fake mob and moments to counterfeit a human identity, the public square is left wide open to attack. Democracies don’t need to appoint a central authority to decide what is “true.” Instead, they need to rebuild the conditions where authentic human participation is unmistakable. We need an environment where real voices stand out clearly from synthetic noise.

    Most importantly, we must ensure that secret, coordinated manipulation is economically punishing and operationally difficult. Right now, a bad actor can launch a massive bot swarm cheaply and safely. We need to flip those physics. The goal is to build a system where faking a consensus costs the attacker a fortune, where their network collapses like a house of cards the moment one bot is detected, and where it becomes technically impossible to grow a fake crowd large enough to fool the real one without getting caught.

    – Daniel Thilo Schroeder, Gary Marcus, Jonas R. Kunst

    Daniel Thilo Schroeder is a Research Scientist at SINTEF. His work combines large-scale data and simulation to study coordinated influence and AI-enabled manipulation (danielthiloschroeder.org).

    Gary Marcus, Professor Emeritus at NYU, is a cognitive scientist and AI researcher with a strong interest in combatting misinformation.

    Jonas R. Kunst is a professor of communication at BI Norwegian Business School, where he co-leads the Center for Democracy and Information Integrity.