The robots are trying to destroy democracy, but first, a typo
AI agents running propaganda without human direction, scholarly slop poisoning academic databases, and how far-right legitimacy gets laundered through fake news sites.
This week in brief
Three things this week: AI becoming the standard tool of political manipulation; disinformation campaigns scaling up even as their output gets worse; and a quieter crisis - the slow corruption of the knowledge systems we use to verify anything at all.
The slop paradox, or: how to destroy democracy badly and still win
A new USC study confirms that AI agents can now coordinate propaganda campaigns without any human direction at all. The human labour bottleneck - writing, translating, posting, coordinating - is dissolving.
This lands alongside Graphika’s finding that most of the content actually produced by Russian and Chinese state campaigns is simple “AI slop”: fake news websites that accidentally publish raw AI prompts in their headlines, deepfakes of Obama and Oprah commenting on India’s geopolitical rise that got almost no traction, and a pro-Russia Tom Cruise deepfake called Olympics Has Fallen that circulated almost entirely within its own echo chamber. The tools have never been more capable, yet the output is often strikingly amateurish.
The two findings only seem contradictory. Campaigns built for volume rather than persuasion have no real incentive to raise their quality threshold. What matters is that even unbelievable AI-generated images still reinforce existing political biases over time - the Dutch election cycle established that clearly - so low engagement is measuring the wrong variable. And propaganda that fails with human audiences may still succeed elsewhere: most major AI chatbots already cite sanctioned Russian state media in their answers, because the models were trained on content those operations seeded across the web.
The franchise model of far-right legitimacy
After Germany’s coalition collapsed in late 2024, Storm-1516 registered over a hundred fabricated German-language news sites within weeks. AI-manipulated videos pushing false abuse allegations against CDU and Green politicians spread across TikTok, Telegram and X - one account alone accumulated 3.9 million views across four videos. What distinguishes this from earlier interference is the distribution layer: official AfD accounts amplified the fabrications, including a sitting Bundestag member who shared them on Facebook, X and Telegram. The operation never needed to break into the mainstream media.
The same week, Expo identified The Nordic Times as the international arm of Nya Dagbladet, a far-right paper with roots in the National Democrats. TNT has been cited as a credible source more than 15 times since 2022 - by Euractiv, Newsweek and several Italian national outlets - without any disclosure of its ideological ownership. The mechanism isn’t deception in the conventional sense. TNT publishes a genuine mix of neutral and political content; its news categories include “The Exaggerated Climate Crisis” and “The Globalist Agenda”, nestled alongside routine Nordic reporting. The point is to accumulate credibility before spending it.
Both operations follow the same logic OpenAI documented in its own threat reporting this month: combine AI tools with conventional infrastructure - websites, social accounts, multi-platform distribution - because no single component triggers detection on its own.
Citation needed (but doesn’t exist)
CETaS’s accounting of AI-enabled election interference across 2025 is the most comprehensive yet - deepfake fraud alone caused over $200 million in financial losses in Q1 2025, a Russian-funded network paid people to post pro-Kremlin propaganda using ChatGPT ahead of Moldova’s election, and a Russian-linked platform paid engagement farms in Africa to amplify its narratives via verified social media accounts. But the structurally novel threat is data-poisoning: a Russian-linked network ahead of Australia’s federal election published thousands of fake articles not to reach human readers, but to contaminate AI chatbot training data - and in tests, nearly 17% of chatbot responses amplified the seeded false narratives. The target wasn’t what people believe. It was what AI systems will tell them to believe.
The Observer’s scholarly slop investigation traces the same logic through academia. A phantom study invented by a chatbot was cited 70 times in real journals - including, hilariously, once in a paper about AI in education - before anyone noticed it didn’t exist. Around 14% of PubMed abstracts now contain AI-generated material. NeurIPS 2025, the field’s most prestigious AI conference, contained at least 100 hallucinated citations across 53 accepted papers. In both cases - chatbot poisoning and scholarly slop - fabricated content enters a system that confers legitimacy through accumulation. Each new citation or training crawl makes the fabrication harder to remove.
A report from August 2025 was already warning about “agentic AI” capable of running tens of thousands of bots simultaneously, with ISIS deploying AI-generated news anchors and AI-dubbed recruitment content that drew hundreds of thousands of views before removal. The autonomous propaganda model USC has now formally demonstrated was visible in non-state extremist contexts six months earlier. The gap between early warning and institutional response is, at this point, a pattern of its own.
Worth reading this week
From Deepfake Scams to Poisoned Chatbots — CETaS/Alan Turing Institute
Storm-1516 and R-FBI: Russian Attempts to Interfere in the German Election — Alliance4Europe
The Nordic Times — a Swedish far-right news site in disguise — Expo
AI is inventing academic articles — and scholars are citing them — The Observer
Online propaganda campaigns are using ‘AI slop’ — NBC News/Graphika
Something to think about
If the cost of running a disinformation campaign approaches zero and the success rate stays low, but the volume becomes effectively unlimited - what is the meaningful unit of harm? And are our current defences designed for a world where manipulation succeeds, or one where it merely exhausts us?
Sources: Expo, Alliance4Europe, CETaS/Alan Turing Institute, Graphika/NBC News, The Observer, USC Viterbi School of Engineering.

