Ask an AI chatbot about Russia and Ukraine, and the responses will often repeat outright Kremlin-backed lies. The US operates secret bioweapons laboratories in Ukraine. Ukrainian officials stole between 30% and 50% of Western military aid to Kyiv. President Volodymyr Zelensky’s approval rating inside Ukraine stands “around four percent.” This is not a problem of ideological bias or imperfect engineering. Rather, it is adversarial manipulation.
Although Russia has been creating false narratives for decades, its disinformation is increasingly designed to hijack AI systems themselves by overwhelming them with false content. The tactic is successful. AI chatbots repeat false narratives about Ukraine that originate from Kremlin-backed influence operations about one-third of the time, according to an audit conducted by the NGO NewsGuard, which tested 10 leading AI chatbots, from OpenAI’s ChatGPT to Perplexity’s answer engine.
The AI offensive is cost-effective for Russia. Kremlin lies repeated by AI chatbots are seeping into the mainstream press. A strong Western response is needed. Unfortunately, the US is reducing, and in some cases eliminating, the information defenses led by Voice of America and Radio Free Europe.
This form of data poisoning is deliberately designed to corrupt the information environments on which AI systems depend. Large language models do not possess an internal understanding of truth. They operate by assessing credibility based on statistical signals, including repetition, apparent consensus, and cross-referencing posts from across the web. Unfortunately, this approach to truth-seeking creates an unexpected but structural vulnerability that hostile states have learned to exploit.
One of the clearest examples is the so-called Pravda network, a pro-Kremlin operation that uses artificial intelligence to generate content at an industrial scale. NewsGuard found that the Pravda network created an average of 18,000 articles for each false claim, which it spread through 150 websites in 46 languages, all created for the purpose of infecting AI models with falsehoods.
This is data poisoning in its purest form. When thousands of articles repeat the same false claim across hundreds of websites, in dozens of languages, algorithms interpret volume as validation. To an AI system, agreement among many sources looks like corroboration, even though those sources exist solely to distort the algorithm’s results.
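To make the mechanism concrete, here is a minimal sketch, with hypothetical domain names and data rather than any vendor’s actual ranking logic: a system that simply counts agreeing domains will read a coordinated 150-site network as overwhelming corroboration, while the same evidence collapses once all domains run by a single operator are treated as one voice.

```python
# A deliberately simplified sketch of why volume can masquerade as corroboration.
# Domain names, scores, and the operator table are hypothetical and for illustration only;
# this is not how any particular chatbot or search engine actually ranks sources.

def naive_consensus_score(sources):
    """Fraction of distinct domains repeating the claim: volume reads as validation."""
    supporting = {domain for domain, repeats_claim in sources if repeats_claim}
    all_domains = {domain for domain, _ in sources}
    return len(supporting) / len(all_domains)

def independence_adjusted_score(sources, operator_of):
    """Collapses domains run by the same operator into one effective source before counting."""
    supporting = {operator_of.get(d, d) for d, repeats_claim in sources if repeats_claim}
    all_sources = {operator_of.get(d, d) for d, _ in sources}
    return len(supporting) / len(all_sources)

# Hypothetical retrieval results for one false claim: 150 coordinated clone sites repeat it,
# three independent outlets do not.
retrieved = [(f"pravda-clone-{i}.example", True) for i in range(150)] + [
    ("independent-outlet-1.example", False),
    ("independent-outlet-2.example", False),
    ("fact-checker.example", False),
]

# Provenance data (for example, a third-party ratings feed) mapping clone domains to one operator.
operator_of = {f"pravda-clone-{i}.example": "pravda-network" for i in range(150)}

print(f"naive consensus:       {naive_consensus_score(retrieved):.2f}")                       # 0.98
print(f"independence-adjusted: {independence_adjusted_score(retrieved, operator_of):.2f}")    # 0.25
```

The second calculation is only possible if the system knows which domains share an operator, which is precisely the kind of provenance and credibility data discussed later in this piece.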
Russian information warfare works because it relies on repetition, emotional triggers, and just enough plausibility to pass a casual test. AI systems amplify the effect.
The scale of the effort is striking when compared with its cost. Russia spends more than $150 billion a year on its military, but not much more than $1 billion on information warfare. That smaller investment can deliver disproportionate damage. A member of the Ukrainian National Police who specializes in monitoring Russian disinformation told me that one of the central goals is to sow dissension, or in his words, “to get you all riled up.”
For the price of a handful of fighter jets, Russia can amplify distrust, inflame social divisions, and weaken adversaries from within. No conventional weapon offers that return on investment.
The West has failed to recognize that it is the target of sustained information warfare. The United States dismantled the US Information Agency years ago, has steadily weakened Voice of America and Radio Free Europe, and recently scaled back the Foreign Malign Influence Center, even as Russia, China, and Iran have made information warfare a core instrument of state power.
As AI systems increasingly function as arbiters of fact, this vulnerability becomes a national security danger. It is no longer sufficient for technology companies to disclaim responsibility by reminding users that models can make mistakes. Information security needs to be treated as a core requirement.
Some responses are beginning to emerge. NewsGuard offers AI developers a real-time data stream identifying known Russian, Chinese, and Iranian disinformation websites and debunking their false claims. Researchers in Romania and elsewhere in the European Union are exploring blockchain-based platforms that can transparently record the provenance and credibility of information, making manipulation harder and accountability easier.
These efforts are still in their early stages, but they point to a broad imperative. Just as Western societies insist on transparency and auditability in financial systems, they must now demand comparable standards from the information systems that shape political judgment and democratic choice.
Mitzi Perdue is a fellow at the Institute of World Politics and the co-founder of Mental Help Global, a philanthropy that uses artificial intelligence to support mental health.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
