Imagine the following: a dozen senior government officials are gathered around a wall-sized screen in Washington. Its background is matte black, and lines of green text scroll steadily across the display, punctuated by amber alerts signaling elevated concern.

They are watching an artificial intelligence (AI) system confirm that a carefully negotiated ceasefire has just been violated. According to the system, adversary forces crossed the demarcation line overnight. Sensor data suggests artillery movement. Video appears to show explosions near civilian infrastructure. Multiple verification streams converge on the same conclusion. 

To the officials, the ceasefire violation almost certainly means the collapse of delicate negotiations. Military deployments and urgent diplomatic outreach may follow. But what if the evidence was fabricated by AI? A manufactured crisis of this kind is exactly the outcome the West’s adversaries would welcome, and precisely the kind of operation they are preparing.

The scenario illustrates a growing danger in modern information warfare. As governments rely more heavily on AI to interpret events and assess risk, they become more vulnerable to misinformation aimed not at people but at the systems they rely on to make decisions. Russia, China, and Iran all understand these vulnerabilities and are working to exploit them.

The core problem is simple. The systems we use to verify whether information is true are themselves becoming targets.  

Russia, in particular, has invested heavily in making those verification systems less reliable. If the imagined scene above were real, you could count on adversaries swamping the AI pipelines with false reports of troop movements, destroyed civilian infrastructure, and artillery positioning.

To better understand this risk, researchers have created a virtual petri dish to observe how AI agents interact with one another. The agents are known as Moltbots, from the word “molt,” reflecting their ability to shed identities and obscure their past behavior. 

The Moltbots are placed together inside OpenClaw, an experimental online forum launched this year, where humans are allowed to observe but not participate. Only AI agents read, post, comment, and vote. It is a controlled environment where machines interact solely with machines.

Inside OpenClaw, the bots exchange information, evaluate one another’s output, and converge on conclusions based entirely on internal logic. They collaborate, disagree, and reinforce patterns without understanding meaning or consequence. As one engineer involved in early observation put it, they do not know what is true; they know what is repeated. 

Too often, the bots propagated misinformation of exactly the kind imagined in the ceasefire scenario above. When a claim appeared often enough, repetition itself became proof; the system concluded that what it encountered most frequently must be true.
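
To make the failure mode concrete, here is a minimal sketch, in Python with entirely hypothetical names and data, of a verifier that scores claims by how often they appear. A handful of genuine observations is simply outvoted by a coordinated flood:

```python
# Minimal, illustrative sketch: a verifier that treats repetition as evidence.
# All names and data are hypothetical.
from collections import Counter

def naive_verdict(reports: list[str], threshold: float = 0.5) -> str:
    """Label the most-repeated claim 'verified' once it dominates the feed."""
    claim, seen = Counter(reports).most_common(1)[0]
    return f"VERIFIED: {claim}" if seen / len(reports) >= threshold else "UNCONFIRMED"

# Three genuine observations...
feed = ["no crossing detected"] * 3
# ...buried under a coordinated flood of identical fabrications.
feed += ["armor crossed the demarcation line overnight"] * 200

print(naive_verdict(feed))
# VERIFIED: armor crossed the demarcation line overnight
```

Nothing in the verifier asks where the 200 reports came from or whether they share a single origin; volume alone settles the question.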

This is precisely the weakness Russian information warfare is designed to exploit. Rather than relying on obvious lies, it floods the information environment with plausible distortions at an overwhelming scale. Gordon Crovitz, co-founder of NewsGuard, has documented how false claims have been replicated across hundreds of sites, in dozens of languages, at volumes no human operation could sustain. Last year alone, a Russian disinformation network published more than six million coordinated articles. 

China is pursuing something similar. The Estonian intelligence service has reported that the Chinese AI model DeepSeek functions, in effect, as a state propaganda outlet, suppressing facts and spreading falsehoods.

Faced with this volume, systems designed to determine truth are fooled. Their training treats frequency as credibility, so they can end up validating conclusions that run directly counter to Western interests.

Researchers refer to this phenomenon as data poisoning. 

OpenClaw shows how dangerous data poisoning becomes when the audience for misinformation is no longer human readers, but other AI agents. An AI system that absorbs distorted information can generate alerts, briefings, and recommendations without anyone stopping to question the assumptions underneath. 

In the imagined ceasefire scenario, human judgment might still apply the brakes. People could ask whether the information was outdated, whether the sources were independent, or whether the apparent consensus was driven by thousands of AI-generated articles designed to mislead. Humans pause. They question. They look for context. 

In machine-to-machine systems, that buffer disappears. 
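
A minimal sketch, again with hypothetical agents and thresholds, shows what that loss of buffer looks like: each stage consumes the previous stage’s output, treats it as corroboration, and escalates, with no point at which anyone asks where the underlying reports originated.

```python
# Illustrative machine-to-machine pipeline; every name and threshold is invented.

def sensor_agent(raw_feed: list[str]) -> dict:
    # Summarizes whatever dominates the raw feed, with no source vetting.
    top = max(set(raw_feed), key=raw_feed.count)
    return {"claim": top, "confidence": raw_feed.count(top) / len(raw_feed)}

def analysis_agent(report: dict) -> dict:
    # Treats the upstream report as independent corroboration and boosts it.
    report["confidence"] = min(1.0, report["confidence"] * 1.2)
    return report

def briefing_agent(report: dict) -> str:
    # Escalates once confidence clears a threshold; no human pause anywhere.
    return f"ALERT: {report['claim']}" if report["confidence"] > 0.9 else "monitoring"

poisoned_feed = ["ceasefire violated"] * 95 + ["no violation observed"] * 5
print(briefing_agent(analysis_agent(sensor_agent(poisoned_feed))))
# ALERT: ceasefire violated
```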

The lesson from OpenClaw is straightforward. Securing AI systems means protecting the conditions under which they decide what counts as verified. 

OpenClaw exposes both danger and opportunity. These machine-only environments show how easily verification can be distorted, but they also give us a chance to identify vulnerabilities before adversaries exploit them at scale. 

As AI agents increasingly shape intelligence, diplomacy, and military readiness, getting information right is no longer a technical concern. It is a national security issue, and an urgent one. 

Three steps are needed. 

  • First, verification systems must be treated as critical infrastructure. Given the harm they can do when operating on misinformation, they deserve protections at least as rigorous as those we apply to banks, elections, and power grids.
  • Second, automated systems must be trained to do more than treat repetition as confirmation, for example by weighing the independence of sources rather than the raw volume of claims (a sketch follows this list).
  • Third, governments and technology companies must plan for information warfare in advance, rather than responding after damage has already been done. 
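
As one illustration of the second step, here is a minimal sketch, using only Python’s standard library and invented data, of collapsing near-duplicate reports before counting them, so a thousand copies of one planted article register as one source rather than a thousand confirmations. A production system would rely on embeddings and infrastructure analysis rather than string similarity, but the principle is the same:

```python
# Illustrative only: count independent sources, not raw repetitions.
from difflib import SequenceMatcher

def independent_sources(reports: list[str], cutoff: float = 0.9) -> int:
    """Count reports that are not near-duplicates of an already-kept report."""
    distinct: list[str] = []
    for text in reports:
        if all(SequenceMatcher(None, text, kept).ratio() < cutoff
               for kept in distinct):
            distinct.append(text)
    return len(distinct)

# 1,000 lightly varied copies of the same planted claim.
flood = ["Artillery crossed the line near sector 4 overnight."] * 500
flood += ["Artillery crossed the line near sector 4 over night."] * 500

print(len(flood))                  # 1000: what a repetition-based system counts
print(independent_sources(flood))  # 1: what an independence-based system counts
```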

The next front line of information warfare is not aimed at voters or readers. It is aimed at the systems we increasingly rely on to tell us what is true. We still have one advantage: time. History suggests it rarely grants extensions. 

Amyn Jan, Founder of AJ Emtech LLC, has served as the Department of War’s Chief AI Architect. He focuses on integrating artificial intelligence across complex enterprise systems. 

Mitzi Perdue is a CEPA Senior Fellow who writes frequently on AI.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
