The photo is graphic and disturbing: an anguished young Israeli woman crying out in pain. Another image depicts a deadly explosion in Gaza. In yet another, a Ukrainian boy and girl are stranded in a bombed-out cityscape.

All are powerful images – and fake, produced with AI.

AI generates a flood of text and images with minimal effort, providing ammunition for disinformation campaigns. By blurring fact and fiction, the technology threatens to undermine the credibility of genuine reporting. After the Washington Post revealed the fake AI war pictures from the Middle East and Ukraine, Adobe, the software company hosting them, vowed to crack down.

Unfortunately, the danger is mounting. Even with strong controls, AI systems make significant errors, ranging from bias and hallucination to basic mistakes of common sense and mathematics. These problems are good reasons to avoid AI when writing a term paper or an article for a reputable publication.

But for those producing large amounts of misinformation, the mistakes don’t matter. They may even represent an advantage. Disinformation campaigns don’t need to construct credible stories or provide sophisticated evidence.  An effective and economical tactic is to publish fabricated images and text alongside real stories.

Even if individual stories are debunked, voters will accept a false pattern if enough stories repeat the same lies. Or they will begin to mistrust all information, adopting undue skepticism even about well-sourced stories. “Alternative” facts spread, and healthy skepticism curdles into everyone having their own facts.

This danger isn’t theoretical, particularly as we enter a new year full of consequential elections on both sides of the Atlantic. Governments and politicians in both democracies and autocracies are leveraging AI to generate texts, images, and videos to manipulate public opinion in their favor and to censor critical online content. In a recent Freedom House report, researchers documented the use of generative AI in 16 countries “to sow doubt, smear opponents, or influence public debate.”

Russia is behind much of the most dangerous disinformation. When the cybersecurity firm Symantec examined the large number of bot accounts created by Russia’s Glavset, it found that they attempted to fuel the extremes of both Democrats and Republicans. “The campaign directed propaganda at both sides of the liberal/conservative political divide in the US, in particular, the more disaffected elements of both camps,” concludes researcher Gillian Cleary. “The main objective of the campaign instead appeared to be sowing discord by attempting to inflame opinions on both sides.”

European Commission Vice President Vera Jourova recently denounced Russia’s “multi-million euro weapon of mass manipulation” ahead of the upcoming June 2024 European parliamentary elections. “The Russian state has engaged in the war of ideas to pollute our information space with half-truths and lies to create a false image that democracy is no better than autocracy,” Jourova argued.

But our own politicians are also guilty. During the 2020 election, the Trump campaign pumped out fake or doctored images of a sleeping Joe Biden, and the Covid-19 pandemic featured a flood of health misinformation.

The problem will worsen as AI-generated images and stories improve. In the future, AI will no longer draw people with hands bent at impossible angles or with too many fingers. AI-generated gibberish text will become rare.

How can we combat the coming flood of AI-generated misinformation? Traditional fact-checking is too labor-intensive and is bound to fail. Yet some potential weapons offer a path forward. Algorithmic detection, which scans hundreds of thousands of social media posts, identifies AI patterns more quickly than human moderators can, making it effective against coordinated disinformation campaigns. This technique helped flag Russia’s Glavset network.
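To make the idea concrete, here is a deliberately simplified, hypothetical sketch of one such detection signal: flagging clusters of near-identical posts published by several different accounts within a short window. The posts, account names, and thresholds are invented for illustration; real systems combine far richer signals, from network analysis to model-based text classifiers.

```python
"""Toy sketch: flag near-duplicate posts spread across many accounts in a short window."""
import re
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical post records: (account, timestamp, text).
posts = [
    ("user_a", datetime(2024, 1, 5, 12, 0), "Election officials CAUGHT shredding ballots!!!"),
    ("user_b", datetime(2024, 1, 5, 12, 2), "election officials caught shredding ballots"),
    ("user_c", datetime(2024, 1, 5, 12, 3), "Election officials caught shredding ballots..."),
    ("user_d", datetime(2024, 1, 6, 9, 0), "Lovely weather at the rally today."),
]

def normalize(text: str) -> str:
    """Lowercase and strip punctuation so trivially edited copies collide."""
    return re.sub(r"[^a-z0-9 ]", "", text.lower()).strip()

WINDOW = timedelta(minutes=30)   # posts this close together count as synchronized
MIN_ACCOUNTS = 3                 # distinct accounts needed before a cluster is flagged

clusters = defaultdict(list)     # normalized text -> list of (account, timestamp)
for account, ts, text in posts:
    clusters[normalize(text)].append((account, ts))

for text, items in clusters.items():
    accounts = {a for a, _ in items}
    times = sorted(ts for _, ts in items)
    if len(accounts) >= MIN_ACCOUNTS and times[-1] - times[0] <= WINDOW:
        print(f"Possible coordinated cluster ({len(accounts)} accounts): {text!r}")
```

Even this toy version captures the underlying economics: a script can sift thousands of posts for suspicious repetition in the time it takes a human moderator to read a handful.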

Several social media companies have begun embedding metadata in AI-generated images to allow for easy checking. The Content Authenticity Initiative creates tools that authenticate images through metadata and sets evaluation standards that can weed out AI-generated photos. Even if generative AI creates images that pass as real to the human eye, metadata identifies them as fakes.
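As a rough, hypothetical illustration of how such a metadata check might work, the sketch below scans an image file’s bytes for the IPTC “trainedAlgorithmicMedia” digital-source-type marker that some generative tools embed in an image’s metadata. The file name is invented, and production verifiers validate cryptographically signed C2PA manifests rather than searching raw bytes as this toy does.

```python
"""Toy sketch: look for a generative-AI label in an image's embedded metadata."""
from pathlib import Path

# Assumption for this sketch: tools that label their output embed the IPTC
# digital-source-type value below in the image's XMP metadata, which is stored
# as plain text inside the file.
AI_SOURCE_TYPE = b"trainedAlgorithmicMedia"

def looks_ai_labelled(image_path: str) -> bool:
    """Return True if the file's embedded metadata declares AI generation."""
    return AI_SOURCE_TYPE in Path(image_path).read_bytes()

if __name__ == "__main__":
    name = "press_photo.jpg"  # hypothetical file name, for illustration only
    if Path(name).exists():
        verdict = "labelled as AI-generated" if looks_ai_labelled(name) else "no AI label found"
        print(f"{name}: {verdict}")
    else:
        print(f"{name} not found; point the script at a real image to test it.")
```

The important caveat is that a missing label proves nothing: metadata is easily stripped, which is why the initiative pairs labeling with signed provenance records.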

Unfortunately, no standard infrastructure exists yet for identifying and combating AI-generated text. Additional research is required, and governments should consider funding it. In the meantime, voters beware. As election year approaches, expect a significant increase in AI-powered disinformation.

Joshua Stein recently completed a postdoctoral fellowship at the Georgetown Institute for the Study of Markets and Ethics. His work focuses on ethics, technology, and economics.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
