Yas Gaspadar is a Belarusian dissident who stood as an opposition candidate in last month’s parliamentary election. His campaign was organized and run by Sviatlana Tsikhanouskaya, who opposed Belarus’s president, “Europe’s last dictator” Alexander Lukashenko, only to be forced into exile in Lithuania, where she leads a government-in-exile and struggles to be heard.

That’s why she turned to Gaspadar — a candidate who does not exist.

Gaspadar is a generative artificial intelligence program, a bot, a ruse to evade the ban on political dissent in Belarus. As in Russia and other authoritarian states, Belarusian opposition politicians are put in the impossible position of choosing between exile and certain imprisonment, abuse, or even death. While a martyred leader can be a powerful symbol of resistance, Tsikhanouskaya wants to organize and lead a provisional government. She cannot do that from a Belarusian prison, or from a coffin. Artificial intelligence provided a workaround. Gaspadar cannot be imprisoned, since he is just code. But he can give the opposition movement a voice and draw attention.

AI has entered politics, producing both hope and concern. In democracies, deep fakes are stoking fears of disinformation. Before the recent New Hampshire primary, thousands of voters received a robocall purporting to be from President Joseph Biden that urged them not to vote, the opposite of what the president wanted. On the Republican side, Donald Trump’s campaign posted an audio clip that made it seem as though Florida Governor Ron DeSantis was speaking with Adolf Hitler.

A manipulated video of House Speaker Nancy Pelosi in 2020 made it appear that she tore up Donald Trump’s State of the Union speech as he honored audience members and a military family was reunited. This was a “cheap fake” rather than a “deep fake,” a less sophisticated, lower-cost manipulation that can still drive clicks. Professor Regina Rini notes that such a video does not have to fool anyone to create mistrust in video evidence.

At present, the US and other democracies place no restrictions on AI chatbots. Should they have the same free speech protections as candidates? They can make statements and endorse, or even propose, positions on public policy; the text a generative AI produces can read just like a politician’s platform. Must such bots disclose that they are generative AI, or is it a form of fraud if they fail to do so?

Regulation may be coming. The Federal Communications Commission has ruled robocalls using AI-generated voices illegal under federal telecommunications law, opening the door to fines and lawsuits against violators. The Federal Election Commission says it expects to resolve the issue “later this year,” though perhaps only after the upcoming presidential campaign.

In Belarus and other authoritarian states, though, AI can be a useful tool for opposition movements. Generative AI produces statements; if the bot is well-coded, it can criticize the government. A fictitious candidate cannot be identified, prosecuted, or imprisoned. The campaign can be run by engineers dispersed around the world.

AI chatbots can also aid embattled democratic opposition leaders. Former Pakistani Prime Minister Imran Khan used them to campaign from behind bars after being convicted of corruption and sentenced to 10 years in prison. Although his party lost, Khan used AI to produce a victory speech.

The genius of the Belarus campaign is how the AI-generated candidate piggybacked on global media interest in artificial intelligence to draw attention to Belarus’s fraudulent election. “Frankly, he’s more real than any candidate the regime has to offer,” Tsikhanouskaya said on social media. “And the best part? He cannot be arrested!”

What role should generative AI play in our political life? In authoritarian states, it gives a voice to the voiceless. In countries where politicians are free to take political positions without threat of imprisonment, its role may be more limited. Either way, Yas Gaspadar and his fellow bots look sure to shake up elections.

Joshua Stein recently completed a postdoctoral fellowship at the Georgetown Institute for the Study of Markets and Ethics. His work focuses on ethics, technology, and economics.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
