The dangers are clear: AI-generated lies could be more convincing than human-generated disinformation, and AI brings with it the threat of narrative uniformity. Call it synthetic disinformation.
The opportunities are also clear: AI’s ability to understand myriad online perspectives could help bridge ever-widening ideological chasms. By creating compelling narratives that resonate across the political spectrum, AI could promote empathy and encourage diversity.
Empathy and diversity are crucial for a healthy democracy. In contrast to authoritarian regimes that attempt to dictate a singular worldview, a one-size-fits-all narrative, democracies thrive on the robust discourse sparked by an array of perspectives that acknowledge human fallibility. By pitting viewpoints against each other, we harness the collective power of differences and strive toward a rounded understanding. This formula is key to democracy, according to philosopher Karl Popper.
Narratives, as a means of information processing, evolve in step with advancements in information retrieval. Before the internet, traditional media platforms such as newspapers, radio, and television served as gatekeepers, guided by the editorial discernment of journalists.
And while the world is only now realizing the potential of AI, the internet’s own potential took years to unlock. It began as an impenetrable mass of web pages, discovered through word of mouth or through newspapers, radio, and television. This proved inefficient. Google, which ranked webpages based on their interconnectivity, filled the vacuum, allowing more people than ever to connect to more information.
In the early days of the internet, narrative formation came less from information traveling across the web than from the web’s impact on traditional media. Throughout the first decade of this century, Americans continued to receive most of their news from newspapers, radio, and television.
The problem was economic. The internet brought weather, sports scores, stock tickers, yard sales, and job postings into homes for free. Newspapers collapsed. Revenue from classified ads, the backbone of the print media model, contracted by 77% between 2000 and 2012, while subscriptions dropped by 12 million over the same period. A full quarter of all newspapers shut down.
Enter social media recommendation engines. They radically changed the way information spread and, by extension, the way people formed narratives. Brands born in the digital era, such as BuzzFeed, intuitively understood and then rigorously tested what the recommendation engines rewarded. They found that content was shared less for its inherent merit and more for the statement it made about the person sharing it. Information retrieval on social media became an exercise in navigating the strongest expressions of exhibitionists’ tribal identities.
The culling of local news has continued apace, with 20 million more subscriptions disappearing between 2012 and 2020. By 2022, fewer newsroom staff were reporting on fewer topics in fewer papers than at any point in decades. National news outlets survived, and some began to thrive, but they did so by adapting to much of the spectacle and identity politics demanded by the new era.
Between 2004 and 2017, ideological uniformity spiked. Information polarized as local news collapsed. Narrative diversity shrank.
AI brings with it the threat of an even more pronounced narrative uniformity. As newsrooms incorporate the technology, they’ll rely on models trained on data overwhelmingly generated in the past few years, a period of extreme polarization. With only a handful of models doing the lion’s share of the work, the generative AI era could usher in a period of extreme narrative homogeneity.
Yet AI also presents glimmers of hope. The technology’s ability to explain myriad online perspectives could help bridge ever-widening ideological chasms. By creating compelling narratives that resonate across the political spectrum, AI could promote empathy and encourage narrative diversity.
Some AI has already accomplished this: the proliferation of machine translation lets anyone read the news, opinions, and analysis that underpin the worldviews of people from vastly different backgrounds and experiences. Describe your political preferences to ChatGPT, then ask the model to convince you of the opposing side of a contentious issue, and you’ll find an argument far better tailored to your beliefs than any op-ed aimed at the opposing side’s base.
The same model can tailor your own arguments to the beliefs of politically opposed friends or relatives, making the exchange more likely to be productive for both sides. Ask ChatGPT for conservative arguments in favor of a carbon tax, and it warns of the dangers untamed pollution poses to property rights. Ask it for liberal arguments against one, and it decries the burden borne by the poor, who spend the largest share of their incomes on energy.
The information age’s early upheavals presented major challenges to democracy, shrinking the range of perspectives that underpin open societies. Social media, with its tendency to homogenize and exaggerate, compounded the problem. AI, while potentially exacerbating it further, also offers a powerful tool to foster narrative diversity, our best defense against democratic erosion.
Ben Dubow is a Nonresident Fellow at CEPA and the founder of Omelas, which specializes in data and analysis on how states manipulate the web.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.