ChatGPT is amazing. Ask a question and it responds with an answer that is as good or better than one composed by the average student. It is able to pass professional exams in medicine, law, and business administration. Little wonder Microsoft is investing $10 billion to bake the technology into its products.
But the revolutionary chatbot also generates plausible tall tales in massive quantities, automating disinformation and producing hoaxes. It spews out bizarre and conspiratorial narratives. Asked, for example, to explain how and why pharmaceutical companies want to control the world, ChatGPT wrote copy worthy of a QAnon channel.
How should regulators respond? Several universities, including Stanford, are considering banning its use in student work. With its AI Act, the European Union is putting the final touches on the world’s first major legislative attempt to rein in artificial intelligence. The proposal classifies AI applications as low- or high-risk. Low-risk applications face minimal obligations, while high-risk ones require developers to take a series of precautions to ensure their systems are safe.
ChatGPT would fall into the category of general-purpose AI. Companies in high-risk fields such as healthcare, transport, energy, and parts of the public sector could face new hurdles if they want to incorporate ChatGPT into their products. In contrast, US companies leveraging the new technology would confront few restrictions.
ChatGPT’s obvious limitation (at least for now) is that it lacks a critical spirit: it does not understand what it is writing and how concepts impact society. It works by drawing on the immense number of texts on which it has been trained to generate responses – replicating writing patterns, not thinking up answers.
That’s why it will be so hard for large language models to replace traditional search engines. Not long ago, Google’s former head of search Ben Gomes explained how the company’s algorithm had been changed to favor “authoritative” sources over mere “relevance.” What this means is clear: rather than getting a direct answer to a query from a dubious source, readers are linked to an authoritative website.
Before, the question “Did the Holocaust Happen?” returned a revisionist claim that the murder of six million Jews never happened. Today, it returns a page from the US Holocaust Memorial Museum. “While we never can get rid of all fake news, I think we are now one step ahead of the problem,” Gomes said, estimating that less than one percent of all Google queries are directed to scandal-mongering sites.
ChatGPT poses a giant challenge for Google. The company reportedly declared a “Code Red” after the chatbot was unveiled. So far, Google has hesitated to inject AI technology into its search engine, fearing it would cannibalize its incumbent ad business and spread dangerous misinformation. Yes, ChatGPT is more advanced: it can perform a search for the user and present the result in natural language, or even chat with a user unaware that there is an AI on the other side of the screen. But there is no guarantee that what it generates will match reality.
The US is lobbying hard to dilute Europe’s AI regulation, aiming to narrow Europe’s definition of risky AI. In Washington’s view, it is too early to regulate a technology that they struggle to define. Europeans themselves are divided over the text, which is now the subject of negotiations in the European Parliament and the EU Council.
At stake is a key potential pillar of transatlantic tech cooperation. AI is central to the Trade and Technology Council, where EU and US officials are working to reconcile their approaches and draw up a shared rulebook to chart its evolution under democratic values – and to avoid ceding ground to the spread of autocratic AI systems.
ChatGPT throws down a gauntlet to regulators on both sides of the Atlantic. It will not wait for them to devise effective rules to encourage its positive potential while limiting its risks.
Otto Lanzavecchia is an Italian journalist for Formiche.net and Decode39. A City, University of London alumnus, he focuses on international affairs, tech, energy, and the ecological transition.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.