Artificial intelligence-powered ChatGPT generates detailed responses to questions and prompts. It can write poetry and rap verses, and it can correct software code. But it can also spout dangerous disinformation and libelous lies.

The tool’s promise and its dangers divide policymakers. Some want to regulate it. Others fear killing innovation and free speech. The debate underlines just how difficult it is to regulate a new, revolutionary product — and could lead to a new transatlantic digital divide, with Europe moving to regulate, while the US hesitates.

In the US, the battle centers on Section 230, the liability shield enacted in 1996 that protects platforms hosting content created by others. If a user posts something illegal on a website, the user is liable, not the website.

American analysts are divided over whether Section 230 will cover ChatGPT. Matt Perault, a former Facebook executive and now a professor at the University of North Carolina, believes it will not. In his view, the tool produces its own content, legally putting it in the position of a user rather than a host. The authors of Section 230, Senator Ron Wyden and former Representative Chris Cox, share this view. If courts agree, tech companies could be exposed to a flood of lawsuits.

But other lawyers believe that ChatGPT will be considered the host, not the content creator, and will benefit from the Section 230 shield. ChatGPT produces content only in response to prompts or queries, they argue, and its responses could be seen as remixes of content from third-party websites. “Just because technology is new doesn’t mean that the established legal principles underpinning the modern web should necessarily be changed,” writes Jess Miers, legal advocacy counsel for the left-leaning trade group Chamber of Progress.

How the US Supreme Court rules in a pending case, Gonzalez v. Google, could offer clues. Relatives of a terrorist attack victim brought the case against Google’s YouTube, arguing that its recommendation algorithm promoted violent Islamic State content.

At a hearing last month, Justice Neil Gorsuch suggested that Section 230 would not cover ChatGPT. “Artificial intelligence generates poetry,” he said. “It generates polemics today that would be content that goes beyond picking, choosing, analyzing, or digesting content. And that is not protected.”

But most justices expressed skepticism about tampering with Section 230. A ruling in favor of the Gonzalez family could unleash a wave of lawsuits, they suggested, and it should be up to Congress to craft any alternative legislation. Congress, however, is stalemated and divided along party lines.


In Europe, the debate on ChatGPT has inflamed negotiations over the Artificial Intelligence Act. The proposed legislation sorts AI systems into four risk categories, with “high-risk” systems required to provide transparency notices and undergo safety assessments at market entry and throughout their lifecycle. An example of a high-risk system would be AI used to verify the authenticity of travel documents in immigration and asylum processes. The Act could ban the most controversial uses of AI outright, such as real-time facial recognition in public spaces.

As introduced, the legislation kept the scope of “high-risk” applications narrow, to the relief of tech companies. But the European Parliament has since expanded the list of potential high-risk applications.

Left-leaning lawmakers are now proposing that generative AI systems such as ChatGPT, which produce complex texts without human supervision, be added to the “high-risk” list. Right-leaning groups are skeptical, worried about slowing innovation and overregulating low-risk activities.

The European Commission, the Council of the EU, and the European Parliament are scheduled to begin negotiating the final version of the AI Act in April, though the talks could slip. ChatGPT will make finding a consensus difficult.

In the meantime, European politicians are using ChatGPT to insult each other. German Green parliamentarian Daniel Freund asked ChatGPT to write a rap about Hungary’s Prime Minister Viktor Orbán and received this provocative verse in response:

“He’s been stacking the courts, packing the press
Making sure his critics are silenced, no less
Using public funds to line his own pockets
It’s time to call him out, let’s unlock it.”

Bill Echikson edits Bandwidth. Romy Hermans is an intern with CEPA’s Digital Innovation Initiative. Eduardo Castellet Nogués contributed research from Washington, DC.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
