The EU is racing to respond to the emergence of ChatGPT and other fast-advancing AI language models — putting it on a potential collision course with the US.

The White House has published a set of principles for keeping AI safe, and China has moved ahead with draft rules designed to ensure that the new technology adheres to the country’s strict censorship regime. But the EU wants to set a legally binding global standard. Under the EU regulation, AI applications that pose an unacceptable risk will be banned, and strict rules will be imposed on high-risk use cases.

Where to draw the line between AI applications that should be approved and those that should be forbidden was at the center of the final parliamentary debate. The main point of contention was facial recognition, and how to provide security while minimizing the risk of mass surveillance. Parliamentarians haggled over the real-time use of remote biometric identification, a form of automated surveillance that tracks faces, bodies, and movements.

Conservatives had wanted amendments that would allow the real-time use of biometric identification in three exceptional cases: to find a missing person, to prevent a terrorist attack, or to locate the suspect of a serious crime. In opposition, left-leaning parliamentarians tabled an amendment for a complete ban. The final compromise was to ban real-time use of the technology while allowing it in investigations of serious crimes, subject to approval from a judicial authority.

The launch of ChatGPT and other large language foundation models that can answer a broad range of questions with text, images, and video turned what was a technical dossier into a highly charged political one. Parliamentarians agreed to introduce specific obligations on ChatGPT-type services, including a requirement to publish a detailed summary of the training data covered by copyright law. The idea is to give rightsholders visibility. Eventually, they could seek payments from AI companies under EU copyright rules. 

Another hot debate revolved around which applications pose an unacceptable risk. When first proposed, the list was limited; during the parliamentary debate, it expanded. High-risk applications now include not only AI systems used in critical infrastructure but also social media recommender systems. Their developers will need to conduct risk assessments before putting them into use, covering not just the impact on privacy and fundamental human rights but also environmental impact.


US tech is worried. The Computer & Communications Industry Association (CCIA), a tech industry group, expressed concern that the parliamentary version of the law would slow AI’s progress. “Making artificial intelligence work for the people is not just about addressing potential risks,” the CCIA said. “It also means that promoting innovation needs to be at the core of the new regulation.”

The final text will now be negotiated between parliamentarians, representatives of the EU governments, and the European Commission. A first trilogue was scheduled to take place right after the EU Parliament’s vote, and officials hope to finalize a text before the end of the year. Spain, which takes over the rotating EU presidency in July, has made finishing the AI Act its top digital priority.

Even as Europe’s parliamentarians voted, the push to sign the world’s first AI treaty was faltering. The Council of Europe, an international body charged with upholding human rights across its 46 member countries, plus observers including the United States, Canada, Israel, and Japan, has been drafting a Convention on Artificial Intelligence, Human Rights, Democracy, and the Rule of Law.

But the committee’s plenary session, scheduled for September, has been canceled. The US had requested that the text be drafted behind closed doors, among governments, excluding civil society groups.

Such secrecy runs against the Council of Europe’s internal policy.

Another issue is substantive. With the support of the United Kingdom, Canada, and Israel, the US is pushing to limit the scope of the AI Convention to public bodies only, leaving out the private sector. By contrast, the committee’s mandate refers to a “binding legal instrument” covering both public and private organizations.

Luca Bertuzzi is Tech Editor of Brussels-based

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
