Europeans agree that they want to regulate AI. But they are divided on issues ranging from facial recognition and social scoring to the definition of AI.

Each political group of the European Parliament has submitted several hundred amendments, bringing the total to several thousand. The deluge has come equally from the left and the right – and will now have to be reconciled in a summer of negotiations. 

One of the most controversial topics is definitions. Left-of-center parliamentarians are pushing for a broad general definition of artificial intelligence (AI) rather than accepting a narrow list of AI techniques. Their goal is to make the regulation future-proof. By contrast, the center-right European People’s Party insists on the definition agreed upon at the OECD. The international economics organization set out a series of principles in 2019 that conservative MEPs argue would promote international agreement among democracies (including with the US) about how to build trustworthy AI.

What practices to prohibit remains divisive. Green MEPs want to ban biometric categorization, emotion recognition, and all automated monitoring of human behavior. Their proposed ban would cover recommender software that suggests disinformation and illegal content, as well as systems used in law enforcement, migration, the workplace, and education.

Although these demands go too far for most parliamentarians, the majority look set to prohibit biometric recognition. Liberals have joined the Social Democrats and Greens to advocate a complete ban, eliminating the exceptions included in the European Commission’s original proposal such as the prevention of terrorist attacks or the identification of a missing person. 

Under the proposed AI Act, different types of AI applications are classified as low- or high-risk. Low-risk applications face minimal obligations. But high-risk ones require providers to take a series of precautions to ensure their systems are safe.

Conservative lawmakers want to narrow the list of high-risk use cases, excluding software designed to assess creditworthiness, among others. In their view, AI providers should be allowed to self-assess whether their programs pose significant risks to health, safety, and fundamental rights. Obligations for high-risk applications should be partially or completely removed if providers mitigate the risk with countermeasures or built-in features.

In contrast, Green MEPs want to extend the high-risk category to media recommendation software and to algorithms used in health insurance, payments, and debt collection.

The Greens demand strict environmental requirements and mandatory fundamental rights impact assessments. 

Futuristic metaverse applications are under the microscope. Liberals have introduced a new article to bring them within the scope of the regulation, including a reference to blockchain-backed currencies. They also want the rules to apply, under certain circumstances, to providers neither located nor operating in the EU.

Advertising software is also controversial. An amendment supported by the Tracking Free Ad Coalition would include AI systems delivering online advertising in the list of high-risk systems. The Green group added a paragraph introducing a transparency requirement to counter deceptive practices known as dark patterns.

Enforcement is another crucial question, raising the traditional debate over whether centralized Brussels or national authorities should take the lead. Lawmakers from both sides of the aisle seem to agree on giving more investigatory power to a central European-wide authority, with the Greens particularly ambitious. Conservatives want to give the European Artificial Intelligence Board additional autonomy in setting its own agenda. The Greens want the European Data Protection Supervisor to provide the Board’s secretariat – and to act as the supervisory authority for large companies.

How much should offenders pay? Both liberal and conservative MEPs are proposing to lower potential fines, with the conservatives in particular including a carve-out for SMEs and adding factors such as intent, negligence, and cooperation to the calculation of the fine. By contrast, the center-left is pushing for an overall increase in sanctions and for removing size and market share from the criteria authorities consider when imposing a penalty.

Luca Bertuzzi is the technology editor at Euractiv.com.  

Credit: Kevin Ku via Unsplash.

This article was originally published by EURACTIV. EURACTIV is an independent pan-European media network specialized in EU affairs including government, business, and civil society.

Bandwidth is an online journal covering crucial topics surrounding transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.