The paper proposes to narrow Europe’s definition of artificial intelligence (AI) in the upcoming AI Act, while broadening the exemption for general-purpose machine learning and allowing individualized risk assessment of AI systems. Europeans are already divided over the text, which is now the subject of negotiations in the European Parliament and the EU Council.

At stake is one of the key potential pillars of transatlantic tech cooperation. The US and Europe are scheduled to hold the third meeting of the Trade and Technology Council in DC on December 5.

“Many of our comments are prompted by our growing cooperation in this area under the US-EU Trade and Technology Council and concerns over whether the proposed Act will support or restrict continued cooperation,” the US document proposing changes to the AI Act reads.

A spokesperson for the US Mission to the EU declined a request for comment.

Under Europe’s AI Act, AI systems are classified as low- or high-risk. Low-risk applications face minimal obligations, but high-risk applications require providers to take a series of precautions to ensure their systems are safe.

The US warned that the proposed European definition of AI in the regulation “still includes systems that are not sophisticated enough to merit special attention under AI-focused legislation, such as hand-crafted rules-based systems.” The US suggests using a narrower definition in the spirit of the one provided by the Organisation for Economic Co-operation and Development (OECD).

Some US concerns find receptive European voices. The Czech Presidency of the EU Council has proposed a revised, shortened list of high-risk AI systems, a strong role for an independent AI Board, and a reworked national security exemption.


Yet the US remains dissatisfied. It warns that remaining risk-management obligations could prove “very burdensome, technically difficult and in some cases impossible.” It also pushes back against forcing general-purpose AI providers—the leading suppliers of which are large US companies, including Microsoft and IBM—to cooperate with their users, including by disclosing confidential business information or trade secrets.

Instead of automatically classifying an AI project as high-risk, the US administration advocates a case-by-case assessment. It also would like an appeal mechanism for companies that believe they have been incorrectly classified as high-risk. The US wants a substantial role for the AI Board, which will bring together the EU’s national authorities, preventing any individual nation from imposing a veto.

In the US view, Europe wants to act unilaterally, shutting the door to non-EU countries on setting AI standards.

A particularly divisive issue concerns biometrics. The US suggests a flexible exemption for the use of biometric recognition when there is a ‘credible’ threat, such as a terrorist attack. The European Parliament has pressed for a total ban on biometric surveillance.

The role of market surveillance authorities is also under scrutiny. Some European policymakers want them to be granted full access to the source code of high-risk systems when ‘necessary’ to assess their conformity with the AI rulebook.

For Washington, what counts as ‘necessary’ needs to be defined and clarified. A set of transparent criteria should be applied to avoid subjective and inconsistent decisions across the EU, and the affected company should be able to appeal the decision.

Luca Bertuzzi is the technology editor at Euractiv.com. 

This article was originally published by EURACTIV. EURACTIV is an independent pan-European media network specialized in EU affairs including government, business, and civil society.