Europe is racing ahead. The Artificial Intelligence Act, proposed in April 2021, requires software developers to comply with a detailed list of technical and auditing requirements for “high-risk” applications. These European rules make Washington uncomfortable.
In contrast, the US so far has imposed few concrete regulatory steps and refuses to join international partnerships. When UNESCO’s 193 member countries approved a first-of-its-kind recommendation for AI ethics in November 2021, the US did not sign.
Without a transatlantic partnership, China and Russia will face little opposition to spreading their authoritarian approach, leveraging the technology for mass surveillance. At the recent CEPA Forum, former Google Chairman Eric Schmidt noted, ominously, that “China is producing more AI papers than the US.”
Although Brussels and Washington say they agree on the importance of promoting ethical AI – including prohibiting software used for social scoring and facial recognition in public places – they do not agree on how to achieve this goal. The EU’s AI Act classifies the different technologies that fall under the term ‘AI’ by their level of risk. “Minimal” and “limited” risk applications, which represent the vast majority of technologies currently deployed, will face few restrictions. High-risk systems, however, will be subject to strict obligations. Unacceptable-risk applications (such as social scoring) will be banned outright.
In contrast, key American decision-makers believe that it is premature to regulate a technology that we struggle to understand. “Europe’s proposed AI regulation” is “sensible, written in European public policy language,” says former Google CEO Schmidt, who chaired the US National Security Commission on Artificial Intelligence and co-authored the recent book The Age of AI. “But in the middle of it, it says that for critical infrastructure, you cannot deploy it, unless the AI system can explain itself. There is no AI system today that can explain itself. The technology is not there.”
US businesses and policymakers fear Europe’s regulation will hamper innovation. According to the Center for Data Innovation, Europe’s AI Act could cost the continent’s economy upwards of €30 billion over the next five years. In a recent paper, researchers Mikołaj Barczentewicz and Benjamin Mueller offer a series of concrete examples of applications that could be banned in Europe. A school’s admissions office could be blocked from using a Microsoft Excel macro to check a student’s eligibility. A small business would no longer be able to use a computer to check whether job applicants hold the correct professional license.
No one doubts the need for reining in the most dangerous types of AI. President Trump supported transatlantic initiatives such as the Global Partnership on AI and the AI Partnership for Defense. The Biden Administration is implementing the National AI Initiative Act of 2020, which mandates the federal government to provide oversight and guidance for “trustworthy AI.”
Within the US, the government’s go-slow approach has drawn criticism of its own. David Edelman, director of MIT’s Internet Policy Research Initiative and a former White House advisor, worries about under-regulation. “It paints a false dichotomy for anybody to say that regulation is wholesale good or is wholesale bad for innovation,” adds Terah Lyons, who shaped AI policy during the Obama administration and is currently the executive director of the Partnership on AI.
US inaction risks undermining its global influence. A poor track record at home could make it hard to convince others around the world to embrace ethical principles. The Biden administration wants to construct an alliance of democracies. To do so, it must address the international debate over ethical AI.
China, for its part, is heading international ethics boards and trying to set global (authoritarian-friendly) standards. The US must not let the authoritarians score cynical points. Instead, it must show it is serious about engaging with like-minded democracies. This means speaking with Europe and moving fast to propose an alternative regulatory framework, while also considering regulations of its own.
Europe, on the other hand, needs to slow down and avoid turning its quest for “trustworthiness” into a chokehold on innovation. Otherwise, it risks seeing the most innovative software development flee the continent for North America or Asia. It must revise its overly broad definition of “risky” AI and limit compliance requirements to truly risky operations.
A deal is possible. The US needs partners. Europe needs to avoid overregulation. Both sides must acknowledge that if democracies fight over AI, the ultimate winner risks being China.
Bill Echikson is editor of CEPA’s Bandwidth content stream. David Klotsonis is an intern with CEPA’s Digital Innovation Initiative.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.