The AI revolution was never going to be smooth. Optimists such as OpenAI CEO Sam Altman painted a picture of the new technology solving everything from climate change to cancer. Pessimists painted a dystopia beyond anything George Orwell imagined in 1984.
But few expected so much drama so soon, before the technology’s promise and dangers became clear. The upshot could be a transatlantic alignment favoring, at least for now, voluntary codes of conduct over strict, legally binding regulations.
Start with Europe. The European Union was the first out of the gate to regulate AI, proposing a legally binding AI Act. It looked like a technical dossier — until OpenAI’s ChatGPT appeared, shocking legislators into toughening the provisions. Negotiators from European governments and the European Parliament seemed headed for an agreement as early as December — until the talks blew up.
France, Germany, and Italy fired the warning shot, signaling their opposition to regulating advanced AI “foundation models” — the large machine learning models, such as the one behind ChatGPT, that can be adapted to a wide range of tasks. In a two-page non-paper, the three EU heavyweights rejected Parliament’s attempts to impose strict binding rules. The technology is too new and untested, they argued.
“When it comes to foundation models, we oppose instoring un-tested norms and suggest building in the meantime on mandatory self-regulation through codes of conduct,” the Franco-German-Italian non-paper reads. “They could follow principles defined at the G7 level.”
Behind the bureaucratese, the statement represents a declaration of war against the Parliament. The reference to G7 principles is fascinating because it opens up the possibility of a global accord, working with, instead of against, the US. Until now, Europe has insisted on going it alone, paying lip service to international efforts while aiming to become the democratic world’s top AI regulator.
Even so, transatlantic AI harmony remains far from assured. When France, Germany, and Italy presented their opposition at a negotiating session, parliamentary representatives reportedly walked out. Spain, which holds the rotating EU presidency and has been shepherding the AI Act towards a conclusion, is struggling to find consensus. We “cannot turn away from foundation models,” warned Carme Artigas, the Spanish Secretary of State for Digitalization and Artificial Intelligence.
Thousands of miles away from the Brussels drama, Silicon Valley’s AI drama raised many of the same questions, again without offering clear answers. After OpenAI’s board fired CEO Altman, offering only a vague explanation, rumors swirled about its motivation.
The consensus among reporters was that the board feared Altman had moved too fast to cash in on the new technology, despite its potential dangers. He had struck deals worth upwards of $10 billion with Microsoft. From this perspective, it was a classic money-versus-ethics conflict.
Money triumphed: the board reversed its decision and reinstated Altman. Microsoft had announced it would hire Altman to build a new AI subsidiary, and more than 700 of OpenAI’s 770 employees signed a letter saying they might leave the company if Altman was not reinstated. Microsoft had assured OpenAI employees of jobs, the letter asserted.
Regulators on both sides of the Atlantic will not be able to stop the AI train from accelerating. The money and motivation to build ChatGPT and other AI products like it are available. Interestingly, one of the countries behind Europe’s AI revolt responded to news of Altman’s firing by sending an invitation. “Altman, his team, and their talents are welcome in France,” said French Digital Minister Jean-Noël Barrot.
The OpenAI debacle might yet derail French, German, and Italian efforts to keep ChatGPT out of scope. Negotiators might say, “Look, even the OpenAI board shares our fears.” Brando Benifei, one of the two European parliamentarians spearheading the AI Act, said that the OpenAI drama “shows us that we cannot rely on voluntary agreements brokered or commitments taken by visionary leaders.”
Yet European and American AI priorities have much in common. Both want to promote the new technology in an “ethical” manner. While concerned about its potential dangers, their priority is to promote and profit from, not blow up, the new AI revolution. In the US, the executive branch is enacting regulations in a stealthy manner. Europe will end up passing some version of its AI Act.
Both European and US businesses are eager to jump on the AI bandwagon, seeing it as key to their hopes of staying competitive. Microsoft, Google, and others are pouring billions into the technology. Earlier this year, executives from 150 European businesses, including Germany’s Siemens and France’s Airbus, highlighted the risks of tight AI regulation, saying the rules could threaten the ability of European companies to compete.
The AI drama on both sides of the Atlantic underlines the daunting challenge of regulating such a novel, untested technology. If both sides take a step back and engage in a rethink, today’s tensions could open the door for Washington and Brussels to work together, not against each other.
Bill Echikson is a non-resident Senior Fellow at CEPA and editor of Bandwidth.
This article has been updated to include news of OpenAI’s decision to reinstate Sam Altman as CEO.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.