Prime Minister Rishi Sunak is taking an entrepreneurial approach to AI regulation: find a gap in the market, and fill it. A summit on November 1 and 2 will be the world’s first to focus on the technology’s “existential risks.”

Whether or not this existential gap needs filling remains controversial. And whether the UK is best suited to fill it remains unanswered. While some argue that AI represents humanity’s most pressing danger, others warn that this rhetoric is overblown.

The AI regulation field is crowded. Since the launch of OpenAI’s ChatGPT last year, a once obscure technology has jumped onto the front pages, with tabloid-style headlines touting “super-intelligent AI” or “Godlike AI.” Commentators and business executives paint panic-inducing pictures of 2001: A Space Odyssey nightmares of humans losing control to computer-programmed robots.

The G7, the OECD, the US, and the EU are already hard at work on most fronts of AI regulation, and most of these forums have avoided “existential threat” talk. That’s where the UK jumps in. Its summit aims to address two kinds of threats: “misuse” and “loss of control.” The former refers to humans leveraging AI for dangerous purposes such as biological warfare or hacking; the latter addresses the long-term possibility of self-proliferating AI taking control of humanity.

To an extent, the UK is well positioned to reclaim a “pioneering role.” The summit will take place outside of London at Bletchley Park, home to the famed World War II codebreakers and computer innovators. DeepMind, an AI pioneer now owned by Google, remains UK-based.


In the summit’s lead-up, the UK launched a host of initiatives to solidify its intellectual leadership. It set up a Frontier AI Taskforce, bringing together global experts. It published a report exploring the different threats (and opportunities) artificial intelligence presents, and it recently announced the launch of an AI safety institute to “examine, evaluate, and test new types of AI.”

But opinions diverge on whether this summit is necessary, or relevant.

Start with the guest list and the invitation to China. Although some experts argue that if a global agreement is to be reached, China needs to have a seat at the table, former Prime Minister Liz Truss urged Sunak to reconsider. The Mideast crisis has forced some Western leaders to cancel, including German Chancellor Olaf Scholz and French President Emmanuel Macron. Secretary-General of the United Nations, António Guterres, and European Commission President Ursula von der Leyen will attend.

Critics also argue against exaggerated fears. Artificial intelligence is still dumber than cats, says Yann LeCun, Meta’s chief scientist, so doomsday scenarios are “premature.” Large tech companies benefit from such fearmongering, LeCun warns, because they can keep the code and data used to build AI models closed, making it virtually impossible for competitors to enter the market. The UK’s AI summit will only reinforce those who “want regulatory capture under the guise of AI safety,” he predicts. Nonetheless, most experts agree that AI presents some risks worth monitoring, even if those may not lie in the realm of human extinction.

The UK is trying to walk a fine line between regulation and promoting business. In March, the country set out a light-touch approach to AI, avoiding any concrete legislation, with Prime Minister Sunak insisting that “the UK’s answer is not to rush to regulate.” In contrast, the EU is racing to finish its revolutionary AI Act, and the US has started talks on AI legislation. President Joseph Biden just issued an executive order outlining the federal government’s first regulations on artificial intelligence systems.

Will the British summit succeed in bringing leaders together on a common plan? In the best-case scenario, a joint statement signed by global leaders will set out a roadmap for coordination. In the worst-case scenario, the UK fails to chart a clear path to become a key player in AI regulation.

AI leadership remains a contested battlefield. Expect new entrants. The UN recently announced its own advisory board on the risks of AI and plans to host a global summit in September of next year.

Clara Riedl-Riedenstein is an intern at CEPA’s Digital Innovation Initiative.

Bill Echikson is a non-resident CEPA Senior Fellow and editor of Bandwidth.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
