After taking power last year, the UK Labour government committed to a pro-growth and pro-innovation approach to AI, seeking to position the country as an AI “superpower.” Updating the country’s copyright laws was at the top of the agenda. 

But rightsholders objected, viewing it as an existential threat to their livelihoods and accusing the government of cozying up to US tech giants. Criticism from celebrities including musicians Elton John, Paul McCartney, and Dua Lipa turned the proposed reforms into a PR disaster — and clouded the future of UK AI policy. 

Post-Brexit, the UK decided not to implement the EU's 2019 Copyright Directive, which allows commercial text and data mining of copyrighted works as long as rightsholders retain the right to opt out. As a result, the UK is currently an outlier, with more restrictive copyright laws than both the US and the EU: it permits text and data mining only for non-commercial research.

This framework leaves the UK at a competitive disadvantage in AI. Research shows a strong link between permissive copyright laws and a faster pace of AI innovation. In December, the government announced plans to scale back copyright protections by introducing a rights reservation model, akin to the European model. 

Few were pleased. Large AI firms still see the proposal as a barrier to AI innovation and investment in the UK. In its response to the proposed reform, OpenAI argued for a US model of fair use, with broad copyright exemptions, instead of the EU opt-out system. 


This recommendation gained little traction, with most politicians siding with the creatives. The House of Lords rejected the Data Use and Access Bill five times before it passed last week, turning the bill into a vehicle for protest against the proposed reform. The Lords demanded transparency amendments that would force AI firms to disclose their training data.

The government is flailing. UK technology secretary Peter Kyle recently expressed regret over favoring the opt-out option. At the same time, Kyle doubled down on his conviction that the UK copyright regime is “not fit for purpose.” 

Ministers are now attempting to limit backlash by authorizing cross-industry groups to publish “technical reports” on copyright and AI within nine months. The aim is to identify ways for creators to effectively opt out of AI training. Transparency, Kyle emphasizes, will be “the foundation” of the government’s solution. 

How to enforce the opt-out mechanism is a key problem. A recent report by the EU Intellectual Property Office underlines several challenges. As Marcel Mir, an AI and IP expert at the Center for Democracy & Technology in Brussels, explains: “We already see problems with the enforcement of the opt-out right in the EU, including an unclear definition of ‘machine-readable’ means of opting out and insufficient transparency requirements. It is now almost impossible for rightsholders to know if their work was used to train a model.”

Broad questions also remain unanswered. Transparency requirements alone do not resolve the underlying challenge of balancing innovation and protection for rightsholders. What level of compensation — if any — is owed to creators whose work trains AI models? How can the public interest best be served beyond the demands of narrow interest groups? And what does it mean for copyright to be “fit for purpose” in an age of AI? 

With politicians deadlocked, these questions are being left up to the courts. Getty Images’ lawsuit against Stability AI began this month at London’s High Court. Getty accuses Stability AI of illegally scraping millions of its images to train its image-generation model. In response, Stability AI lawyers say the lawsuit poses an “overt threat” to the AI industry. 

Legal disputes could drag on for years. In the meantime, the question of copyright and AI requires urgent answers if the UK is to achieve its goal of becoming an AI superpower. 

Oona Lagercrantz was a Project Assistant with the Tech Policy Program at the Center for European Policy Analysis (CEPA) in Brussels. She is currently an AI Governance Fellow at the Talos Network. She received a first-class bachelor’s degree and a master’s degree with distinction from Cambridge.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
