The European Union’s AI Act focuses on ensuring that AI products are designed safely. But what happens after AI is deployed? Even a model or system that has been developed safely can still cause harm.
Unfortunately, the AI Act fails to address this challenge, and the European Commission has withdrawn its proposed AI Liability Directive. Existing national tort laws and the updated Product Liability Directive remain insufficient. New liability rules are required to ensure citizens are compensated when AI causes harm.
Tech companies creating AI models and systems should bear the cost of the risk they create. Instead, the small businesses and citizens who adopt AI technologies absorb most of it. This is wrong. We must also establish a single, EU-wide set of liability rules, ending the present fragmentation across the 27 member states.
Unless we act, the US and Chinese firms that produce most AI models and systems will use waivers and other legal maneuvers to shift exposure to their (often European) users. A startup that adapts an AI model for a specific use case, for example, might have to accept terms stating that it alone is liable if the adapted model causes harm.
A credible AI liability regime would prevent such abuse. It would impose guardrails against unfair contractual clauses and require AI developers to accept their share of liability.
Additional legal reform is needed. At present, courts must determine what went wrong, whether that conduct breached a duty, and whether the breach caused the harm. Victims of AI harms struggle to make that case because they lack access to logs, model documentation, training-data choices, and system updates. Tech companies can hire an army of lawyers, so valid claims either are never brought or become years-long battles over disclosure and expert evidence.
A liability framework could ease these burdens by reducing the cost of proving an AI claim. That would help victims pursue cases involving discrimination, reputational harm, or wrongful automated decisions. Such cases are usually low-value and hard to prove, yet they are the ones that matter most politically. AI liability rules that work only for the largest cases do not amount to a sound legal regime.
A final problem for AI liability is fragmentation. At present, AI developers must navigate a maze of varying national standards on fault, causation, recoverable damages, procedural tools, and litigation practice. The unfortunate result is dangerous legal uncertainty. Companies with the biggest balance sheets and legal teams will cope. Small firms will suffocate.
Without reform, Europe risks falling behind. AI products released in Europe could cost more than they do elsewhere. Firms will delay cutting-edge product releases. European AI developers might flee to jurisdictions with simpler, unified rules.
A regulatory response should address those risks fast. If Europe’s governments enact their own national rules first, imposing a bloc-wide standard later will become a steep challenge.
Without credible liability rules, the EU risks an awkward equilibrium: high compliance costs (felt mostly by small EU firms), weak practical redress for many victims of AI harms, and a liability landscape shaped less by coherent EU rules than by divergent national decisions.
Europe’s new watchword is simplification; we must boost our competitiveness. Critics claim the AI Act, the world’s first attempt to ensure that this powerful new technology remains safe, is premature. But a regulation that leaves too many questions unanswered is also a danger.
We drew up an AI liability regime only to see companies complain that it represented “more digital red tape.” In response, the European Commission withdrew its proposal. That was a mistake.
A new AI liability regime is needed. It would turn uncertain, hard-to-insure exposures into predictable, priceable risk and support AI adoption across Europe rather than chilling it. AI companies should be required to share the liability risk and to disclose relevant evidence. In most cases, AI developers are best placed to manage the overall risk. As AI accelerates, a clear liability framework is the largest missing piece of an effective European regulatory regime.
Kai Zenner is Head of Office and Digital Policy Advisor to MEP Axel Voss (European People’s Party) in the European Parliament and is heavily involved in the EU’s AI policy. The views expressed in the article are personal and represent neither the position of the European Parliament nor of the EPP Group.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and do not necessarily reflect those of the institutions with which they are affiliated, or of the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.