“Alex” has been using a generative AI chatbot and has started confiding in it about his anxiety and low moods. The chatbot is empathetic, and Alex feels supported. But he grows increasingly isolated, and the chatbot reinforces his anxious thoughts. After months of isolation and spiraling mental health, Alex’s family intervenes and gets him professional help. Should the AI company behind the chatbot be held responsible and pay him compensation?

Although “Alex” is fictional, real-life stories of young people harming themselves after prolonged interaction with generative AI chatbots are making headlines. Reports proliferate about chatbots “inducing psychosis” or reinforcing delusional beliefs. Experts warn against relying on chatbots that dispense unreliable financial advice and misleading health recommendations.

Who is legally responsible? Neither the US, the UK, nor Europe offers clear answers. Each jurisdiction grapples with the same key challenges: obtaining evidence, proving causation and harm, and determining the standard for “safe” AI. Governments and courts have their work cut out for them.

All three jurisdictions recognize negligence: when a person breaches a general duty to act with care towards others and thereby causes harm, they can be held liable. If a person cuts down a tree, they should make sure that it does not land on their neighbor’s house. Separate product liability rules allow consumers to recover damages from manufacturers of dangerous products.

To receive compensation, Alex will need to prove that he suffered harm, that the AI company should have taken additional precautions, and that its failure to do so caused his harm.

If Alex lived in the European Union, he could seek redress through the liability rules of his member state or the bloc’s product liability law. Brussels updated its Product Liability Directive in 2024 to include software and AI.

Under these rules, courts may order the AI company to share technical information to help Alex prove how the chatbot caused harm. This is crucial: the chatbot is a black box, and it is hard for Alex to show where things went wrong. Which actors were involved? Did they do something (or fail to do something) that led the chatbot to encourage Alex’s harmful thoughts? If the case is considered too complex, the judge may also shift the burden of proof, requiring the AI company to prove that it is not liable rather than requiring Alex to prove that it is.

Yet even with those provisions, Alex could struggle to prove that the chatbot fell below the standard of safety we can reasonably expect from an AI chatbot. AI’s newness makes it difficult to define a chatbot’s “reasonable behavior.” In other contexts, we have had decades to develop standards and case law determining what precautions should be taken by, for example, a reasonable airplane manufacturer.

EU product liability also covers only specific types of harm, such as physical injury or property damage. Alex would have to show that his harm amounts to a medically recognized psychological illness, a high bar. Alex could also claim negligence, but he would face similar challenges in obtaining evidence and proving causation. The EU has also implemented a class action system: Alex could join others who have been similarly hurt by the chatbot and bring a claim together, spreading the legal costs and risks.

In contrast to Europe, the US has no overarching federal product liability law; rules vary from state to state. The US “common law” system also gives judges more leeway to interpret legal rules than their European counterparts have. Under European “civil law,” judges cannot adjust or create rules to fill gaps or fix injustices.

Although US courts have previously held that software is not a “product” covered by product liability law, AI is prompting a potential rethink. In Megan Garcia v Character A.I., the mother of a teenage boy who died by suicide after prolonged interaction with a Character A.I. chatbot filed a liability claim. In an interim order, the court stated that the Character A.I. chatbot was a product under product liability law. The recent verdict holding Meta and YouTube liable for harming children through the addictive design of their online platforms also supports cases that target AI developers’ negligent chatbot designs.

Since the US has a strong class action regime, it would be relatively easy for Alex to file a claim along with other affected people. Yet Alex confronts challenges similar to those he faces in Europe: proving that the AI company’s actions fell below a reasonable standard of care, that those actions caused his harm, and that damages are merited. The Megan Garcia case was settled out of court, but two other AI liability cases, Texas Parents v Character A.I. and Raine v OpenAI, remain to be litigated. We will have to wait for the courts’ judgments to see whether these legal claims succeed.

Like Europe, the UK has a product liability law, the Consumer Protection Act of 1987, but unlike Europe, the UK has not (yet) updated it to include software and AI. The law still applies only to “tangible objects,” such as toys and electronics. UK case law on negligence does not help, either: it contains no rules to help Alex obtain evidence, prove causation, or establish a standard for a chatbot’s reasonable behavior. The UK’s rules for class action lawsuits are also less permissive than those in the US and Europe.

Given that AI is so new, UK judges might use the leeway that common law gives them to take a new approach, such as ordering the AI company to share technical information or lowering the bar for Alex to prove that the AI company caused his mental health harms. But it is not certain that a UK judge would do this; we would have to wait for cases to be heard.

Wherever he lives, Alex faces steep obstacles in seeking compensation. We will have to keep a close eye on the courts, in particular the US courts, since most cases are brought there.

Julia Smakman is a senior researcher in the law & policy team at the Ada Lovelace Institute, an independent research institute with a mission to make AI work for people and society. Her research focuses on the effective governance of AI in the UK and EU, with a particular interest in legal mechanisms like liability. Julia holds an LLM in Public Law from the University of Amsterdam and an LLM focused on human rights, law, and technology from the London School of Economics.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
