When horrifying images from the Israel-Hamas conflict flooded X, the social media platform formerly known as Twitter, the European Commission launched an investigation, online advertisers jumped ship, and owner Elon Musk came under attack for promoting antisemitic hate speech.

On the flip side, Meta “overcorrected.” The company’s own independent Oversight Board ruled that Facebook was eliminating too many posts about the Middle East conflict.

One company is doing too much, the other is not doing enough, and no one seems to be getting it right. That is because what is right remains far from settled. Policymakers need to acknowledge this complexity in attempting to respond to online speech.

Unfortunately, recent regulatory efforts assume a clear, easy answer exists. New legislation in the European Union, the UK, and Australia imposes harsh fines and vague obligations. The UK’s Online Safety Act, which became law this October, requires platforms to remove hateful speech. Europe’s Digital Services Act (DSA) imposes sweeping, and similarly vague, obligations.

A common thread is the expansion of content moderation beyond illegal content to include “harmful” speech. While the impulse is understandable, the execution is tricky.

Previous efforts to combat harmful speech in Western democracies have proved problematic. Germany’s 2018 Network Enforcement Act (NetzDG) threatened online platforms with fines of up to €50 million for systemic failure to delete illegal content. Yet researchers found that NetzDG made little impact. Although critics worried the law would push platforms to over-censor, it did not provoke mass requests for takedowns. Nor did NetzDG force internet platforms to adopt a “take down, ask later” approach. The number of takedowns remained similar before and after the law’s passage.

Of course, almost everyone agrees that some content needs to be policed: images of child sexual abuse or self-harm remain too common online. Illegal images should be removed swiftly from platforms. Inaction is dangerous. After buying X, Elon Musk dissolved its trust and safety council, welcomed back barred users, and pulled out of a voluntary EU Code of Conduct. Antisemitism, ethnic slurs, and other hate speech spiked drastically on the platform.

Yet policymakers must recognize that the lines around harmful but not illegal speech are blurry — and, as Germany’s NetzDG shows, enforcement is often lacking. The European Commission will face similar challenges enforcing its new DSA.

Although democratic governments and big tech critics allude to some “common sense” standard, content moderation amounts to a restriction of free speech, and the proper limits of that restriction remain unclear. Even academics disagree. Traditional liberals believe unencumbered free speech protects democracy. Others contend that governments should be empowered to restrict racist, extremist, or defamatory content.

Democracies reflect this theoretical disagreement. The US leans toward the traditional liberals, with its broad First Amendment protections. The EU and the UK restrict what they consider dangerous speech. Even within Europe, deep differences persist. Nordic countries restrict free speech less than their southern counterparts. History plays a role, too. Germany prohibits Holocaust denial. Other countries tolerate it as part of free speech protection.

All technologies that reconfigure communication raise new questions about free speech. Consider the printing press. After its invention in the mid-15th century, governments remained unsure of how to regulate journalism. It took until 1766 for Sweden to enact the world’s first modern press freedom legislation. In the US, 20th-century case law cemented our current understanding of the press’s responsibility in protecting democracy and freedom of speech.

A similar process will inevitably play out with social media. Unlike the press, which publishes edited articles, social media allows all of us to publish. The sheer volume of content means post-by-post regulation is not possible. What is more, algorithms, bots, and fake accounts litter social media platforms. They require a rethink of how governments regulate free speech. No body of applicable case law exists yet. That’s to be expected. Two decades ago, Facebook, TikTok, and others did not exist. Who knows where they will be in 10 years?

It’s important to let the process play out. Legislative efforts should avoid being overly punitive. Yet violations of the EU’s Digital Services Act can lead to fines of up to 6% of global annual turnover, and violations of the UK’s Online Safety Act to fines of up to 10%. Platforms could rationally respond by over-moderating, taking down posts out of caution, as is already playing out at Facebook.

This overreaction hinders rather than helps the process of finding the right balance. We need space for trial and error. Free speech and content moderation pose hard questions. No clear answer exists yet.

Clara Riedenstein is a Research Assistant with the Digital Innovation Initiative at the Center for European Policy Analysis.

Bill Echikson is a non-resident Senior Fellow at CEPA and editor of Bandwidth.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
