Donald Trump has doubled down on a hands-off approach to content moderation, picking a “warrior for Free Speech” to head the Federal Communications Commission. The nominee, Brendan Carr, targets the European Union’s Digital Services Act as “censorship.”
Deep transatlantic differences exist over how to regulate online speech. The US First Amendment offers broad latitude, and Section 230 of the Communications Decency Act grants online platforms broad legal immunity for users’ content. In contrast, Europe’s DSA imposes strict oversight. Social media platforms must systematically identify, assess, and mitigate content risks, from hate speech to disinformation and election interference. Firms face potentially massive fines of up to 6% of their worldwide turnover.
But does the DSA censor? No. “Nothing in the DSA requires platforms to remove lawful content,” European officials insist. They are right. The DSA tackles illegal or demonstrably harmful activity – terrorist propaganda, child sexual abuse material, and foreign-backed election meddling. It obliges platforms to sniff out and counter systemic manipulative tactics, particularly during elections. These obligations bear no relation to China’s Great Firewall or Russia’s platform bans. The DSA upholds Europe’s e-commerce liability protections, which resemble the US’s Section 230. Companies are not required to block all user speech before it is uploaded, only to take measures to minimize illegal content and remove it once it is identified as illegal. The US allows Holocaust denial; most European countries make it a crime.
Although the new US administration accuses the DSA of stifling opinions, the facts fail to support this charge. Under the DSA, lawful speech remains lawful. Still, platforms may over-remove edgy but lawful content rather than invite an EU investigation, a chilling effect that worries free speech advocates.
Yet the DSA discourages over-removal. It mandates that platforms publish transparency reports on takedown requests, justify their decisions, and offer users appeal mechanisms. If regulators conclude that a platform takes down too much, that too can trigger scrutiny.
Since the DSA entered into force, European regulators have opened probes into TikTok and Elon Musk’s X. They are investigating whether TikTok failed to curb interference in Romania’s presidential election, where coordinated campaigns promoted a far-right Kremlin sympathizer. Musk’s X is being scrutinized for purposely allowing illegal hate speech, including Holocaust denial, to proliferate.
It’s far from straightforward to enforce the rules. No decisions have yet been handed down on TikTok or X. Small European governments struggle with limited resources. Even at the EU level, regulators grapple with how to prove that a platform’s algorithms gave undue exposure to malign activity.
Early efforts at enforcing the General Data Protection Regulation, Europe’s major privacy law, exposed how daunting it is to police the world’s most powerful technology companies. Several years on, critics lament the limited number of significant GDPR cases, prompting concerns that the DSA could sink into a similar slow-grinding quagmire.
Anticipating that risk, Brussels consolidated DSA enforcement for major platforms rather than relying on national authorities. A specialized Brussels-based team coordinates investigations across all 27 member states. Another partial solution lies in cooperation between the Commission and national Digital Services Coordinators.
Yet the system remains slow. Several EU member states have still failed to appoint a Digital Services Coordinator. Since election disinformation can do incalculable harm in days, if not hours, a nimble approach, including early fines or interim orders, is required. Bound by due process, European regulators must work through a thorough investigation, a formal statement of objections, a period for companies to respond, a final decision, and potential appeals to EU courts. That timeline is no match for social media virality.
Disinformation doesn’t respect borders. A US-based troll farm can spread manipulative political ads targeting a European election. TikTok’s servers might be in one country, its content moderators in another, and its corporate headquarters in a third. Although the DSA addresses this complexity by imposing uniform standards, each member state retains its own definition of illegal content. An inescapable tension remains: speech deemed extremist or defamatory in one country may be perfectly legal in another.
For all its complexities, the DSA remains a bold experiment in digital governance, neither outlawing lawful content nor handing governments carte blanche to remove speech. Whether it hits the sweet spot depends on how it is enforced. Donald Trump’s fiery rebukes and tariff threats are misplaced. Europe aims to safeguard democracy and uphold free speech, but achieving this goal requires a delicate balancing act in an age of deep polarization.
Anda Bologa is a Senior Researcher with the Tech Policy Program at the Center for European Policy Analysis (CEPA).
Anda is an artificial intelligence and digital policy expert and one of the ‘35 under 35’ tech leaders recognized by the Barcelona Centre for International Affairs. During her tenure at the European Union Delegation to the United Nations, she was responsible for high-level negotiations on artificial intelligence resolutions and the United Nations Global Digital Compact.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
