The UK’s proposed Online Safety Bill would require platforms to screen and monitor all user activity and content at the point of upload in order to predict whether it is illegal or harmful. Such a general monitoring obligation is prohibited under the European Union’s Digital Services Act.

Governments worldwide are drafting legislation and approving bills to address online safety challenges and hold technology platforms accountable for harmful content. This is good. But the UK is poised to play a leading role among Western democracies in imposing broad monitoring obligations that could chill free expression.

Here’s how: the proposed UK law imposes new duties to “prevent” access to a wide variety of poorly defined content categories. For everyone under 18, platforms would be prohibited from displaying content that the UK Parliament does not even consider illegal. Such blacklisting requires accurately predicting not just every reader’s location and age, but also whether rapidly modified or newly uploaded content matches a dizzying range of criteria spread across dozens of other UK laws (mostly criminal statutes). This is an impossible challenge for platforms.

Facing huge fines and “super-complaints” if they underblock, platforms will be forced to take a conservative approach – taking down legal material and excluding legitimate but hard-to-profile visitors.

The UK bill also threatens to undermine the volunteer-driven governance model of non-commercial, public interest platforms such as Wikipedia. In contrast, Europe’s DSA explicitly recognizes the difference between centralized content moderation carried out by employees and community-governed content moderation systems.

Let’s be clear: the Wikimedia Foundation, which hosts Wikipedia and other volunteer-run free knowledge projects, supports efforts to make the Internet safe. When people are harassed or feel otherwise unsafe communicating online, their ability to access, create or share knowledge is diminished. But Wikimedia believes online safety can only be achieved when adequate safeguards are in place for privacy and freedom of expression.

A key issue is protecting children. Unlike commercial services, Wikipedia does not target people of any age with paid advertisements or profile them in order to amplify personalized content. But the UK proposal for mandatory age verification or assurance (“age-gating”) would force platforms, including Wikipedia, to know each reader’s age, exposing adults and children alike to new security and privacy risks.

The precedent is alarming. Even the best age assurance tools have been shown to be inaccurate. If the UK forces us to collect such data about UK users, we can expect that many other governments around the world will impose similar requirements.

From our perspective, the UK should explicitly recognize and support community-governed content moderation systems, which are effective both in countering harmful speech and in protecting human rights. Obligations placed on nonprofit, public interest platforms with decentralized, volunteer-run content moderation models, such as Wikipedia, should be differentiated from those imposed on for-profit platforms, whose top-down, centrally directed content moderation is supported by advertising-driven business models designed to maximize profit for shareholders.

The UK’s Online Safety Bill presently lacks the strong safeguards and clear definitions necessary to ensure that it does not cause the removal of educational material and medical information, including documentation of the COVID-19 pandemic. This encourages over-blocking and could mean losing both an accurate historical record and access to reliable information.

What is and what is not considered “harmful”, for instance to children, depends on an individual’s point of view and preferences or, more worryingly, on the views of the government. From 1988 to 2003, the UK outlawed teaching children about “the acceptability of homosexuality as a pretended family relationship.” Marginalized voices in particular are at risk of being silenced by top-down removal and content suppression requirements for “harmful content.”

UK policymakers should make significant changes. They should narrow the scope of the proposed bill by carving out “harmful” content, and tightly draw the duties to predict and filter “content that is harmful to children” and “illegal content.” Drafters should remove clauses imposing criminal liability, and they must do more to preserve notice-and-takedown, rather than predictive, top-down moderation, as the general foundation for online safety, just as the EU is doing. Any requirements for keeping internet users safe from harm should also protect end-to-end encrypted communications.

Supporters of Brexit once argued that it would free the UK from European overregulation. Unfortunately, the Online Safety Bill in its present form threatens to produce the opposite outcome.     

Rebecca MacKinnon is Vice President, Global Advocacy, and Phil Bradley-Schmieg is Lead Counsel at the Wikimedia Foundation.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the authors and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
