The UK’s Online Safety Act requires websites that host violent or pornographic content or unmoderated forums to implement strict age-verification measures before granting access. The goal is simple: shield children from harmful material.
In practice, the rollout has been anything but simple. As soon as Brits were asked to prove their age by uploading IDs or other documents, many turned to virtual private networks (VPNs), tools that mask a user's location, to sidestep the restrictions. On the weekend the rules took effect, downloads of Proton VPN surged by 1,800%.
What's unfolding in Britain is part of a wider debate with global implications: how far can a single country impose rules on an internet that, by definition, has little respect for borders? It has also sparked transatlantic conflict, with US tech companies voicing opposition and the Trump administration weighing countermeasures.
Still, the UK law reflects a broader transatlantic trend recognizing the need to protect children from harmful content. At least 19 US states have passed or proposed similar laws or rules requiring age verification or parental consent. Florida has moved to bar children under 14 from Instagram and TikTok. Utah has imposed curfews for minors online. The European Union plans to launch an age verification app, and its Digital Services Act imposes strict guidelines on measures companies must take to protect minors.
These efforts stem from genuine and legitimate concerns over young people’s mental health and online safety. Policymakers often draw parallels to restrictions on alcohol sales: while teenagers find ways around them, such laws still limit their access.
In the digital realm, the challenge is more complex, and the potential solutions risk conflating identification with safety. Effective age verification often means facial scans, AI-powered selfie analysis, or uploading a government-issued ID. Each method raises sharp concerns about privacy and data security. Who controls and stores that data? How can it be kept safe from leaks, hacks, or misuse?
The UK law imposes ongoing responsibilities on platforms to monitor and control the content they host. Industry giants like Meta or Apple can absorb the cost of such compliance; smaller platforms and forums often cannot. An unintended consequence is that the law shields large incumbents while squeezing smaller rivals.
Apple’s case is illustrative. Under pressure from British regulators, Apple has enforced strict age-gating for content-heavy apps such as Reddit, Pornhub, and X. Some platforms rolled out intrusive ID checks; others blocked British users entirely.
For firms, non-compliance carries steep risks. Breaches can bring penalties of up to £18 million or 10% of global turnover. The law requires payment processors and app stores to cut ties with non-compliant sites, even those with no UK presence. Some platforms are responding by pulling out of Britain, notably the far-right platform Gab, which hosts Nazi and other extremist content.
This extraterritorial reach has drawn strong objections from US tech firms and policymakers, some of whom have pressed the White House to raise the UK law in trade talks. In a rare bipartisan move, a delegation of US members of Congress travelled to London to warn against "exporting" speech codes that undermine America's First Amendment.
So far, the political pressure has failed. UK officials say the new online safety rules are not up for negotiation. Britain, like the EU, has signed new trade deals with the US that raised tariffs — while digital regulations remain untouched.
Another option is litigation: challenging the UK rules in a US federal court to render them unenforceable in America. The odds here are long; Yahoo tried and failed with a similar tactic.
Unintended consequences are mounting. By sending minors tunnelling through VPNs, the UK law may have inadvertently exposed them to riskier, less regulated online spaces. Many free VPN services are not privacy shields at all, but data-harvesting tools that sell users' information to unknown operators overseas. In trying to wall off harmful content, governments may be nudging minors into darker corners of the internet.
Two real-world cases show the dangers that VPNs can facilitate. Cody Kretsinger, a member of the hacker group LulzSec, used the HideMyAss service to hide his identity while attacking major organizations such as Sony Pictures; provider logs allowed authorities to trace and convict him. Similarly, a cyberstalking suspect known as Lin relied on PureVPN and other privacy tools to conceal harassment, only to be unmasked through retained logs. These examples show that VPNs can offer a false sense of anonymity and even facilitate criminal activity: providers retain data, and legal pressure or investigations may compel them to release it.
Regardless, minors' unfettered access to violent, sexual, or otherwise harmful material online is a serious issue. But blanket age verification tied to real-world identity is a crude instrument, and one whose side effects may prove more harmful than the problem it aims to fix.
A more promising strategy would tackle the design features that exploit young users: curbing autoplay and infinite scroll, limiting addictive engagement mechanics, making recommendation algorithms more transparent, and giving parents meaningful oversight tools. These measures address root causes without demanding a universal ID check for every click.
Lawmakers may be tempted to see identity checks as safety itself. But Britain’s experience suggests the opposite: the more you try to lock the internet’s doors, the more determined and inventive minors become in finding another way in.
Elly Rostoum is a Google Public Policy Fellow with the Center for European Policy Analysis (CEPA). She is a Lecturer at Johns Hopkins University. You can find out more about her work here: www.EllyRostoum.com
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.