Australia acted first: it ordered social media platforms, including Instagram, TikTok, YouTube, X, Reddit, and Twitch, to stop under-16s from holding accounts. Platforms that resisted risked prohibitive penalties.
Other democracies are now following Australia’s lead. Denmark has proposed a ban for under-15s. In Britain, Keir Starmer has said he is open to an Australian-style approach. In Brussels, the European Parliament has backed a “digital minimum age” of 16 for social media, video platforms, and AI companions.
The US is moving at the state level. Florida has barred under-14s from some platforms. Utah, Texas, and Arkansas have tried to raise the effective minimum age by requiring parental consent. California and New York are targeting “addictive feeds.”
The bans are well-meaning. Child safety represents one of the few tech policy issues that still produces broad political consensus. While speech regulation for adults divides Americans and Europeans, it’s difficult to oppose measures that aim to protect our children.
Yet social media bans are dangerous. If platforms must verify age and identity, everybody's privacy is put at risk. An account ban is easy to announce and hard to enforce. Platforms have to decide what counts as "reasonable" proof of age, how often to re-check, and how to investigate without locking out legitimate users or collecting sensitive data.
Teenagers can migrate to smaller apps, borrow credentials, or stay logged out, shifting the risk rather than reducing it. Since the Australian ban, almost five million social media accounts have been deactivated or removed, the government said on January 15.
Verification at such a scale is intrusive, error-prone, and expensive. Reddit is challenging the ban in Australia’s High Court, arguing that it compels invasive verification and interferes with legitimate speech. The trajectory is predictable: the more that enforcement relies on identity, the more speech and privacy disputes follow.
One dispute is over who should conduct age checks. Meta wants them moved to app stores, with parental approval built into downloads. That would replace platform-by-platform enforcement with a single checkpoint run by Apple and Google. But this would concentrate power in two private companies.
Age bans tilt the market. Big platforms can hire compliance teams, buy verification services, and build appeals processes to reduce false takedowns. Small services cannot. “Reasonable steps” becomes a de facto fixed cost of entry, protecting incumbents and pushing teenagers toward a narrow set of platforms — or toward obscure services that regulators barely see.
Despite the account bans, teenagers can still watch social media videos, scroll posts, and click through recommendations while logged out. Platforms still rank, recommend, and shape content based on signals like device data, location, time of day, and time spent on feeds. The result is a policy that reduces the number of teen accounts, but not necessarily the amount of social media content that teens consume. It also makes the problem harder to see, because regulators can count deleted accounts, but they cannot easily measure what teenagers view without them.
Bans also threaten privacy. When platforms become identity referees, the safest corporate response is to collect more evidence than necessary. Put the responsibility on app stores, and governments outsource control to the two firms that already control distribution. Tie it to state-backed digital identity, and access becomes conditional on participating in identity systems.
However it is implemented, age gating creates spillover. If companies must exclude minors, they must classify everyone else. That means verification, retention, and audit trails. It also creates an ID divide. Users with passports, smartphones, and stable connectivity glide through. Others face friction or exclusion. A policy sold as child protection could become, in practice, a barrier that discriminates based on income and class.
Teenagers do not stop socializing. They turn to small services and less-moderated spaces. A strict ban can reduce risk on large platforms and increase it elsewhere, where safety resources are thinner and regulators have less leverage.
Another risk is contagion. Once lawmakers frame the problem as “harmful digital experience,” the category expands from social media to video platforms, AI companions, gaming mechanics like loot boxes, and the next product that mimics the same engagement loop. A recent European Parliament report reflects that expansion.
Social media and AI are not going away, and children will not stay offline until their 16th birthday. Sooner or later, they will be on these platforms. The real question is what they encounter when they get there. Europe’s Digital Services Act requires the largest platforms to assess risks to minors and mitigate them, and it gives regulators something concrete to supervise: risk assessments, product changes, and compliance reports.
The regulation attempts to mitigate what causes the harm: how feeds are built and how content is amplified. It applies across services without turning enforcement into a game of fake IDs and VPNs. And it avoids concentrating control at a single checkpoint — whether platforms, app stores, or state identity systems. It is slower and less headline-friendly than deleting accounts. The DSA has problems, but it is an effort to reduce harm without building a dangerous permanent verification regime.
Authoritarian governments love identity-linked access for control. When democracies normalize large-scale age verification, they legitimize a dangerous model.
Australia’s ban represents an experiment in building the internet’s next border control. Its reach now may be limited to a far-off continent — but its risks are spreading around the world.
Dr. Anda Bologa is a Senior Researcher with the Tech Policy Program at the Center for European Policy Analysis (CEPA).
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
