Advocates of online safety for children and online privacy for people of all ages are at an impasse.  

Europe needs a law that goes beyond the current system, which relies on tech companies to voluntarily report child abuse imagery. US federal law already requires Internet companies to report child sexual abuse material (CSAM) to the National Center for Missing & Exploited Children.

But Europe should not go beyond the US approach and require that platforms proactively scan their servers. Under the new European proposal, Google, Apple, and others would be obliged to hunt down photos and videos of child abuse or face fines of up to six percent of their global revenue.

This prospect of full-on scanning understandably frightens privacy advocates, who compare it to turning the platforms into policemen. Systematic surveillance of the content on messaging apps would break encryption. “This proposal has a very laudable goal but betrays a complete misunderstanding of encryption,” said Reporters Without Borders tech specialist Vincent Berthier. “It’s simple: scanning end-to-end encrypted messaging services would render them useless!”

No one doubts that child sexual abuse represents a significant problem. Eighty-five million images and videos of child sexual abuse were reported last year, according to the US National Center for Missing & Exploited Children.

There’s much good in the European proposal. Should the legislation pass, messaging apps and app stores would have to take steps to verify the age of their users. Regulators would be able to request a court order to force an Internet service provider to block a website or link containing child sexual abuse material. Companies would be obliged to produce transparency reports detailing how much abusive material they have found and taken down. A new independent EU agency based in The Hague would be established to maintain a database of digital hash fingerprints of illegal material and to coordinate work among tech companies, law enforcement, and victims.

But there’s much to worry about, too. The European Commission press release states that the proposed law “will oblige providers to detect, report and remove child sexual abuse material on their services.” It’s the word “detect” that’s problematic. The US statute requires platforms only to “report” and “remove.” “Detect” means continuous monitoring; “report” and “remove” mean acting after discovery.

With algorithmic moderation, the tech industry has already moved past purely reactive approaches to dangerous content such as radicalization, violence, hate speech, and, to a degree, harassment and cyberbullying. That’s positive proactivity, even if admittedly far from perfect in preventing all abuse. But both constant proactive monitoring of all private content (rather than targeted responses to specific harmful speech and behavior) and attempts to weaken encryption are direct threats to personal privacy.

We need to pay attention: both child safety and privacy are essential, and a balance must be struck. The European Parliament and the Council of the European Union still must sign off on the proposed law. Let’s hope they get the balance right.

Anne Collier is the Founder and Executive Director of the Net Safety Collaborative.