When Apple “expanded protections for children,” I applauded. After years of doing little, the world’s biggest company was finally taking action. But when more than 90 civil society organizations published an open letter denouncing “plans to build surveillance capabilities,” it was clear Apple had failed to find the right fix – and the company was right to announce this week that it is delaying implementation of the new child protection tools.

Apple has never been a joiner on child online safety. Unlike other tech giants, it never formed a safety advisory board and was conspicuously absent from national task forces and international forums. Apple did join the Technology Coalition, formed in 2006 to combat global trafficking in child sexual abuse material, but while other members – Microsoft, Google and Facebook – contributed technology to that effort, Apple did not. According to the National Center for Missing and Exploited Children’s 2020 report, Apple reported 265 images of child sexual abuse last year, compared with Facebook’s 20.3 million reports. So I applaud Apple’s newfound commitment.

To privacy activists, however, Apple crossed a dangerous line. When Apple refused to help the FBI unlock a suspected terrorist’s iPhone in 2016, the company protested that it didn’t even have the means to break into the device, and that building the capability would open the door to a multitude of far less pressing requests and weaken its ability to stand up to foreign governments. Yet this is exactly what privacy activists say Apple’s new technology does.

Like many people, I care deeply about both child protection and privacy protection – and I believe a balance between them can be found.

Apple included significant privacy protections in its new technology. Its software doesn’t scan all of people’s photos, either in the cloud, as some companies do, or on Apple devices. The software is limited to “noticing” problematic photos – photos already identified as child sexual abuse material. Only if a certain number of them are detected are they flagged for human analysis. If someone opts out of using iCloud, nothing gets flagged. Apple’s software can’t “see” any new sexually explicit or nude images – only those already hashed and identified as known child sexual imagery.
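For readers who want to picture the mechanism, here is a minimal conceptual sketch of threshold-based matching against a list of known image hashes. It is not Apple’s implementation – the real system uses on-device perceptual hashing and cryptographic threshold techniques – and every name, number and data structure below is a hypothetical stand-in.

```python
import hashlib

# Conceptual sketch only: illustrates matching against already-known hashes with a
# reporting threshold. All names and values here are hypothetical placeholders.

KNOWN_CSAM_HASHES: set[str] = set()  # placeholder for hashes supplied by a child-safety organization
FLAG_THRESHOLD = 30                  # hypothetical number of matches required before human analysis

def hash_image(image_bytes: bytes) -> str:
    # Stand-in hash; a real system would use a perceptual hash, not a cryptographic one.
    return hashlib.sha256(image_bytes).hexdigest()

def needs_human_review(photos_uploaded_to_icloud: list[bytes]) -> bool:
    # Count only matches against the known-image list; new or unknown photos
    # never match, so they are never "seen" by anyone.
    matches = sum(1 for photo in photos_uploaded_to_icloud
                  if hash_image(photo) in KNOWN_CSAM_HASHES)
    return matches >= FLAG_THRESHOLD
```

The point of the sketch is the shape of the logic: nothing is surfaced for review unless the count of matches against already-known material crosses a threshold, and photos never uploaded to the cloud are never counted at all.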

The problem is, Apple’s technology does not move the needle in the urgent work of finding and rescuing child victims quickly. The images it detects are already sitting in the database of child sexual abuse material (CSAM) at the National Center for Missing and Exploited Children.

When privacy activists use the word “surveillance” in reference to this system, they go too far. Apple worked hard to make the technology accurate and to minimize the number of falsely flagged accounts. Users can opt out of uploading photos to iCloud, and those who believe their accounts have been flagged in error can appeal.

Where privacy activists get it right is in their criticism of a different iPhone and iPad feature. In a forthcoming update to family iCloud accounts, Apple will alert parents to any sharing or receiving of intimate images on their children’s devices. For the child, a detected photo “will be blurred and the child will be warned, presented with helpful resources, and reassured it is okay if they do not want to view this photo,” Apple says. Unfortunately, it appears the child won’t be notified that the parent knows about the blurred photo.

This doesn’t represent safety for all children. When a child is at risk of sexual exploitation or physical harm, monitoring can be justified. But monitoring itself can be harmful, not just to trust in parent-child relations but also to children themselves. An abusive adult may be the organizer of the account, and the consequences of parental notification could threaten, for example, LGBTQ+ youths on family accounts with unsympathetic parents. Privacy is safety for some children. 

Apple’s balancing act illustrates how hard it is to increase child safety without diminishing other rights. To strike the right balance, companies must work with experts in all relevant disciplines. It is crucial for Internet companies to figure out how to get law enforcement the information it needs to find and rescue young victims, while also committing to protect the privacy of people who could be victimized by law enforcement. This includes political dissidents as well as children.

While acknowledging the tightrope, we can work together to improve child safety. Earlier this year, the UN Committee on the Rights of the Child formally recognized children’s digital rights, Australia’s eSafety Commissioner released its Safety by Design tools, and the UK began enforcing its Age-Appropriate Design Code. Internet companies are making significant improvements for young users. Alex Hern’s column in The Guardian provides an excellent summary of the latest child safety improvements on YouTube, Instagram and TikTok. Let’s keep up the good work – while protecting privacy.

Anne Collier is the Founder and Executive Director of Net Safety Collaborative.