We need to rethink our approach to digital privacy. It is no longer only about preventing government surveillance or corporate abuse. AI opens new opportunities for leveraging data, and these opportunities can be realized only if we loosen the present rules.
The first privacy laws of the digital age emerged in the 1970s as a reaction to large-scale computing and state data collection. They targeted the danger of governments combining data from different agencies and databases to profile citizens. What we today celebrate as “one-stop shop e-government” — now considered the gold standard for public service delivery — was then viewed as a dangerous concentration of state power. The concern was not hypothetical fearmongering. It reflected genuine worry about what an authoritarian regime or rogue civil servant could do with comprehensive citizen data.
My native Sweden became the first nation to adopt a data protection law on May 11, 1973, regulating the state’s right to collect data about its citizens. As similar laws proliferated across Europe, the need for international coordination became clear. Data did not respect national borders then any more than it does now. The ability to trade, work, and innovate became dependent on exchanging data across borders. This reality prompted the Paris-based Organisation for Economic Co-operation and Development (OECD) to intervene in 1980 with a set of privacy guidelines designed to create global standards. The OECD principles, heavily influenced by early privacy law, focused on the citizen-state relationship.
This foundational perspective shaped two core principles that continue to dominate data protection. Data minimization holds that as little data as possible should be collected. The purpose specification principle requires that data only be collected for a specific purpose, defined before gathering the data. Both principles rest on an implicit assumption that data collection represents a potential harm to be constrained.
Legislative uncertainty persisted despite the OECD’s efforts. During the 1990s, the European Union entered the privacy debate with an ambitious goal to simultaneously create an internal market for data and provide privacy protections for European citizens. The 1995 Data Protection Directive set the EU on a path toward a single data protection regime.
It marked a conceptual shift. By 1995, the commercial Internet, large corporate databases, and powerful business interests had emerged. Regulators veered away from focusing on the citizen-state relationship and zeroed in on the consumer-company relationship, targeting credit reporting agencies and commercial databases. As the Internet exploded, online advertising became the dominant concern. This consumer-advertiser relationship remained the primary focus throughout the decade-long process that culminated in the General Data Protection Regulation (GDPR), adopted on April 14, 2016.
Today, we stand on the cusp of another fundamental change: artificial intelligence and machine learning are redefining our understanding and use of data. In their excellent book Prediction Machines, Ajay Agrawal, Joshua Gans, and Avi Goldfarb argue that prediction is the key to privacy in the AI era: AI makes predictions both better and cheaper, potentially threatening individual autonomy by making us easier to predict and manipulate.
AI does not merely make us predictable. It accelerates learning, deepens our understanding of the world, and synthesizes complex information into actionable insights. It improves decision-making. Deployed correctly, AI can generate deeper insights and more knowledge in the next decade than humanity has accumulated in the previous century. This potential spans every field, from fundamental science to personal health care.
Privacy law must no longer balance only individual autonomy against societal benefit; it must also balance knowledge against ignorance. If we agree to use deeply personal data, AI could unlock enormous benefits. Genetic information is one obvious example. How do we weigh the value of AI-generated knowledge that improves our health against individual fears of insurers targeting us or employers discriminating against us? Should we have unrestricted freedom to use our own data, or should society limit us?
Privacy regulation in the AI age will need to calibrate its rules around individual autonomy. Knowledge increases our autonomy in meaningful ways. Data that produces knowledge creates a demonstrable good. By contrast, data collected to predict and anticipate our behavior in order to manipulate or distort our choices is harmful. This is true whether the manipulation comes from the state or from private companies.
This analysis brings us back to an old debate about whether data protection legislation should regulate all uses of data, or whether it should instead focus narrowly on preventing clear and demonstrable harms. Europe’s 1995 directive, and all subsequent European regulation, including the GDPR, put in scope all collection, all processing, and all uses of data, regardless of purpose or effect.
European data protection law continues to treat data collection itself as inherently suspect. Future regulation must challenge this assumption. A new, AI-adapted privacy framework should focus on preventing harms to individual autonomy. Europe’s principles of data minimization and purpose specification fail to acknowledge that data is where we discover new, empowering knowledge. We need the ability to collect data for exploration and discovery.
The task ahead is to craft a regulatory framework that distinguishes beneficial knowledge creation from harmful manipulation. If we are to harness AI’s transformative potential while protecting human autonomy, such a rethinking is not optional. It is essential.
Nicklas Lundblad is a non-resident Senior Fellow at the Center for European Policy Analysis. He has spent more than 20 years analyzing, shaping, and debating technology policy, most recently leading Google’s AI subsidiary DeepMind’s work on public policy. His writings can be found at unpredictablepatterns.substack.com.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and do not necessarily reflect the positions of the institutions with which the author is affiliated, or of the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
