In the wake of Russia’s invasion, Europe’s plans to regulate technology look like relics of a bygone era. There is an urgent need to rethink the assumptions and the goals of the Digital Markets Act (DMA), the Digital Services Act (DSA), and the AI Act (AIA) – indeed, of Europe’s entire digital sovereignty agenda.

The problem? These European plans and proposed tech regulations sideline security.  

Although serious security concerns have been raised before, European policymakers found them easy to dismiss as exaggerations or as mere excuses for ‘Big Tech.’ The spirit of ‘something must be done’ steamrolled security. That cost may have appeared justifiable in a world where a large-scale war in Europe was unthinkable. But that is no longer the world we live in.

Security protections in the DMA and the DSA amount to little more than handwaving and hoping for the best. The new regulations impose legal duties and simply expect that tech companies will ‘nerd harder’ and make the risks go away. If something goes wrong, the tech companies can always be blamed.

The inconvenient truth is that the level of protection offered by the likes of Google, Apple, or Facebook is hard to match. Public officials, members of the military, and influential journalists use publicly available services such as social media, mobile messaging, and e-mail. A compromised account of such a user would be a great prize for a hostile power. This is not speculation, as the attempted attacks on the Facebook and e-mail accounts of Polish officials show. It is no coincidence that, when faced with a cyber onslaught, the Ukrainian Embassy in London migrated to Gmail accounts.

What would happen, then, if large social media platforms or messaging services were legally forced to ‘interoperate’ with competing services, including potentially Russian rivals such as Yandex and VKontakte? Under Europe’s new DMA, platforms would be compelled to provide them with query, click, view, and ranking data of Ukrainian users.

It will be trivial for Russian agents to establish more or less genuine ‘rival’ services, even using EU-based servers or EU-registered businesses. They will be able to lure users into ‘consenting to interoperability,’ relying on a broad arsenal of well-established methods, from dark patterns to phishing. Once they receive consent, the attackers will have access both to the information already in the account and to the ability to use that account to attack others or to spread misinformation.

While interoperability can be done safely, doing so requires the means to exclude unreliable actors. In other words, safety is costly. Hence, the DMA faces a trade-off between reducing the cost of market access for everyone and preserving user security. None of the publicly available DMA amendments comes close to getting the balance right.

Another risky idea in the DMA is the limitation on combining personal data sourced from different services. Effective cybersecurity requires combining information from numerous sources. For example, third-party security services may be needed to detect threats in incoming e-mails. Looking at an e-mail in isolation provides some clues, but attackers can craft bait messages that will not be spotted unless data from other sources can be brought to bear.

Like the DMA, the DSA treats security as an afterthought. Large platforms will be required to allow researchers access to their data. Even assuming that no bad actors will be granted such access, it is unrealistic to expect academic or other researchers to uphold the same level of data security as that enforced internally by the largest online platforms. Platform transparency may be ‘nice to have,’ but it is unclear whether the benefits of this solution will outweigh the risks.  

‘Digital sovereignty’ may seem like a security-enhancing proposal. Keeping EU data within the EU, rather than storing it with non-EU services, may appear to strengthen the EU’s resilience. But the Internet builds resilience through decentralization: a network continues to operate even if some part of it succumbs to an attack.

Forced data localization negates this key security feature. Data localization is tantamount to putting all of one’s (data) eggs in one basket. While it may comfort French users to know that they are protected from US intelligence accessing their data, a comprehensive attack on French networks could compromise all of their data.

Taken to its logical extreme, data localization means that virtually no digital services could be provided to the EU from the US or other rights-respecting democracies. That includes state-of-the-art cybersecurity services, of which the US is a major supplier. Denying access to those services is irresponsible.

The largest American tech companies have so far shown much more security resilience than even some providers of critical infrastructure (witness the Colonial Pipeline attack). Due to their size and market penetration, they are convenient partners for national security authorities. If EU policies target ‘bigness’, then we need to have an open conversation about the security costs. A competitive but fragmented market is not necessarily a more resilient one.  

We should reconsider whether EU policies aimed at promoting the competitiveness of European businesses are proportionate given what we now know about the level of risk to our security. The new assessment must recognize the overwhelming security benefits of exchanging technology and information with our democratic allies. That exchange is now threatened not only by pure economic protectionism but also by imminent European tech regulations. Russia’s invasion of Ukraine shows that we cannot be nonchalant about these risks.

Dr. Mikołaj Barczentewicz, Fellow at the Stanford Law School, Research Associate at the University of Oxford, Senior Scholar at the International Center for Law and Economics, Research Director of the Surrey Law and Technology Hub.