Artificial intelligence (AI) has come a long way since 1968, when science fiction gave us “2001: A Space Odyssey” and its villain, the HAL 9000 computer, uttering “I’m sorry Dave, I’m afraid I can’t do that.” Today’s chatbot ChatGPT explains complex concepts and generates intriguing ideas. AI is transforming our societies and economies. It powers personalized, precision medicine; gene therapy; vaccine discovery; drug design; and cancer screening. It is revolutionizing crop management. 1 It reduces plastic waste. It is turning futuristic fusion energy into a reality.
The European Union has responded with a broad and sweeping legislative proposal to regulate AI. 2 While the EU’s proposed AI Act represents a legitimate attempt to ensure that technology serves society well, European legislators are overlooking its potential security and strategic consequences. 3
Nations that set the rules of the road for our global digital life will be the leaders in mastering AI. The Chinese Communist Party has made its ambitions clear: It wants to dominate emerging technologies, increasing the world’s dependence on China while reducing its own dependence on the outside world. Beijing is devoting enormous state resources to accomplish this objective. 4 Democracies find themselves in a competition with China over AI leadership, which will be one of the defining features of our global politics.
What would a world dominated by Chinese technology look like? Our Special Competitive Studies Project report, “Mid-Decade Challenges to National Competitiveness,” offers a startling snapshot. 5 China would control the global digital infrastructure, enjoy the dominant position in technology platforms, and harness biotechnology and new energy sources. AI is a key battleground that the transatlantic alliance cannot cede.
If leadership in AI and other technologies ultimately shapes the international order, will the future be one of shared beliefs and democratic values — especially with regard to individual privacy and free speech? Or will we face a future of state surveillance and control? We need to ensure democracies stay ahead to meet these challenges. The United States and the EU must come together rather than drift apart on AI.
The EU’s Proposed AI Regulation Has Negative National Security Implications for Democracies’ Position in the Technology Competition
Europe risks putting regulation and innovation on a collision course. Its proposed AI Act might be the next regulatory miss. “The road to regulation hell is paved with the EU’s good intentions,” says Oren Etzioni, the founding CEO of the Allen Institute for AI. 6
The proposed AI Act is based on several assumptions: that it will spur the “right kind” of innovation because of legal certainty and increasing public trust in AI, that companies will be able to implement it, and that the European Commission will be able to enforce it. All three assumptions are misguided.
The EU has yet to demonstrate that its regulatory approach, one seeking to be all-encompassing rather than adaptable, generates innovation. The EU’s regulatory history points in the opposite direction. Consider the landmark General Data Protection Regulation (GDPR). Held up as a gold standard by many European legislators, the GDPR regulates first and works out the details later. 7 The data shows that its large, complex compliance requirements have hurt European innovation. 8 Small and medium-sized enterprises (SMEs) are hit hardest.
Compliance with the proposed AI Act will be difficult. Many requirements, especially those concerning the explainability of AI systems, risk being impossible to achieve. 9 ChatGPT offers a timely example: it is unclear how such large language models fit into the EU’s AI Act risk framework, or how the explainability requirements can apply to neural networks. 10 By the time the proposed AI Act is finalized and enforced, other new AI applications will likely run into the same issue.
Purist thresholds entrenched in law for emerging technologies are detrimental. They thwart the use of exciting technologies on technicalities. We need to ask ourselves: Do we want to accelerate the use of cutting-edge AI applications or wait for greater AI explainability?
European legislators will struggle to make a broad piece of legislation such as the proposed AI Act “future-proof” or adapted to the fast-paced world of AI innovation. 11 The already wide gap between theory and practice will only increase over time and as technology evolves. Consider cookies. Europe’s ePrivacy Directive requires that almost every time a European opens a web page, or anyone opens a web page hosted in the EU, they are confronted with a request to accept or decline cookies. 12 This well-intentioned tool has turned into a time-consuming annoyance. Most users end up clicking “accept all” and sharing their data because it is faster to do so than to consider all their options. 13
Supporters of the proposed AI Act risk downplaying its impact on innovation and, by extension, on Europe’s ability to host the next technology breakthroughs. This has larger security implications, particularly since autocratic nations will face few similar restraints.
ChatGPT offers another striking example of the way values are inscribed in technology by way of innovation rather than regulation. This revolutionary tool has already raised concerns around bias in its answers. 16 The EU will undoubtedly have a harder time regulating and aligning on values-related matters with a Chinese Ernie Bot than a US ChatGPT.
This is not to discourage AI regulation. The United States offers no positive example of harmonized guardrails for AI; it sits at the dangerous laissez-faire end of the spectrum. National security is tied to achieving and enforcing proper governance and cannot be overlooked.
But the biggest security threat to democracies ultimately does not come from autocratic states generating their own AI regulations. Yes, China has been drafting laws to define how AI can be used in its own society. 17 But there will not be a “Beijing Effect” on AI governance in the same way the EU is counting on a “Brussels Effect” out of the proposed AI Act. 18
The fundamental threat to democracies is to fall behind in AI innovation. If we lag, we will not get to dictate how data and algorithms are developed and used. The question before us is: How do democratic societies stay ahead and use AI technologies for the betterment of our societies while staying true to our ideals?
Policy Recommendations: The Democratic Path Toward Tech Leadership
Promote Pro-innovation, Responsible Governance
Technology has become the organizing principle of the contest for the future of the global order. How we govern AI, and how we leverage it to strengthen our economies and defenses, are important elements of the competition between democracies and autocracies.
We have reached a point across the democratic world where we agree upon the basic principles of what AI should be allowed to do — and not allowed to do. The Organisation for Economic Cooperation and Development (OECD) has developed global, high-level principles; the White House has also released a Blueprint for an AI Bill of Rights that captures the spirit of the EU’s proposed AI Act. 19 Both sides of the Atlantic recognize the legitimate concerns about privacy, bias, trust, and reliability that flow from poorly designed AI applications. 20 AI is a dual-use technology: without a proper set of guardrails, even powerful, well-designed AI systems can harm our societies.
When it comes to implementation and legislation, perfect democratic alignment is unrealistic. Its absence should not be the basis for division. The United States and its allies were rarely in perfect alignment during the Cold War or post-Cold War era on complicated issues involving tech, trade, and governance. But this did not forestall strategic alignment or deep economic ties.
Both sides must recognize not only the need for AI regulation, but also the dire need for democratic AI innovation. There is a balance to strike between the two. We must build and use AI systems safely, responsibly, and ethically. At the same time, with ever-shorter intervals between discoveries, we are witnessing AI breakthroughs in areas critical to our health and well-being. AI’s shortcomings should not prevent us from pursuing the opportunities and the progress AI holds. Our approach to AI governance should focus on competitiveness, harnessing the new geometry of innovation, and it should put at its core the strategic stakes of the global tech competition.
The private sector represents an invaluable partner. When Russia invaded Ukraine in February 2022, the tech sector stepped up — providing cyber defenses to safeguard Ukraine’s infrastructure, mobilizing cloud services to store Ukrainians’ data, and keeping Ukrainians connected to the web. 21 As we develop our governance model, we cannot overlook the role of our private sector in supporting democracy.
Do Not Let the Trade Tree Distract Us from the National Security Forest
While a timely and necessary transatlantic forum, the US-EU Trade and Technology Council (TTC) has yielded modest deliverables on AI. 22 Last December’s TTC meeting produced a new joint AI road map, which helpfully begins to chart a course on joint standards. 23 However, progress at the meeting was overshadowed by conversations surrounding the US Inflation Reduction Act, which subsidizes domestic production of electric vehicles at the expense of European imports. 24 The work of the transatlantic alliance on tech and joint strategic objectives has been hampered by economic competition and tensions.
Democracies must marshal the resources and diplomatic will to collectively build the digital apps, software, and platforms that support everyday governance, commerce, and life. Concretely, this requires government-supported investment in global digital ecosystem projects. It means aligning on maintaining standards based on technical, not political, criteria. Human rights-abusing regimes should not get to benefit from technologies designed and built in our free societies.
Diplomatically, democracies need to build a new “DemTech alliance” to address the opportunities and risks we face in this new competition. We need a novel alliance framework to outpace our dated legacy institutions. 25 We should invest in our collective comparative advantages for technologies such as AI but also 5G and chips and build our strategic partnerships to keep control of the digital infrastructure of the future.
After Russia’s invasion of Ukraine, it is clearer than ever that the AI partnership must include a strong security dimension. We need European AI companies supporting European security initiatives, working with US companies undertaking similar work.
The rising power and ambition of authoritarian regimes to harness AI and other technologies of the future present a common threat. If we fall behind autocratic states in AI development, it will be bad for our collective security, companies, and economies.
The strategic landscape has changed, and so should the balance between partnerships and competition.
The transatlantic alliance should spend more time cultivating our rich ecosystems of universities, companies, and innovators, rather than belaboring the risks of innovation. We need to shift our mindsets toward an optimistic view of AI and look forward to harnessing its benefits, rather than automatically hitting the regulation button. How can we unlock data for the good of society while upholding our values? How can we encourage AI researchers to solve our big societal problems? How can we get more young people excited about becoming AI engineers?
The best way for us to engrave our democratic values in technology, and enforce regulatory frameworks that support them over time, is to be innovation leaders. It is time for Europe, hand in hand with the United States and other democratic allies, to lead the way again in these groundbreaking technologies.
This report was published in partnership with the Special Competitive Studies Project.
All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
- Mark Minevich, “How to Fight Climate Change Using AI,” Forbes, July 8, 2022, https://www.forbes.com/sites/charlestowersclark/2023/04/18/should-only-the-rich-be-allowed-purpose/?.
- The proposed AI Act and recent draft AI Liability Directive directly touch on AI, but other texts such as the Digital Services Act, Digital Markets Act, and Data Governance Act also have implications for AI technologies.
- European Commission, “Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts,” COM(2021) 206 final, accessed April 19, 2023, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex:52021PC0206.
- Beijing has been increasingly explicit about its intentions through initiatives such as the National Medium- and Long-Term Program for Science and Technology Development (2006-2020), Made in China 2025, the New Generation Artificial Intelligence Development Plan, and China Standards 2035.
- Special Competitive Studies Project, “Mid-Decade Challenges to National Competitiveness,” Virginia: SCSP, 2022, https://www.scsp.ai/wp-content/uploads/2022/09/SCSP-Mid-Decade-Challenges-to-National-Competitiveness.pdf.
- Kyle Wiggers, “The EU’s AI Act Could Have a Chilling Effect on Open Source Efforts, Experts Warn,” TechCrunch, Yahoo, September 6, 2022, https://techcrunch.com/2022/09/06/the-eus-ai-act-could-have-a-chilling-effect-on-open-source-efforts-experts-warn/.
- Margaret Taylor, “Data Protection: Threat to GDPR’s Status as ‘Gold Standard,’” IBANet, International Bar Association, August 25, 2020, https://www.ibanet.org/article/A2AA6532-B5C0-4CCE-86F7-1EAA679ED532.
- Chinchih Chen, Carl Benedikt Frey, and Giorgio Presidente, “Privacy Regulation and Firm Performance: Estimating the GDPR Effect Globally,” Oxford Martin Working Paper Series on Technological and Economic Change, No. 2022-1 (2022), https://www.oxfordmartin.ox.ac.uk/downloads/Privacy-Regulation-and-Firm-Performance-Giorgio-WP-Upload-2022-1.pdf.
- Ashish Kumar Sen, “Eric Schmidt on Confronting China: Stop Regulating and Invent,” CEPA, Center for European Policy Analysis, September 30, 2021, https://cepa.org/article/eric-schmidt-on-confronting-china-stop-regulating-and-invent/.
- Hadrien Pouget, “The EU’s AI Act is Barreling Toward AI Standards That Do Not Exist,” Lawfare blog, Lawfare, January 12, 2023, https://www.lawfareblog.com/eus-ai-act-barreling-toward-ai-standards-do-not-exist.
- Iskander Sanchez-Rola et al., “Can I Opt Out Yet?: GDPR and the Global Illusion of Cookie Control,” Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security, Asia CCS ’19 (New York: Association for Computing Machinery, 2019), 340-351, https://doi.org/10.1145/3321705.3329806.
- European Data Protection Supervisor, “ePrivacy Directive,” accessed April 19, 2023, https://edps.europa.eu/data-protection/our-work/subjects/eprivacy-directive_en.
- “Safer Internet Day: Are You Restricting Cookies,” Eurostat, European Union, February 8, 2022, https://ec.europa.eu/eurostat/web/products-eurostat-news/-/edn-20220208-1.
- Jamil Anderlini and Clothilde Goujard, “Brussels Moves to Ban Eurocrats from Using TikTok,” Politico, POLITICO, February 23, 2023, https://www.politico.eu/article/european-commission-to-staff-dont-use-tiktok/.
- Davey Alba, “OpenAI’s Chatbot Spits Out Biased Musings, Despite Guardrails,” Bloomberg, December 8, 2022, https://www.bloomberg.com/news/newsletters/2022-12-08/chatgpt-open-ai-s-chatbot-is-spitting-out-biased-sexist-results.
- Ylli Bajraktari, “2-2-2: Where Are We on AI Regulations?,” Special Competitive Studies Project, SCSP, February 16, 2022, https://scsp222.substack.com/p/scsp222?s=r.
- Anu Bradford, The Brussels Effect: How the European Union Rules the World (New York: Oxford University Press, 2020).
- OECD, “OECD Principles on Artificial Intelligence,” accessed April 19, 2023, https://oecd.ai/en/ai-principles; White House Office of Science and Technology Policy, “Artificial Intelligence Bill of Rights,” accessed April 19, 2023, https://www.whitehouse.gov/ostp/ai-bill-of-rights/.
- Cameron F. Kerry and John Villasenor, “Protecting Privacy in an AI-Driven World,” Brookings Institution, December 10, 2019, https://www.brookings.edu/research/protecting-privacy-in-an-ai-driven-world/.
- Special Competitive Studies Project, “The First Networked War: Eric Schmidt’s Defense Innovation Board and the Future of U.S. Military Technology,” Substack, May 22, 2021, https://scsp222.substack.com/p/the-first-networked-war-eric-schmidts.
- US Department of State, “U.S.-EU Trade and Technology Council (TTC),” accessed April 19, 2023, https://www.state.gov/u-s-eu-trade-and-technology-council-ttc/.
- European Commission, “TTC Joint Roadmap: Trustworthy AI and Risk Management,” accessed April 19, 2023, https://digital-strategy.ec.europa.eu/en/library/ttc-joint-roadmap-trustworthy-ai-and-risk-management.
- European Parliament Research Service, “The EU’s New Digital Compass and the Way Forward,” European Parliamentary Research Service Blog, February 2023, accessed April 19, 2023, https://www.europarl.europa.eu/thinktank/en/document/EPRS_BRI(2023)739336.
- Special Competitive Studies Project, “Mid-Decade Challenges to National Competitiveness,” Virginia: SCSP, 2022, page 103, https://www.scsp.ai/wp-content/uploads/2022/09/SCSP-Mid-Decade-Challenges-to-National-Competitiveness.pdf.