The authors of Europe’s AI Act billed their creation as the next example of the much-vaunted Brussels Effect, in which the bloc’s tough regulatory approach spreads around the globe. Countries copied Brussels’ GDPR privacy rules. They duplicated its Digital Markets Act antitrust rules to rein in large tech companies. European regulators now aim to require AI developers to provide algorithmic transparency, submit to risk-based scrutiny, and meet tight compliance rules.
Yet, so far, Europe’s ambitious AI rules are inspiring few others. Peru stands out, boasting two AI laws that echo Brussels’ blueprint and seventeen pending bills, but analysts warn the frenzy lacks the oversight muscle to make its promises stick. In my study, I found that only Canada and Brazil are drafting similar frameworks, and both initiatives remain mired in legislative limbo. The UK, Australia, New Zealand, Switzerland, Singapore, and Japan are taking a pro-innovation, less restrictive track on AI regulation. The Trump White House has scrapped the previous Biden administration’s AI safety initiative, instructing agencies to “remove barriers” and fast-track high-impact applications, darkening prospects for transatlantic cooperation on AI.
Europe faces a blunt question: can the AI Act flex its muscles if no other major power adopts similar rules?
Scholars offer several explanations for the AI Act’s meager global impact. The legislation is complex and confusing, argues Ugo Pagallo of the University of Turin. It welds together product-safety audits, fundamental-rights tests, and voluntary codes in a single statute, yielding what Pagallo calls a “patchwork effect.” Even EU lawyers struggle to parse the rules, let alone regulators in other jurisdictions looking for a plug-and-play text.
According to Marco Almada and Anca Radu of the University of Luxembourg, Brussels also sprinted ahead of multilateral tracks such as the G-7 Hiroshima process, the OECD’s AI-risk matrix, and the Council of Europe treaty. And while almost all governments agree on the need to protect privacy and ensure competition, many see more benefits than risks in AI.
The European regulation also front-loads hefty conformity-assessment costs. Start-ups must pay auditors and generate reams of technical documentation before shipping a single line of code. The UK and Japan, by contrast, let firms release products and monitor them iteratively. Complexity, poor timing, and an up-front price tag tilt non-European policymakers toward a lighter, sector-specific path: borrowing the EU’s rhetoric on “trustworthy AI” while shelving, for now, its rulebook.
Of course, the AI Act could still gain momentum. It only came into force in August 2024, and its details are still being hashed out. If scandals around AI technology stoke public fears, governments might look to the European model. A single headline-grabbing failure, such as an autonomous-vehicle death, could hand politicians an off-the-shelf template, recasting Brussels from outlier to first mover. Likewise, if a few tech giants decide it is cheaper to build once to EU specifications, downstream suppliers will inherit those requirements by default, potentially recreating the dynamic of Europe’s GDPR privacy rules or its DMA antitrust initiative.
For now, Canada and Brazil stand out, inching closer to the EU’s risk-based model than any other major economy. Since 2022, Canadian lawmakers have worked on an Artificial Intelligence and Data Act that mirrors Europe’s push. Yet, after years of parliamentary wrangling, the bill’s final shape remains uncertain.
Brazil is also flirting with the EU’s model. In 2023, it proposed a bill classifying AI systems as excessive, high, or lower risk. After two years of debate and Senate approval in late 2024, the bill now heads to the lower house, but industry pushback has already watered down some high-risk provisions. Its final form remains in flux, fueling debate over whether it will genuinely enforce oversight or prove toothless in a fast-expanding AI landscape.
Although South Korea’s AI Basic Act, passed in December 2024, borrows the EU’s talk of “risk” and “transparency,” it skips the EU’s heavy pre-launch inspection requirements. Developers of “high-impact” systems simply file a risk review with the Ministry of Science and ICT and may, if they wish, seek outside certification; no third-party audit or thick technical dossier is mandatory. Most policing happens after a system is on the market: officials can demand fixes or levy fines that top out at ₩30 million (about €21,000). That contrasts with the EU Act’s outright bans on social scoring and other techniques, and there are no EU-style “megafines” of up to 7% of worldwide turnover. Instead, South Korea’s law is paired with tax breaks and research funds designed to grow national AI champions: a light, growth-minded playbook.
Elsewhere, the AI Act finds few takers. The UK is steering clear of any single legislative overhaul, preferring sector-specific guidance. Australia and New Zealand rely on existing consumer and data laws, exploring only minor tweaks. Japan, often seen as a policy ally, favors voluntary frameworks over binding obligations. Singapore champions flexible governance models.
Switzerland has rejected the AI Act’s heavy-handed approach in favor of targeted fixes. Rather than replicating the AI Act, it will implement the Council of Europe’s AI Convention through fine-tuned updates to existing laws, aiming to safeguard fundamental rights, bolster trust in AI, and keep Switzerland an innovation hub. Swiss authorities emphasize compatibility with the EU to avoid penalizing domestic firms but remain wary of layering on EU-style requirements. Fellow EFTA member Norway is taking the opposite tack, incorporating the EU AI Act into its domestic legislation.
The lack of global buy-in raises sharp questions. If Europe alone imposes strict rules, it risks deterring investment and overwhelming smaller firms that lack compliance resources. Capital and talent may then shift to more permissive markets. With other major economies unconvinced, the AI Act could remain a distinctly European measure—ambitious on paper but limited in global influence.
Anda Bologa is a Senior Researcher with the Tech Policy Program at the Center for European Policy Analysis (CEPA).
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
