When AI opened a new frontier, the European Union responded by focusing on the technology’s potential dangers. It raced ahead with a new regulation aimed at heading off a potential machine apocalypse, killer robots running wild, surveillance tools destroying civil liberties, and automation eviscerating jobs.
The result was a broad, binding AI Act — and a backlash.
Unlike Europe’s earlier tech regulations on privacy and competition, which spread around the globe, the AI Act has won few followers; other countries consider the regulation premature. Meanwhile, a new European Commission took office committed, unlike its predecessor, to boosting competitiveness and simplifying regulation.
Is this the Washington effect? Even after the Senate’s 99-1 vote to strip a proposed decade-long moratorium on state AI enforcement from President Donald Trump’s One Big Beautiful Bill, the administration continues to nudge federal policy toward light-touch AI oversight. Trump’s January executive order, Removing Barriers to American Leadership in Artificial Intelligence, revoked the Biden-era safety mandate and instructed agencies to avoid rules that might impede innovation, just as Brussels debates whether to slow the rollout of its own strict regime.
If the US rushes ahead, will it leave the continent in the dust? European leaders fearing this outcome are calling for a pause and a rethink. The AI Act’s first bans have been in force since February, and more deadlines are looming: a voluntary code of practice for powerful AI models by August 2, followed a year later by binding rules for every high-risk system, from AI-driven hiring tools to border-control algorithms.
Several capitals warn that the timetable is no longer realistic. Core technical standards remain mired in draft form, while several member states have not even appointed the national watchdogs needed to police the new regime. The Commission now faces an uncomfortable choice: press on and court chaos, or pause and regroup.
Support for hitting the brakes is gathering pace across the bloc. Swedish Prime Minister Ulf Kristersson became the first EU leader to call for a formal timeout, branding the fledgling rulebook “confusing” in the absence of common standards. Czech deputy minister Jan Kavalírek argued that companies need breathing space to comply. Spain’s digital transformation minister, Óscar López Águeda, backed streamlining while rejecting a full rollback: “It’s not about stopping the clock, it’s about synchronizing our clocks,” López said.
European regulators are trying to show flexibility while insisting that they will uphold the AI legislation. “The August 2 deadline will stand and be enforced,” said Lucilla Sioli, head of the EU AI Office, at a recent conference on AI governance. At the same time, she added that officials are drafting a simplification package so the later deadlines “don’t bury companies” — particularly small and medium-sized firms — “in red tape.”
Junking the entire regulation remains a step too far. Margrethe Vestager, the former competition chief who steered the AI Act through three grueling years of negotiation, says reopening the text “way too soon” would drain public trust. German European parliamentarian Axel Voss makes the same argument: endless chatter about rewrites breeds uncertainty and chips away at Brussels’ reputation as a serious rule-setter.
Under the most probable scenario, the February bans would stay put, but the obligations for general-purpose AI models due in August and the full high-risk regime (due in August 2026) would slide by 12 to 24 months. High-risk systems decide who is offered a job or a loan, keep the lights on, or influence a doctor’s diagnosis: situations where a flawed model could upend a person’s livelihood or put lives in danger. The AI Act mandates rigorous testing, traceability, and human oversight for these systems before they reach the market.
A delay would allow the new European AI Office to hire inspectors, give standard-setters time to finish the detailed technical standards, and let capitals establish their own supervisory authorities. Politically, it would be credible only if it mandates visible progress on these fronts; otherwise, critics will see it as a concession to industry pressure rather than a genuine bid for better enforcement.
Europe bet that drafting the world’s first AI law would set global rules. The bloc must now decide whether to hold the line or call a timeout and give the continent room to focus on innovating with the new technology. Neither path offers certainty.
Anda Bologa is a Senior Researcher in Brussels with the Tech Policy Program at the Center for European Policy Analysis (CEPA).
Elly Rostoum contributed reporting from Washington.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
