It’s a telling contradiction. The US National Security Agency is reportedly using Anthropic’s Mythos model, while the Pentagon has designated the same company as a supply chain risk and banned federal agencies from using its products.
Governments are struggling to form a coherent approach to an emerging, powerful technology that they cannot do without, particularly as they see China challenging Western AI leadership. Existing frameworks, from financial regulation to cybersecurity law and AI legislation, don’t seem to fit, as some companies want to put restrictions on how their technology is used. Who should set the terms?
Also unanswered is how to meet the largest challenge with AI: how to deal with China as a competitor. Beijing is conducting "industrial scale" campaigns to steal frontier AI models from US companies, Michael Kratsios, director of the White House Office of Science and Technology Policy, charged in a recent memo.
Anthropic’s Mythos underlines this conundrum. The software can find and exploit hidden flaws in the systems that run the world’s banks, power grids, and other critical infrastructure. Anthropic has held back on releasing the new technology, preferring instead to work with select companies to first patch security vulnerabilities.
World leaders have rushed to figure out the scale of the security risks. The president of Germany's Federal Office for Information Security said the agency was in "active dialogue" with Anthropic, bracing for a "paradigm change in the nature of cyber threats." The Governor of the Bank of England pressed for access to ensure banking security, saying Mythos could "crack the whole cyber-risk world open." The European Commission opened talks with Anthropic to discuss whether Mythos qualifies as "high-risk" under the EU AI Act.
The Mythos crisis comes only weeks after Anthropic refused to give the US government access to its AI for mass surveillance and autonomous weapons. The Pentagon responded by designating the company as a supply chain risk.
The designation represented a reach. The “supply chain risk” label indicates vulnerability to compromise, coercion, or disruption. Historically, Washington has reserved the term for foreign adversaries.
In recent years, it targeted Chinese and Russian companies. The Federal Communications Commission banned Huawei and ZTE networks because Chinese law could compel them to assist in espionage. The Commerce Department banned Kaspersky Lab because the Russian cybersecurity firm was obligated to cooperate with the Russian secret service. In both cases, the logic was the same: a foreign government could weaponize the company against American interests.
That logic does not apply to Anthropic. In naming the AI company a national security risk, the government is complaining that it is not a dependable part of the defense supply chain. By this definition, reliability is not technical performance. It is a matter of compliance — the willingness to provide full functionality, without exception. That is a different kind of designation entirely, less a security judgment than a negotiating tactic.
Anthropic argues that today’s frontier AI models remain in their technological adolescence — still emerging, too unpredictable, too powerful, and too poorly understood to be trusted with autonomous lethal authority, or to conduct mass surveillance. Perhaps the company has concerns about liability for error or misuse.
Under the supply chain risk designation, federal agencies have been directed to cease using Anthropic's technology. Defense contractors — including Amazon, Microsoft, and Palantir — must certify that they do not use Anthropic's models in their work with the military.
Applying the supply chain risk label to an American AI firm shifts the designation from a tool for managing external vulnerability to one for enforcing alignment.
The question is not whether Anthropic is a supply chain risk. It is not. The question is what the US position is on the implications of deploying AI systems for mass surveillance, autonomous weapons, and cybersecurity.
What should the relationship between governments and AI companies look like? Should governments set the conditions of access as a matter of sovereignty? Or should they negotiate with companies that hold capabilities they cannot easily replicate or replace?
CEOs running frontier AI companies control something that democratic governments need but cannot simply mandate, nationalize, or credibly threaten to do without. That’s different from the relationship between governments and defense contractors, pharmaceutical firms, or telecoms — all industries where regulatory leverage is considerable, and alternatives exist. With frontier AI, the leverage is more unevenly distributed than governments appear comfortable admitting.
At the same time, AI companies are eager to be first in embedding their foundational models within prized classified governmental systems. Google recently joined OpenAI and xAI in allowing the Department of War to use their artificial intelligence tools in classified settings, essentially allowing “AI to be used in all lawful scenarios.”
In contrast, Anthropic is using its leverage to pause, question, and push back. Silicon Valley’s founding myth is “move fast and break things.” Anthropic seems to have read that line and asked: What if the things you break can’t be fixed? And what if they are too valuable to break?
Applying a supply chain risk designation to an American company for declining a commercial arrangement risks diluting a designation that serves genuine national security purposes. The NSA’s use of Mythos suggests that the capability is considered indispensable regardless of what the label says.
The overarching question of whether AI systems should be deployed for mass surveillance or autonomous lethal targeting remains unanswered. It requires debate. The choices are hard; the tradeoffs are real; the public may not like the answers.
What is striking is that the conversation is being forced, at least in part, by Anthropic. That may be a sign of how important the questions are. It is also a sign of how much has changed in the relationship between those who govern and those who build the technologies that governments depend on.
Keeping pace with China is an urgent strategic priority. But the pressure to compete cannot become a reason to abandon the values that make democratic power worth preserving in the first place. How democracies navigate this tension between the urgency of strategic competition and the integrity of their own principles will help define what kind of democracies they remain.
Elly Rostoum is a Senior Resident Fellow with the Center for European Policy Analysis (CEPA).
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.