The war in the Middle East represents a new model of conflict: the priority objective is no longer to destroy military units or conquer territory, but to identify, locate, and eliminate specific people within the chain of command, the scientific apparatus, and political leadership. Although “decapitation” has been in military and intelligence doctrine for decades, what’s different is the possibility of industrializing it through data, sensors, ubiquitous surveillance, and algorithmic systems capable of cross-referencing information at speed and scale.
The Middle East has been the most visible laboratory. Israel uses AI to process intelligence, intercept communications, and conduct surveillance to generate targets “faster.” This is not a minor detail: when Microsoft, Google, Amazon, or OpenAI technology enters the operational cycle of a war, the distance between Silicon Valley and the battlefield ceases to be conceptual and becomes contractual.
Battles over contracts are erupting, most notably the recent standoff between Anthropic and the Pentagon. Anthropic signed a $200 million contract with the Department of Defense last summer, making Claude the only AI model available on classified US military networks.
When Anthropic executives asked that Claude not be used for fully autonomous weapons systems or for mass surveillance of US citizens, the Pentagon refused. It designated the company a “supply chain risk,” a label normally reserved for suppliers linked to adversary states. In plain terms: expulsion from the defense ecosystem, and indirect pressure on any company working with the Pentagon to stop using Claude.
The question is not whether countries should use AI as part of their legitimate right to self-defense. The question is who decides the limits of that use. If the answer is the military apparatus itself, without effective external oversight, then the problem is not about technology. It’s about the rule of law.
The political message is clear: “responsible” AI will be used to wage war, and the Pentagon will decide what is lawful. This is not about some futuristic scenario. A system capable of processing millions of communications, cross-referencing financial, biometric, and geolocation databases, and extracting behavioral patterns in real time not only identifies suspects — it can anticipate behaviors, map entire social networks, and tag emotional states or political trends.
The key piece is individual identification. The expansion of facial recognition and biometric systems in Gaza fits this logic: it is no longer just a matter of policing a territory, but of turning every body that crosses it into an exploitable piece of data. When a war learns to see faces, and to correlate them with call records, movement histories, kinship ties, and behavioral patterns, the enemy ceases to be a force and becomes a concrete identity. The battlefield is no longer just on the map; it is in the database.
Lebanon showed another facet of the same transformation: the ability to penetrate supply chains, communications networks, and the adversary’s circuits of trust until they become weapons. Israel’s detonation of Hezbollah’s pagers demonstrates that contemporary technological superiority consists of contaminating the enemy’s material ecosystem. You don’t have to imagine a dystopia of hacked cameras around every corner to understand the logic: a sophisticated combination of intelligence, infiltration, surveillance, and precise strike capability is enough to turn the adversary’s routines into lethal vulnerabilities.
Iran illustrates the next step: the extension of this decapitation logic to a state’s top echelons. Israeli strikes in June 2025 decimated the leadership of the Revolutionary Guards and hit scientists and critical infrastructure. The systematic elimination of leaders can deliver immediate tactical successes, but it rarely resolves the underlying conflict and often ends up reinforcing dynamics of radicalization, martyrdom, or succession by even more extreme figures. That is the central paradox of this precision war: technically, it impresses; strategically, it does not always work.
The most important consequence of AI-powered war is the compression of time. Detecting, validating, and attacking a target used to require relatively slow processes; AI reduces that cycle to milliseconds. The problem is no longer just autonomous weapons, but the hasty deployment of AI systems to support target selection and attack. Even if human oversight is still required, the environment in which that decision is made is increasingly pre-configured by machines.
This has obvious political effects. Once leaders understand that they can become personal targets at any time, the temptation will be to shield themselves, hide, disconnect, and delegate to increasingly small and opaque circles. You don’t necessarily get more deterrence: you often get more paranoia. And paranoia, in politics and in war, rarely leads to moderation.
Another, perhaps even more important, consequence is proliferation. Today, the US and Israel run these campaigns. They enjoy air superiority and privileged access to satellites, cloud infrastructure, and advanced analytics. But the technology that makes them possible is, to a large extent, commercial, modular, and increasingly accessible, so the simple dynamics of technological diffusion mean that the advantage does not remain in the hands of the few for long. More and more states, and eventually non-state actors such as terrorist groups, will attempt to turn people into coordinates and coordinates into targets.
AI-powered weapons are capable of eliminating people, decapitating regimes, and reducing assassination to a routine targeting decision. When that happens, war will cease to be a dispute over territory and will become a systematic hunt for identities. That is the real historical leap in front of us, and it is not an appetizing one.
Enrique Dans is a Senior Fellow with the Tech Policy Program at the Center for European Policy Analysis (CEPA). He is one of the most prominent Spanish academics in the fields of technology adoption, entrepreneurship, and innovation.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.