This article is part of CEPA’s “Age of Autonomy” series, which looks at the growing use and implementation of autonomous technologies on the battlefield and their implications for transatlantic defense and security.

“Autonomy” and “autonomous” are now buzzwords being used — and abused — to describe the growing utilization of artificial intelligence (AI)-enabled functions, processes, and systems in the military (as well as civilian) sphere. The Russian invasion of Ukraine and the high-intensity conflict that ensued have been a test bed for a vast array of novel technologies as well as the innovative use of old ones, ranging from AI-powered voice recognition and situational awareness software to ubiquitous commercial drones, commercial satellite communications and imagery, and the large-scale use of 3D-printed components, to name a few.

The images of highly maneuverable, explosive-laden first-person view (FPV) drones or loitering munitions using computer vision to lock onto a target and destroy it have made headlines and fueled a wave of anxiety about the dawn of an autonomy revolution in warfare, with killer robots dominating the battlefield and upending the international order.

While nobody can realistically deny the structural impact of AI on warfare and its future evolution, the current debate over autonomy has generated an often distorted and inaccurate perception of its actual military implications, which in turn has produced an irreconcilable binary approach typically characterized by either outright rejection of, or uncritical enthusiasm for, autonomous weapons systems (AWS). As a result, any policy decisions and mechanisms regarding the use and regulation of autonomy for military purposes risk being ineffective or even detrimental both to exploiting the technology and to preventing its misuse.

A key reason behind this inconsistent approach is the lack of understanding of what autonomy really means, exemplified by the frequent confusion between autonomous and automatic weapon systems. The latter operate with varying degrees of automation to perform specific and sequential functions against selected categories of targets and cannot in any way deviate from their intended purpose. As such, their behavior is deterministic or, in simpler terms, predictable. Different types of mines and artillery shells with airburst proximity fuses, for example, work without human control once they are activated.

More broadly, traditional homing munitions fall into this category and have been in service for several years. Anti-radiation missiles like the AGM-88 utilized by Ukraine, for instance, are designed to automatically detect and home in on an enemy radio emission source, typically a radar station or air defense system, by way of their passive radar seeker. What matters here is that these weapons are designed to strike very specific and predefined sets of targets and cannot deviate from their original sequence of commands.

Conversely, AWS are characterized by the ability to sense and adapt to the operational environment. This means they can autonomously adjust their course of action based on the behaviors resulting from the interaction of their computer programming with the external context, generating a range of different outputs for a given input. This ability derives from on-board AI-enabled computing and sensors and can reach different levels of proficiency depending on the algorithm’s sophistication, the system’s design trade-off, the task’s complexity, and the context.
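To make the distinction more concrete, the minimal Python sketch below contrasts the two behaviors in purely illustrative terms. All function names, thresholds, and logic are hypothetical and drawn from no real system: the first function always maps the same input to the same output, while the second can produce different actions for the same nominal mission depending on what it senses.

```python
# Purely conceptual sketch: hypothetical logic, not any actual weapon's software.

def automatic_proximity_fuse(distance_to_object_m: float) -> str:
    """Deterministic: a fixed, pre-set rule; identical input, identical output."""
    TRIGGER_RANGE_M = 5.0
    return "detonate" if distance_to_object_m <= TRIGGER_RANGE_M else "hold"

def autonomous_course_adjustment(sensed_environment: dict) -> str:
    """Adaptive: the action depends on how on-board processing reads the context,
    so the same mission can yield a range of different outputs."""
    if sensed_environment.get("jamming_detected"):
        return "switch_to_inertial_navigation"
    if sensed_environment.get("target_confidence", 0.0) < 0.7:
        return "loiter_and_reacquire"
    return "proceed_on_planned_route"

print(automatic_proximity_fuse(4.0))                                   # always "detonate"
print(autonomous_course_adjustment({"target_confidence": 0.6}))        # "loiter_and_reacquire"
print(autonomous_course_adjustment({"target_confidence": 0.9,
                                    "jamming_detected": True}))        # "switch_to_inertial_navigation"
```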

In general, there are systems with limited autonomous capabilities that are programmed to detect, track, cue, and engage targets without human intervention, although their use is intended for specific target sets and typically restricted to uncluttered environments, thus producing largely predictable behavior. Furthermore, a human operator can modify the weapon’s governing rules or activate/encode a mission-abort option in case of potential risks. Humans, therefore, are still in or on the decision-making loop in the targeting process. Air-defense systems, cruise missiles, and loitering munitions or other platforms equipped with a variety of sensors and onboard data-processing capabilities are some of the most common examples.

However, the integration of more advanced machine learning capabilities allows autonomous systems to learn and make decisions on their own, including selecting their targets, by constantly updating their available range of outputs based on the inputs received from the environment. As such, their behavior is non-deterministic and potentially unpredictable. To date, however, there is no evidence of “self-learning” fully autonomous weapons systems being employed on the battlefield.
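A generic, toy illustration of what such updating looks like is an epsilon-greedy bandit learner, sketched below. It is not a model of any fielded system; it simply shows why behavior that keeps being reshaped by environmental feedback is non-deterministic in the sense used here: future outputs depend on what the system has encountered, not only on its initial programming.

```python
import random

class OnlineLearner:
    """Toy epsilon-greedy learner: preferences over options are updated from feedback."""
    def __init__(self, options, epsilon=0.1):
        self.values = {o: 0.0 for o in options}   # learned preference per option
        self.counts = {o: 0 for o in options}
        self.epsilon = epsilon

    def choose(self):
        # Occasional exploration makes the choice itself non-deterministic...
        if random.random() < self.epsilon:
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)

    def update(self, option, feedback):
        # ...and feedback from the environment keeps reshaping future choices.
        self.counts[option] += 1
        self.values[option] += (feedback - self.values[option]) / self.counts[option]

learner = OnlineLearner(["option_a", "option_b", "option_c"])
for _ in range(200):
    choice = learner.choose()
    reward = random.gauss(1.0 if choice == "option_b" else 0.0, 0.5)  # simulated environment
    learner.update(choice, reward)
print(learner.values)   # preferences have drifted toward option_b: learned, not pre-programmed
```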

The Israeli-made Harpy anti-radiation loitering munition, which can hover over a specific area for up to nine hours in search of radar-emitting targets and autonomously engage them without human supervision, is often mentioned as an example of a fully autonomous weapons system. Yet, while all of the Harpy’s operating stages are indeed automated, the system cannot learn or change its mission’s rules and outputs: it will pick and engage only targets within a specific holding area and radio frequency range preselected by a human operator. The Harpy’s behavior therefore remains predictable in nature, as its mission parameters and governing rules are defined by a human, although the risk of collateral damage remains.
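A rough way to see why this bounded behavior stays predictable is sketched below: a human-preselected holding area and frequency band act as a hard envelope on any engagement decision, and nothing the system senses lets it widen that envelope. All parameter names and values here are hypothetical and do not come from the actual system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MissionParameters:
    """Set by the human operator before launch; the system cannot modify them."""
    holding_area: tuple    # (lat_min, lat_max, lon_min, lon_max)
    freq_band_mhz: tuple   # (min_mhz, max_mhz) of emissions that may be engaged

def may_engage(lat, lon, freq_mhz, params: MissionParameters) -> bool:
    lat_min, lat_max, lon_min, lon_max = params.holding_area
    f_min, f_max = params.freq_band_mhz
    inside_area = lat_min <= lat <= lat_max and lon_min <= lon <= lon_max
    in_band = f_min <= freq_mhz <= f_max
    return inside_area and in_band   # anything outside the preset envelope is ignored

params = MissionParameters(holding_area=(34.0, 34.5, 36.0, 36.5),
                           freq_band_mhz=(2900.0, 3100.0))
print(may_engage(34.2, 36.1, 3000.0, params))   # True: inside the human-defined envelope
print(may_engage(35.0, 36.1, 3000.0, params))   # False: outside the holding area
```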

Another example often alluded to in recent debates over lethal AWS is the Turkish Kargu-2 tactical rotary wing loitering munition, which according to a United Nations report may have been used to autonomously target forces affiliated with Libyan warlord Khalifa Haftar during his failed siege of Tripoli in 2020. The UN panel of experts, however, could not provide any evidence to corroborate the autonomous mode claim, and the Kargu’s manufacturer, the Turkish company STM, later specified that the system uses AI-enabled computer vision to identify and track targets but requires a human operator to engage them. In a recent private discussion, an engineer directly involved in the Kargu’s development confirmed to this author that the system is not programmed to attack targets without human supervision.1

An increasing number of weapon systems — from UAS to loitering munitions to air defenses to uncrewed ground vehicles and others — use onboard computer vision and other AI-enabled functions to automate and expedite processes for which natural human abilities are too slow or too limited given the accelerating tempo and skyrocketing volume of data that characterize decision-making in present (and future) military operations. These capabilities, however, do not make a weapons system fully autonomous by default. Instead, they optimize specific sub-processes and tasks (e.g., navigation, target identification, situational awareness) to address human fatigue and diminish the cognitive overload on the human operator. Beyond that, autonomy is generally meant to reduce the risks to personnel by limiting the number of soldiers and crew involved in operations.

The targeting and engagement cycle is one of those tasks, but it will likely remain confined to platforms conceived for specific target sets or missions for which reliable training data abound, the distinction between military and civilian targets is simpler, and the risk of collateral damage is limited. These include the suppression of enemy air defenses (SEAD), air-to-air engagements, and layered precision fires at tactical range or against an enemy’s second-echelon elements, to name a few. Aircraft systems (e.g., UAS, loitering munitions) are the prime and natural candidates, but we will likely also see targeting tasks assigned to autonomous ground systems as part of their fire-support roles in certain tactical engagements.

At the same time, there are tasks that will undergo a major and more structural shift toward autonomy. Examples include intelligence collection, surveillance and reconnaissance through high-fidelity sensors, stand-in electronic warfare, decoys, communication nodes, and resupply. Once again, the most promising results in these areas are coming from the use of networked and diverse autonomous aircraft with swarm capabilities, although advancements are to be expected in the maritime and land domains as well.

Therefore, the anticipated transformational impact of AWS on warfare will be slower and patchier than often assumed because autonomy is mostly intended as an enabler and its integration is typically limited to sub-systems or specific functions rather than the entire military enterprise. Several reasons need to be considered.

First, physics still matters. AWS rely on substantial computing power and working memory at the edge, and typically require much higher battery capacity, especially for long-range applications. This implies obvious trade-offs between size, range, speed, endurance, payload, and, ultimately, cost, with inevitable operational implications for the foreseeable future. The use of jamming-resistant ultra-wideband connectivity, for example, mitigates energy consumption but has inherent range limitations. When it comes to airborne systems, autonomous “mother ships” releasing palletized swarms and air-launched effects from intermediate staging areas could help extend the operational reach of those payloads, but, while promising, such larger mother ships would still be vulnerable to enemy interdiction and countermeasures and are hardly the sole solution.

Second, even as the technology matures, without proper assimilation into robust concepts of employment (CONEMPs) and operations (CONOPs), AWS will provide only marginal advantages to the warfighter. Establishing CONEMPs and CONOPs, however, does not happen overnight and becomes more challenging in multinational environments like NATO, due to interoperability issues, heterogeneous capabilities and training, human-machine interface challenges, and different approaches to AWS at the national level. In addition, each military arm may look at autonomy in its own way, introducing further layers of complexity.

As the war in Ukraine demonstrates, the effective integration of novel technologies is just as important as the technology itself.

Third, it would be hasty — to say the least — to assume states have the appetite (and interest) to deploy fully autonomous weapons systems (i.e., self-learning platforms) without carefully weighing the risks of undesired escalation and the potentially catastrophic costs involved.

Against this backdrop, some words of caution are necessary.

Obviously, the abovementioned incremental and irregular adoption of AWS does not rule out the hazards associated with these platforms, from ethical and legal considerations to the issue of collateral damage. Yet, it seems fair to say that their likely delimited use for direct target engagement purposes partly mitigates these risks, weakening some of the arguments in favor of an outright ban on AWS.

Unfortunately, the lack of an internationally agreed definition of autonomy is a major obstacle to accurately assessing the impact of AWS and regulating their use. This problem is further compounded by the fluid nature of AI’s evolution and by how it reverberates through the notion and practicalities of “meaningful human control.” Having the “human out of the loop” may not necessarily be an issue per se if the AWS relies on the same information, parameters, and firm rules of engagement that would have been available to the operator, provided it cannot override them. As expert Jovana Davidovic put it, “any moral difference arises from empirical fact about what works better [in terms of safety and avoiding collateral damage], not how far removed the operator is from the final decision.”

In another thought-provoking contribution, scholar Andree-Anne Melancon argues that “the more fundamental issue with the development of automated weapons does not regard the technology or artificial intelligence. Instead, the issues stem from the way targets are selected.” Perhaps the debate should first address the underlying processes and rules (e.g., target identification) used to program autonomous systems rather than the technology itself.

At the same time, depictions of a seamless use and exploitation of autonomy and AI in perfectly planned and conducted operations will be proven wrong by reality. AI can help see through the fog of war but will not dissolve it completely. The best object detection model currently available, for example, reaches only 65% mean average precision when tested on the most popular benchmark. While such representations serve first and foremost marketing goals, both the resulting bias toward AI’s virtues and the neglect of its limits can be detrimental to a constructive and balanced debate on the issue of AWS. But assessments that dismiss AWS (or other AI military applications) out of hand based on terrifying or inaccurate takes are not helpful either.
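For readers unfamiliar with the metric, mean average precision summarizes how well a detector ranks correct detections ahead of false ones: per-class average precision is the area under the precision-recall curve, and mAP averages it across classes (and, on benchmarks such as COCO, across several overlap thresholds as well). The minimal sketch below, using made-up detections rather than benchmark data, shows one common way that per-class figure is computed.

```python
def average_precision(detections_correct, total_ground_truth):
    """detections_correct: detections sorted by confidence, True if matched to a real object.
    Returns the area under the resulting precision-recall curve for one class."""
    ap, true_positives, prev_recall = 0.0, 0, 0.0
    for i, correct in enumerate(detections_correct, start=1):
        if correct:
            true_positives += 1
            precision = true_positives / i
            recall = true_positives / total_ground_truth
            ap += precision * (recall - prev_recall)   # rectangle under the P-R curve
            prev_recall = recall
    return ap

# Six detections (highest confidence first) against five real objects in the scene:
print(average_precision([True, True, False, True, False, False], total_ground_truth=5))  # 0.55
```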

The recent adoption of the first-ever resolution on AWS by the UN General Assembly shows that states do see a need for action to regulate AWS, and it should inspire a less militaristic but also more sober approach to this technology.

Federico Borsari is a Leonardo Fellow with the Transatlantic Defense and Security Program at the Center for European Policy Analysis (CEPA). He is also a NATO 2030 Global Fellow and a Visiting Fellow at the European Council on Foreign Relations (ECFR). His main research interests include security and defense dynamics, transatlantic security relations, and the impact of new technologies on warfare.

Europe’s Edge is CEPA’s online journal covering critical topics on the foreign policy docket across Europe and North America. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.

1. Discussion with a Turkish IT engineer and former high-ranking military officer, Istanbul, October 2023.