Lethal autonomous weapon systems will come to dominate warfare in the coming years. NATO needs to harmonize its approach to their development and use, or risk being left behind.
The rapid weaponization of artificial intelligence, “big data,” social media, robotics, and a host of other technologies presents a clear competitive challenge to NATO, an alliance whose members span a wide spectrum of military-technological capabilities. The future effectiveness of NATO will be driven in large part by how well it prevents these challenges from hobbling its ability both to act in unison and to prevail in a contest of wills. While there are numerous potential technology gaps, one that will likely only widen is partner nations’ ability and willingness to employ lethal autonomous weapon systems. These systems will inevitably grow more capable, and more necessary, in the coming decade.
Technological gaps are inevitable considering the disparities in GDP and military budgets. The United States accounts for over 70 percent of NATO’s overall military spending, while the next three largest contributors (the United Kingdom, France, and Germany) provide approximately half of the remaining 30 percent. And with most NATO nations continuing to fund their militaries at under the 2 percent GDP goal, technological gaps will continue to grow. For perspective, the 2021 United States Department of Defense research and development budget is approximately equal to the entire defense outlay of France and Germany combined. With such a large differential, what can be done to enable effective investments in autonomous weapons by smaller nations? Even more specifically, how can smaller nations provide capabilities that integrate into, and contribute to, the alliance? To better invest limited funds, now is the time to look at a NATO standard for lethal autonomous weapons and their ethical use.
While there is no agreed-upon international definition of lethal autonomous weapon systems, the U.S. Department of Defense defines them as “weapon system[s] that, once activated, can select and engage targets without further intervention by a human operator.” While these systems are not Schwarzenegger-style Terminators and still operate under a degree of human control, the technology enabling them is maturing rapidly, and military necessity will increasingly demand that they gain broader parameters of autonomous action. Yet despite the complexity of these systems and the inevitability of their proliferation, NATO does not currently have a common standard for their use or development. In fact, some NATO countries even hold opposing views on how to handle them.
NATO standards are designed to ensure compatibility among weapon systems, communication architecture, and a host of other warfighting systems. The 7.62mm small arms round is a good example of this. But what is the 7.62mm equivalent standard for the development and employment of autonomous weapon systems? This opens a host of related questions regarding the employment of these systems: What Identification Friend or Foe (IFF) capability should ground and air units require to prevent fratricide? What degree of certainty does a lethal autonomous weapon system require before final engagement? What level of collateral damage is acceptable? What degree of compatibility between systems is required? Should all these parameters (and others) be adjustable, and if so, at what command level?
The attendant ethics also need to be addressed. NATO’s experience in Afghanistan was a case study in the challenges of coalition warfare. Differing risk tolerances, legal requirements, ethical views, domestic political concerns, and at times simply combat capability all combined into a complex policy cocktail that impeded the effectiveness of combat operations.
While modern militaries have accountability, legal, and ethical systems incorporated into their command structures, these are not uniform, and leaders in different militaries have varying degrees of authority. The key questions hinge on two issues: Who gets to decide to employ an autonomous weapon, and who is responsible should things go wrong? The Kunduz hospital strike in October of 2015 was driven primarily by human error. Responsibility was fixed on the chain of command, and 16 leaders were disciplined. Who will be responsible if a member nation conducts a NATO-authorized strike and it goes terribly wrong? If this framework is not thoroughly established ahead of time, not only are commanders likely to hesitate to use this capability, but the risk-aversion inherent in bureaucracies may also limit the development of autonomous weapons that will be needed in future conflicts.
Establishing a common NATO standard for the development and use of lethal autonomous weapons will help address the gap in capabilities among NATO member nations. With such standards in place, nations can ensure that their defense expenditures on autonomous weapons create systems that are interoperable, contribute to NATO’s capability, and can be employed within defensible ethical guidelines.
February 23, 2021