The recent fighting between Israel and Palestinians in Gaza, involving airstrikes and rocket launches, resulted in more than 200 deaths. Before the 21 May ceasefire, militants fired thousands of rockets at Israel, with as many as 130 launched in a single attack, possibly the largest single barrage to date.
To protect itself, Israel has a layered approach to missile defense, which includes deterrence, attacks on missile launchers, active interception by the Iron Dome anti-missile system, early warning systems, and damage minimization. Iron Dome is a counter rocket, artillery and mortar (C-RAM) and very short-range air defense (V-SHORAD) system. Although it has proven highly effective over the years, and did so again this time, the latest violence has shown the challenges of deploying automated systems against self-learning opponents (i.e. human beings, for now).
Iron Dome, like any artificial intelligence (AI) system, acts within a set of pre-programmed parameters to intercept missiles, rockets, or mortars that are en route to high-value targets. These parameters are the foundational rules the system uses to differentiate a rocket from other objects, and to establish whether a high-value target is threatened, the correct interception trajectory, and so on. As new data accumulates during an engagement, the system matches it against its training to draw conclusions. But Iron Dome can be pushed from highly effective to much less effective in two ways: by being overwhelmed with more rockets than it can intercept within its parameters, or by rockets fired outside those parameters (e.g. rockets with flight times too short to allow interception, or rockets with improved accuracy). Tactics employed by militants, including the firing of the largest single barrage of rockets, the use of loitering munitions, and short flight times, show that they can identify and exploit these limitations.
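The saturation dynamic described above can be illustrated with a deliberately simplified sketch. This is not Iron Dome's actual logic, and every number in it is hypothetical; it only shows why a parameter-bound defensive system fails abruptly rather than gradually once a salvo exceeds its capacity, or when some rockets fall outside its engagement parameters entirely.

```python
# Toy model of a defensive battery with a fixed interceptor capacity
# per salvo. Rockets outside the system's parameters (e.g. flight time
# too short) cannot be engaged at all; the rest compete for a limited
# number of interceptors. All figures are illustrative, not real data.

def leaked_rockets(salvo_size, interceptor_capacity, share_outside_params=0.0):
    """Return how many rockets in a salvo go unintercepted."""
    outside = round(salvo_size * share_outside_params)  # un-engageable by design
    engageable = salvo_size - outside
    intercepted = min(engageable, interceptor_capacity)
    return outside + (engageable - intercepted)

# Below capacity, leakage is zero; past saturation every extra rocket
# gets through, so the failure is a cliff, not a slope.
for salvo in (20, 60, 100, 130):
    print(salvo, leaked_rockets(salvo, interceptor_capacity=60))
```

With a hypothetical capacity of 60 interceptors, salvos of 20 and 60 leak nothing, while salvos of 100 and 130 leak 40 and 70 rockets respectively; rockets fired outside the parameters leak regardless of salvo size.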
To ensure Iron Dome's continued effectiveness, Israel makes technical updates as new information is gathered and relocates batteries to prevent militants from destroying them or learning defensive projectile trajectories. At the peak of a conflict, however, when forces are engaged in high-intensity close combat, system updates are too slow. Understanding the rigid parameters of AI systems, and the challenges they present for military deployment, is vital to ensure that reliance on these machines does not undermine the ability to fight effectively on the battlefield.
As militaries around the world field growing numbers of automated systems, sometimes in place of humans, the rigidity of such technology will create a race between the speed at which opponents can find the breaking points and the speed at which the systems can be updated. Should militants identify those breaking points and saturate Iron Dome during the most intense phase of an engagement, critical updates to the system may not be achievable in time. The impact would not be gradual: a saturated system could produce a sudden, steep increase in deaths. With Iron Dome rendered ineffective, the Israel Defense Forces would have to rely on other measures, such as the deployment of ground forces, possibly into contested terrain like Gaza, and bomb shelters for civilians. There are problems here too: civilian complaints during the current outbreak of violence have been rising, and the investment in Iron Dome has not been matched by new spending on bomb shelters. This compromises Israel's overall resiliency.
After 10 years of Iron Dome, these lessons (and those still emerging) provide vital insights for military planners focused on the role of AI in future wars. When these systems fail, militaries will still have to rely on human soldiers to step in. Investment in AI systems cannot come at the cost of a flesh-and-blood soldier's ability to operate effectively on future battlefields where those systems may be unavailable. Training and investment in armed forces personnel must therefore account for their needs should systems fail, with redundancy not only planned for but built into operational procedures and personnel training. The value of servicemen and women must not be underestimated: they remain the only truly "self-learning" systems, unmatched in flexibility, creativity, and the ability to adapt their parameters to battlefield developments.
Investment in AI should not, and cannot, come at the cost of the resilience of society as a whole in the face of conflict. Strategically and operationally, it must remain clear that AI is but one tool. No planning system should place all its eggs in one basket, no matter how powerful that one tool may be.