It is 2014. ISIS has taken Syrian territory and is heading into Iraq, towards the major city of Mosul. US Central Command believes Iraqi Security Forces will repel an attack by a few thousand fighters. But its data is wrong. The Iraqi Security Forces' rolls are padded with ‘ghost soldiers’, men who collect half their salary in return for never reporting for duty. This inflated headcount is one of many errors that lead to ISIS’s capture of Mosul.

Could artificial intelligence have prevented this disaster? In short, no. Even the most powerful algorithms generate poor outcomes from poor data and poor logic. When either the battlefield data or the logic applied to it is flawed, no current AI tool can plug the gap to prevent loss of life, the destruction of infrastructure, or the ceding of territory.

When introducing AI into the military, it is crucial to understand its limitations. AI can help humans engage in combat in familiar and predictable situations. But the technology struggles with novel battlefield situations, and it cannot replace human decision-making.

AI requires large quantities of reliable data. The longer AI programs are exposed to a problem set, the better they perform. But in the early stages of a novel and complex situation, the data is absent, unreliable, or both. Simple changes can disrupt a data-centric approach long enough to create an opening for the enemy.

Explosive-laden kamikaze drone ships provide a good example. Ukraine has used them to dramatic effect against the Russian fleet in Crimea, but those boats have been piloted remotely by humans rather than autonomously by AI. AI can make fast and effective decisions to steer a drone ship toward its target in the open sea. But an autonomous ship may go off target if it encounters the unexpected: a waterfall, a dam, or a congested marina.

If the situation is different from anything on which the AI was trained, machines fail to interpolate or extrapolate reliably or quickly enough, at least for now. When AI-powered vehicles encounter an unexpected incident, a so-called “edge case,” they often freeze, in effect saying, “I don’t know what’s going on.” Alternatively, they may not recognize that the situation is unusual at all and simply carry on. Both responses are dangerous.
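To make those two failure modes concrete, here is a minimal, purely illustrative Python sketch. Every name, label, and threshold in it is invented for illustration and does not describe any real autonomy stack: depending on whether the system realizes it is unsure, an unfamiliar scene either makes it freeze or lets it press on under a confidently wrong label.

```python
# Toy sketch of the two edge-case failure modes described above.
# All scene labels, thresholds, and the stand-in classifier are invented.
import random

KNOWN_SCENES = {"open_water", "buoy", "patrol_boat"}

def classify(scene):
    """Stand-in for a trained perception model: confident on familiar scenes,
    and on unfamiliar ones either unsure or confidently wrong."""
    if scene in KNOWN_SCENES:
        return scene, 0.95                            # familiar: high confidence
    if random.random() < 0.5:
        return "unknown", 0.30                        # edge case flagged as unsure
    return random.choice(sorted(KNOWN_SCENES)), 0.90  # edge case misread as familiar

def decide(scene, confidence_floor=0.8):
    label, confidence = classify(scene)
    if confidence < confidence_floor:
        return "freeze and wait"           # failure mode 1: "I don't know what's going on"
    return f"carry on as if '{label}'"     # failure mode 2: confidently wrong, so it presses on

for scene in ["open_water", "congested_marina", "dam_spillway"]:
    print(scene, "->", decide(scene))
```

The confidence threshold is doing all the work here, and fielded systems are far more sophisticated, but the underlying dilemma is the same: abstain, or act on a possibly wrong answer.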

In contrast, trained, experienced humans adapt. They deploy mental models built up from an array of different encounters, both personal and received, to cope with, and even succeed in, novel situations. We call this skill “creativity.”

Nevertheless, digital tools, including AI, can enhance human-centric decision-making. They can help harness the positive human qualities that underpin creativity: wisdom, knowledge, instinct, and experience. They can counteract negatives, such as personal bias, complex social dynamics, and weaknesses in an organization’s culture. 

AI-powered digital tools that include modeling and simulation allow contributors to collaborate in headquarters, on the frontline, and across time zones. Causal relationships and mathematical probabilities can be embedded in these models, allowing sophisticated calculations. These calculations cannot predict the future, but they can estimate the likelihood of different outcomes. This human-machine teaming makes the most of our collective wisdom and applies it to new, uncharted situations.
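As a rough illustration of that last point, the sketch below runs a toy Monte Carlo model of a single engagement. The force sizes, effectiveness factors, and the combat model itself are invented placeholders rather than real planning factors: the point is that the output is a distribution of likely outcomes rather than a single prediction, and that the estimate collapses when an input, such as a personnel roll padded with ghost soldiers, is wrong.

```python
# Toy Monte Carlo model of a single engagement. The numbers and the combat
# model are invented placeholders; the output is a probability, not a prediction.
import random

def engagement_held(defenders=2_000, attackers=1_500, ghost_fraction=0.0):
    """One simulated engagement: True if the defenders hold."""
    effective_defenders = defenders * (1 - ghost_fraction)           # strip out ghost soldiers
    defender_power = effective_defenders * random.uniform(0.5, 1.5)  # assumed variability
    attacker_power = attackers * random.uniform(0.5, 1.5)
    return defender_power > attacker_power

def hold_probability(runs=10_000, **kwargs):
    """Estimate the chance of holding by running the model many times."""
    return sum(engagement_held(**kwargs) for _ in range(runs)) / runs

# The same paper strength gives very different likely outcomes
# once the ghost soldiers are removed from the data:
print("Honest rolls :", hold_probability(ghost_fraction=0.0))
print("Padded rolls :", hold_probability(ghost_fraction=0.5))
```

A real planning tool would embed far richer causal structure, but the shape of the answer is the same: probabilities over outcomes, which are only as trustworthy as the data fed in.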

There may come a time when AI has so much data to draw on that few novel situations emerge. There may also come a point when next-generation AI models can combine and exploit this data to innovate in novel situations, as humans do.

But we are not there yet. Our efforts to improve decision-making with investments in data-centric tools and technology must also support a human-centric approach. In the wake of their victory in Mosul, ISIS fighters continued towards Baghdad. It would take nine months, the loss of tens of thousands of lives, the displacement of more than a million civilians, and the devastation of the city’s infrastructure to dislodge ISIS from Mosul. Today’s AI would not have saved the city. If we rely only on AI, we will likely make similar data-related mistakes in the future, with similarly tragic consequences.

Rob Solly and Daniel Tarshish are co-founders of Cosimmetry, a decision-support research consultancy based in the UK. Rob Solly is a modeling specialist. Daniel Tarshish is an expert in national security and global affairs. Both have extensive UK public sector experience.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the authors and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
