In October 2020, NATO Deputy Secretary General Mircea Geoană highlighted the benefits of establishing a “transatlantic community cooperating on Artificial Intelligence (AI).” The Deputy Head of NATO’s Innovation Unit followed with a commitment to the responsible use of AI. The US Department of Defense (DoD) adopted Ethical Principles for AI in 2020 and has committed to bringing NATO members and partners together to operationalize these principles. Despite these statements and developments, more work is required to tackle the very real challenge that ethical AI will pose to future interoperability within NATO.

Without a NATO-led initiative focused on aligning these ethical principles across the Alliance, the risk is high that nations will field AI-based systems that hinder joint operations. As the foremost security framework for Europe and North America, and the leading defense alliance for promoting and protecting democratic values, NATO is well placed to facilitate alignment on this issue. As part of a broader strategy on emerging and disruptive technologies, NATO must prioritize ethical AI if it wishes to promote the shared values upon which it was founded, play a key role in facilitating innovation across the Atlantic, and ultimately retain the ability of its members to undertake joint operations.

Establishing NATO ethical AI principles is the first step toward both technical and political alignment. That alignment, in turn, fosters interoperability, the foundation that allows NATO to respond to emerging threats as an Alliance in a flexible and timely manner.

A key challenge for NATO is raising awareness that the answers to ethical questions can no longer be left to later stages of the development and procurement cycle. Decisions made at the political and legal level will have a significant impact on the engineering practices used to develop AI, as well as on the technical characteristics of AI-based systems. The answers to questions about human dignity, human control, and accountability will be the foundation upon which many technical elements are programmed. Systems developers need to make a number of decisions throughout the development cycle informed by the answers to key questions, including:

  • what data to use
  • how to label those data, and
  • what constitutes an acceptable outcome

These answers will also impact how AI systems are evaluated and ultimately deployed.

If individual nations or groups are left to develop their own ethical principles without wider alignment within NATO, the result will be a number of AI-based systems with varying technical specifications, reflecting the legal and policy decisions made by individual governments when answering the key questions. As areas such as facial recognition and policing algorithms have demonstrated, the assumptions made by those developing the tools and answering the key questions have a significant impact on a tool's real-world functioning and on societal acceptance of its ethics. Whether a tool gains that acceptance depends on the legal and ethical decisions governments make. For the military, this may mean one state using an AI-based system that another sees as unacceptable, and, in a joint operation, one state fielding a system that another cannot use. Worse yet, this could render a joint operation impossible. Unable to interoperate, the Alliance would be unable to respond effectively and efficiently to future threats.

The role of the private sector is another aspect of ethical AI development that has proved a challenge to governments and to the transatlantic relationship. Within states, governments have struggled to adequately regulate Big Tech firms, which has allowed these companies to encroach on government responsibilities to protect and uphold the public interest. This encroachment permeates all aspects of government, including defense and security. As Deputy Secretary of Defense Kathleen Hicks discussed during her confirmation hearings, the lack of competition is also a challenge to innovation in the private defense industry. This, along with a lack of regulation, feeds the power imbalance between the sectors. Consequently, the private-sector companies building the AI-based systems that are or will be deployed on the battlefield are deciding ethics policies for themselves.

The transatlantic partnership must focus on coordinating these core principles and on systematic governance to ensure that AI systems are developed in line with the rule of law and democracy. In particular, it must ensure that questions about human dignity, human control, and accountability are answered consistently. NATO is the ideal defense and security forum for this alignment. Given the US lead in adopting ethical principles across the entire DoD and the EU’s drive to assert checks and balances over private-sector tech companies, NATO remains the organization that can bring the two together and establish the ethical bottom line. Shared principles will then ensure that diverging legal and ethical stances toward Big Tech do not become an interoperability barrier in the future. If the General Data Protection Regulation (GDPR) and the challenges it brought for US-based, data-driven companies are any indication, a strong transatlantic-led initiative is needed to ensure the same challenges do not hinder NATO.

The solution to the challenge that ethical AI poses for the future of interoperability within NATO is for the Alliance to establish shared transatlantic ethical principles, informed by the US DoD, the EU, and others. Establishing these principles will not only strengthen transatlantic political relations; more technically, it will enable standardization agreements and inform the Alliance's future training and education initiatives.