When hackers first sent phishing emails in the 1990s, the technique was laborious, requiring them to click through each step to deliver their fake messages. The emails asked users to enter information on a webpage that passed the victim's login credentials back to the attacker. Today, AI-enhanced phishing increases the speed and scale of cyberattacks, identifying targets in the US and abroad and automatically dispatching millions of customized emails within minutes.
AI personalizes. The software analyzes social networks, data breaches, and public records to generate convincing messages that appear to come from trusted colleagues, friends, or reputable organizations.
While this AI-powered security threat is immense, AI also offers an opportunity to strengthen cyber defenses. A strong legal framework is required to respond. Surprisingly, the US is ahead of Europe in regulations and policies governing cyber operations related to national security.
AI-enhanced cyberattacks represent an evolution in the long history of cyberattack automation. AI disseminates malicious software across networks and devices, expediting the theft of sensitive data from compromised systems. Automated credential stuffing tests millions of stolen username and password combinations against online login pages, enabling account takeover at speed and scale.
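What makes credential stuffing detectable is its signature, which is also what makes it automated: one source hammering many different accounts rather than one user fumbling their own password. The following minimal Python sketch, with hypothetical log data and an invented threshold, illustrates that distinction from the defender's side; it is not any particular vendor's implementation.

```python
from collections import defaultdict

# Hypothetical failed-login events as (source_ip, username) pairs,
# e.g. parsed from web-server or identity-provider logs.
failed_logins = [
    ("203.0.113.7", "alice"), ("203.0.113.7", "bob"),
    ("203.0.113.7", "carol"), ("203.0.113.7", "dave"),
    ("198.51.100.2", "alice"), ("198.51.100.2", "alice"),
]

def suspected_stuffing_sources(events, min_distinct_accounts: int = 3):
    """Flag sources that fail against many *different* accounts --
    the hallmark of credential stuffing rather than a forgotten password."""
    accounts_per_source = defaultdict(set)
    for source_ip, username in events:
        accounts_per_source[source_ip].add(username)
    return [ip for ip, users in accounts_per_source.items()
            if len(users) >= min_distinct_accounts]

print(suspected_stuffing_sources(failed_logins))  # ['203.0.113.7']
```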
The same power that allows machines to execute actions or learn by themselves makes them difficult to control. Consider the "paperclip maximizer," a thought experiment introduced by philosopher Nick Bostrom. A hypothetical AI-powered computer is given the sole objective of manufacturing as many paper clips as possible. It pursues this narrow goal, allocating all available resources to it, including those necessary for human survival, with catastrophic consequences.
The thought experiment underlines the danger of AI cyber automation: a seemingly harmless objective can lead to an unintended outcome. COMPAS, software used by US courts to assess the likelihood of a defendant committing another crime, produced significantly biased predictions because it was built on biased historical arrest and conviction data. AI not only perpetuated the danger. It amplified it. A program designed to reduce the number of repeat offenders disregarded the complex history of racism.
Automated cyberattacks can backfire. NotPetya, a 2017 cyberattack attributed to Kremlin-linked hackers, not only crippled multinational companies, including the global shipping giant Maersk. It also found its way into the Russian state oil company Rosneft.
At the same time, AI-enhanced security tools can mitigate these dangers in near real-time. Behavior-based analytics detect activity that deviates from a network's baseline of "normal" behavior. Automated threat intelligence sharing speeds up the process of supporting partners and allies in strengthening their overall security posture. The US Cybersecurity and Infrastructure Security Agency (CISA) already operates an Automated Indicator Sharing platform.
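As a simplified illustration of what "deviating from a baseline" means in practice, here is a minimal Python sketch that flags an observation straying more than a few standard deviations from recent history. The traffic figures and threshold are assumptions for illustration; production analytics use far richer behavioral models, but the underlying principle is the same.

```python
from statistics import mean, stdev

# Hypothetical hourly outbound-traffic volumes (MB) for one host,
# as might be drawn from flow logs or an endpoint agent.
baseline = [120, 135, 110, 128, 142, 118, 125, 131, 122, 138]

def is_anomalous(observation_mb: float, history: list[float], threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations
    from the historical mean (a simple z-score test)."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return observation_mb != mu
    return abs(observation_mb - mu) / sigma > threshold

# A sudden 2 GB transfer stands out against a ~125 MB/hour baseline.
print(is_anomalous(2048, baseline))  # True
print(is_anomalous(130, baseline))   # False
```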
AI has great potential to improve the skills and knowledge of cybersecurity defenders. Cybersecurity company CrowdStrike uses AI to enhance detection capabilities and reduce response times. In partnership with Amazon Web Services, the company's Charlotte AI narrows the skills gap between less experienced analysts and senior security professionals by automating repetitive tasks such as data collection, basic research, and detection.
AI and automation in offensive and defensive operations present a range of ethical concerns. How do we maintain accountability when human oversight is reduced? How can we mitigate unintended consequences? Western democracies must ensure cyber operations abide by legal frameworks and reflect our values. This means retaining adequate human oversight, ensuring training data reflects intended outcomes and values, and conducting periodic quality-control reviews.
The US has made significant progress. Back in 2018, Washington directed the Department of Defense to coordinate the integration of AI into operational use, subject to appropriate ethical policies. The National Security Commission on Artificial Intelligence, created the same year, made AI recommendations to the executive branch and Congress.
Europe is ahead of the US in enacting a binding law. Its AI Act, now in its final round of approvals, will prohibit or limit practices such as biometric scanning, facial recognition, and social scoring. But exceptions are written in for law enforcement and national security.
In contrast, China has eliminated most restrictions. Under President Xi Jinping’s “holistic view of national security,” the technology serves the Communist Party’s goals. “AI and data regulations in China are less about consumer protection than about social control and the projection of power, putting the party in charge with unrestrained power,” argue Benjamin Qiu and Dennis Kwok, partners at the Elliott Kwok Levine & Jaroslaw law firm.
How we regulate the use of AI will do much to determine our cyber success or failure, and with it our success in defending democracy.
Emily Otto is a non-resident fellow at CEPA. She is an Army Cyber Warfare Officer transitioning out of military service after a decade of threat intelligence and cyber operations.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
