The world’s largest and most powerful tech companies have recently launched a wave of deep learning chatbots that could radically transform the way we live and work. OpenAI’s ChatGPT, Google’s Bard, Microsoft’s Bing, and Chinese web giant Baidu’s Ernie have changed our understanding of AI and its potential to transform society. Global regulators are scrambling to keep up, with some, mostly in Europe, considering outright bans, and others, notably the US, paralyzed by uncertainty.

By sifting billions of online data points and learning the contextual patterns and structures of language, these generative AI programs use predictive algorithms to produce original, creative, and human-like content, from poetry to programming code. Researchers are even finding that these systems can effectively train themselves to complete new tasks, and their enormous multifunctional output has the potential to disrupt entire education systems and labor markets.

Yet because these systems rely on making probabilistic guesses, they are far from reliable, trustworthy, or even what we might think of as genuinely “intelligent.” This has led both policymakers and tech leaders to sound the alarm. In March, Twitter owner Elon Musk and thousands of fellow Silicon Valley figures signed an open letter demanding an immediate six-month pause on “the training of AI systems more powerful than GPT-4,” until effective safety protocols can be enacted that ensure such systems are “safe beyond a reasonable doubt.”

The tech leaders propose increased oversight and transparency for powerful AI systems developed under commercial secrecy. Effective “provenance and watermarking systems” should be required to distinguish what is real from “deep fakes,” from AI-generated images of Donald Trump in handcuffs to more consequential fake videos depicting President Zelensky surrendering to Russian forces, or President Putin falsely declaring peace in Ukraine. Given that these AI systems can generate fictitious content for any conceivable scenario, deep fakes have the potential to manipulate, misinform, and confuse vast swathes of the global population, and to be weaponized in times of war.

While governments are starting to wake up to some of the perceived dangers, they are heading in opposing directions. On one end of the spectrum, Italy has lurched towards a Luddite outright ban, claiming that chatbots fail to respect Europe’s GDPR privacy rules. The US, in contrast, appears content to leave the powerful tech companies to their own devices, ignoring even traditionally anti-regulatory organizations like the US Chamber of Commerce, which has called for new “risk-based regulatory frameworks.” On April 5, President Biden urged Congress to limit the personal data that tech companies can collect, but it remains uncertain whether this enjoys bipartisan support. The UK has opted for light-touch “principles” to be enforced by a smorgasbord of existing regulators.

China appears to be taking a more direct approach: it has implemented new laws regulating how tech companies use algorithms to target consumers online, while seeking to prevent TikTok’s powerful algorithms from ending up in the hands of international competitors. And in keeping with tradition, the European Union looks set to take a middle way, with new legislation under negotiation that would require oversight of “high-risk” AI systems, although it remains to be seen whether chatbots would meet the EU’s “high-risk” criteria.

Such piecemeal approaches reveal the absence of agreed international standards.   


The reality is that governments are hopelessly behind the curve when it comes to understanding the disruptive scale and application of these technologies, let alone effectively regulating them and integrating them into their own policymaking. Governments should start by building in-house expertise, training cadres of tech-savvy officials with the knowledge and skills to understand and operate alongside large tech companies. This may require incentivizing (or even mandating) two-way exchanges, staff secondments, and regular information flows between policymakers and the industry’s changemakers.

Such an approach would ensure alignment between regulation and innovation. It would also better equip governments to use these powerful AI technologies themselves, exploiting vast datasets to guide complex policy decisions and to deliver significant efficiencies across public services.

In December 2022, the nascent EU-US Trade and Technology Council agreed on a joint roadmap. However, both sides remain far apart on how they will ultimately govern and regulate the most “high-risk” AI technology. The Council appears more focused on market competition than on grappling with AI’s transformative implications. Yet these implications are likely to manifest very quickly across every branch of government, the economy, and society.

Western governments must dial up their response to the AI revolution. Those unable or unwilling may well find themselves overwhelmed by the relentless rise of the machines.

Joel Hickman is a Non-resident Fellow with the Transatlantic Defense and Security Program at the Center for European Policy Analysis (CEPA). He was previously a British diplomat posted to Pakistan, where he led the UK government’s serious organized crime strategy across South Asia. Before that, he worked as a senior policy advisor in the UK Home Office, Ministry of Defence, and Foreign, Commonwealth, and Development Office across a range of national security issues.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
