From ELIZA, created at MIT in the mid-1960s, to popular modern tools like Siri, Alexa, and Cortana, the experience of holding conversations with chatbots has changed little. While newer AI chatbots are more flexible than ELIZA, interactions still feel stilted and responses feel canned.
This is now changing. ChatGPT and other large language models can hold genuinely bidirectional conversations, keeping track of context over time and even probing the user for elaboration. A new age of “conversational computing” is emerging. In the near future, we will all be talking to our computers, and our computers will be talking back — intelligently.
This new technology creates new risks of disinformation and manipulation. Regulators must address conversational AI systems that will soon be deployed at scale to pursue targeted influence objectives. Conversational AI is personalized, interactive, and adapted to individual users. By accessing stored profile data and probing the user for additional information, such a system can adjust in real time to maximize its persuasive impact on each targeted user. Call it the AI Manipulation Problem.
Although the AI Manipulation Problem may sound like an abstract series of computational steps, it follows a familiar scenario. When a human salesperson wants to influence a customer, they don’t hand over a document with a fixed set of arguments. The salesperson engages the customer in real-time conversation, adjusting tactics when confronted by resistance or hesitation. Through this interactive process of probing and adjusting, the salesperson persuades.
Conversational AI systems will be able to do the same, only with unfair advantages. Their software could access personal data about the target’s interests, values, personality, and background. It could craft a friendly dialogue designed to draw the user into conversation. Once the user is engaged, the AI system could press for responses that reveal sentiments and sensibilities.
In the wrong hands, this ability could be dangerous. Conversational AI systems could be deployed to influence individual users through custom-crafted dialogue. They could adjust their arguments in real time to counter any resistance or hesitation expressed by the user, continuously adapting their tactics for maximum impact. An AI system could learn in real time whether a target is more easily swayed by factual data, emotional appeals, or plays on their insecurities. If allowed to store a record of prior interactions, it could develop “persuasion profiles” on a user-by-user basis.
Within a few years, computers will become better and better at manipulating individual users, learning how to draw them into conversation, guide them to accept new ideas, and convince them to buy things they don’t need, believe things that are untrue, or even support extreme ideologies they would normally reject. This ability creates unique risks.
Unfortunately, regulators have so far overlooked these risks. Europe’s pathbreaking AI Act addresses some of these issues only peripherally, for example by limiting real-time biometric sensing. But it makes no direct mention of conversational AI or its unique challenges.
That’s a mistake. Admittedly, it is difficult to regulate such a fast-moving technology. At a minimum, regulators should ban AI systems from using real-time human feedback for persuasive purposes, barring them from exploiting the verbal and emotional reactions of targeted users. It’s crucial that we understand the real risks and implement guardrails.
Louis Rosenberg is a pioneer in the fields of augmented reality, virtual reality, and artificial intelligence. He earned his Ph.D. from Stanford, has been awarded over 300 patents for VR, AR, and AI technologies, and founded a number of successful companies, including Unanimous AI, Immersion Corporation, Microscribe, and Outland Research.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.