“How many rocks should I eat, Google?” The search engine answered, “Eat one small rock per day.” That’s not intelligent. Google’s new AI-powered Overviews also repeated conspiracy theories about former President Barack Obama, identifying him, falsely, as the United States’ first Muslim President. Obama is a Christian. These startling errors led Google to vow improvements.
The debacle underlines the challenge of balancing AI innovation with safety. Companies face commercial and competitive pressure to rush to market, cutting corners and rolling out these technologies before testing them properly. Governments are moving to rein them in but act more slowly than private companies. In the meantime, technology companies can adopt remedies of their own. One way to minimize and correct errors is to engage and empower users.
AI systems make things up and exhibit biases. Google’s AI Overviews couldn’t distinguish credible sources from unreliable ones; the advice to eat a rock came from the satirical publication The Onion. OpenAI recently began training its next-generation artificial intelligence software just after several senior safety researchers quit, accusing the company of prioritizing profits and growth and promoting a culture of “recklessness and secrecy.” OpenAI then hastily appointed a new safety team.
The fast pace of innovation and the promise of vast riches incentivize speed. Many technology companies bring products to the public quickly, then identify and address problems afterward. Instead of catching bugs before release, companies offload the job of finding them onto users, fixing them when users complain. Facebook’s pre-2014 motto, “Move fast and break things,” captures a common industry attitude.
Governments are responding, though slowly. The EU’s AI Act, passed at the end of May, will take effect only in stages, starting at the end of 2024 and running through 2027. It takes time to build and organize the resources needed to enforce regulations, and companies also need time to comply. In the meantime, those same companies will keep releasing products with no regulations in force.
The US isn’t even at the regulatory stage. The Biden administration has rolled out an AI Bill of Rights, but this “blueprint” only lays out recommendations; it isn’t legally binding. The administration has also clarified how existing laws apply to AI. The Justice Department, for example, can hold companies accountable for discrimination even when that discrimination is the result of AI. A US AI Act would require action from Congress.
Another option is to turn to the courts. Lawsuits over AI copyright and competition will shape how companies develop products. Newspapers have sued OpenAI and Microsoft, alleging that the technology companies have been “purloining millions” of copyrighted news articles without permission or payment to train their artificial intelligence chatbots. US, UK, and EU regulators are also investigating AI leaders, including OpenAI, Microsoft, and Nvidia, for antitrust violations.
But US courts, including the Supreme Court, have so far avoided sweeping rulings on key Internet issues.
Without effective regulation, technology companies must act on their own by engaging users as content moderators. In October 2023, YouTube launched a program to fight disinformation by making money available for training and development in short-form video content. Google and OpenAI should similarly allow users to flag bad results and blind spots. Sites such as Wikipedia and Reddit succeed by letting users update information and take part in moderation; both rely on human moderators and algorithmic tools that flag content being maliciously edited.
Google’s mea culpa blog post addresses some of these concerns. The company now restricts “nonsensical queries” and certain sensitive areas, such as politics and public health. That is a useful stopgap, but Google shouldn’t have rolled out the product without adequate testing in the first place. Google Search remains the Internet’s most powerful search engine.
AI safety is challenging. The technology, still in its infancy, is accelerating. Safety requirements and regulations are failing to keep up. Expect more hallucinations and mea culpas.
Joshua Stein recently completed a postdoctoral fellowship at the Georgetown Institute for the Study of Markets and Ethics. His work focuses on ethics, technology, and economics.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.