When electricity and steam were first introduced, it took a while to see the impact of their widespread adoption. Regulators didn’t rush. They waited until the dangers of specific applications became visible – allowing the new technologies to boost productivity, which in turn increased income, leisure, and life expectancy.
Like steam or electricity, AI represents the next wave of general-purpose technology. It offers the potential to lift the last two decades' lackluster productivity growth. Pre-emptive and premature legislation could forfeit those benefits. It could also cost lives and perpetuate discrimination, given that existing human decision-making – unaided by AI – is far from perfect.
Steam engines were initially inefficient. It took time for complementary innovations such as factories and railways to be developed and reveal steam’s true benefits. Instead of passing widespread controls, regulators focused on confronting specific problems, many of which could not have been initially predicted. They regulated rail safety, for example, and such regulation remains relevant with the transition from steam to diesel and electric-powered rail.
The same holds true for AI. Over the past year, we saw the release of general-purpose ‘foundational’ AI models and a proliferation of open-source variants. We should use 2024 as an opportunity to reflect.
The key is to avoid the potential trap of attributing existing (and new) harms to a disruptive technology. Competitors invent or exaggerate dangers. During the “war of the currents” in the late 1880s and early 1890s, Thomas Edison and George Westinghouse fought over adopting alternating or direct currents for electricity. The rivalry included not only technical arguments but also a series of public relations stunts and demonstrations aimed at showcasing the safety of one system over the other.
Similarly, in relation to the internet, sci-fi author Douglas Adams wrote in 1999 that “newsreaders still feel it is worth a special and rather worrying mention if, for instance, a crime was planned by people ‘over the Internet.’” No one bothered to mention “when criminals use the telephone or the M4 or discuss their dastardly plans ‘over a cup of tea.’”
Capable technologies can, and will, be used for good or bad. It is likely that the greater the capability for good, the greater the capability for harm. A sharp knife is both more useful and more dangerous than a blunt knife. Indeed, a safe knife would be useless.
General-purpose technologies are most useful when they are widely accessible and open. Although public libraries met with opposition from some when first proposed, few would argue today that it was a mistake to ‘open source’ knowledge. Instead of ‘licensing’ libraries and limiting access, we focused on promoting literacy and mitigating any harms that arise from accessible knowledge.
A neutral stance is essential to allow progress. Regulation should, to the extent feasible, be technology agnostic, for two reasons.
First, it allows the most efficient approach – human, machine, or a combination – to be chosen. If new regulation applied only to AI systems, the adoption of efficient AI systems would be discouraged, limiting the potential productivity gains.
Second, technology agnosticism allows the safer option to be chosen. If new regulation applied only to AI systems, the adoption of AI systems that carry less risk than the human alternative would be discouraged.
Paradoxically, regulation based on an assessment of high-risk categories – as proposed under the EU’s recently adopted AI Act – could increase risk relative to a permissive approach.
Regulation should target specific harms associated with specific applications. AI could facilitate increased surveillance and ‘social scoring’ of citizens. It is right that we debate and decide on limits. However, the rules should apply irrespective of whether databases are managed by humans or augmented by AI. If AI enables mass surveillance at a low cost, we do not need an AI law. What we may need is a surveillance law.
Otherwise, a danger exists of impeding AI’s beneficial application. If only authorized individuals were permitted to prescribe medicines, we could never benefit from allowing ‘intelligent’ systems to improve access and lower costs, or to improve safety. AI systems might be permitted as an alternative, required as a complement, or required instead of expert humans – depending on the relative risks.
Just as steam, electricity, and computing were not regulated (though some applications were), we should let AI flourish. We should focus on removing undue barriers to adoption and ensuring that existing rules protecting safety and preventing discrimination are enforced – irrespective of whether AI is used or not.
Brian Williamson is a partner at Communications Chambers. He has advised clients on telecom and technology policy and strategy, and has developed new ways of approaching policy problems, including the anchor product regulation concept adopted in Europe. Brian has an MSc in Economics from the London School of Economics and a BSc in Physics from the University of Auckland.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions expressed on Bandwidth are those of the author alone and may not represent those of the institutions they represent or the Center for European Policy Analysis. CEPA maintains a strict intellectual independence policy across all its projects and publications.
