Each year, more than a million people die around the world in motor vehicle accidents, with between 20 and 50 million suffering non-fatal injuries. Alphabet’s Waymo driverless cars are 6.7 times less likely than human drivers to be involved in a crash resulting in an injury and 2.3 times less likely to be in a police-reported crash. The company simulated the history of fatal human driver accidents in Arizona over almost a decade and found that replacing either vehicle in a two-car collision with an autonomous vehicle would eliminate most deaths.

Medicine is another area where AI will offer safety benefits. AI has already helped discover a new class of antibiotics to kill drug-resistant bacteria, and Google DeepMind hopes to reduce the time needed to discover new drugs from an average of five years to two. AI-powered diagnoses will improve on human ones.

The lesson is clear: policymakers should aim for technological neutrality, irrespective of whether silicon or biological neurons are used. Yet governments are moving instead to place specific requirements on AI systems. The EU AI Act includes a range of mandatory compliance requirements for ‘high-risk’ AI systems, which will slow time-to-market and may be particularly burdensome for smaller AI companies and open-source models, reducing competition and innovation.

This is dangerous. High-risk applications are already regulated. By doubling down on AI ‘safety,’ we will discourage and delay the application of AI in areas where it could improve safety. That would result in death and injury. So, to promote safety, we should not discriminate against the use of AI.

Just like new human drivers, the ‘licence’ for an autonomous ‘driver’ might come with restrictions on when and where it is allowed to drive, until it has proven itself. Autonomous vehicles offer improved access to mobility, both for those who are unable to drive and those who cannot afford the mobility they would like. Autonomy is not just about safety, but affordability and access.


No one can doubt that speedy drug discovery, approval and use of new pharmaceuticals would bring huge benefits for humanity. A delay due to AI regulation will involve costs. There’s no need for special ‘AI drug safety’ since ‘drug safety’ requirements, including clinical trials, apply irrespective of how a candidate drug is discovered.

AI will help improve overall medical treatment. Clinicians will consult AI, just as they consult colleagues and draw on other sources of information. Patients too will consult AI, say by using it to frame questions they wish to ask their doctor.

These developments should be permitted without impediment. What is more complex is whether, in well-defined circumstances, an AI-based ‘opinion’ might be required as a complement to clinical decision-making, and whether AI would be permitted to take a decision to prescribe certain medicines without human intervention.

The use of AI in such situations may increase safety, while lowering costs and improving access to medical care. Delay can be deadly. An eight-week delay in breast cancer surgery increases the risk of death by 17%, according to a recent study. A 12-week delay increases the risk by 26%. If AI can lower costs and improve access, the result will also be improved safety overall.

Rather than ‘doubling down’ by regulating specifically for AI, we should ensure that the same existing safety standards apply irrespective of whether AI is used or not. We should not delay safety improvements powered by AI. Doing so will cost lives.

Brian Williamson is a partner at Communications Chamber. Brian has advised clients in relation to telecom and technology policy and strategy. He has developed new ways of approaching policy problems, including the anchor product regulation concept adopted in Europe. Brian has an MSc in Economics from the London School of Economics and a BSc in Physics from the University of Auckland.

Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.
