Algorithms are increasingly used for decision-making. Yet as they penetrate everyday life, individuals, governments, and researchers alike have worryingly little insight into how they work.
This article is a joint contribution by the Center for European Policy Analysis and Institut Montaigne.
Only now are governments in Europe and the United States paying attention to the potential consequences of algorithmic decision-making, such as inherent bias. Unfortunately, there is little transatlantic collaboration on how to inject greater transparency into algorithmic decision-making without infringing on developers’ intellectual property rights. As the European Union (EU) prepares to release its strategy on artificial intelligence at the end of this month (April), greater collaboration is urgently needed to avoid conflicting regulatory models and compliance frameworks.
In Europe, an active debate on digital regulation broadly, and algorithmic transparency in particular, is under way between the European Union and its member states (perhaps, as some Europeans would say, the EU is taking the lead on regulation to compensate for falling behind in innovation). In the United States, by contrast, tech regulation has languished in Congress, as the stalled Algorithmic Accountability Act of 2019 illustrates. Among EU member states, France has made algorithmic transparency a priority.
The European Parliament is currently debating the European Commission’s proposals for the Digital Markets Act (DMA) and the Digital Services Act (DSA), both of which raise questions about the accountability and transparency of digital systems. Even before these EU-level initiatives are finalized (adoption is anticipated before 2023), countries such as France are already building digital regulatory policies in anticipation. In a recent bill, French legislators included several amendments drawn from the DSA’s provisions on information regulation. In parallel, regulators such as the Superior Audiovisual Council (CSA) are acquiring new powers to scrutinize platforms.
Such national agencies draw on the work of the newly created Expert Center on Digital Regulation (PEReN), which provides expertise and technical support to government bodies on data analysis, computer programs, algorithmic processing, and the auditing of algorithms used by digital platforms. PEReN also offers technical input to evaluations, investigations, and studies of digital platforms.
So what do private- and public-sector actors on the other side of the Atlantic think? CEPA and Institut Montaigne organized a discussion between American and French experts to foster an exchange of ideas and encourage transatlantic partnerships on these issues.
Both sides of the Atlantic agree on some points. For example, there is still no baseline technological understanding of how algorithms work: expertise varies dramatically among researchers, policymakers, and industry. Knowledge of how algorithms and artificial intelligence function is particularly limited among government officials. This knowledge gap breeds a sense of hopelessness about the future of policy among technologists and tech firms, who want to build a strong and thriving innovation ecosystem. To them, regulatory proposals, especially on ethics, often fail to address core issues of technological capability and collaboration.
Researchers are trying to bridge the gap by advocating for greater transparency and encouraging private companies to do more to address the potentially harmful impact of their systems on human rights.
But the overriding concern remains a lack of knowledge about how such systems work. A baseline of expertise is critical not only for good policy; it also creates mutual trust. Trust is a crucial element in the use and regulation of technology, and it is deeply intertwined with transparency (one requires the other). International collaboration, especially between the United States and the EU, is needed to build shared standards and rules of the road, but residual suspicion among citizens, companies, governments, and researchers continues to undermine collaborative efforts.
Trust cannot emerge in a vacuum. Greater transparency is one solution, but not the only one. Some have proposed auditing the tech companies that develop and use potentially harmful algorithms (a measure also suggested in Institut Montaigne’s 2020 report, Algorithms: Please Mind the Bias!). But now that algorithms are embedded in everything from search results to financial decisions, identifying which companies, or even public institutions, may be liable will be difficult, if not impossible. Another challenge is developing a properly tested audit framework applicable to a wide variety of technologies. Checking an algorithm’s inputs and outputs at different stages of its development would introduce a form of traceability and testability through a series of consistent tests, but it would also inevitably lengthen the path from research to deployment. Given existing knowledge gaps, establishing a proper vetting and review process will prove difficult.
But one thing is clear: without a rapid increase in collaboration among governments, researchers, and industry, Europe and the United States will fall behind in the race for tech innovation. Sharing and building on the French and U.S. experience of digital regulation is a good starting point, since both countries face similar challenges in building greater trust.
April 7, 2021