Since the sudden emergence of ChatGPT, AI has jumped from being a technical topic to front-page news. AI-powered large language models analyze vast quantities of data and generate human-sounding text, promising to improve our lives in ways ranging from movie recommendations to sophisticated medical diagnoses. But AI also presents dangers, reinforcing discrimination and spreading disinformation. This essay maps global efforts to respond to the AI risk challenge. While we recognize the importance of frameworks in Japan, China, and other countries, we focus on efforts in the United States and the European Union.
Broad agreement exists on the need to ensure that AI products remain safe. Disagreement focuses on methods to achieve this aim. Europe is pushing ahead with hard laws. The US opts for voluntary commitments.
Goals
A consensus among democracies is emerging that AI must be “trustworthy.”
According to the EU, trustworthy AI should be:
- Lawful, ensuring compliance with all applicable laws and regulations.
- Ethical, demonstrating respect for, and ensuring adherence to, ethical principles and values.
- Robust, both from a technical and social perspective, since, even with good intentions, AI systems can cause unintentional harm.
According to the US, trustworthy AI must be:
- Valid and reliable, safe, secure and resilient, accountable and transparent, explainable and interpretable, privacy-enhanced, and fair, with harmful biases managed.
At the EU-US Trade and Technology Council, EU and US policymakers agreed on 65 terms as part of a Joint Roadmap for Trustworthy AI. These include “AI bias,” “discrimination,” and “safety.” But since the two sides differ on how to achieve trustworthiness, no concrete commitments have emerged from the transatlantic council.
OECD Principles
International organizations such as the Paris-based OECD are attempting to fill the gap. The OECD Principles on Artificial Intelligence aim to ensure AI systems are robust, safe, fair, and, to use the keyword in vogue, trustworthy.
The OECD’s five guiding principles for trustworthy AI systems are:
- Benefit people and the planet.
- Be designed to respect the rule of law, human rights, democratic values, and diversity.
- Have transparency and responsible disclosure around systems.
- Function in a robust, secure, and safe way.
- Hold those developing, deploying, or operating AI systems accountable.
Although the OECD represents only wealthy democracies, previous OECD initiatives have helped set global standards. For example, the OECD rules on government access to personal data helped pave the way for the recent EU-US Data Privacy Framework, which encourages data flows across the Atlantic.
G7 Summit
The G7 Summit in Hiroshima in May 2023 agreed to work toward aligning international AI standards. In their communiqué, the leaders of the seven leading industrialized democracies agreed on a risk-based approach to the technology, hoping to “preserve an open and enabling environment” for its development, while acknowledging differences of vision across member states.
European Union
EU Artificial Intelligence Act
The Artificial Intelligence Act (AI Act) aims to set the benchmark for AI legislation. Unlike the voluntary approaches favored in the US, it is hard law.
The European Commission, the Council of the EU, and the European Parliament are negotiating a final text in the trilogue stage of the EU’s legislative process. They hope to finish before the 2024 European Parliament elections.
In its present version, the AI Act sorts systems into four categories by level of risk: minimal, limited, high, and unacceptable. Requirements and limitations vary by category, with high-risk systems carrying the heaviest compliance burden and unacceptable-risk systems banned outright.
The original Commission proposal contained only a narrow set of high-risk designations. After ChatGPT appeared, parliamentarians expanded the list. The Act also addresses open- and closed-source systems, establishing which types of foundation models, such as large language models, can be released as open source and which must remain closed.
Enforcement remains one of the key unresolved issues. Parliament proposed centralizing oversight in a single national surveillance authority per member state, overseen by an EU office. The Council and the European Commission want to allow member states to create as many surveillance authorities as they choose, without overarching EU coordination.
Small businesses and startups worry the legislation will impose too heavy a compliance burden. Other critics fear the law could prove ineffective: by sorting AI systems into rigid risk classes, the AI Act could fail to reduce risks from the newest foundation models, which are capable of diverse tasks.
United States
Although no hard law on AI is close to completion in the US, the Biden administration and Congress have put forward several non-binding proposals. These include the NIST Artificial Intelligence Risk Management Framework, Congress’s SAFE Innovation Framework, the White House Blueprint for an AI Bill of Rights, targeted at automated systems, and the White House’s Private Sector Voluntary AI Commitments.
While enforcement is limited, the Federal Trade Commission can hold AI companies accountable for deceptive or unfair practices.
NIST Risk Management Framework
In January 2023, the National Institute of Standards and Technology (NIST) released its “AI Risk Management Framework,” the first voluntary US roadmap for creating, monitoring, and using AI. The framework aims to balance the preservation of individual rights with the promotion of innovation. It provides criteria for “trustworthy AI,” breaking the concept down into systems that are “valid and reliable, safe, fair and bias managed, secure and resilient, accountable and transparent, explainable and interpretable, and privacy enhanced.”
While Europe’s AI Act distinguishes four categories of risk (unacceptable, high, limited, and minimal), the RMF assesses risk according to its likelihood and context, making the analysis situation-specific. NIST has published a Playbook to help companies implement the framework, which remains entirely voluntary under NIST’s mandate.
SAFE Innovation Framework
Senate Majority Leader Chuck Schumer’s (D-NY) SAFE Innovation Framework stands for “Security, Accountability, protections for Foundations, and Explainability.” It proposes holding “insight forums” to bring AI developers and executives together with lawmakers. In presenting the framework, Senator Schumer emphasized that innovation remains the north star of AI development.
Section 230
Senator Josh Hawley (R-MO) introduced legislation to “Protect consumers and deny AI companies immunity from Section 230.” Although recently challenged, Section 230 designates internet platforms as distributors rather than publishers of the content they host, protecting them from most liability for that content. Hawley’s bill would strip generative AI companies of Section 230 immunity in civil claims. Critics argue that excluding AI companies from Section 230 coverage would hurt innovation in generative AI, as companies would face a high risk of civil litigation.
Global Technology Leadership Act
Senators Mark Warner (D-VA), Todd Young (R-IN), and Michael Bennet (D-CO) have proposed a Global Technology Leadership Act, which would establish an Office of Global Competition Analysis to assess and monitor how the US compares to other countries in key emerging technologies, including AI. The proposal does not address the broader question of how to regulate AI.
Private Sector Commitments
At the White House in July 2023, seven leading US AI companies (Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI) agreed to voluntary commitments to ensure the safe development of AI.
Eduardo Castellet Nogués is the Program Assistant for CEPA’s Digital Innovation Initiative.
Marielle Devos was a Summer Intern for CEPA’s Executive Office.
Bandwidth is CEPA’s online journal dedicated to advancing transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.