Social media shapes and drives public debate. In some cases this has brought clear societal benefit, as when women were able to mobilize against sexual harassment and drive a global movement (#MeToo). But social media platforms have too often become tools of authoritarian states and their proxies, used to target democracies. Nor are hostile information operations limited to elections: recent coordinated attacks targeted, for example, Belgian government decisions to limit the access of high-risk 5G suppliers such as Huawei to its network.
Platforms have come under intense criticism for the way they handle content ranging from illegal hate speech to disinformation and misinformation. As a result, parliaments, technology companies and civil society have devoted considerable time and resources to limiting the challenges such systems pose for democracies everywhere. European parliaments, in Germany and France for instance, have adopted legal texts to tackle content-related issues (the French law against the manipulation of information, for example, or the German Network Enforcement Act, known as NetzDG). In parallel, civil society organizations have built competencies to address some of the problems related to content circulation, with associations offering helplines for children who are victims of online hatred, private companies developing tools to monitor online activity, and media organizations building fact-checking services.
Given this profusion of activity, a strong coordination plan between the European Union and the United States, focusing specifically on strengthening cooperation between platforms, civil society and governments on questions related to content circulation, would be very welcome. This transatlantic partnership would complement the Paris Call for Trust and Security in Cyberspace and should have two goals.
The first is to foster international research on online content and aggregate the knowledge it produces to make it actionable. In the United States, the 2016 presidential election provided evidence of foreign interference, and agencies such as the FBI and the CIA have since built competencies to monitor foreign actors’ online activity. At the same time, many universities have financed research programs on online manipulation. This dynamic ecosystem has allowed initiatives such as the Election Integrity Partnership (EIP) to emerge, bringing together think tanks, private organizations and universities; the EIP played a central role during the 2020 presidential election.
In European countries such as France, this ecosystem is still growing. Private companies are getting better at tracking coordinated behavior, sometimes in partnership with governments on security issues. In September, France will create an agency to monitor information manipulation, led by the prime minister’s national security office (the SGDSN). On Tuesday June 8, the presidential party, La République en marche, made a series of recommendations on dealing with political interference, indicating the importance of the issue ahead of the 2022 presidential campaign. Over recent years, the EU has also created organizations to monitor online content, such as the three task forces of the European External Action Service, among them the East StratCom Task Force, which monitors Russian online activity in Eastern Europe.
Creating a framework to aggregate the information collected by services on both sides of the Atlantic, including these EU and French organizations and initiatives such as the EIP, is vital to improving the understanding of foreign operations and lessening their impact on politics in the West. The EIP has shown that threats can be identified in real time and their risks mitigated. Such a framework could also provide the infrastructure to run initiatives such as the G7 Rapid Response Mechanism on Russian disinformation, which is still struggling to get off the ground.
The second objective of the partnership should be to create regulatory standards. The European Parliament is currently reviewing a text proposed by the European Commission, the Digital Services Act (DSA), which aims to make online intermediaries, including content providers, accountable for some of the problems arising from their services. The text would require very large platforms (those with more than 45 million users) to evaluate the risks that misuse of their services poses to fundamental rights, as well as the impact such misuse could have on public health, public debate or electoral processes. Very large platforms would then be required to take action to mitigate those risks. Given that most of the big platforms are American, the United States’ participation in discussions surrounding the regulation is paramount if the legislation is to have international consequences.
The regulation would give national regulatory bodies the authority to audit private companies and examine the systems they use to monitor such risks. This is a crucial step in making digital platforms more transparent, which has been a leitmotif of digital regulators for many years. It is also essential to engaging private actors in the thorny process of making the internet safer.
However, such measures will only succeed if they strike the right balance between control and the freedom to innovate, and if they are applied globally. The question, then, is whether the DSA’s adoption can extend beyond the EU, as happened with other texts such as the GDPR. Efforts must be undertaken on both sides of the Atlantic to develop these shared regulatory standards, with the aim of protecting citizens of democracies everywhere.