There is no perfect solution, but the U.S. government is critical to keeping social media safe.
This past week, in issuing its first decisions on cases of removed content, Facebook’s new Oversight Board overruled the company four times—in public. At a time when social media platforms have been removing accounts and content without much transparency, it was a significant step in the right direction.
Until now, Facebook, Twitter, and others have made decisions about what or who has a place on their platforms as private companies often do: behind closed doors. The result has been a flurry of content, app, or account removals in an opaque and disjointed attempt to reduce “harmful” information online.
While companies should assess the content on their platforms, banning one account (or one app), even for cause, won’t solve the problem. In fact, as Russian opposition leader Alexei Navalny has noted, such moves may make things worse for democracy advocates while risking the appearance of inconsistency as corporate leaders make private decisions affecting public speech.
Consider what’s occurred in just the past few weeks. Twitter banished now-former U.S. President Donald Trump from its platform following the riot he inspired at the Capitol. Other platforms followed, banning Trump and Parler, the social media platform that had become a hub for extremist groups. Many Americans applauded these moves. An anti-extremism group then filed a lawsuit against Apple demanding that it ban the secure-messaging app Telegram for providing a platform for right-wing extremism. In Russia, however, Telegram is one of the main platforms that democratic dissidents use to evade censorship. Without it and other social media platforms, movements led by Navalny and other pro-democracy activists would have failed.
Democracies should instead fight back against extremism and disinformation online using democratic methods: clear rules to promote authenticity and transparency while restricting extremist or violent content. Companies need guidelines, citizens need privacy protections, and governments need to assess in accountable ways what constitutes fair access, grounds for removal, and the proper way to deal with extremist content. The U.S. government can lead this effort. There’s no perfect solution. But a set of measures, however flawed, can make a difference.
Let’s start with the basics. Twitter’s ban on Trump was not “censorship.” Private companies can choose who gets access to their platforms. But social media companies are not newspapers. They claim to be, more or less, bulletin boards that provide free access for anybody to post a message. In 1996, Section 230 of the Communications Decency Act enshrined this “bulletin board” model in law. The resulting model of the virtual public square is what has set social media companies up for such profitable success. But that model is too open to abuse. Section 230 has never been reformed, so there is no standard defining the responsibilities of a modern social media platform. That needs to change.
Content restrictions have a limited place. Certain content is illegal in the United States, and First Amendment protections are not absolute. Child pornography and terrorism-related content such as beheading videos are already restricted online. There’s a good case for restricting calls for violence. But what about the broader category of harmful content such as anti-vaccine disinformation, calls for “wild” demonstrations with violence only as subtext, fake allegations of election fraud, and so on? Once an account is removed from a platform for content that falls within this more amorphous category, new ones will emerge to take its place, using coded language to say the same sorts of things. Or they’ll operate on smaller, ungoverned platforms, creating a sort of black market of platforms for extremist groups.
Enforced content moderation, where governments or companies decide which words or phrases are acceptable or harmful, can be a slippery slope. Instead of content removal, government regulation could include mandates for platforms to provide context for viral content. Twitter, YouTube, and others are already doing this with links to reputable sources or warning labels, but the policies are inconsistent across the platforms and therefore confusing.
Ultimately, transparency should be the guiding principle for regulatory actions by government and voluntary actions by social media platforms. The U.S. government should work closely with platforms and advocacy groups to establish public guidelines for what constitutes harmful content. Platforms should be mandated to establish clear, consistent, and publicly accessible review policies for taking down accounts, apps, or content. Democratic governments should collaborate with companies and independent groups to establish technically and legally sound rules with authority backed up by law, as is the case with broadcasting. Europe and the United States should make such a common agenda a top priority and aim to enlist other democracies in the effort. The Paris Call for Trust and Security in Cyberspace, which already has wide support, could be the platform to carry out this agenda.
Authenticity of content should be the second pillar of good policy. Inauthentic accounts that assume fake identities or impersonate real people with malicious intent, as well as automated “bot” accounts, have no place in the information ecosystem. Twitter, Facebook, and some others have made inauthentic behavior a basis for account removal, but for smaller apps and platforms, the cost of identifying such content can be prohibitive. This is where independent researchers can help: Groups such as the Atlantic Council’s Digital Forensic Research Lab, the Brussels-based EU DisinfoLab, and the U.K.-based Bellingcat have identified Russian, Iranian, and Chinese disinformation networks as well as QAnon groups and accounts. Social media platforms with limited resources can enlist the assistance of these independent groups by providing them with greater access to their data. Again, the U.S. government has a role to play here in establishing standards for what constitutes removable behavior, or even rules for identification online.
The regulatory standards we suggest will be complicated to implement and imperfect in their execution. But surrendering the virtual public sphere to private power has been tried and found wanting. We can learn as we go.
U.S. President Franklin Delano Roosevelt, whose portrait now hangs in Joe Biden’s Oval Office, believed in the power of government to make capitalism work for a democracy. He put that belief into practice through regulation, an approach so successful that the reformed American democracy he presided over recovered, prospered, and outlasted the fascist and communist competition it then faced. Let’s recall that tradition and do what’s needed for our time.
Daniel Fried is the Weiser Family distinguished fellow at the Atlantic Council. He was the coordinator for sanctions policy during the Obama administration, assistant secretary of State for Europe and Eurasia during the Bush administration, and senior director at the National Security Council for the Clinton and Bush administrations. He also served as ambassador to Poland during the Clinton administration. Follow him on Twitter @AmbDanFried.
Alina Polyakova is the president and CEO of the Center for European Policy Analysis. She was previously the founding director for global democracy and emerging technology at the Brookings Institution. She is also professor of European studies at Johns Hopkins School of Advanced International Studies. Follow her on Twitter @apolyakova.
Photo: Witnesses Amazon CEO Jeff Bezos, Facebook CEO Mark Zuckerberg, Google CEO Sundar Pichai, and Apple CEO Tim Cook are sworn in before a hearing of the House Judiciary Subcommittee on Antitrust, Commercial and Administrative Law on “Online Platforms and Market Power,” in the Rayburn House Office Building on Capitol Hill, in Washington, U.S., July 29, 2020. Credit: Mandel Ngan/Pool via REUTERS
Daniel Fried and Alina Polyakova
February 1, 2021
Bandwidth is an online journal covering crucial topics surrounding transatlantic cooperation on tech policy. All opinions are those of the author and do not necessarily represent the position or views of the institutions they represent or the Center for European Policy Analysis.