The Open Internet turned out to be an illusion. Tech giants created a commercial space, not a public commons – more like a shopping mall than a public park. In this “space,” public discourse is owned by corporations accountable to a different set of interests than those of democratically elected governments.

In countries such as Myanmar, Sri Lanka, India, and Ethiopia, social media has had harmful, and in some cases tragic, side effects. The industry’s response was far from proportional to the genocide and communal violence that occurred. Since these dangers became visible, the platforms have made growing investments in human and algorithmic content moderation. But this is only a baseline – harm mitigation akin to applying a bandage to a gaping wound.

Until recently, the largest platforms were purely reactive; they relied on users’ abuse reports to decide what content could and couldn’t remain online. Without “real world” context, the vast majority of those reports were not actionable. Companies could not keep up with the volume, and the public became inured to the lack of response. There was never going to be the sort of “customer service” that people in many countries had come to expect from consumer-facing corporations.

Governments are also guilty. They started splintering the Internet long ago. China’s Great Firewall began in the late 1990s. In 2010, representatives of countries including Syria and Russia approached the UN with the idea of extending national sovereignty onto the Internet. Since then, we’ve seen China’s development of digital infrastructure in African countries and Russia’s growing, now near-complete, online censorship.

Concerns about digital sovereignty have now extended to democracies, with growing regulatory pressure from Europe, the creation of Internet-focused statutory bodies such as Australia’s Office of the eSafety Commissioner and New Zealand’s Netsafe, and a law aimed at digital sovereignty in the works in India.

Tech companies are right to be alarmed. Meta’s president of global affairs, Nick Clegg, recently published a long article about how the misunderstanding of data is “fracturing the Internet.” Unfortunately, Clegg ignores the negative consequences of open data-sharing, such as the unprecedented access that disinformation creators have to voters in many societies. Elected governments aren’t allowed to ignore such large-scale problems. Unlike publicly traded corporations, they are expected to acknowledge harm, protect their citizens from it, and hold violators accountable. They cannot surrender this responsibility to private industry, which is accountable only to shareholders.

Industry – particularly Meta – has invested billions of dollars in new safety features and measures, even in experiments in “platform democracy.” This is all good, but not enough. As advisers to platforms, we’ve seen too many examples of large companies prioritizing user engagement over safety, and of smaller companies, such as apps and data brokers, indiscriminately selling users’ data for their own profit. This is the business model, after all. Safety is not a profit center.

We understand the value of free-flowing data, but its downsides must be addressed. What can Meta and other companies do to reverse this downward spiral and rebuild credibility and support for a unified, open Internet wherever possible?

Here’s what we suggest:

  • Acknowledge the negative as well as the positive effects the open Internet has had on individuals and societies – a necessary first step toward improving things.
  • With independent experts, review the internal and external ecosystem of user harm mitigation and care, and identify where internal systems, such as content moderation, fall short in each of your markets.
  • As an industry, identify, work with, and fund external providers of user care that complement industry efforts in every country, or at least regionally – providers such as Europe’s Internet helplines and the Meta-supported StopNCII.org, now operating in India as well as the UK.
  • In addition to responding to statutory requirements around the world, as we know many companies are doing, listen to representative samples of users in every market where your services are used – for both product and policy development (more efforts along the lines of Meta’s experiment this year are needed).
  • Even before building safety by design into new products and services, conduct a thorough analysis of potential unintended consequences to protect user wellbeing.
  • As for the business model: obviously, there is nothing wrong with corporate profit. But acknowledge that profit is currently prioritized over safety, and co-create with stakeholder groups – including users, researchers, and policymakers – innovative ways to measure, strike, and maintain a balance between profit and user protection.
  • With those stakeholder groups, openly and collaboratively consider how to structure your companies to achieve that balance, and demonstrate to shareholders how user wellbeing supports profitability.

If the tech industry wants to keep the Internet as open as possible for the good of all, it must not only acknowledge and air all the problems contributing to its splintering. It must also demonstrate to us – stakeholders in democracy worldwide – that the benefits outweigh the risks. The industry must step up to the power it has for good and help reduce the harms, substantively, collaboratively, and for all to see.

Dr. Ranjana Kumari is the founder and director of India’s Center for Social Research. Anne Collier is the founder and executive director of The Net Safety Collaborative in the US. Both have advised tech companies, including Meta, on safety issues for many years.