Facebook whistleblower Frances Haugen is the information source policymakers have been waiting for: An expert insider, armed with data and documentation, who can explain what’s been going on inside a company that helps shape public discourse around the world.

In the US, though, pundits and some policymakers have reduced Haugen’s nuanced and fact-driven message to a simple one.  “Platforms aren’t responsible for what their users say,” the theory goes, “but the law should hold them responsible for messages that they themselves choose to amplify.”

This framing provides a seemingly simple hook for regulating algorithmic amplification—the sequence of posts in the newsfeed on platforms such as Twitter or Facebook; the recommended items on platforms such as YouTube or Eventbrite; or the results of searches on a search engine such as Google or within Wikipedia.

Of course, Haugen herself never suggested anything so simplistic. It strikes me as unlikely that Haugen (who, like me, formerly worked at Google) would assume that Facebook’s unique issues could be extrapolated as the basis for regulating the rest of the Internet. It remains to be seen what her revelations about Facebook can teach us about intermediaries that serve far different functions in our information ecosystem. Jumping to simplistic policy conclusions, and regulating every platform as if it were Facebook, would squander the opportunity for evidence-driven analysis.

Policymakers still may be drawn to the idea that expanding platform liability for amplified content is a silver-bullet solution for complex problems. As I detailed in this essay, though, focusing on amplification won’t make those problems any easier to solve. Questions about amplification and fundamental rights are particularly thorny. The same problems that plague platforms’ existing processes for removing content — including false or erroneous legal notices, over-removal by platforms seeking to avoid legal risk, and disparate impact from sloppy enforcement mechanisms — would affect their efforts to demote or stop amplifying content, too.

Users who found their lawful posts excluded from news feeds as a result would rightly blame lawmakers, as well as platforms themselves. As the European Court of Justice’s Advocate General recently said, “The legislature cannot delegate such a task and at the same time shift all liability to those providers for the resulting interferences with the fundamental rights of users.” Additional legal questions would arise from platforms’ own freedom to conduct a business under the European Union’s Charter of Fundamental Rights.

The point is not that lawmakers can’t regulate amplification. It’s that doing so while avoiding unintended consequences is hard. Algorithmic regulation is harder to get right than the platform hosting liability rules in Europe’s upcoming Digital Services Act — and even those rules are complex and were rightly the subject of lengthy consultation and fact-gathering by the European Commission.

Lawmakers could, in theory, make platforms “turn off” their algorithms, prohibiting amplification.  One common proposal would require or incentivize them to show user posts in reverse-chronological order—putting the newest posts at the top of a newsfeed.  For the Twitter or Facebook newsfeed, this could eliminate some specific problems created by engagement-based content ranking.
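To make the scale of that change concrete, here is a minimal sketch contrasting the two orderings. The Post fields and the engagement weights are my own illustrative assumptions, not any platform’s actual ranking logic.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class Post:
    author: str
    text: str
    created_at: datetime
    likes: int = 0
    comments: int = 0

def rank_by_engagement(posts: List[Post]) -> List[Post]:
    # Hypothetical engagement score; real systems weight many more signals.
    return sorted(posts, key=lambda p: p.likes + 3 * p.comments, reverse=True)

def rank_chronologically(posts: List[Post]) -> List[Post]:
    # Reverse-chronological feed: newest posts first, no engagement signals at all.
    return sorted(posts, key=lambda p: p.created_at, reverse=True)
```

The sorting itself is trivial either way; the policy debate is really about what the engagement-based scoring function optimizes for.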

At the same time, it would open up new possibilities for mischief, including through coordinated inauthentic behavior. One problem has to do with repetition: A chronological newsfeed can be spammed by people or bots posting the same or similar things every few seconds. A purely chronological system is also likely to show more “borderline” content—material that almost, but not quite, violates whatever speech prohibitions a platform enforces. Without algorithmic demotion as an option, platforms would face a binary choice: remove content entirely or leave it up.

Another possible non-content-based law would be a “circuit-breaker” rule, permitting amplification only up to some quantified limit. That limit might be defined by metrics like the number of times an item is displayed to users, or an hourly rate of increase in viewership.
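As a rough sketch of how such a quantified limit could be expressed: the metrics and thresholds below are hypothetical placeholders, since any real rule would have to define and justify them.

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real circuit-breaker rule would have to
# choose and justify these numbers.
MAX_TOTAL_IMPRESSIONS = 1_000_000
MAX_HOURLY_GROWTH = 5.0  # e.g., no more than 5x hour-over-hour growth in viewership

@dataclass
class ItemStats:
    total_impressions: int      # times the item has been displayed to users
    impressions_last_hour: int
    impressions_prior_hour: int

def amplification_allowed(stats: ItemStats) -> bool:
    """Trip the 'circuit breaker' once an item crosses either limit: the item
    can still be hosted and viewed directly, but is no longer algorithmically
    amplified."""
    if stats.total_impressions >= MAX_TOTAL_IMPRESSIONS:
        return False
    if stats.impressions_prior_hour > 0:
        growth = stats.impressions_last_hour / stats.impressions_prior_hour
        if growth >= MAX_HOURLY_GROWTH:
            return False
    return True
```

Even this toy version exposes the core design questions: which metric to measure, over what time window, and at what threshold.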

The problem with such a rule is that breaking news causes sudden spikes in user engagement and interest. As a result, novel or newsworthy posts, including extremely important material like the videos documenting the deaths of Philando Castile and George Floyd at the hands of police in the US, would be disproportionately affected.

Approaches based not on content regulation but on increasing users’ autonomy and diversifying their choices may offer better paths forward. More granular control over how our personal data is used to target content, for example, could let us choose less polarizing material. Changes to competition policy that produced dozens of competing ranking algorithms for social media could ensure that no single company shapes the information diet of such enormous audiences.
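As a thought experiment only, here is a minimal sketch of what that could look like at the interface level: a registry of interchangeable rankers that users pick from. The registry, the ranker names, and the Post representation are all illustrative assumptions, not any existing platform’s API.

```python
from typing import Callable, Dict, List

Post = Dict[str, object]                     # e.g. {"text": ..., "created_at": ..., "topic": ...}
Ranker = Callable[[List[Post]], List[Post]]  # a ranking algorithm: posts in, re-ordered posts out

# Hypothetical registry of interchangeable rankers supplied by the platform,
# third parties, or community groups.
RANKERS: Dict[str, Ranker] = {
    "newest_first": lambda posts: sorted(posts, key=lambda p: p["created_at"], reverse=True),
    "no_sports": lambda posts: [p for p in posts if p.get("topic") != "sports"],
}

def build_feed(posts: List[Post], user_choice: str) -> List[Post]:
    # The platform hosts the content; the user's chosen algorithm orders or filters it.
    ranker = RANKERS.get(user_choice, RANKERS["newest_first"])
    return ranker(posts)
```

This is roughly the “middleware” idea discussed in the policy literature: competition happens at the ranking layer, while the hosted content stays where it is.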

But these models have their own problems. Among other things, they would not prevent individuals from actively choosing lawful but harmful or polarizing content online, any more than current law prevents similar choices in the consumption of books or music. Still, by increasing users’ choices, they could alleviate other important problems and improve our online experiences. We could have a version of Twitter that is safer for female journalists, a version of YouTube vetted by anti-hate-speech groups, or a version of Facebook optimized for those who love — or loathe — sports.

The Facebook whistleblower highlighted major problems with today’s Internet. Those problems are serious, and they deserve serious solutions. Hastily crafted rules regulating amplification — especially rules not firmly grounded in evidence — do not provide the right way forward.