View from The Center

A Question for Frances Haugen: Who Decides?



Frances Haugen, the whistleblower who worked at Facebook and recently released tens of thousands of the company’s internal documents, has identified problems with how profit-driven executives make content moderation decisions. Those decisions, aimed at increasing user engagement, often result in harmful content remaining on the platform. Much of the media attention surrounding Haugen’s October 5th testimony before the Senate Commerce Committee has focused on the harm such content causes children and on the accompanying clamor for greater government regulation of online content. The question, however, remains: Would politically motivated government officials make better decisions than executives at these companies, and would those decisions be applied fairly to all users? While the First and Fourteenth Institute (FAFI), the organization I co-founded earlier this year, agrees that company executives should not make content moderation decisions driven by profits, we believe that government actors would similarly allow their content moderation decisions to be driven by political considerations. As such, we propose an alternative, non-governmental solution.

Social media platforms have policies against propagating misinformation, yet they simultaneously claim they do not want to be the arbiters of truth. These companies also understand that the government’s preferred policies toward social media platforms are likely to change materially, since election results typically swing control of the federal government from one party to the other every four to eight years.

The question, all the while, remains: Who decides which information is true or false? To resolve this dilemma, the platforms currently rely on outside sources, typically government agencies or handpicked fact-checkers, to discern whether posted content is true or false. This, however, leaves questions unanswered: Who determines which content needs to be fact-checked? Who determines the basis for the fact-check? Who determines whether the fact-checkers are fair or are simply disagreeing with the viewpoint of the content? And who adjudicates disputes between content creators and fact-checkers? Many so-called fact-checkers are in fact businesses, funded by partisan backers, that leverage their opinions and biases to attract clicks to their fact-checking stories. Government agencies cannot censor content, and media companies cannot censor explicitly on behalf of the government; to do so would infringe on Americans’ constitutional guarantee of free speech.

The Supreme Court has affirmed, in rulings such as United States v. Alvarez (2012), that false speech is protected speech. While falsehoods propagating across society can cause real harm, centralized censorship causes far greater harm: it quickly chills citizens who speak “truth to power” against the government and encourages self-censorship of unpopular minority opinions. In other countries, such censorship has been the first step toward authoritarian rule.

Haugen, in her criticisms of Facebook, correctly focuses on harm—such as bullying or hate speech—when it comes to whether speech or content should be censored. However, censorship can just as easily extend to calls to restrict certain opinions, to clamp down on sharing the results of scientific studies, or to ban certain types of political advocacy under the guise of reducing harm. In our view, unless controversial content is likely to cause imminent harm, platforms should allow counter-speech (and thus debate) to ensue. By adding the criterion of imminent harm to content moderation standards and transparently publishing those standards, social media companies can improve their fact-checking and reduce the expertise required of third-party fact-checkers. Fact-checking effectively becomes harm-checking.

Current content moderation standards and enforcement actions at Facebook, Twitter, and Google/YouTube are typically opaque, and it is difficult for users to appeal enforcement actions against their content. Transparency would dramatically reduce the widely held perception that content standards are enforced unfairly and unequally across users. Haugen also revealed Facebook’s previously unknown X-Check program, which whitelisted VIP users so that they could post harmful content that would normally be blocked.

FAFI has been developing and road-testing a solution that relies on a non-governmental entity to sidestep the pitfalls of either profit-driven technology executives or politics-driven government actors making unfair or unconstitutional content moderation decisions. A model we look to emulate is the Financial Industry Regulatory Authority (FINRA), the organization that ensures the integrity and ethical functioning of brokers and broker-dealer firms in the financial industry. Congress, in our view, ought to establish a FINRA-like regulatory entity to oversee content moderation at the largest online media companies. This entity would mandate a public, transparent listing of content rules and of the enforcement actions taken under them. At the same time, Congress should amend Section 230 so that legal incentives protect good-faith efforts to moderate imminently harmful or dangerous content while also protecting the constitutional free speech rights of users.

Why would these companies buy in? The platforms are eager to escape the public relations controversies, and the associated costs, that come from scrutiny of their current content moderation practices, as well as the perception that moderation actions are not enforced equally or fairly. There is also a financial incentive to get out in front of potential heavy-handed government regulation; recall that Internet companies were similarly motivated to adopt privacy policies in earlier years. In FAFI’s view, this FINRA-type solution can address the concerns raised by Haugen and similar critics of these platforms without introducing the new set of problems that direct government regulation would create.

Mike Matthys is a co-founder of the First and Fourteenth Institute (FAFI), which he formed with John Quinn and Brian Jackson. He has spent his career in the technology industry and currently lives in the San Francisco Bay Area.
