Op-Ed

Texas and Florida Content Moderation Laws Would Open Pandora’s Box 



The United States Supreme Court is currently reviewing controversial “free speech” laws in Texas (House Bill 20) and Florida (Senate Bill 7072). While well intentioned, these laws would allow the two states to interfere with, and even override, the content moderation decisions of online platforms. In effect, they could compel platforms to broadcast specific content in order to satisfy a state government's notion of viewpoint balance, even when that content is imminently harmful to specific persons or groups. Without sufficient guardrails to ensure both online safety and viewpoint diversity, future government decisions about acceptable content risk being politically motivated and partisan, yielding arbitrary outcomes that depend on which political party holds power at a given moment.

Additionally, if these state laws are upheld and enforced, other states will likely pass similar laws. The result would be a patchwork of potentially 50 different state-level rules that would make it nearly impossible for platforms such as Facebook, Instagram, YouTube, and Google to operate effectively. Further complications would arise because users constantly move among states. Any laws governing content moderation clearly belong at the federal level.

Rather than introducing a direct role for Congress or state governments to influence or direct the content moderation policies and actions of social media companies, a transparency-focused approach can achieve many of the same goals without opening the door to potential future government control over online content.

Social media companies today generally publish their content rules, but transparency must go further. It should include detailed reports covering all enforcement actions, including bans, labels, warnings, suspensions, and blocked content, as well as harder-to-detect actions such as de-amplifying or de-promoting content. For each enforcement action, the affected user should be clearly told which specific content broke which specific rules, and those links between enforcement actions and the rules they enforce should appear in the published reports as well.

Online platforms such as Instagram, Snapchat, Discord, Facebook, Google, and YouTube have already demonstrated that they can publish generalized reports on the quantities and broad categories of harmful content they discover. These reports, available on company websites, resemble spreadsheets with general categories such as nudity, self-harm, violence, or government requests. They simply need to be expanded with far more detail.

To ensure viewpoint neutrality, these detailed reports must also identify the specific content categories in which enforcement actions were taken. Transparency should also extend to the usernames of those affected, provided they are media outlets or other media entities, or individuals who opt out of remaining private. Such transparency will show whether enforcement actions are applied consistently and fairly across all content categories and users. The glare of publicity provides a powerful incentive for online platforms not to discriminate systematically by viewpoint or worldview, which is the very goal of the Texas and Florida laws currently under review.

A transparency mandate passed by Congress can avoid the controversial and difficult work of legally defining the terminology and specific guardrails for online safety and viewpoint neutrality. It would create an atmosphere of public scrutiny that encourages both, based on public reports showing how each online company performs in absolute terms and relative to its peers. Companies that regularly take enforcement actions against content that is neither imminently harmful nor illegal, but is instead deemed misinformation or disinformation, would be required to publish those actions across all content categories and users, allowing the media and the public to judge each company's biases for themselves.

This transparency mandate should not require companies to share their proprietary algorithms and trade secrets with academics or government researchers. For one, exposing the secret sauce of how the platforms work is not necessary to measure and evaluate content moderation performance in reducing harmful content. For another, the risk that academics and researchers might publish or share research-and-development trade secrets with potential competitors would make such a requirement an intrusive non-starter. There is a reason we have intellectual property laws: to protect and encourage innovation. We do not require automakers to publish the valuable trade-secret technologies underlying their gas, hybrid, and electric power systems; we require only that each vehicle's miles-per-gallon or miles-per-charge performance be published.

The most important aspect of any transparency policy is a requirement that online companies disclose any communications they receive from government or government-funded entities. Specific government officials, employees, and contractors, as well as their specific content moderation requests, should be identified, with the exception of well-defined law enforcement or national security priorities.  

It is one thing for social media companies to make their own decisions about what content may be true or false and what is simply a matter of disagreement, debate, or opinion. But allowing the government to directly influence online content moderation violates the First Amendment's guarantee of free speech. Government is a political entity that often blurs the line between what is true and what is false. It is inherently motivated to promote the partisan narratives of those who control it and to minimize the spread of inconvenient or opposing narratives, as the Twitter Files and other reports have shown. Transparency mandates, and the media spotlight they bring to government-led efforts to shape which content the public can access and which is widely promoted, will dramatically reduce partisan incentives to censor inconvenient facts or narratives.

The goals of the Texas and Florida laws are laudable, but the laws create further potential for government to exercise direct influence over what news and information the public can widely access. While Congress struggles to overcome partisan disputes even about basic goals and definitions related to Section 230, a focus on transparency should appeal to both sides: it encourages efforts to minimize harmful content, ensures those efforts are consistent across all content categories and users, and minimizes government influence on the content and news people can access. Transparency and public scrutiny can be powerful influences on future content moderation policies and related enforcement actions.

Michael Matthys is a co-founder of the Institute for a Better Internet. The organization can be found on X at @4BetterInternet.
