A common response to concerns about our digital darkness—Russian astroturf campaigns, psychographic targeting, and “fake news”—is that social media is just a noise machine with limited impact. But a massacre in Burma shows that the opposite is true. If we are going to combat extremist violence, we need to acknowledge that a new front has opened in the information war.
Last summer, Burmese government troops and militia members swarmed into Rohingya villages on the country’s western coast. Aid giant Doctors Without Borders reported that several thousand ethnic Muslims were murdered, including over 700 infants and young children. More than 700,000 Rohingya Muslims have fled their homes and become refugees since the attacks. New reporting shows that this violence was fueled by poorly monitored Facebook posts.
Facebook should have known better. In fact, in a study published in 2014, Facebook demonstrated that what its users see and read influences how they think and feel. The social network famously worked with researchers to manipulate the news feeds of over half a million of its users, later tracking how the tone of its unwitting subjects’ posts and online conversations shifted.
The study found evidence of “emotional contagion”: people who saw happier posts tended to say happier things on the platform, while those exposed to what the study called “negative emotional content” tended to say more pessimistic things.
In poor countries, this problem is exacerbated by the social network’s monopoly power. Sustaining a market cap in the hundreds of billions of dollars requires the world’s largest social media platform to keep growing, and with wealthy markets saturated, that growth depends on acquiring new users in developing countries. Already, the platform is the dominant communications and news channel for much of the developing world. In Burma, for instance, more people have Facebook than have regular access to electricity.
Terrorist propaganda has proven difficult for Facebook, Instagram, Twitter, and YouTube to weed out. And the killers in Burma aren’t the only ones fueled by social media: groups like Hezbollah and ISIS continually post content on the major platforms. Sayfullo Saipov, who killed eight people in Manhattan last October, followed attack instructions posted on ISIS social media channels.
There is no longer a debate about whether social media can fuel genocidal violence. It already has. The question is how best to meet our moral responsibility. Hate speech must be better policed, but the best option is for the public and private sectors to partner in countering speech with speech. We are engaged in a war, and corporations are not warfighters. On the other hand, our government hardly has a strong track record in technological innovation or effective persuasion.
But we can still act. The private sector could provide expertise, embedding skilled teams in government agencies like the State Department’s Global Engagement Center. This would marry private-industry thinking with public-service planning and resources. And it wouldn’t break some sort of confessional seal between tech and government; social media companies have already embedded their own staff this way in presidential campaigns for both parties.
The Feds also need to get their own act together: the Trump Administration’s State Department still has not secured the funds Congress approved back in 2016, hampering our ability to fight terrorists online. And Washington’s struggles are not unique to Republicans. One official said the Obama Administration damaged the Global Engagement Center by trying “to do too much with too little in the way of resources and too little in the way of vision.” Fighting terrorism shouldn’t be a partisan problem.
We certainly need a broader debate about what constitutes hate speech, who polices it, and how it is managed. But the enemy is already inside our walls, and we can fight back now.
Matt Salisbury works in communications in Washington, DC.