Opinion: social media companies have been slow to react to abuse of their platforms, but are they solely responsible for this?

Since the 2016 US presidential elections, it has become clear that social media have been playing a significant role in allowing people to circulate misleading information and incite hate and violence. Examples abound. The BBC recently reported that Facebook is routinely being used in Libya to circulate political misinformation and hate speech. Rival militias have been using posts to suggest ideas for local attacks, providing coordinates, angles and strategic information. In addition, closed Facebook groups are being used to trade weapons, while the personal accounts of militia leaders incite violence against rivals and level accusations at them in an attempt to assert their own power.

The UN has confirmed that Facebook posts played a crucial role in provoking racial violence between ethnic groups in Myanmar. Even after the displacement of millions of Rohingyas, the platform is still rife with posts inciting ethnic violence with comments promoting hate, dehumanising Muslims (and the Rohingyas especially), and even spreading targeted pornographic images.

In India, WhatsApp was used to circulate anonymous false messages about child kidnappers in May and June 2018. This triggered panic among scared villagers with limited media literacy, and resulted in mob attacks that killed innocent people. Similarly, a false message about surveillance equipment implanted in the new 2,000 rupee note spread rapidly via WhatsApp and eventually migrated into Indian mainstream news.

From RTÉ Radio One's This Week, Carole Coleman on US President Donald Trump’s regular use of the term "Fake News" and the challenges it presents to journalists and news producers

Most famously perhaps, the fake news hype stands accused of having contributed to a victory for Donald Trump in 2016. Whether or not social media can actually swing opinions is unclear. But masses of users were driven to fake news sites, and Twitter and Facebook were major conduits for false content. Building upon the "success" of fake news, we have seen similar disinformation circulating in the run-up to elections in France (2017) and Mexico (2018), as well as virulent attempts to influence policy in Germany, Italy and Brazil.

Maybe we shouldn’t be surprised at this turn in social media content. We increasingly live in a society where publicity is everywhere, where promotional activities have come to saturate communicative practices. The above examples fit right into that: people are taking to this new technology to promote their causes and publicise information that is useful to them. One way of thinking about Twitter may be through this kind of framework, as part of such a publicity-driven culture.

This shift in communicative practices has had serious real-world consequences, so what have the social media companies done in response? Since the Senate Committee hearings and data scandals like Cambridge Analytica, those responsible for the platforms have had to move beyond reluctant shrugging and ignoring the problem. Twitter’s CEO Jack Dorsey commented that the site was "unprepared and ill-equipped" to deal with the manipulative campaigns it experienced. Zuckerberg publicly apologised for Facebook’s shortcomings.

From RTÉ Radio One's Morning Ireland, Janko Roettgers, Senior Silicon Valley Correspondent with Variety, discusses Mark Zuckerberg's appearance before a joint sitting of two US Senate committees

But why did this take so long? California-based Silicon Valley companies subscribe to the First Amendment of the United States constitution. Abridging freedom of speech by censoring content on their platforms may therefore not be their main concern. In addition, the historically slow response to social media abuse has been mainly linked to the difficulty of monitoring the vast number of posts, language barriers around foreign content, and the companies' slowness to invest in and operationalise better reporting tools.

Are we too eager to point the finger? The misuse of this technology could be considered a symptom of wider issues offline in our societies. The dream of social media bringing us together into a more global and kinder world, where marginalised voices can be heard, still holds potential but, as it turns out, what these voices have to say is not always so kind.

Is the spread of hate speech an inevitable consequence of this new technology, or the responsibility of the platforms? Their guidelines and rules specify that this kind of material is not allowed, but to what extent can a social media company be considered a legal abettor to violence? Are they to police their online platforms, as well as the offline world?

From RTÉ Radio One's Drivetime, Barry Lenihan reports on the appearance of senior Facebook executives before an Oireachtas Committee 

Steps have been taken, ranging from hiring more content screeners to funding outside research, using algorithms, cooperating with fact-checker groups, and even developing AI to pick up on violations of hate-speech policy. Still, 62 percent of undesirable content is currently removed only after being flagged by members of the public. At his congressional hearing in April 2018, Mark Zuckerberg promised that Facebook would cooperate with people and groups on the ground to learn about local players or "specific hate figures" who may be active in posting and circulating hate speech. This is laudable, but can we expect commercial companies to go this far? If the online is a mere reflection of the offline, is policing both too much to ask?

In order to understand the nature of the content found online, we also cannot ignore the political economy of social media companies. Cynically perhaps, we should remember that these companies make their money by having people interact on their platforms and stay there as long as possible so they can be exposed to advertising. This market logic of profit-driven companies may explain why moderating and censoring (i.e. curtailing) such activity has not been their number one concern.

Despite governments shifting responsibility to these companies, the fact remains that while they are holding the bullhorn, they are not creating the voices that sound through it. Notwithstanding earlier celebrations that social media can simply give expression to the natural democratic soul of the people, we need to look carefully at the settings where content gets produced, as well as the relationship between what is done online and the sociological reality of the context. Scholars of the internet established very early on that the online-offline distinction was a problematic one.

Simply shifting responsibility and pointing the finger at commercial entities may silence but not remediate the hate. Nor does the distinction between offline and online entirely capture the complex practices of social media communication that have become so thoroughly embedded in the routines of everyday life. Fighting hate and abuse in our societies needs more thought: prosecuting social media alone inhibits our ability to describe and understand the communication process we see playing out here.

The views expressed here are those of the author and do not represent or reflect the views of RTÉ