Opinion: verbal abuse, death threats and hateful opinions about certain groups have become daily occurrences on social media
Since the advent of Web 2.0, digitally facilitated abuse has been a steadily growing problem. Multiple terms are used to describe this phenomenon in different contexts, including digital violence, networked harassment, cyberhate, technology-facilitated violence, tech-related violence, online abuse, cyberbullying, cyberharassment and hate speech online. It can be difficult to conceptualise a phenomenon that includes everything from Pepe the Frog memes to the distribution of "revenge porn".
Articulations of digital hate range from verbal abuse, cyberstalking and doxing to rape threats, death threats, photoshopping victims into porn and sharing intimate images of victims with thousands of strangers. In addition to direct abuse of individuals, digital hate also includes the expression of hateful opinions about certain groups, ranging from the reinforcement of harmful stereotypes to more extreme pronouncements of violent or genocidal intent.
Digital hate inflicts a range of harms, both on those directly targeted and on those who observe the phenomenon. These vary from the material, whereby people’s livelihoods have been sabotaged, to the psychological, whereby being subjected to hate takes a significant toll on victims’ mental health, confidence and sense of safety. Even for those who have no direct experience of digital violence, awareness of it has a chilling effect. Anyone who engages daily on social media is aware of what Umair Haque refers to as the "ceaseless flickering hum of low-level emotional violence". A related but often overlooked issue is the impact that hateful and violent content online has on content moderators employed by social media companies.
From RTÉ Radio 1's Drivetime, co-author Tanya Lokot on why social media loves hate
To date, the academic research shows that women, non-heterosexual people and people of colour experience disproportionate levels of hate online. In 2017, a survey by the Pew Research Center in the United States revealed that women are much more likely to experience severe types of gender-based or sexual harassment: 21% of women aged 18 to 29 reported being sexually harassed online, more than twice the percentage of men in the same age group (9%).
The same year, an Amnesty International report indicated that almost a quarter (23%) of women surveyed had experienced online abuse or harassment at least once, ranging from 16% in Italy to 33% in the US. Across all eight countries, almost half (46%) of women surveyed who had experienced online abuse or harassment said it was misogynistic or sexist in nature.
A recent study by Becky Gardiner, former editor of The Guardian’s Comment is Free section, revealed that female and ethnic minority journalists were disproportionately targeted for abuse. Significantly, the study also found that, in response to the abuse they received, journalists toned down their opinions or changed story angles, while a sizeable 20% refused assignments.
Who designs and runs social media platforms matters, as their cultural values get "baked into" the code and the algorithms that run in the background
The scale and intensity of this problem have prompted researchers to ask: do social media platforms amplify hate, or do they merely reveal to us hitherto unarticulated sentiments? There are a number of reasons why social media loves hate.
The technological affordances of different platforms – the opportunities for action that they offer to users in particular contexts, as defined by danah boyd in 2010 – make social media content highly searchable, replicable and persistent. The networked nature of these platforms also means that hateful, inflammatory discourse can endure and that hostility spreads quickly across the networks. Studies of the online distribution of digital racism in particular find that the networked environment shapes the visibility and articulation of existing forms of racist hate speech in new ways. It allows for the emergence of networked spaces of hate where hostile speech becomes normalised and even valorised.
This kind of "herding" behaviour contributes to attacks on vulnerable users, as the hate-expressing individuals rationalise their disruptive actions as part of a larger affective community. On platforms that allow for community voting, abusive comments are often voted up in a coordinated manner to enhance their popularity (an act known as brigading), thus exacerbating the networked amplification of hateful expression.
From RTÉ Radio 1's Ryan Tubridy Show, Brainstorm contributor Mary McGill talks to presenter Dave Fanning about #Planebae and how social media has changed our views of people's privacy
Anonymity, a key feature of some social media platforms such as Reddit or 4chan, also tends to enable violent and hateful speech because of the disinhibition users feel to say whatever they please. But anonymity can itself be weaponised through abusive behaviours such as doxing, whereby other individuals expose a targeted user’s identifying information and reveal their real identity publicly.
The materiality of digital networked technologies and their design are closely connected to the political economy of technology. Who designs and runs social media platforms matters, as those are the individuals whose cultural values get "baked into" the code and the algorithms that run in the background of Facebook, Twitter or Instagram. As Adrienne Massanari demonstrates in her analysis of Reddit, the platform’s algorithms are designed to prioritise the interests of straight, white males as the dominant group, and thus to disenfranchise already vulnerable and underrepresented minority groups. Whose voices get heard on social media platforms directly affects who gets to have agency, who makes choices about what speech is made visible and prominent, and who is silenced.
Another aspect of the social media economy is the main currency of these platforms – attention. Content that gets users onto the platform and keeps them there is prioritised, even if such content is often extreme or violent. Polarising, outrageous and distressing content has the "affective stickiness" that helps platforms to maximise profit, but it also clashes with their self-professed focus on "building a global community that works for everyone", to quote Mark Zuckerberg.
Instead of waiting to be prompted by victims of abuse every time it happens, social media companies would do better by taking a proactive stance
Linked to these conflicting goals is the inordinate power of social media companies that have now become more and more government-like in their reach (Janosik Herder calls them "biopolitical" entities) and that play an essential role in our everyday lives. This great power comes with great responsibility, and that is where social media platforms currently fail to deliver, given the opaque nature of their back-end architectures and algorithms, their ever-changing terms of service, their lack of public accountability, and the inability of national legislation to keep up with their technological advances.
Though companies are vowing to do more to combat hate and abuse on their platforms, there is still a lack of viable reporting mechanisms for hateful speech, especially ones that would enable bystander intervention. Relatedly, access to social media data and APIs for academic researchers has dwindled, which precludes meaningful inquiry into the propagation of networked hate and affective responses to abusive speech online.
Instead of waiting to be prompted by victims of abuse every time it happens or blindly relying on the mythical powers of artificial intelligence to weed out hate, social media companies would do better by taking a proactive stance. This would mean creating more opportunities for collaboration with academics and civil society, supporting their content moderators and listening to their users. Hate is still easy to find on social media and the haters are very good at gaming the system. The platforms need to catch up and to become even more sophisticated in their approach to combatting hate.
Both authors will join researcher Paloma Viejo Otero and singer-songwriter and activist Farah Elle on a special RTÉ Brainstorm panel on why social media loves #hate. This will take place during the DCU Anam festival on Thursday at 10am. Tickets are free and can be booked here.
Dr Debbie Ging is Associate Professor of Media Studies in the School of Communications at Dublin City University. She is a former Irish Research Council awardee. Dr Tanya Lokot is an Assistant Professor in the School of Communications at Dublin City University.
The views expressed here are those of the authors and do not represent or reflect the views of RTÉ