Opinion: new research shows the extent to which organised groups are turning online comments and social media into theatres of war

By Anne Marie Devlin and Ciara Grant, UCC

In a week that has seen YouTube ban comments on videos featuring children and Reddit announce an expansion of its "anti-evil team" at its new Dublin base, it’s time we opened our eyes to the cesspit that is online comments.

For the past number of years, we have been trawling this murky underworld trying to understand when and how the language we use online changed from friendly debate to what the researcher Emma Jane has called "e-bile". The internet of today is a far cry from the early days, when cyber-utopians heralded a new era of human collaboration and communication. To them, the internet was a place where individuals could come together to create helpful communities and where "citizen journalists" could combat political agendas from the ground up. This cyber-utopia was going to make the offline world a better place too. So how have we got to a stage where comments sections have become no-go zones for many users?


From RTÉ 2fm's Dave Fanning Show, journalist and broadcaster Una Mullally discusses the impact of online comments 

To answer that, let’s go back to the beginning. The Rocky Mountain News launched the first ever online news comments section in 1998 and many media outlets followed suit. However, the golden age of the online comments section was short-lived. By the end of the decade, many sites were forced to either gate their comments sections or close them altogether, citing hateful, toxic and threatening language levelled at individuals, groups and staff. In a way, this shouldn't have come as a surprise. As far back as 1997, internet researchers such as Jon Katz were already remarking that online interactions were characterised by "confrontation, misinformation and insults".

How did something that started off as idealistic and empowering so quickly turn into a toxic linguistic quagmire? Cyber-psychologist John Suler has attributed this transformation to the "toxic disinhibition" effect. Emboldened by the anonymity of the comments sections, internet users behave in ways they never would offline. Effectively, it transforms perfectly polite, respectful individuals into ranting maniacs.

With the closure of many comments sections, toxic online behaviour needed to find other platforms and the most accessible was Facebook. News outlets could post articles on the social-media platform without the responsibility of moderating comments. This is when we noted a dramatic shift. Individual keyboard warriors swinging their sabres in an attempt to land a killer barb have been replaced by organised, ideological armies ("dumb xenophobe Nazis" versus "happy-clappy libtard traitors") engaged in highly strategic linguistic wars to further their agendas.

The voice of the individual has all but disappeared

In 2014, when we first collected data from Facebook comments, individual voices were still present and both sides were equally represented. The comments could mainly be described as readily identifiable insults attacking opponents’ intelligence, national allegiance, political viewpoints and sexual preferences.

Fast forward a few years and a very different battlefield emerges. The voice of the individual has all but disappeared. The comments sections have transformed from a theatre of war between two equally balanced militaries to a front where one side has conceded considerable ground: 81% of all comments in our study represent a right-wing agenda.

There has also been a significant change in the language used. The traditional insults are still there, but they have been overtaken by a new strategy that Karina Korostelina refers to as "relative insults". These present themselves as a rational, justifiable means of damaging the opponent. In terms of the refugee crisis, they take the form of "in an ideal world, maybe, but what about our homeless?". In other areas, they could look like "I was in the States recently and my traditionally Democrat friends love how Trump has changed things". By framing comments in terms of concerns or truths, the extremists are abandoning easily recognisable propaganda tactics, meaning that they are more likely to be able to extend their viewpoint to a more diverse audience.


From RTÉ Radio 1's News At One, RTÉ Business Editor Will Goodbody on how Facebook are set to launch new tools to prevent interference in the upcoming European Parliament elections

We found that this strategy was by far the most frequent, appearing in 32% of all comments. It suggests a systematic, organised approach by unidentified ideological foundations or corporations to rationalise extreme views for a non-aligned audience. This hypothesis is supported by research from Indiana University and the University of Southern California, which reports that 48 million Twitter accounts are bots.

We don’t have figures for accounts run from troll farms, where real people are employed, or volunteer, to flood the internet with comments supporting a particular viewpoint (a practice also known as astroturfing). But we do know that the US alt-right has admitted using such methods to infiltrate French elections (see the BBC World Service documentary How the Great Meme War Moved to France). Preliminary research from the University of Oxford’s Internet Institute shows that pro-Trump bots contributed at least five times more online messages than pro-Clinton ones during the last US presidential election.

While the abusive language that has permeated the internet since its beginning still exists, it is being increasingly weaponised by deep-pocketed, influential foundations with vested ideological agendas. The internet is no longer a place where individuals have a voice. Organisations and foundations systematically disseminate their agendas by deploying bots, engaging in astroturfing and using sophisticated linguistic strategies that avoid toxicity monitors. And that is where the real danger lies.

Dr Anne Marie Devlin is a lecturer and researcher in the Department of Speech and Hearing Sciences and on the Applied Linguistics Programme at UCC. Ciara Grant is a PhD student in Applied Linguistics at UCC.

The views expressed here are those of the author and do not represent or reflect the views of RTÉ