Analysis: our susceptibility comes down to a complex mix of factors relating to psychology, politics, society and technology
Fake news became a major talking point after the US presidential election and the UK's Brexit referendum in 2016. Since then, a stream of investigations has shown that the major social media platforms are awash with disinformation and extremist propaganda.
But what actually makes people susceptible to fake stories and false claims in the first place? Many researchers are trying to answer this question and what they find is a complex overlap of factors relating to psychology, politics, society and technology.
Ideally, we should evaluate new information objectively. But in reality, we tend to be biased towards our existing beliefs. This means we are likely to accept information that confirms our beliefs and to resist information that challenges those beliefs. This pattern of thinking is often associated with those who are highly committed to a political or ideological view.
An RTÉ Brainstorm video on how vaccination controversies have shown the power of fake news long before social media
Biased reasoning is a difficult problem to address because it means people may be motivated to reject evidence. For example, a study investigating how to correct disinformation about vaccines found that scientific evidence did little to challenge the views of those who already believed that vaccines cause autism.
Research has consistently shown that people rarely give their full attention when processing new information. This is why so many of us are easily fooled by trick questions. Consider the following: if you’re running a race and you pass the person in second place, what place are you in? The correct answer is second place, but arriving at this answer requires paying close attention to the question. When people only process the gist of the question, the intuitive answer seems to be first place.
Distracted attention is the norm on social media. Faced with a constant flow of new information, we often only skim content and offer an instant reaction. In these circumstances, an attention-grabbing image of a shark swimming on a flooded motorway or alarming stories about refugees can override our better judgement. Without intending to cause harm, we may be contributing to the spread of disinformation by liking or sharing these stories.
From RTÉ Radio 1's Morning Ireland, Eugenia Siapera from DCU discusses research which found that Twitter was dominated by far-right and political groups during the refugee crisis
When we hear a false claim repeatedly, it becomes familiar and takes on an illusion of truth. For example, there is no scientific evidence to support the claim that eating carrots improves eyesight or that vitamin C can prevent the common cold. But these claims seem credible simply because we have heard them so many times and from many different sources.
Unsurprisingly, the repetition of falsehoods is a staple of political propaganda. Donald Trump makes numerous false claims about immigration and crime which are then repeated by his supporters and covered by the news media. The difficulty here is that when the news media or fact checkers refute these claims, they often increase public exposure to them.
Concern and negative emotions
Disinformation typically targets negative emotions such as fear and anger. A recent BBC study found that people share disinformation out of fear and concern for others. This explains why urban legends such as the "Momo Challenge" spread so quickly online. Concerned parents are motivated to share the story even if they are uncertain about whether or not it is true. Of course, negative emotions also play a major role in political disinformation. Currently, there is a surge of far-right disinformation across Europe that generates fear and anger about minorities and so-called elites.
From RTÉ One's Prime Time, a report on the facts and fictions of the Momo Challenge phenomenon
Social media magnifies the human tendency towards biased, distracted and fearful thinking. These platforms are designed to maintain our attention and use algorithms to determine what content is likely to keep us engaged. As negative emotions tend to draw the most engagement, stories that provoke these emotions are promoted.
These practices have major implications in terms of exposing people to disinformation and extreme content. Technology writer Zeynep Tufekci observes that YouTube’s recommendation algorithm "seems to have concluded that people are drawn to content that is more extreme than what they started with". So when someone seeks out a video about the flu vaccine, the recommendations will include anti-vaccination conspiracy theories.
Disrupting the attention cycle
What can be done to disrupt the spread of online disinformation? This question is widely debated with calls for more fact-checking and media literacy education as well as regulation of the social media platforms. One way to address the problem is to intervene in the attention cycle of social media. By encouraging people to think before they hit "like" or "share", it may be possible to impede the spread of disinformation.
There is no quick-fix for disinformation as it feeds off deep divisions in contemporary politics and society
This is the underlying idea behind the Provenance project led by the FuJo Institute at Dublin City University in collaboration with the ADAPT centre at Trinity College Dublin. The Europe-wide project is developing tools that will automatically evaluate online content to provide context about where it originated and who has been sharing it. In addition, the tools will identify whether visual content has been manipulated in some way and whether a story differs substantially from what is reported by news agencies.
When people are browsing the web or social media, they will receive a simple warning to indicate that content may be suspicious and have the option to find out more. Importantly, the aim is not to tell people what to believe but to provide them with context to make an informed decision.
There is no quick-fix for disinformation as it feeds off deep divisions in contemporary politics and society. Nevertheless, as technology is a major factor in the spread of disinformation, giving people the power to evaluate online content is an important step towards improving the information environment.
The views expressed here are those of the author and do not represent or reflect the views of RTÉ