Analysis: the evidence suggests that these concepts and terms misrepresent what's actually happening online

Filter bubbles and echo chambers have become popular concepts in media commentary. They are frequently invoked as an explanation for the apparent rise of conspiracy theories, extremism and social division.

In this popular conception, filter bubbles and echo chambers are "destroying democracy" by reducing our exposure to diverse ideas while fostering extremism among like-minded people. These ideas gained currency in the wake of the 2016 US presidential election and are now invoked to explain everything from the riots at the US Capitol to public protests against Covid-19 lockdowns.

But are filter bubbles and echo chambers real? The evidence suggests that these concepts misrepresent what's happening online, and that matters: we are unlikely to arrive at good outcomes if we rely on faulty ideas to define the problem.

From TED in May 2011, Eli Pariser on why we should be wary about 'filter bubbles'

The term filter bubble was introduced by activist Eli Pariser in 2011. Pariser was concerned about the use of algorithms to personalise the information we see online. For example, if two people enter the same term into Google Search, the results they see may differ depending on their locations and previous searches. In contrast, some search engines operate strict privacy policies and do not filter information in this way.

Social media platforms rely heavily on personalisation. They recommend content - such as Facebook groups to join or YouTube channels to watch - based on an analysis of your preferences and previous behaviour. For tech companies, personalisation means that the information we see is more likely to be relevant and engaging. For Pariser, personalisation is dangerous because it creates bubbles that filter out the news and opinions we disagree with.

From Salon, Cass Sunstein on how change happens

An echo chamber is a related concept that was popularised by the legal scholar Cass Sunstein. He argued that the internet makes it easier to communicate only with like-minded people. Over time, constant exposure to a particular viewpoint may push people towards extremism. For Sunstein, this is an inevitable consequence of confirmation bias: the tendency to seek out information we agree with while disregarding information we disagree with.

As originally defined, filter bubbles and echo chambers go far beyond the idea of confirmation bias. They suggest that many of us exist in information silos where we don’t see opposing information in the first place.

These are bold claims. As Axel Bruns argues, "imagine how difficult it would be to completely encapsulate yourself in an echo chamber or filter bubble, in order to receive only information that fits your existing worldview". It's hard to imagine because the evidence suggests that digital media do not work this way.

'Anecdotes are not hard evidence'

There is ample anecdotal evidence that social media platforms push people towards extremist bubbles. A recent New York Times podcast, Rabbit Hole, tells the story of a lonely young man who watched YouTube obsessively. Pinning the blame on YouTube, it charts his transformation from an Obama supporter to a right-wing reactionary. It’s a compelling story, but anecdotes are not hard evidence.

From CNN, interview with Caleb Cain who was featured in the New York Times' Rabbit Hole podcast

One problem for researchers is that it’s difficult to recreate the conditions of the online environment in a systematic way. Many studies are artificial because they do not replicate the practices of actual users with real histories of online behaviour.

To get around this, some studies use the accounts of real people. A 2019 study of Google Search used real participants and found that the top recommendations were consistently identical for conservative and liberal participants. This supports Google’s claim that it now only uses personalisation in a limited way.

Studies of social media are more complex. A 2020 study developed YouTube 'watch histories’ for 150 accounts to assess whether those accounts were recommended conspiracy theory videos. It found evidence for a filter bubble effect whereby accounts that watched conspiracy theory videos were recommended more conspiracy theory videos.

From RTÉ Radio 1's Ryan Tubridy Show, CNN's Donie O'Sullivan on the role played by conspiracy theorists in amplifying online disinformation and chaos

However, that only implies the possible existence of a filter bubble within YouTube. It says nothing about the totality of information people consume on a daily or weekly basis. When researchers focus on this broader picture of consumption, they find that people who rely on social media and search engines are exposed to a diverse range of sources including sources they disagree with.

In other words, digital media make it more likely that people will discover unfamiliar information sources and this may offset the filter bubble effects that exist within a particular platform. Even within ideological groups, people often want to know what the other side is saying and that usually means exposure to opposing views.

The problem with echo chambers and filter bubbles is that they misrepresent the nature of media consumption while offering a simple technological explanation for current problems. No doubt technology plays a role in extremism and radicalisation, but it is not the whole story.

The influence of social media on radicalisation and the lack of transparency surrounding personalisation algorithms both merit more attention. But we must set aside faulty ideas to see those issues clearly.


The views expressed here are those of the author and do not represent or reflect the views of RTÉ