Have you heard they found evidence of life on the moon? There are animals up there, apparently, at least nine species including goats. Scientists have been using hydro-oxygen magnifiers to view them (it's the latest thing in astronomy) and I have a picture to prove it. Honestly, it's in the paper so it must be true.

Sorry to disappoint those of you who had hoped for an off-planet holiday this year, but this story is a complete fabrication, as was the picture published alongside it in the New York Sun newspaper. It’s also worth noting that this fake news was published in 1835.

"Fake News" is not new, in fact it is likely that disinformation which has been designed to mislead or manipulate has been around for as long as people have been able to communicate.

What makes today's disinformation different, however, is the speed at which fake news can now spread, coupled with technology that allows artificially enhanced images to appear startlingly real.

In this article, I will be looking at the issue of disinformation and the efforts being made at personal, political and industry levels to curb its spread.


"Disinformation is false information created with the intention of deceiving someone."

Dr Eileen Culloty is Deputy Director of the Institute for Media, Democracy and Society at DCU and has written widely about disinformation and its effects.

"Misinformation", she explains, is also false, but could be the result of an honest mistake, and if the originator discovers the error, they will move to correct it. Disinformation, however, aims to intentionally mislead.

Those who spread disinformation are sometimes referred to as "bad actors" - a term which refers to a person, organisation or even a state that sets out to deliberately deceive.

This could, for example, be a group with a vested interest in spreading false information relating to climate change, or an ideological group that wants people to accept its views.

Some examples of "fake news" are easily debunked. In the early days of the pandemic almost every WhatsApp user got some version of a message that said the country was going into total lockdown the following morning, or that the army would soon be out on the streets forcing everyone to stay indoors.

These messages were widely believed, not least because people were very anxious in those early days of spring 2020, and, given that the news from traditional media outlets was so concerning, it wasn’t a huge leap to believe that something more sinister was on the way.

Adding credence to the disinformation was the fact that the messages tended to be credited to "my brother’s friend who is a guard" or "a doctor in Cork", lending an air of being locally sourced and credible.

Moreover, the speed at which the messages travelled was astonishing, and that speed is one of the biggest issues facing those who wish to tackle misleading news.

The capacity for disinformation to spread is also heightened if the receiver is already inclined to believe it, or has a bias towards believing it. For example, if a person is already suspicious of the Government, they are more likely than not to believe a story saying the Government is going to exercise new powers.

The recent rapid development of Artificial Intelligence techniques has also handed more power to those who wish to spread disinformation.

Using AI as a tool, realistic images can be generated to add veracity to a story concocted to suit whatever narrative the creator has in mind.

Last weekend, The Irish Times had to apologise to its readers after it published an article, purportedly written by an Ecuadorian woman living in Dublin, which claimed Irish women's use of fake tan was "cultural appropriation".

Following concerns from online readers that the woman’s photo might have been generated by AI, the column was taken down from the paper’s website and it later issued a statement saying that the article itself may have been produced, at least in part, using AI.

The newspaper said the incident had highlighted a gap in its pre-publication procedures and also underlined one of the challenges raised by generative AI for news organisations.

So just why is disinformation so compelling, and why does it spread so easily?

Martina Chapman, a specialist in media literacy, says that disinformation ties in with our own personal anxieties and fears.

She advises readers that if they see something online that triggers an emotional response they should stop and think why the information is making them feel that way.

Ms Chapman suggests asking the questions: "Why am I outraged? Why has someone created content to make me feel this way?".

It is also the case, as Dr Culloty points out, that disinformation now has no geographical boundaries.

Back in the 1800s if you were absolutely convinced there were goats on the moon, you could only tell your theories to your friends and family or whoever you could get to listen to you in the pub.

However today moon truthers can join an online group devoted to discussing the issue, speak to others who share their beliefs or watch carefully crafted documentaries designed to reveal "the real truth" about the situation.

In fact, the spread of disinformation relies heavily on the concept of there being a "real truth" out there, truth that someone, somewhere has a vested interest in keeping from you.

Some people, for example, will be deeply suspicious of this article, written as it is by a journalist working for an established media organisation, which quotes professionals and experts in their field, many of whom are affiliated with established bodies like universities.

Readers, of course, have every right to their opinion, and it is up to every individual reading this article to decide whether to believe my sources or not.

To further complicate the issue, in recent years, the term "fake news" itself has also been manipulated to describe news that someone simply doesn’t want to be heard.

A pioneer in this field was former US president Donald Trump, who frequently used the term to avoid questions from reporters or dismiss entire news organisations.

In recent years, trust in news has been falling in the United States.

Last year, the Reuters Digital News report found that only 26% of those surveyed said they generally trusted news, while only 41% said they trusted the news they themselves used.

Trust was higher during the same period in Ireland, with 52% of respondents saying they trusted all news and 58% saying they trusted news they used themselves.

The fact that the WhatsApp Covid messages I referred to earlier were fake became apparent when the deadlines they alluded to came and went without any sign of soldiers patrolling the streets.

However, other examples of disinformation are much harder to debunk.

Those who disseminate disinformation are becoming more adept at targeting those who are most likely to believe it, and here is where algorithms come into play.

Once associated only with maths class, an algorithm in the social media or entertainment world can be loosely defined as a set of rules that uses signals about a viewer's behaviour to predict what they are most likely to interact with.

Editor of the technology website Silicon Republic Jenny Darmody says that most people are now aware, in the general sense, of the power of algorithms in our lives.

If you mostly watch history documentaries, for example, then your streaming service will point you towards more of the same.

Entertainment companies are open about this, using the concept "if you liked that, then you’d love this" as a marketing tool.
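Loosely, that "if you liked that, then you'd love this" logic can be sketched in a few lines of Python. This is a toy illustration only, not any platform's actual system: the catalogue, tags and scoring below are invented for the example, and real recommender systems use far richer signals such as watch time, clicks and shares.

```python
from collections import Counter

# Hypothetical catalogue: each title carries a set of topic tags.
catalogue = {
    "WWII in Colour": {"history", "documentary"},
    "Ancient Rome": {"history", "documentary"},
    "Celebrity Bake-Off": {"entertainment", "cooking"},
}

def recommend(watch_history, catalogue):
    """Rank unwatched titles by how many tags they share with what
    the viewer has already watched - a crude engagement signal."""
    seen_tags = Counter()
    for title in watch_history:
        seen_tags.update(catalogue[title])
    unwatched = [t for t in catalogue if t not in watch_history]
    # Score each title by the overlap between its tags and the
    # viewer's accumulated tags, highest first.
    return sorted(
        unwatched,
        key=lambda t: sum(seen_tags[tag] for tag in catalogue[t]),
        reverse=True,
    )

# A viewer who has watched one history documentary is steered
# towards another one before the cooking show.
print(recommend(["WWII in Colour"], catalogue))
```

The same basic idea, applied to engagement rather than enjoyment, is what can steer a user who lingers on one misleading post towards more of the same.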

However, problems arise, according to Ms Darmody, when people think they are searching for "unbiased" information on a topic like Covid but instead "fall down the rabbit hole" and come across material that may well be dressed up as fact, but is actually designed to mislead.

The consumer of the information is unaware that they are being followed by an algorithm, or that the information being placed in front of them has been designed to play on their fears.

So just what can be done to tackle disinformation? Attempts are being made at the level of the state, media organisations and consumers themselves.

For those who are unsure whether or not to trust the information they come across, Martina Chapman recommends the website "Be Media Smart", which is an initiative of Media Literacy Ireland and supported by bodies including regulators and the tech companies themselves.

The site contains tips on how to verify information, including advising people to read past the headline and not to assume that if something has gone "viral" then it is automatically true.

It also advises people to check their own biases and to ask themselves if the information they have come across challenges them or matches their own views.

However, advising people who don't recognise disinformation in the first place is far more difficult, Ms Chapman says.

This group includes readers who don’t perceive a difference between a piece of information they get from a traditional news source and something they heard from a friend or online.

In years gone by people generally got their news from one or two trusted sources.

They still made choices based on their own leanings, for example in Ireland a family might decide to read the Irish Press, rather than the Independent, but outlets in that traditional media sphere tended to have a formal set of checks and balances guiding those who wrote the news.

Today, however, what is considered "information" can have multiple sources, some of them just other members of the public, and the onus is back on the recipient to figure out how reliable that opinion is.

Of course, there is also an onus on tech companies themselves to tackle misinformation spreading on their networks.

When companies like Facebook were first established, they tended to consider themselves "platforms", merely a place for information to be uploaded and hosted.

It soon became clear that their capacity goes much further than that.

Social media sites facilitate the transfer of information to millions of recipients within seconds, as well as the establishment of large-scale international communities of interest, and they must take responsibility for the content they allow to be shared.

Most major firms have now made commitments to tackle the issue of disinformation.

Some changes are very visible, for example any message now sent via WhatsApp to multiple recipients displays a note saying "forwarded many times".

It’s a relatively simple but effective way of telling the receiver that this is not the "message from a friend" that it initially appeared to be.

Google also has the power to take down information. A spokesperson told me that Dublin is home to a regional hub for Google experts working to combat the spread of illegal and harmful content, and that where information doesn't meet its guidelines it is blocked and removed.

The company also said it invests in policies and tools to prevent the monetising of harmful web content.

Other companies, including TikTok and Meta, have developed ways for users to report material they feel could be false or otherwise concerning.

If such material makes it onto a platform and a reader suspects it to be false, they can report it to the host site; Facebook posts, for example, have a button where you can flag "concerns".

A spokesperson for TikTok told RTÉ News it has more than 40,000 people employed in online safety and works with fact-checking organisations to help assess the accuracy of content.

The company also says it removes harmful misinformation and accounts that attempt to repeatedly post it.

A spokesperson for Meta said that it shared society's concern over misinformation and has taken what it termed "aggressive" steps to combat it, which includes removing information that could lead to real-world violence or harm, partnering with fact-checkers and giving people information to make informed decisions on what they read.

The company which owns Facebook, Instagram and WhatsApp said the area is a complex one, with no one "silver bullet" and says it continues to consult with outside experts while working to improve internal technical capabilities.

As Jenny Darmody points out, however, some material can still move faster than a content moderator can spot it, and even though some content moderation is now done by AI, spotting every piece of suspicious material is still a monumental task.

Moderating such material can also take a personal toll on people who have to view often very distressing images and it is clear that the drive to make social media a safe space for all who use it is still a work in progress.

The role of the social media companies will be explored further later in this series.

The situation is further complicated by the seismic changes that have taken place in Twitter since its acquisition by Elon Musk last year.

In the course of writing this article I was able to contact the Dublin offices of companies including Google and TikTok and address questions directly to their spokespeople.

The communications department at Twitter is no longer contactable, and my attempts to email queries were met (I wish this was fake news, but it's not) with a poo emoji in response.

On Wednesday of this week, a shocking video of a young man being physically assaulted went viral on social media in Ireland.

This was not a piece of disinformation; sadly, the assault was all too real.

Its treatment by the regulator and the social media companies themselves was an interesting example of how such material can be dealt with.

The video was viewed millions of times and gardaí then announced that an investigation into the attack was under way and asked members of the public not to share it, even if they were doing so sympathetically, to protect the victim’s privacy.

On Wednesday evening, as an experiment, I tried to find it on Google and Facebook but my basic searches led only to news stories reporting the event, many of which included the appeal not to share the original clip.

The video was still being shared on my Twitter feed.

When I queried this with the company I was sent, once again, the poo emoji.

At the time of writing this article, on Friday evening, it does now seem to have been taken down.

Efforts are also being made at national and EU levels to impose regulations on social media giants.

In Ireland, a new regulator, Coimisiún na Meán, has been established and an Online Safety Commissioner appointed to deal with issues including a new regulatory regime for online safety.

The role is particularly crucial because of the number of tech companies that have their European headquarters here.

Among the Coimisiún's tasks will be the development of online safety codes, and once these have been established, it says it intends to begin work on establishing an individual complaints mechanism in 2024.

Although its work is still at an early stage, An Coimisiún has said it was "very concerned" at the assault video which circulated online this week and said it had contacted the main platforms and asked them to report back on what they had done to remove the video and ensure it wasn’t re-uploaded.

The reality, though, is that any attempt to tackle disinformation, whether by a regulator, the tech companies themselves or the consumer, has to contend with the speed at which disinformation travels; the cliché "bad news travels fast" is particularly relevant online.

Empowering individuals with the skills to spot disinformation is probably the most effective way to attempt to hold back the tide.

"If I could say one thing", Martina Chapman says, "it is to make people aware that this is complicated. Everyone is vulnerable and at some stage we are all going to fall foul of disinformation".

"There is no simple solution - it's going to require collaboration from a lot of stakeholders and behaviour change."

Technology has brought with it huge advantages for consumers, allowing everything from a transatlantic flight to dinner to be booked from your phone. But it has also facilitated an unprecedented flood of information, on which the end user has to make a judgement call.

One option for consumers is to go back to trusted brands. The Reuters report from 2022 referred to earlier showed RTÉ was the most trusted Irish news brand at 74%, but in fact all of the main Irish broadcasters and broadsheet newspapers scored more than 60%.

It was also interesting to note that viewership of RTÉ’s Six-One News programme soared during the pandemic, an indication that when people were being bombarded with information on a minute-to-minute basis, they found it easier to depend on one or two traditional bulletins a day for a summary of the news.

There is also another solution. If you come across information you are unsure about, don’t know the provenance of, or that makes you anxious, you could simply stop reading it, and refuse to share it. Scroll past, put down the phone, let it be.

In a world where our phones are our companions, our watches and our torches, feeding us information on a 24-hour basis, perhaps that’s the most radical suggestion of all.

This is the first in a series of RTÉ articles examining issues around media literacy and online safety.

Further reading:

The Be Media Smart website is supported by, among others, RTÉ, and includes tips on how to stay safe online. See www.BeMediaSmart.ie for information for children, students and parents.

Coimisiún na Meán has links on how to report harmful content to the main social media platforms here: https://www.cnam.ie/online-safety/