Analysis: deepfakes undermined public trust to the point that many refused to believe any footage coming from the Russia-Ukraine conflict
By John Twomey, Conor Linehan and Gillian Murphy, UCC
Deepfakes are videos that have been artificially manipulated or generated using AI-powered editing technology, which allows the user to change the face of a person appearing in a video, or to change the words that person is saying. Most deepfake videos involve the production of a fake 'face', constructed by artificial intelligence, that is merged with authentic footage in order to create a video of an event that never really took place.
Although fake, they can look convincing and are often produced to imitate a specific individual. Similar technology has been used to de-age Hollywood stars in films, or to provide face-altering filters on social media apps.
But the use of deepfakes in entertainment is outweighed by their many harmful uses, and fears have grown about the spread of deepfake videos online. These videos have previously been criticised for generating abusive content, and experts are increasingly concerned about their use in political disinformation, and even in warfare.
From DW, Fact check: How deepfakes spread disinformation in Russia's war against Ukraine
From the first days of the Russian invasion of Ukraine, Ukrainian government officials warned that fake videos might be used to spread disinformation about the war on social media. In the early days of the conflict, these fears were realised through two notable incidents: false videos of the Ukrainian and Russian presidents respectively "surrendering". More worryingly, the deepfaked video of Ukrainian president Zelensky surrendering was broadcast on hacked Ukrainian television and news websites.
There were many other, less notable, instances of deepfakes being used throughout the conflict. The majority of these were presented as entertainment and satire (for example, putting Putin's face on top of a movie dictator's).
Deepfake became a buzzword
A new paper by researchers in UCC's School of Applied Psychology and the Lero software research centre set out to understand how deepfakes had been used in the initial stages of the invasion and to explore how people react to deepfake content online. The Russo-Ukrainian War presented the first real-life example of deepfakes being used in warfare. We created a timeline of some of the more notable deepfake (and suspected deepfake) events, and we analysed close to 5,000 tweets relating to the use of deepfakes in the conflict.

The study is the first of its kind to find evidence of online conspiracy theories which incorporate deepfakes. It found that fears of deepfakes often undermined users' trust in footage from the conflict, to the point where some lost trust in any footage coming from it.
Much of our resulting dataset demonstrated X (formerly Twitter) users' opinions on misinformation during the war. As expected, most people reacted negatively to the use of deepfakes in the conflict and encouraged others to prepare for them, by sharing tips for deepfake detection and by encouraging healthy scepticism of unverified videos. We saw many examples of the different ways that deepfakes are currently being used to spread misinformation.
From France 24, Debunking a deepfake video of Zelensky telling Ukrainians to surrender
However, our more surprising observation was that many people online drew on the idea of deepfakes to undermine people's trust in real videos and real information. We found that the term "deepfake" was used off-handedly by many commentators as a convenient buzzword to discredit media that they did not agree with, regardless of its veracity. Indeed, Twitter users, public figures and governments were incorrectly accused of being "deepfakes" themselves. This highlights that the sheer presence of deepfakes on social media is fuelling scepticism about all media presented on those platforms.
Deepfake conspiracy theories online
Our findings also demonstrated the existence of deepfake-fuelled conspiracy theories online. Deepfakes lend themselves to supporting conspiratorial beliefs, as they provide a new rhetorical way of attacking evidence.
Where once people assumed that video evidence carried some truth, because videos were prohibitively complex, time-consuming and costly to edit, people are now adjusting to the possibility that videos can be edited quickly and easily. As a result, people can readily accuse real video evidence of being faked. This is in many ways just the next step in an already troubling pattern, in which video evidence (of the Sandy Hook shooting in America, for example) has been accused of being staged.
From RTÉ Radio 1's Morning Ireland, Cian O'Mahony, UCC School of Applied Psychology, on the best ways to combat conspiracy theories in the age of misinformation
In the time since the research was carried out, we have seen further examples of real videos being dismissed as deepfakes. For example, in autumn 2022, a 17-second video of US president Joe Biden went viral online as a supposed "deepfake". However, the only evidence for the claim was that the president did not blink in the video. While deepfakes do tend to struggle to make people blink realistically, it is also not uncommon for someone to go 17 seconds without blinking.
What are the ramifications for society?
The undermining of people's trust in video evidence is potentially very harmful. Consider the importance of CCTV evidence in courtrooms, for example: if genuine footage can simply be dismissed as a deepfake, we will have to develop better methods of detecting and authenticating deepfake videos.

The Russian invasion of Ukraine highlights the continuing and growing implications of deepfake disinformation in global conflicts. However, our research highlighted a similar concern, and an important one to keep in mind in future conflicts: people falsely accusing videos of being deepfakes. Such claims need to be viewed with much scepticism. It is important not to simply accuse suspicious public videos of being deepfaked, but to wait for online fact-checking and cyber-forensics before making claims about a video, or you may end up spreading misinformation yourself.
John Twomey is a PhD researcher in Applied Psychology at UCC. His work is funded by the Lero centre for software research and Science Foundation Ireland, and focuses on deepfakes, AI and the social impact of novel technologies. Dr Gillian Murphy is a Senior Lecturer in the School of Applied Psychology at UCC. She is a former Irish Research Council awardee. Her research explores attention and memory in everyday scenarios, and the interaction between cognition and misinformation. Dr Conor Linehan is a Senior Lecturer in the School of Applied Psychology at UCC. His research focuses on understanding human-computer interaction, especially in the context of social media and games.
The views expressed here are those of the authors and do not represent or reflect the views of RTÉ