AI & deep fakes: expert advisor Henry Ajder talks about how deep fakes are damaging online trust and what some platforms are doing to rebuild it. Listen back above.
In the rush to point out online fakes, is there a danger of accidentally ignoring what's true? Henry Ajder, an expert in deep fakes, tells Ray D'Arcy how being fooled by faked-up videos, images and audio can also undermine trust in real ones. Henry also talks about some of the tools platforms are using to help authenticate online content.
Deep fakes use generative AI to create convincing imitations of a real person's voice or image, making it look as if they have said and done things they never actually said or did in real life. Almost anything is possible - from making a politician give a fake speech, to stealing a person's image and transposing it onto existing pornography, to making a singer "sing" a track they've never recorded. Sometimes the content is flagged as a parody and sometimes it's not, but until recently, deep fakes have usually involved celebrities or other high-profile targets. This is no longer the case, according to Henry Ajder, who has advised Meta, the European Commission and the BBC on issues surrounding the authenticity of online content.
Henry says that three things have come into play over the past 18 months or so that have led to a massive spike in online fakery, with ordinary people increasingly being affected. Firstly, the bar to entry has been lowered, Henry says. Generative AI has done away with the need for the specialist skills once required to create a convincing product, and the results are strikingly lifelike:
"The realism that these AI-generated outputs can achieve, so that's how hyper-realistic they are, how much they sound like a real person, how much they look like a real person."
The second change is the way generative AI tools need less and less data to copy a person's voice or image:
"In the case of voice audio, for example, what might have previously taken half an hour of high-quality voice audio to train a model that wasn’t that good, now might be the case of 30 seconds or a minute to achieve a hyper-realistic quality output."
The third big change is that AI tools are now widely available. Henry says the more accessible the tools become, the more widespread the abuse:
"There is this direct relationship between these tools becoming more accessible to everyday people - they don’t require that expertise to use - and the victims or the people being targeted increasingly becoming private individuals."
Preventing harmful content from spreading has always been difficult, but now there is the added problem of recognising it in the first place, Henry says. People can be fooled by the fake material and fail to trust the genuine material at the same time. He recalls the case of a shaky, unprofessional-looking video of a politician in Myanmar criticising the country's de facto leader - people immediately jumped to the conclusion that it was a fake, he says, when in fact it was real:
"A case I worked on in Myanmar a few years ago, where a video of a minister giving a forced confession – people thought it was a deep fake and it wasn’t."
The damage was done, Henry says, and a correction message telling people the video was real is unlikely to have the same impact as the original:
"Correcting the record is very difficult, and you’re never going to rectify that in the minds of everyone who has started to believe otherwise."
It's not so much that a lie went viral; rather, the truth went viral and everyone believed it was a lie. The average person scrolling away the hours on their smartphone won't always be able to pick out what's true and reject the fakes, Henry says.
One of the issues is that apps don't currently track or report the edits people make to videos or photos. Recently, however, Ajder says, media companies have been able to opt in to a system which flags when changes have been made to content on their platforms. An organisation called the Content Authenticity Initiative (CAI) is working towards developing open industry standards to show where a piece of content came from and what changes have been made to it from the moment it was created.
Some companies, such as Meta, are introducing a layer of authentication to their platforms based on the standard developed by C2PA, the Coalition for Content Provenance and Authenticity. As Henry points out, it’s just too difficult for regular users to investigate the origin of every piece of content they come across:
"It’s not fair to expect your everyday listener to your show to become a digital Sherlock, to spot all of this stuff with the naked eye. It’s not sustainable moving forward, in particular as this stuff gets better and better. So we really need to rethink our relationship with digital media and recognise seeing is not necessarily believing."
For a lively mix of interviews with celebs, writers, musicians, comedians and listeners' stories, listen back to The Ray D’Arcy Show here.