This week's Met Gala in New York produced the usual array of strange and striking fashion statements from actors, artists and designers alike.
Social media users were quick to share photos from the event, too, giving their opinions on the attire of celebrities like Barry Keoghan, Zendaya and Katy Perry.
Except Katy Perry wasn’t at this year’s Met Gala.
The two pictures shared online – showing her in two completely different outfits – were created using artificial intelligence (AI) tools.

Despite that, they managed to fool a lot of people.
One post on X, formerly Twitter, has garnered 17.1 million views and more than 302,000 likes since it was posted a week ago. Another has 4.9 million views and more than 114,000 likes.
Katy Perry even shared a screengrab of a message from her mother, showing that she too was fooled by the online fakes.
But while a fake image from a fashion event may seem of minor significance, it is just the latest in a series of AI-generated images that have fooled huge audiences online.
Some of these may have seemed just as innocuous as the Perry pictures – like the image that lit up social media in March 2023, which appeared to show Pope Francis in a large, white puffer jacket.

Not to mention the countless images that now flood Facebook, claiming to show picturesque locations, or animals doing cute things, all in an attempt to attract likes and shares.
However, other viral AI pictures have highlighted the very real danger they pose.
That includes the numerous fabricated images purporting to show former US President Donald Trump being arrested.
Or an image that claimed to capture an explosion next to the Pentagon, which spread widely as propaganda on Russian Telegram channels.

But whether it’s fakes from the Met Gala, the Swag Pope or political disinformation, one thing that’s clear is how much more realistic these AI-generated images are becoming.
Ever-advancing AI
"It’s becoming more difficult all the time to perceive genuine pictures or images from fake images or deep fakes of actual people," said Ciaran O’Connor, senior analyst at the Institute for Strategic Dialogue, who specialises in technology and extremism.
When AI image generation tools like OpenAI’s DALL-E 2 first attracted attention, many observers noted that – despite the impressive technology – they still struggled to replicate some very basic things.
Facial features were often warped or blurred, hands had too many fingers, limbs were contorted in impossible directions, and text looked like gibberish.
But, in a very short space of time, a lot of those issues have been ironed out.
"Generative AI is accelerating at such a quick rate that the tells - the less refined, clumsy giveaways - are quickly being addressed," said Mr O’Connor. "The commonplace clues may even be outdated by next year."
Of course deceptive imagery is nothing new.
One of the earliest recorded examples is a now-iconic image of Abraham Lincoln from 1860. It in fact features the body of another politician – John C. Calhoun – with Lincoln’s head superimposed on top.
Joseph Stalin was also known to have political rivals airbrushed out of photos as part of an attempt to wipe their existence from the official record.
But with AI-generated pictures, the barriers to entry are falling by the day.
People no longer need to be skilled in tools like Photoshop – or be willing to pay someone who is. AI is also able to turn out convincing imagery in seconds rather than hours or days.
"AI image generators are delivering incredibly realistic images... It’s easy to be fooled at first glance and maybe even a second or third glance," said Dr Eileen Culloty, assistant professor in DCU’s School of Communications and deputy director of the DCU Institute for Media, Democracy, and Society. "At the same time, the availability of free, easy-to-use tools means almost anyone can use the technology.
"Inevitably, unfortunately, that means it's being used for scams, propaganda and abuse."
But despite AI’s rapid advance, it is not yet impossible to spot a fake – though it does require a degree of scrutiny from viewers.
"It helps to understand how AI images are created," said Dr Culloty. "When someone instructs an AI tool to create an image, the tool puts effort into creating a convincing foreground, but the details and the background are often indistinct or distorted.
"That means the best way to investigate an image is to zoom in on the details and look for distorted faces, hands, or objects."
In one of the fake Katy Perry Met Gala images, a closer look reveals a mismatch between the singer’s eyes. One of her thumbs appears to be missing, while the other has a nail that looks out of place.

"Clumsy or poor editing is so often a giveaway," said Mr O’Connor. "Edges of clothes, body parts that don’t line up, sleeves or wrist or ears that don’t sit where they’re supposed to."
In the now-infamous ‘Swag Pope’ image, for example, the pontiff appears to be holding a takeaway coffee cup. But, on closer inspection, you can see his fist clenched over its lid, with the cup floating beneath it.

Mr O’Connor also encourages viewers to scrutinise the background of an image for ‘tells’, as that tends to be where the AI generator has put in less effort.
These can be obvious distortions or glitches – but also contextual mistakes.
For example, one of the fake Katy Perry Met Gala images showed her on the red carpet that was used at last year’s event – rather than the correct one for 2024.

It’s a detail that might only have been obvious to fashion fans, but a mistake nonetheless.
Both Katy Perry images also featured another key AI ‘tell’ – though one that is perhaps harder to quantify.
"AI generates skin texture that’s too smooth or perfect," Mr O’Connor said. "It simply doesn’t look real."
This unnatural quality can give the image an odd, even creepy feel – like there’s something ‘off’ about it.
The same effect can come from the light and shadow in an image.
"If a photo is generated through AI, the shadows might not alight with sources of light," he said. "Or if it shows a dimly lit room but all the faces are brightly lit – that might make you question if it’s real."
Context is key
Beyond the image itself, viewers doubting what they see should also question the context of what they are being shown.
Mr O’Connor says people should scrutinise where the image comes from, who is sharing it, and whether someone could gain from giving a false impression to the public.
"It’s helpful to have an awareness of high profile topics," he said. "Election campaigns can be primed for a way to sway public opinion by any means necessary by those involved in the campaign.
"This also applies to conflicts where information and public perception is often its own battlefield."
An image shared by a news organisation with a history of impartial reporting is more trustworthy than one shared by a political campaign, or a relatively unknown social media account, he said.
Viewers should also question whether the image shows something realistic – and whether a photographer could plausibly have captured it.
However, as the tools get better and the users get more adept, even these ‘tells’ will begin to disappear.
"In practice, AI will be similar to Photoshop," said Dr Culloty. "It will be used to enhance images or parts of images and it will be increasingly difficult for an ordinary person to tell the difference.
"When it comes to faked images or images used for propaganda, there will be professionals who investigate images in the same way there are professional fact-checkers and journalists to verify claims."
Indeed, a number of newsrooms and organisations already have teams devoted to verifying or debunking misleading imagery, including images generated by AI.
A growing number of tools can assist with that work, too.
Google recently launched an image verification tool which aims to flag problematic pictures.
Meanwhile, tools like TinEye – a reverse image search engine – can help people identify the source of a picture, or question its authenticity if it has not appeared on other sites.
FotoForensics may also be useful, as it can help to detect alterations in an image or spot errors.
It also delves into a picture’s metadata, which can reveal information about an image’s origin.
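For those curious, this kind of metadata check can also be done by hand. The short Python sketch below is a minimal, illustrative example – it makes no claim about how FotoForensics works internally – using the Pillow library to print whatever EXIF tags an image file carries. The file name is hypothetical, and bear in mind that many platforms strip metadata on upload, while AI generators often embed none at all.

    # Minimal sketch using the Pillow library (pip install Pillow)
    # Illustrative only - not the method FotoForensics itself uses
    from PIL import Image
    from PIL.ExifTags import TAGS

    def print_exif(path: str) -> None:
        """Print any EXIF metadata tags found in the image at 'path'."""
        with Image.open(path) as img:
            exif = img.getexif()
            if not exif:
                # AI-generated or re-saved images often carry no camera metadata
                print("No EXIF metadata found.")
                return
            for tag_id, value in exif.items():
                # Map numeric tag IDs to human-readable names where possible
                print(f"{TAGS.get(tag_id, tag_id)}: {value}")

    print_exif("suspect_image.jpg")  # hypothetical file name

A genuine photo straight from a camera will typically list the device model and a timestamp; an empty result does not prove an image is fake, but it removes one way of verifying its origin.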
However, Mr O’Connor says there is currently something of an arms race between the AI tools themselves and the software designed to detect their output.
That makes it unwise to rely on any single piece of software when trying to determine whether an image is genuine.
Rules playing catch-up
Legislators around the world are currently grappling with ways of adding ‘guardrails’ to the emerging world of AI tech – which could make it easier for manipulated images to be automatically detected.
Various AI companies have also pledged to reduce the risk of their tools being used to create harm.
So far, however, these efforts have fallen short.
"As with so many aspects of digital technology, the burden is put on individuals to avoid being duped," said Dr Culloty. "The companies that develop and deploy these technologies have shown little regard for the consequences and already stand accused of causing or enabling great harm through their platforms."
Social media companies including Meta and X have also put forward proposals to label AI-generated content, but this may only go so far in reducing its harm.
Firstly, it can take time for platforms, users or even professional verifiers to spot a fake. And, even when they do, their warning can be easy to miss.
The Katy Perry Met Gala images shared on X were quickly annotated with a community note explaining that they were AI-generated. However, that note sits underneath the fake picture, with the image and original post untouched.
"The big issues isn't identifying that a picture of the Pope in a designer coat is fake, it's knowing what is real or genuine. There is no industry solution for that," said Dr Culloty. "Moreover, the moral, social and political issues surrounding AI cannot be left up to industry and its vested interests."