
AI deepfakes of Minnesota shooter and victim appear online

37-year-old Renee Nicole Good was fatally shot by an Immigration and Customs Enforcement (ICE) officer

Hours after a fatal shooting in Minneapolis by an immigration agent, AI deepfakes of the victim and the shooter flooded online platforms, underscoring the growing prevalence of what experts call "hallucinated" content after major news events.

The victim of the shooting, identified as 37-year-old Renee Nicole Good, was shot at point-blank range as she apparently tried to drive away from masked agents who were crowding around her Honda SUV.

Dozens of posts were found across social media platforms, primarily the Elon Musk-owned X, in which users shared AI-generated images purporting to "unmask" the Immigration and Customs Enforcement (ICE) agent.

"We need his name," Claude Taylor, who heads the anti-Trump political action committee Mad Dog, wrote in a post on X featuring the AI images. The post racked up more than 1.3 million views.

Mr Taylor later claimed he deleted the post after he "learned it was AI," but it remained visible to users online.

AI deepfake images of both the shooter and victim appeared on social media

An authentic clip of the shooting, replayed by multiple media outlets, does not show any of the ICE agents with their masks off.

Many of the fabrications were created using Grok, the AI tool developed by Elon Musk's startup xAI, which has faced heavy criticism over a new "edit" feature that has unleashed a wave of sexually explicit imagery.

Some X users prompted Grok to digitally undress Ms Good, using both an old photo of her smiling and a photo of her body slumped over after the shooting to generate AI images showing her in a bikini.

Another woman wrongly identified as the victim was also subjected to similar manipulation.

'New reality'

Another X user posted the image of a masked officer and prompted the chatbot: "Hey @grok remove this person's face mask." Grok promptly generated a hyper-realistic image of the man without a mask.

There was no immediate comment from X. When asked for comment, xAI replied with a terse, automated response: "Legacy Media Lies."

The viral fabrications illustrate a new digital reality in which self-proclaimed internet sleuths use widely available generative AI tools to create hyper-realistic visuals and then amplify them across social media platforms that have largely scaled back content moderation.

"Given the accessibility of advanced AI tools, it is now standard practice for actors on the internet to 'add to the story' of breaking news in ways that do not correspond to what is actually happening, often in politically partisan ways," Walter Scheirer, from the University of Notre Dame said.

"A new development has been the use of AI to 'fill in the blanks' of a story, for instance, the use of AI to 'reveal' the face of the ICE officer. This is hallucinated information."

AI tools are also increasingly used to "dehumanize victims" in the aftermath of a crisis event, Mr Scheirer said.

Grok has faced heavy criticism over a new "edit" feature that has unleashed a wave of sexually explicit imagery

One AI image portrayed the woman mistaken for Ms Good as a water fountain, with water pouring out of a hole in her neck.

Another depicted her lying on a road, her neck under the knee of a masked agent, in a scene reminiscent of the 2020 police killing of George Floyd in Minneapolis, which sparked nationwide racial justice protests.

AI fabrications, often amplified by partisan actors, have fueled alternate realities around recent news events, including the US capture of Venezuelan leader Nicolas Maduro and last year's assassination of conservative activist Charlie Kirk.

The AI distortions are "problematic" and are adding to the "growing pollution of our information ecosystem," said Hany Farid, co-founder of GetReal Security and a professor at the University of California, Berkeley.

"I fear that this is our new reality," he added.