
How misinformation spread online after Bondi Beach attack

A wave of misinformation spread across social media after the attack

As authorities worked to establish the facts after the Bondi Beach attack on Sydney's Jewish community, false claims and conspiracy theories spread widely online.

Maria Flannery of the European Broadcasting Union's Spotlight Network has examined how misinformation took hold across social media platforms in the immediate aftermath.


On Sunday 14 December, two gunmen opened fire on a Hanukkah celebration at Sydney's Bondi Beach, killing at least 15 people and injuring dozens more.

Amid the chaos and shock of the attack, which local police have declared a terror attack, a video emerged showing Syrian-born fruit shop owner Ahmed al Ahmed heroically wrestling a rifle out of the hands of one of the shooters.

The video, later verified by authorities and major news organisations, quickly went viral. As many hailed Mr Al Ahmed’s bravery online, another narrative began to emerge.

In the hours after the incident, rumours circulated online that the man's name was Edward Crabtree. The claim appears to have originated on a fake news website posing as a national Australian outlet called The Daily.

New South Wales Premier Chris Minns shared a picture of himself at a Sydney hospital visiting Ahmed al Ahmed, who was hailed as a 'hero' after he wrestled a gun from an attacker during the deadly Bondi Beach shooting
NSW Premier Chris Minns and Ahmed al Ahmed

An article written by 'Rebecca Chen', described as the Senior Crime Reporter, was titled: "'I Just Acted': Bondi Local Edward Crabtree Disarms Gunman in Terrifying Attack".

The article describes Crabtree as a 43-year-old IT professional and recounts in detail how, when he "left his Bondi apartment on Saturday afternoon for his usual weekend walk along the beachfront, he had no idea he would soon be facing down an armed terrorist".

Presented as an exclusive interview from his hospital bed, the article quotes 'Crabtree' on his thoughts and feelings when he saw the gunman. The claim about the man's name circulated widely on social media, and was even picked up by X's built-in AI tool, Grok, which repeated the misinformation when asked who the man was.

Despite how widely the name circulated, subsequent investigation revealed the origin of the false claim.

Belgium's RTBF Fakey dug into the background of the 'news' website and found several clues indicating that it was potentially created using AI. Among the only stable articles on the site was the one about the Bondi shooting, and the byline photo for Rebecca Chen changed with each refresh.

But the most conclusive piece of evidence came via a Whois domain lookup, which showed that the domain for the news site was created on the day of the mass shooting. The details of those who registered the domain were hidden behind a privacy service in Reykjavik.
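The domain-age check described above can be sketched in a few lines of Python. The Whois record below is entirely hypothetical (the domain name, registrar and timestamp are invented for illustration); in practice the data would come from a Whois lookup service, and the year of the attack is assumed here.

```python
from datetime import date, datetime

# Hypothetical WHOIS record -- the domain, registrar and timestamp are
# invented for illustration; real data comes from a Whois lookup service.
whois_record = """\
Domain Name: example-daily.com
Creation Date: 2025-12-14T03:12:45Z
Registrar: Example Registrar Inc.
"""

attack_date = date(2025, 12, 14)  # date of the Bondi Beach attack (year assumed)

for line in whois_record.splitlines():
    if line.startswith("Creation Date:"):
        # Parse the ISO 8601 timestamp and keep only the calendar date
        created = datetime.fromisoformat(line.split(": ", 1)[1].rstrip("Z")).date()
        if created == attack_date:
            print("Red flag: domain registered on the day of the attack")
```

A freshly registered domain is not proof of fabrication on its own, but combined with the shifting byline photo it is a strong signal.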

Image of AI-generated article and false Grok response

Grok gets it wrong again

The name of the heroic passerby was not the only detail Grok got wrong as people searched for information online.

A fact-check by Germany's ZDFheute detailed how X's AI initially told users who asked about the authenticity of the viral video that it was false.

The chatbot said: "This appears to be an old viral video of a man climbing a palm tree in a parking lot, possibly to trim it, resulting in a branch falling and damaging a parked car. Searches across sources show no verified location, date, or injuries. It may be staged; authenticity is uncertain."

Another false Grok response

The video was verified by numerous news organisations and formed the basis for a major news angle as journalists and authorities scrambled to learn the truth about the shooting.

This example shows the limitations of large language models (LLMs) for verification work. While these AI systems are trained on online content and discussions, they are ill-equipped to perform the same level of verification and quality checks on that information as a human analyst.

Issues with Grok specifically may attract additional scrutiny following an announcement earlier this year that it would be the AI tool of choice for the US government: the Department of Defense confirmed in July that the Pentagon had signed a multi-million dollar deal to roll out Grok for government use.

False claims about Google Trends data

Among the baseless rumours that this was a false flag attack were claims that one of the shooters' names had spiked as a Google search term before the attack even took place.

More specific rumours centred on how the name had been used in Google searches in Australia and Israel before the attack happened, supposedly suggesting foreknowledge by involved parties.

Baseless claims regarding Google Trends

However, an analysis of Google Trends showed that in Australia, the name started trending at about 9am GMT on Sunday. The first reports of an active shooter on the beach were at 7.45am GMT (6.45pm local time).

In Israel, it was the same story. The search term 'naveed akram' began trending at 10am GMT, an hour later than in Australia, reflecting the delay before the name of one of the suspects reached international audiences.

Similar claims emerge frequently amid major incidents, and usually involve the uploaders making a mistake around time zones.

On Google Trends, data is shown in the viewer's local time, not the local time of the country where the event took place. For an incident in Australia, which has a very different time zone from most of the West, there is even more room for time-related errors.
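The time-zone trap described above can be illustrated with a short Python sketch using the standard library's zoneinfo module. The times are taken from the article; the year is assumed for illustration, and New York is an arbitrary example of a Western viewer's time zone.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# First reports of an active shooter: 7.45am GMT on Sunday 14 December
# (year assumed for illustration).
utc_time = datetime(2025, 12, 14, 7, 45, tzinfo=ZoneInfo("UTC"))

# Sydney observes daylight saving (AEDT, UTC+11) in December,
# so the attack began at 6.45pm local time.
sydney_time = utc_time.astimezone(ZoneInfo("Australia/Sydney"))
print(sydney_time.strftime("%H:%M %Z"))  # 18:45 AEDT

# A viewer in, say, New York sees Google Trends timestamps in their own
# local time -- early morning on the 14th -- which can make a spike that
# happened AFTER the attack look like it came "before" it.
ny_time = utc_time.astimezone(ZoneInfo("America/New_York"))
print(ny_time.strftime("%H:%M %Z"))  # 02:45 EST
```

A search spike at 9am GMT therefore displays as a different wall-clock time for every viewer, which is how honest confusion (and dishonest framing) about "pre-attack" searches arises.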

EBU Google Trends Analysis

The X posts seen above alone had hundreds of thousands of views, and the claim was also spread by other accounts via viral posts, including on TikTok, where some content has since been removed.

Fact-checkers at ORF’s Zeit im Bild also covered this story while tracking Bondi-related misinformation for Austrian TV.

Meanwhile, AI was also used to create images supporting the claim that the attack was staged by 'crisis actors'. An AI-generated image showed a man having blood applied by a makeup artist, VerificaRTVE reported.

AI detectors indicated a high likelihood that the image was AI-generated. But there were also visual clues: the text on the man's T-shirt was distorted, in the way generative AI often struggles to render text.

Man wrongly identified as alleged shooter

Another video that went viral in the aftermath of the attack showed a man claiming his photo had been falsely linked to the Bondi attack. His claims were accurate, according to Deutsche Welle's Fact Check team.

The man lives in Sydney and has the same first and last name as one of the alleged shooters, but there is no connection between him and the attack.

The photo of the man was shared after people searched for the alleged gunman on Facebook and found the unrelated man's page. DW reported that photos of the suspect and images of this man show the two are clearly different people.

Sydney man falsely linked to Bondi attack

Furthermore, while the older shooter died at the scene, the younger gunman suffered critical injuries and was in hospital. "It is therefore impossible for the man speaking in the video to be the alleged attacker," Ashraf and Thoms write for DW.

The proliferation of these unverified claims and the subsequent false identification of an innocent person underscore the critical importance of seeking information from verified news reports in the immediate aftermath of major incidents.