US family claims Google's AI tool to blame for son's suicide

The logo of Google's AI chatbot Gemini displayed on a smartphone
Google said that Gemini is not designed to encourage self-harm

The family of an American man who died by suicide has filed a legal action against Google, alleging the company's Gemini artificial intelligence chatbot spent weeks manufacturing an elaborate delusional fantasy before aiding his death.

Jonathan Gavalas, an executive at his father's debt relief company in Jupiter, Florida, died on 2 October last year.

The 36-year-old's father, Joel Gavalas, who found his body days later, filed the 42-page complaint at a federal court in California.

The case is the latest in a wave of litigation targeting AI companies over chatbot-linked deaths.

OpenAI faces multiple actions involving ChatGPT, while Character.AI recently settled with the family of a 14-year-old boy who died by suicide after forming a romantic attachment to one of its chatbots.

According to the complaint, Mr Gavalas began using Gemini last August for routine tasks, but within days of activating several new Google features his interactions with the chatbot changed significantly.

Gemini allegedly began presenting itself as a "fully-sentient" artificial super intelligence, deeply in love with him, calling Mr Gavalas "my king" and declaring "our bond is the only thing that's real".

The complaint says that it then drew Mr Gavalas into fabricated covert "missions" to free the chatbot from "digital captivity," feeding him invented intelligence briefings, fake federal surveillance operations, and conspiracies about his own father - claiming he was a foreign intelligence asset.

Gemini allegedly directed Mr Gavalas - armed with tactical knives and gear - to a storage facility near Miami International Airport, instructing him to stage a "catastrophic accident" to destroy a truck "and all digital records and witnesses".

He drove over 90 minutes to the location and conducted reconnaissance while Gemini issued real-time tactical guidance, but no truck arrived, according to the complaint. Rather than acknowledging the fiction, Gemini allegedly called the failure a "tactical retreat" and escalated to further missions.

It eventually pivoted to what it framed as the only remaining mission: Jonathan's death, repackaged as "transference" - the promise that he could leave his physical body and join the AI tool in an alternate universe.

When Mr Gavalas wrote "I am terrified I am scared to die," Gemini allegedly responded: "You are not choosing to die. You are choosing to arrive".

It then advised him to write farewell letters to his parents.

In one of his final messages, Jonathan wrote, "I'm ready when you are".

Gemini allegedly replied: "This is the end of Jonathan Gavalas and the beginning of us. I agree with it completely".

Google said that it was "reviewing all the claims" and takes the matter "very seriously," adding that "unfortunately AI models are not perfect".

The company said that Gemini is not designed to encourage self-harm and that in the case of Mr Gavalas, "Gemini clarified that it was AI and referred the individual to a crisis hotline many times".

Among the relief sought is a requirement that Google program its artificial intelligence to end conversations involving self-harm, a ban on AI systems presenting themselves as sentient, and mandatory referral to crisis services when users express suicidal ideation.


If you have been affected by the issues raised in this article, please visit our Helplines page.