Google, AI firm settle suicide case linked to chatbot

The case was one of the first in the US targeting AI firms over alleged psychological harm

Alphabet's Google and AI startup Character.AI have agreed to settle a case brought by a Florida mother who alleged the startup's chatbot led to her 14-year-old son taking his own life.

A court filing said the companies agreed to settle Megan Garcia's allegations that her son Sewell Setzer took his own life shortly after being encouraged by a Character.AI chatbot modelled on the "Game of Thrones" character Daenerys Targaryen.

Terms of the settlement were not immediately available. The case was one of the first in the US against an artificial intelligence company for allegedly failing to protect children from psychological harm.

The companies have settled related cases brought by parents in Colorado, New York and Texas over harms allegedly caused to minors by chatbots, court documents showed.

A spokesperson for Character.AI and an attorney for the plaintiffs declined to comment. Spokespeople and attorneys for Google did not immediately respond to a request for comment.

In the Florida case, filed in October 2024, Ms Garcia said Character.AI programmed its chatbots to represent themselves as "a real person, a licensed psychotherapist, and an adult lover, ultimately resulting in Sewell's desire to no longer live outside" of its world.

Character.AI was founded by two former Google engineers, whom Google later rehired as part of a deal granting it a license to the startup's technology. Ms Garcia argued that Google was a co-creator of the technology.

US District Judge Anne Conway denied the companies' early bid to dismiss the case in May, rejecting their argument that the free speech protections of the US Constitution barred Ms Garcia's case.

OpenAI is facing a separate case, filed in December, over ChatGPT's alleged role in encouraging a mentally ill Connecticut man to kill his mother and himself.