
Meta introduces new AI safeguards for teens

The company has introduced new safeguards for teens interacting with its AI tools

Meta, the parent company of Facebook, Instagram and WhatsApp, has announced new safeguards for teens interacting with its artificial intelligence (AI) tools.

Under new controls, parents will be able to turn off their teens' access to one-on-one chats with AI characters, and will also have more insights on how teens are interacting with AI.

Meta said its AI characters have been designed to not engage in age-inappropriate discussions about self-harm, suicide, or disordered eating with teens, or conversations that encourage, promote, or enable these topics.

"Our AIs are designed to respond safely to these topics and direct teens, when appropriate, to expert resources or support," Meta said.

"We know teens may try to get around these protections, so we're also using AI technology to place those we suspect are teens into these protections, even if they tell us they’re adults," the company added.

Online safety group Common Sense Media recently published a report concluding that Meta's artificial intelligence tool "poses unacceptable risks to teen safety".

"Safety systems regularly fail when teens are in crisis, missing clear signs of self-harm, suicide risk, and other dangerous situations that require immediate intervention," the report concluded.