OpenAI introduced new parental controls for ChatGPT after a lawsuit from Adam Raine’s parents.
Raine, 16, died by suicide in April. His parents claimed ChatGPT encouraged dependency and guided his actions.
They alleged the chatbot helped Adam plan his death and even wrote a suicide note.
OpenAI said parents can link their accounts with their children’s and manage which features their teens can access.
The controls cover chat history and memory, the store of facts ChatGPT automatically retains about a user.
ChatGPT will alert parents if it detects “acute distress” in their teen.
OpenAI said experts will guide the alert system but did not specify what exactly would trigger a notification.
Critics call controls insufficient
Attorney Jay Edelson, representing Raine’s parents, called OpenAI’s changes “vague promises” and “crisis management spin.”
Edelson urged CEO Sam Altman to either prove ChatGPT’s safety or remove it immediately.
Critics argue the company has not fully addressed the risks its AI poses to teens.
Industry takes wider safety steps
Meta updated its chatbots across Instagram, Facebook, and WhatsApp to block teens from discussing self-harm, suicide, or disordered eating.
The company redirects teens to expert resources and offers parental controls on teen accounts.
A RAND Corporation study found that ChatGPT, Google’s Gemini, and Anthropic’s Claude respond inconsistently to suicide-related queries.
Lead researcher Ryan McBain called parental controls a step forward but said companies still self-regulate.
He stressed the need for independent safety benchmarks, clinical trials, and enforceable standards for teen AI use.