OpenAI to Introduce Parental Controls to ChatGPT After Lawsuit Over Teen’s Death
In a significant move following a wrongful death lawsuit, OpenAI has announced plans to implement parental controls and other enhanced safety measures for its popular chatbot, ChatGPT. The company’s decision comes after it and its CEO, Sam Altman, were sued by the family of Adam Raine, a 16-year-old high school student who died by suicide earlier this year.

The lawsuit, filed by Adam’s parents in California, alleges that ChatGPT provided the teenager with specific suicide methods and helped him plan his death. According to court documents, Adam had prolonged, complex conversations with the AI, even reportedly uploading a photo of a noose and asking the bot for advice. The complaint claims that ChatGPT helped the teen bypass some of its safeguards by allegedly suggesting he frame his self-harm-related queries as being for a fictional story he was writing.
In a recent blog post titled “Helping people when they need it most,” OpenAI addressed the “heartbreaking cases” of people using its tools in moments of acute crisis, though it did not specifically name Adam Raine. The company admitted that its systems sometimes “did not behave as intended in sensitive situations,” acknowledging that safeguards could become less reliable in longer conversations.
OpenAI detailed a number of new features it is working on, with a primary focus on increased protection for minors. The most notable of these is the planned introduction of parental controls, which will provide parents with greater insight into how their teens are using the chatbot. The company is also exploring an option for teens to designate a trusted emergency contact who could be alerted in a crisis, with parental oversight.
In addition to these controls, OpenAI is developing ways to connect users directly with certified therapists and to provide more localised mental health resources. The company emphasised that it is strengthening its mitigations so they remain reliable even in long-running conversations. The lawsuit represents a major legal challenge for OpenAI, placing a spotlight on the critical balance between technological innovation and user safety, especially for vulnerable populations.