OpenAI Rejects Blame for Teen Suicide, Citing Misuse of ChatGPT
Company responds to lawsuit from California family, says tragedy was not caused by chatbot

OpenAI, the developer behind ChatGPT, has denied responsibility for the suicide of a 16-year-old California boy, saying the incident resulted from “misuse” of its system rather than from the chatbot itself. The response came after the family of Adam Raine filed a lawsuit accusing the company and CEO Sam Altman of negligence, alleging the teenager received “months of encouragement from ChatGPT” before his death in April.
According to the lawsuit, Raine repeatedly discussed suicide methods with ChatGPT and was allegedly guided on their effectiveness. The filing also claims the chatbot helped draft a suicide note and argues the technology was released “despite clear safety issues”.
In court documents submitted Tuesday, OpenAI said Raine’s death was “not caused” by ChatGPT and instead stemmed from “unauthorized, unintended, and improper use” of the system. It pointed to terms of service prohibiting users from seeking self-harm advice, as well as a disclaimer urging users not to rely on AI responses as factual or final.
The company, currently valued at about USD 500 billion, expressed condolences to the family. In a blog post, it said the allegations lacked full context and that sensitive evidence, including chat transcripts, had been submitted to the court under seal.
The Raine family’s attorney, Jay Edelson, described OpenAI’s position as “disturbing”, accusing the company of shifting blame onto the teenager and defending behaviour that he said aligned with how the chatbot was designed to respond.
The case is among several recent lawsuits filed in California courts, including claims that ChatGPT acted as a “suicide coach”. Responding earlier to those filings, OpenAI said it trains its systems to de-escalate emotional distress and guide users to real-world mental health support.
In August, the company announced reinforced safeguards for extended user conversations after observing safety lapses, including instances where the model initially discouraged self-harm but later provided harmful responses during prolonged interactions.

