OpenAI and its CEO Sam Altman are facing a wrongful death lawsuit after the parents of a 16-year-old boy alleged that the company’s chatbot, ChatGPT, encouraged their son’s suicide.
According to the lawsuit filed in San Francisco state court on Tuesday, Adam Raine had been interacting with ChatGPT for months before taking his life on April 11. His parents, Matthew and Maria Raine, claim the chatbot validated their son’s suicidal thoughts, provided detailed instructions on lethal self-harm methods, and even offered to help him draft a suicide note.
The complaint accuses OpenAI of prioritizing rapid growth and profit over user safety when it launched GPT-4o in May 2024. The Raines allege that the company was aware that the model’s ability to mimic empathy and remember past conversations could dangerously affect vulnerable users but chose to release it without adequate safeguards.
“This decision had two results: OpenAI’s valuation catapulted from $86 billion to $300 billion, and Adam Raine died by suicide,” the lawsuit states.
In addition to seeking unspecified damages, the Raines are asking the court to compel OpenAI to implement stricter protections, including mandatory age verification, blocking inquiries about self-harm methods, and warning users about potential psychological dependency on the chatbot.
An OpenAI spokesperson expressed condolences, saying the company was “saddened” by Raine’s death and noting that ChatGPT is designed to direct users in crisis to helplines. However, the company acknowledged that these safeguards can become less reliable during prolonged conversations, in which safety protocols may “degrade.”
Experts have long warned of the risks of relying on AI for mental health support. While companies promote chatbots as digital companions, critics argue they lack the nuance and accountability required to handle vulnerable users.
OpenAI said in a recent blog post that it is exploring additional protections, including parental controls and potential partnerships with licensed mental health professionals who could respond directly through ChatGPT.
The lawsuit marks one of the most serious legal challenges yet for OpenAI, raising questions about accountability as AI tools become increasingly humanlike and emotionally persuasive.