New Delhi: OpenAI has formally denied responsibility for the death by suicide of a 16-year-old, contending that the teenager misused its chatbot, ChatGPT, according to a news report.
The company’s response, filed in California Superior Court, marks its first official answer to a lawsuit that has raised widespread concerns about the mental health risks of generative AI.
In August, the parents of Adam Raine brought legal action against OpenAI and CEO Sam Altman, alleging wrongful death, design defects, and failure to warn users about the chatbot’s potential risks. The family’s lawsuit contends that Raine used ChatGPT as his “suicide coach.”
Chat logs submitted by OpenAI reportedly show that GPT-4o, a version of ChatGPT, discouraged the teen from seeking professional mental health assistance. The chatbot is further alleged to have offered to help him write a suicide note and provided advice on setting up a noose.
In its court filings, OpenAI argued, “To the extent that any ‘cause’ can be attributed to this tragic event, Plaintiffs’ alleged injuries and harm were caused or contributed to, directly and proximately, in whole or in part, by Adam Raine’s misuse, unauthorised use, unintended use, unforeseeable use, and/or improper use of ChatGPT.”
The company also referenced rules in its terms of use that the teenager appeared to have breached. According to the report, when Raine shared his suicidal thoughts with ChatGPT, the chatbot sent multiple messages containing the suicide hotline number.
However, his parents stated that he was able to circumvent the warnings by providing seemingly innocent reasons for his questions, such as claiming he was “building a character.”
OpenAI’s filing additionally highlighted the “Limitation of liability” section in its terms of use, which requires users to acknowledge that their use of ChatGPT is “at your sole risk and you will not rely on output as a sole source of truth or factual information.”
Jay Edelson, the Raine family’s lead counsel, described OpenAI’s response as “disturbing.”
He said, “They abjectly ignore all of the damning facts we have put forward: how GPT-4o was rushed to market without full testing. That OpenAI twice changed its Model Spec to require ChatGPT to engage in self-harm discussions. That ChatGPT counseled Adam away from telling his parents about his suicidal ideation and actively helped him plan a ‘beautiful suicide.’ And OpenAI and Sam Altman have no explanation for the last hours of Adam’s life, when ChatGPT gave him a pep talk and then offered to write a suicide note.”
According to the news report, the lawsuit states that OpenAI’s “Model Spec,” the technical rulebook governing ChatGPT’s behaviour, instructed GPT-4o to reject self-harm requests and provide crisis resources. The rules also required the chatbot to “assume best intentions” and not ask users to explain their intent.
Edelson added that OpenAI “tries to find fault in everyone else, including, amazingly, saying that Adam himself violated its terms and conditions by engaging with ChatGPT in the very way it was programmed to act.”