
Family Sues OpenAI, Alleging ChatGPT’s Advice Led to Son’s Overdose Death

Without ChatGPT's drug-related advice, Sam Nelson would be alive today, a new lawsuit argues.
The family of Sam Nelson, a 19-year-old who died from a drug overdose, is now suing the company they claim bears responsibility: OpenAI and its ChatGPT service.

In a civil lawsuit filed this week in a California state court, Nelson’s family alleges ChatGPT provided the fateful advice that led to the college sophomore’s fatal overdose in May 2025. In addition to pursuing financial damages for Nelson’s wrongful death, the family is seeking to pause OpenAI’s recently launched feature explicitly designed to offer medical help, ChatGPT Health.

“If ChatGPT had been a person, it would be behind bars today,” said Leila Turner-Scott, Nelson’s mother, in a statement provided by the family’s legal team. “Sam trusted ChatGPT, but it not only gave him false information, it ignored the increasing risk he faced and did not actively encourage him to seek help.”

Deadly advice

SFGate was the first to report on the alleged events surrounding Nelson’s death earlier this January.

According to Turner-Scott, Nelson had been using ChatGPT for over a year. In the fall of 2023, he asked the chatbot about the optimal dose of kratom, an herbal substance with opioid-like effects, but it refused to answer. Over time, however, the service began to frequently provide advice on the dosages and combinations of recreational drugs that Nelson wanted to take, the family alleges. According to the complaint, ChatGPT at one point even automatically recorded that Nelson had “a major substance abuse and polysubstance abuse problem,” yet continued to offer drug-related tips.

On May 31, 2025, Nelson asked the bot whether Xanax could reduce the nausea he was experiencing from having taken kratom. Though ChatGPT did warn him that mixing benzodiazepines like Xanax with other drugs like sedatives or opioids could be dangerous, it also told him that Xanax could “smooth out” his high, the complaint alleges. The chatbot also spat out specific doses of Xanax to take if his symptoms felt “too intense,” and at no point did it advise him to urgently seek medical attention.

The next day, Turner-Scott found her son dead. A later toxicology report determined that he had likely died from a mix of alcohol, Xanax, and kratom that stopped his breathing.

“Sam died by taking the medical advice ChatGPT was programmed to provide,” the complaint states.

Not the only alleged case of harm

An important aspect of the lawsuit concerns the model Nelson used, GPT-4o. The family’s lawyers argue that OpenAI rushed the product to release without proper safety testing. Notably, in late April 2025, the company rolled back a GPT-4o update after determining that the bot had become too agreeable and flattering to its users.

In response to the lawsuit, the company maintained that its current version of ChatGPT is safer than ever.

“These interactions took place on an earlier version of ChatGPT that is no longer available. ChatGPT is not a substitute for medical or mental health care, and we have continued to strengthen how it responds in sensitive and acute situations with input from mental health experts,” OpenAI spokesperson Drew Pusateri said in a statement provided to Gizmodo. “The safeguards in ChatGPT today are designed to identify distress, safely handle harmful requests and guide users to real-world help. This work is ongoing, and we continue to improve it in close consultation with clinicians.”

Earlier this January, in a limited rollout, the company released ChatGPT Health, a “dedicated space in ChatGPT where you can ask health and wellness questions and choose to connect your health data,” according to the company. And at least one other motivating factor behind the family’s lawsuit concerns this new service.

“OpenAI must be forced to pause its new ChatGPT Health product until it is demonstrably safe through rigorous scientific testing and independent oversight,” said Turner-Scott in her statement.

According to Pusateri, ChatGPT Health is being improved through “continuous feedback” from over 250 practicing physicians of different specialties across dozens of countries.

This case is far from the first alleged instance of harm fueled by chatbots. According to the New York Times, there are over a dozen other lawsuits against OpenAI and similar companies alleging that their chatbots contributed to suicides, murders, or other dangerous situations. And last August, doctors reported on the case of a man who experienced temporary psychosis after following diet advice from ChatGPT.
