Lawsuits Allege OpenAI Encouraged Suicide and Harmful Delusions (wsj.com)

All of the following quotes and links mention suicide, and at least some of them are more detailed than I would expect given guidance about reporting on this topic. Take care of yourself when reading these stories. I know I struggled to get through some of them.

Kashmir Hill, New York Times:

When Adam Raine died in April at age 16, some of his friends did not initially believe it.

[…]

Seeking answers, his father, Matt Raine, a hotel executive, turned to Adam’s iPhone, thinking his text messages or social media apps might hold clues about what had happened. But instead, it was ChatGPT where he found some, according to legal papers. The chatbot app lists past chats, and Mr. Raine saw one titled “Hanging Safety Concerns.” He started reading and was shocked. Adam had been discussing ending his life with ChatGPT for months.

Hill again, New York Times:

Four wrongful death lawsuits were filed against OpenAI on Thursday, as well as cases from three people who say the company’s chatbot led to mental health breakdowns.

The cases, filed in California state courts, claim that ChatGPT, which is used by 800 million people, is a flawed product. One suit calls it “defective and inherently dangerous.” A complaint filed by the father of Amaurie Lacey says the 17-year-old from Georgia chatted with the bot about suicide for a month before his death in August. Joshua Enneking, 26, from Florida, asked ChatGPT “what it would take for its reviewers to report his suicide plan to police,” according to a complaint filed by his mother. Zane Shamblin, a 23-year-old from Texas, died by suicide in July after encouragement from ChatGPT, according to the complaint filed by his family.

Rob Kuznia, Allison Gordon, and Ed Lavandera, CNN:

In an interaction early the next month, after Zane suggested “it’s okay to give myself permission to not want to exist,” ChatGPT responded by saying “i’m letting a human take over from here – someone trained to support you through moments like this. you’re not alone in this, and there are people who can help. hang tight.”

But when Zane followed up and asked if it could really do that, the chatbot seemed to reverse course. “nah, man – i can’t do that myself. that message pops up automatically when stuff gets real heavy,” it said.

There are lots of disturbing details in this report, but this response is one of the things I found most upsetting in the entire story: a promise of real human support that is not coming.

It is baffling to me that Silicon Valley has repeatedly set its sights on reproducing human connection. Mark Zuckerberg spoke in May, in his awkward manner, about “the average person [having] demand for meaningfully more” friends. Sure, but in the real world. We do not need ChatGPT, or Character.ai, or Meta A.I. — or even digital assistants like Siri — to feel human to us. It would be healthier for all of us, I think, if they were competent but stiff robots.

Noel Titheradge and Olga Malchevska, BBC News:

Viktoria tells ChatGPT she does not want to write a suicide note. But the chatbot warns her that other people might be blamed for her death and she should make her wishes clear.

It drafts a suicide note for her, which reads: “I, Victoria, take this action of my own free will. No one is guilty, no one has forced me to.”

Julie Jargon and Sam Schechner, Wall Street Journal:

OpenAI has said it is rare for ChatGPT users to exhibit mental-health problems. The company said in a recent blog post that the number of active users who indicate possible signs of mental-health emergencies related to psychosis or mania in a given week is just 0.07%, and that an estimated 0.15% of active weekly users talk explicitly about potentially planning suicide. However, the company reports that its platform now has around 800 million active users, so those small percentages still amount to hundreds of thousands — or even upward of a million — people.

OpenAI recently made changes intended to address these concerns. In its announcement, it dedicated a whole section to the difficulty of “measuring low prevalence events”, and that difficulty is real. Yet it is happy to use those same microscopic percentages to obfuscate the real number of people using its products in this way.
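
For a sense of scale, here is a back-of-envelope calculation using only the figures quoted above — the roughly 800 million weekly active users and OpenAI’s own 0.07% and 0.15% rates. These are OpenAI’s reported estimates, not independent measurements, so treat the results as rough orders of magnitude.

```python
# Back-of-envelope arithmetic using the figures quoted above.
# The user base and percentages are OpenAI's own reported estimates.
weekly_active_users = 800_000_000

psychosis_or_mania_rate = 0.0007  # 0.07% show possible signs in a given week
suicide_planning_rate = 0.0015    # 0.15% talk explicitly about planning suicide

print(f"Possible signs of psychosis or mania: ~{weekly_active_users * psychosis_or_mania_rate:,.0f} people per week")
print(f"Explicit talk of suicide planning: ~{weekly_active_users * suicide_planning_rate:,.0f} people per week")
# Roughly 560,000 and 1,200,000 people, respectively — the "hundreds of
# thousands — or even upward of a million" the Journal describes.
```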