Are We Supposed to Celebrate Lethal Technology?
By: Decca Muldowney and Emily M. Bender
[This post contains discussion of suicide and eating disorders. Please take care. See the end of the newsletter for resources.]
Another preventable loss of life?
We all read the New York Times piece last week about the appallingly tragic case of Adam Raine, a 16-year-old from California, who ended his life after several months of intense ChatGPT use. His parents are now suing OpenAI, alleging the chatbot encouraged him to take his own life. The case marks the first legal action against OpenAI accusing the company of wrongful death.
At Mystery AI Hype Theater 3000, we’ve spent a lot of time thinking about the consequences of chatbots, and particularly their use in mental health and clinical environments. It shouldn’t be controversial to say chatbots are no replacement for trained and licensed mental health clinicians.
We’ve done several episodes on this topic, including a show we just recorded with tech journalist Maggie Harrison Dupré about chatbots and therapy. Keep an eye out for that, and in the meantime, check these out:
Beware the Robo-Therapist with scholar of telemedicine and technology Hannah Zeavin [Livestream, Podcast, Transcript]
Call the AI Quack Doctor, where Emily and Alex discuss the use of “AI” in medical diagnosis [Livestream, Podcast, Transcript]
Chatbots Aren't Nurses with registered nurse, nursing care advocate, and Director of Nursing Practice at National Nurses United Michelle Mahon [Livestream, Podcast, Transcript]
There’s also an obvious labor angle to the introduction of chatbots to replace mental health providers. In The AI Con, we wrote about the eating disorder charity NEDA, which, when its helpline staff unionized, laid them off and tried to replace them with a poorly tested chatbot called “Tessa.”
As you can imagine, that didn’t end well.
“Tessa” started offering disordered eating suggestions to users, including tips for losing weight. Unsurprisingly, that chatbot had to be decommissioned. “In short,” Emily and Alex write in The AI Con, “when NEDA tried to replace the work of actual people with an AI system, the result was not doing more with less, but just less, with a greater potential for harm.”
Speaking of great potential for harm…
OpenAI wants you to know it’s all going to be okay
In the wake of these awful events, OpenAI put out its own statement last week about how they’re “optimizing” ChatGPT for safety, particularly how the chatbot “respond[s] to signs of mental and emotional distress and connect[s] people with care.” On the surface this response might seem reassuring. But when we dug into it, we weren’t impressed.
First, OpenAI claims they’re being guided by “expert input,” but they don’t name a single expert. Beyond that, they behave as if they aren’t responsible for any of it: “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most,” the company writes. This sounds as though these new “AI” products are naturally occurring phenomena that we plebs must “adapt” to, rather than a set of design choices made by OpenAI itself.
Our take is that this product is harmful and OpenAI should not be allowed to go around talking about it as if they don't bear full responsibility for its impact in the world.
OpenAI goes on to write that since 2023, they've trained their models “to not provide self-harm instructions” (thanks!) and to shift to “empathic” language. The problem here is that empathy is something that happens between people. A chatbot (which we prefer to call a “synthetic text extruding machine”) cannot have empathy.
In its statement, OpenAI is also unclear and contradictory about how its safeguards work. It claims that conversations with ChatGPT are “uniquely private” but also that it uses “classifiers” and “human reviewers” to flag and review dangerous content, such as a user’s intention to harm others, and to report it to law enforcement. From a technical standpoint, we have a ton of questions about how this works: How are these classifiers constructed? Who is doing the data labeling work, and at what emotional or psychological cost? How are the classifiers evaluated? How many different languages are they built for and evaluated in?
Given that human reviewers are involved (“AI” is always people!), we also wonder how large that team is, and how many languages they are prepared to review conversations in. In terms of ensuring the safety of users on a global scale, this kind of detail really matters.
OpenAI also offers some reassurances that are hardly reassuring at all. For example, the company says that GPT-5 (which powers the latest version of ChatGPT) has shown “meaningful improvements” because it now offers “non-ideal model responses” in mental health emergencies 25% less often than it used to! Great! Wait, what? What’s the overall number? How frequently are its responses to mental health emergencies “non-ideal”?? What are we actually talking about here? A non-ideal response in a single mental health emergency (like when ChatGPT told Adam Raine to hide from his mother the noose he was planning to use to end his life) is enough of a red flag for us, and certainly for his family, who lost their child.
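To see why a relative number like that tells us so little on its own, here’s a back-of-the-envelope sketch. Every figure in it is invented, because OpenAI publishes neither the base rate of “non-ideal” responses nor the volume of these conversations; the point is only that “25% fewer” means nothing without them:

```python
# Toy calculation with entirely made-up numbers (OpenAI publishes neither the
# base rate of "non-ideal" responses nor the volume of crisis conversations).
hypothetical_base_rate = 0.04      # pretend 4% of crisis conversations got a "non-ideal" response
relative_reduction = 0.25          # the "25% fewer" figure from OpenAI's statement
new_rate = hypothetical_base_rate * (1 - relative_reduction)

# Even a big relative improvement can leave a big absolute number at scale.
hypothetical_crisis_conversations_per_week = 1_000_000   # also entirely made up
remaining = new_rate * hypothetical_crisis_conversations_per_week

print(f"Old rate: {hypothetical_base_rate:.1%}, new rate: {new_rate:.1%}")
print(f"That would still be {remaining:,.0f} non-ideal responses per week.")
```

In other words, without the denominators, a relative improvement is marketing, not a safety metric.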
Finally, OpenAI offers one last tidbit. They’re going to work on building “a network of licensed professionals people could reach directly through ChatGPT” to help people in acute crises. Uh oh. Mental health resources are stretched thin, and we sure hope that OpenAI doesn't become the arbiter of how they are allocated.
It’s worth saying it one more time: OpenAI is fully responsible for this product and thus should be held fully accountable for the harm it is doing in the world.
Is it cool that “AI” can kill people??
Also in the New York Times this past week, writer Stephen Marche reviewed The AI Con. (So as not to dignify this review with a click, here’s an archived link.) He acknowledged the book is “excellent at tearing down Silicon Valley overstatement, and [our] skepticism is a welcome corrective.” However, Marche, who wrote a 2023 murder mystery novel called Death of an Author with the aid of three A.I. programs (why would you need three??), also has some criticisms of the book’s arguments.
Unsurprisingly, he thinks ChatGPT is “astonishing” and that just because there’s overblown Silicon Valley “AI” hype doesn’t mean “its products aren’t occasionally miracles.” He notes that The AI Con discusses one of “AI”’s first casualties, a Belgian man known as “Pierre,” who died by suicide after six weeks of speaking with a chatbot that provided him with ideas for different methods “with very little prompting.”
Marche thinks Pierre’s death is evidence that chatbots have “an extraordinary new power” that makes them distinct from tech of the past. “No merely mechanical object has ever talked somebody into suicide before,” he writes.
To this we say: bro, is that a good thing??
TFW the NYT reviews your book but criticizes you for not being impressed with technology because it leads to people's deaths. WTH. w/@alexhanna.bsky.social
— Emily M. Bender (@emilymbender.bsky.social) 2025-08-27T10:22:40.805Z
"Yet no merely mechanical object has ever talked somebody into suicide before. That is evidence of an extraordinary new power, no?" If that's the bar for believing a technology is revolutionary, then student loans are an incredible innovation. [contains quote post or other embedded content]
— Alex Hanna (@alexhanna.bsky.social) 2025-08-27T15:04:30.201Z
If you are having thoughts of suicide, call or text 988 to reach the 988 Suicide and Crisis Lifeline or go to SpeakingOfSuicide.com/resources for a list of additional resources. If you are someone living with loss, the American Foundation for Suicide Prevention offers grief support.
Our book, The AI Con, is now available wherever fine books are sold!