Mystery AI Hype Theater 3000: The Newsletter
November 13, 2025

OpenAI Tries to Shift Responsibility to Users

ChatGPT gave you bad medical advice? That’s on you

By Decca Muldowney and Emily M. Bender

Image: a stylized blue X-ray of a chest, with yellow boxes labeling the organs (lungs, trachea, heart, diaphragm) as normal and the caption “no abnormalities detected.” Credit: Elise Racine / https://betterimagesofai.org / https://creativecommons.org/licenses/by/4.0/


Late last month, OpenAI quietly updated its “usage policies”, writing in a statement that users should not use ChatGPT for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” A flurry of social media posts then bemoaned the possibility that users would no longer be able to turn to the chatbot for medical and legal questions. Karan Singhal, OpenAI’s head of safety, took to X/Twitter to clarify the situation, writing: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

In other words: OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbot should be used for medical or legal advice. But we believe the accountability here should lie not with users but with the companies creating these products, which are designed to mimic the way people use language, including medical and legal language.

The reality is that the medical and legal language that these chatbots spit out sounds convincing and, simultaneously, the tech bros are going around saying that their synthetic text extruding machines are going to replace doctors and lawyers any day now, or at least in the near enough future to goose stock prices today. OpenAI seems to want to have it both ways: protect themselves from liability if their tools offer extremely bad legal or medical advice, but keep making money from people seeking out the information. 

As Emily and Alex put it in The AI Con, describing what we’ll experience if we go down the boosters’ path and don’t hold OpenAI and the like accountable: “AI boosters will brag that these machines make key services more accessible for everybody: medical advice will be free and on-demand, legal services will be available to anyone who needs them [...] In reality, the parts of these that actually matter — relationships, economies of care, and time spent with professionals who want to help and understand your problem — will be devalued and replaced with cheap fakes for people who can’t afford real professionals.”

And this has serious real-world consequences. New research from the University of British Columbia, also published last month, found that users reported chatbots were often more “convincing and pleasant to deal with” than professionals. “The conversations with large language models were more persuasive than the ones with people,” said Dr. Vered Shwartz, one of the UBC researchers.

So who is responsible for creating the danger of people falling for random but authoritative-sounding text as if it were legitimate advice? Clearly, it’s the companies making the janky-ass products and claiming to be creating artificial gods. That OpenAI is “clarifying” their usage policies about this at least suggests that they are getting nervous about the liability. Let’s work towards actually holding them accountable.

In the meantime, here are some Mystery AI Hype Theater 3000 episodes that explore the consequences of using “AI” for things best left to medical and legal professionals. 

Relevant episodes:

  • In Don't Be A Lawyer, ChatGPT, we talk to Kendra Albert, a Harvard legal and technology scholar, about ChatGPT, legal expertise, and what the bar exam actually tells you about someone's ability to practice law. [Livestream, Podcast, Transcript]

  • In Beware the Robo-Therapist, UC Berkeley historian of medicine and technology Hannah Zeavin tells us why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable. [Livestream, Podcast, Transcript]

  • In Med-PaLM or Facepalm? A Second Opinion on LLMs in Healthcare, Stanford professor of biomedical data science Roxana Daneshjou discusses Google and other companies' aspirations to be part of the healthcare system, and the inherently two-tiered system that might emerge if LLMs are brought into the diagnostic process. [Livestream, Podcast, Transcript]

  • In Chatbots Aren't Nurses, we talk to registered nurse and nursing care advocate Michelle Mahon about why generative AI falls far, far short of the work nurses do. [Livestream, Podcast, Transcript]

  • Finally, in a recent episode, The Robo-Therapist Will See You Now, Futurism journalist Maggie Harrison Dupré unpacks the hype around AI therapists, and tells us about her groundbreaking reporting on "AI psychosis." [Livestream, Podcast, Transcript]


Our book, The AI Con, is now available wherever fine books are sold!

Image: the cover of The AI Con, next to text in alternating black and red uppercase reading “Available Now” and “thecon.ai”.
