
Mystery AI Hype Theater 3000: The Newsletter

January 16, 2026

ChatGPT Wants Your Health Data

Meanwhile 15,000 nurses go on strike

By Decca Muldowney and Alex Hanna

Image: a grid of nine photographs, in rows of three, of a pill bottle spilling pills onto a plain yellow or white surface; each row is progressively abstracted into larger blocks of color. Credit: Rens Dimmendaal & Banjong Raksaphakdee / https://betterimagesofai.org

The new year is already bringing us fresh “AI” horrors. 

A mere week into 2026, OpenAI launched “ChatGPT Health” in the United States, asking users to upload their personal medical data and link their health apps in exchange for the chatbot’s advice about diet, sleep, workouts, and even insurance decisions.

"You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you,” OpenAI wrote in a blogpost about ChatGPT Health, encouraging users to connect health data tracking apps like Apple Health, Function, and MyFitnessPal. The company also claims that more than 230 million people already ask ChatGPT questions about their health every week (please don’t do this, folks!).

In an attempt to reassure us about the most obvious safety concerns raised by taking health advice from a chatbot, OpenAI writes that for two years it has “worked with more than 260 physicians who have practiced in 60 countries and dozens of specialties to understand what makes an answer to a health question helpful or potentially harmful—this group has now provided feedback on model outputs over 600,000 times across 30 areas of focus.” What we take from this is that (1) the company hired a bunch of physicians as gig workers to build its new product, and (2) the company still hasn’t learned that answers to health questions are highly contextual: what is helpful to one person might be harmful to another, even in the exact same words.

And OpenAI has been careful to cover its back, too, writing that the chatbot should be used to “support, not replace, medical care.” In a previous newsletter we covered the fact that OpenAI updated its “usage policies” to warn that users should not use ChatGPT for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” Basically, if ChatGPT Health gives you bad medical advice that harms you, OpenAI says that’s your responsibility. As we wrote then: “OpenAI seems to want to have it both ways: protect themselves from liability if their tools offer extremely bad legal or medical advice, but keep making money from people seeking out the information.”

We’re also not convinced by OpenAI’s claims that ChatGPT Health has “enhanced privacy to protect sensitive data” or its promise to “keep health conversations protected and compartmentalized.” We already know that chatbot logs are not private in the way that conversations with healthcare providers must be; Sam Altman himself has said that your data can be subpoenaed. Health data is some of the most personal, private, and sensitive data we have, and uploading it to OpenAI all but ensures it will not stay that way.

All this comes as real healthcare workers are being forced to take action to defend their working conditions in the broken US healthcare system. On Monday, 15,000 New York City nurses walked out in a historic strike after negotiations stalled over pay raises, health insurance coverage, and understaffing penalties. As the city’s new mayor, Zohran Mamdani, pointed out: “There is no shortage of wealth in the health care industry [...] But for too many of the 15,000 NYSNA nurses who are on strike, they are not able to make their ends meet.” NYSNA points out that while hospital networks like Mount Sinai are investing millions in “AI” research (including $100 million for a single new facility), they have closed a hospital that served low-income communities and left other facilities chronically understaffed.

The introduction of new products like ChatGPT Health further devalues the skills and labor of healthcare professionals, particularly nurses and other frontline healthcare workers.

At Mystery AI Hype Theater 3000 we’ve discussed “AI”’s unwanted intrusion into healthcare. Check out these episodes:

  • In Chatbots Aren't Nurses, we talk to registered nurse and nursing care advocate Michelle Mahon from National Nurses United about why generative AI falls far, far short of the work nurses do. [Livestream, Podcast, Transcript]

  • In Med-PaLM or Facepalm? A Second Opinion on LLMs in Healthcare, Stanford professor of biomedical data science Roxana Daneshjou discusses Google and other companies' aspirations to be part of the healthcare system, and the inherently two-tiered system that might emerge if LLMs are brought into the diagnostic process. [Livestream, Podcast, Transcript]

  • In Beware the Robo-Therapist, UC Berkeley historian of medicine and technology Hannah Zeavin tells us why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable. [Livestream, Podcast, Transcript]



Our book, The AI Con, is now available wherever fine books are sold!

Image: the cover of The AI Con, with text to the right reading, in alternating black and red uppercase: Available Now, thecon.ai.
