Why you should refuse to let your doctor record you
By: Emily M. Bender and Decca Muldowney
At a recent appointment, Emily’s physical therapist (who knows something about her research) said, “Before we get started, there’s something I want to ask you about.” The something was an automatic “scribing” system their office is trialing for two weeks before deciding whether to purchase it. These systems take in a (presumably audio-only) recording of the patient encounter and then output a draft patient note for the chart.
Both Alex and Decca have had similar experiences at recent appointments with providers, suggesting these tools are infiltrating healthcare everywhere from small doctor’s offices to huge conglomerates like Kaiser. As a recent storyline on the emergency room drama The Pitt showed, these scribing tools are being advertised as time-saving programs that allow healthcare workers to focus more on the patient and less on note-taking, but we are highly skeptical of these claims. (And art even imitates life: the “AI” transcription tool on The Pitt made an error that compromised a patient’s well-being!)
The PT knew that Emily was likely to decline (as she did) and was genuinely curious about why. That conversation prompted this newsletter post, meant both for patients trying to decide whether to accept or decline, and for providers (the lucky ones with decision-making power) trying to decide whether to purchase or subscribe to such systems.
So what’s the big deal with “AI” charting? Here are nine reasons why we recommend refusing to consent to the use of scribing tools in healthcare settings:
- Privacy: These systems always involve third-party software, where recordings of provider-patient interactions are sent to some other company. Even if the audio recordings are deleted relatively quickly, the transcripts are sensitive data, too. Medical providers are probably being given assurances that the systems are compliant with health privacy laws (e.g. HIPAA in the US), but this does not mean that the software providers have strong enough security protocols.
- Informed consent: Are patients really being given enough information to meaningfully consent to the system’s use? Do they get full information about what their data is being used for, now and possibly in the future (e.g. training further iterations of the system, “quality assurance”, training “AI” doctors)? How much time would it take in a patient visit to ensure meaningful consent? And is that consent practically revocable, even mid-session?
- Impact of recording on the patient: How do you feel when you know you are being recorded? Are you able to be as open about whatever health matter you are being seen for as you need to be, to get good care, knowing that your voice is being captured?
- Impact of recording on the provider: We have heard from physicians who work with medical interpreters that providers accustomed to automatic scribing systems alter the way they speak during patient visits, dropping into a more technical “doctor-to-doctor” register to record information they want to see in the note. This leaves medical interpreters flummoxed, not knowing if they should be attempting translation at that point. We’re sure it can be off-putting to patients in monolingual doctor visits, too.
- Automation bias: When people are presented with a draft note (no matter what its origin), it will influence how they proceed. It should be relatively straightforward to check the contents of the note for accuracy (does that match what just happened?), though we suspect this gets harder the longer the lapse between the encounter and the correction of the chart note. Surely it’s harder to chart from scratch later, too, but we’d be really interested to see studies of how much misinformation gets recorded with and without automatic chart note drafts.
- Automation bias, redux: While it may be relatively straightforward to read and verify the contents that are included in the note, it’s much trickier to remember what should have been there but isn’t.
- The false promise of efficiency: The selling point here is that providers are overworked, and charting usually ends up as uncompensated homework. Or they’re being told that this will free them up to spend more time with patients. But, especially given the underfunded nature of the US health system, that is extremely unlikely to mean more time with each patient. Instead, it will mean more patients.
- Disparate impact: Speech technologies simply do not work equally well for all populations. The closer your speech is to the kind of data used to train the systems, the more accurately they will transcribe it. So some providers (those who speak non-standard varieties, perhaps because they speak the language as a second language; those who work with many patients who speak non-standardly, including people with dysarthria or other speech disorders) will have disproportionately more work to do correcting notes … in a work environment where they are expected to have been made “more efficient”.
- Charting is part of care: As Aliaa Bakarat so eloquently wrote last year, the writing of chart notes, i.e. the time the provider takes to reflect on the patient’s symptoms, progress, needs, etc., is part of the care. Skipping or skimping on this part not only impacts the quality of care being provided immediately, but also likely degrades care over time, as providers become less engaged with the cases they are working on.
Taken together, these reasons lead us to believe it is NOT in patients’ best interest to consent when asked. And so long as we are at least being asked for consent, this is a refusal with potentially meaningful systemic effects: if patients as a group mostly refuse, then it will be harder for institutions to claim “efficiency” gains, and thus harder for them to impose more patients on each provider.
We also believe it is NOT in providers’ or institutions’ best interest to adopt these tools. The pressures are real: providers are being asked to see more patients, and hospitals and other institutions are being underfunded and/or squeezed for profit by private equity. But this kind of automation provides at best temporary relief from those pressures, while creating incentives to make them worse.
For more from Mystery AI Hype Theater 3000 on “AI” and healthcare, check out these episodes:
- In Chatbots Aren't Nurses, we talk to registered nurse and nursing care advocate Michelle Mahon from National Nurses United about why generative AI falls far, far short of the work nurses do. [Livestream, Podcast, Transcript]
- In Med-PaLM or Facepalm? A Second Opinion on LLMs in Healthcare, Stanford professor of biomedical data science Roxana Daneshjou discusses Google and other companies' aspirations to be part of the healthcare system, and the inherently two-tiered system that might emerge if LLMs are brought into the diagnostic process. [Livestream, Podcast, Transcript]
- In Beware the Robo-Therapist, UC Berkeley historian of medicine and technology Hannah Zeavin tells us why the datafication and automation of mental health services are an injustice that will disproportionately affect the already vulnerable. [Livestream, Podcast, Transcript]
- In A Bad Case of Hype-itis, we take a look at ChatGPT Health and scrub in to slice up some harmful new nonsense in the world of "AI" for medicine. [Livestream, Podcast, Transcript]
