Medical Science Shouldn't Platform Automating End-of-Life Care
JAMA Network publishes a view from deep inside the hype
By Emily
In a viewpoint article published in JAMA Network last week, three MDs from UCSF ask the question "Can Artificial Intelligence Speak for Incapacitated Patients at the End of Life?"
I've seen a lot of ridiculous and awful takes on "AI" in the last two years—a benefit or maybe occupational hazard of running our podcast is that people send them our way regularly—but this headline briefly left me speechless.
I decided to dive in and read the thing anyway, in the vain hope that maybe that title (headline?) was written by someone else and misrepresented the contents of the article. No such luck. It seems like the phrase "artificial intelligence" led these authors and editors to collectively set aside all of their critical thinking skills.
They start with a scenario of an estranged daughter being asked to make end-of-life care decisions for her mother, the mother being unable to communicate and having not provided an advance directive. A painful and difficult scenario to be sure, for everyone involved, healthcare providers included. And yet, once again, just because you have identified a problem doesn't mean "AI" (of any form) is a solution for it.
The imaginary tech these MDs think we should ponder using includes:
- Audio recording of all doctor-patient interactions
- Automated processing of those recordings to "identify and play excerpts of the mother talking about what mattered most to her"
- Classification systems that "predict" whether a patient would want to pursue palliative care based on behaviors outside the doctor's office ("a registered Sierra Club member and volunteer dog walker who regularly purchased gardening equipment")
- Classification systems that "estimate the likelihood of achieving important functional outcomes" based on both medical information and "non-traditional sources of information" such as:
consumer wearable devices, place of residence, caregiver needs, internet search, and purchasing history
- And most appallingly, systems that would "predict" what a patient would choose:
Beyond predicting outcomes, AI has the potential to predict what an incapacitated patient would choose. When patients cannot speak for themselves, we ask surrogates to draw on their knowledge of the patient to help the clinical team make individualized treatment decisions. Although a well-informed surrogate can perform this process of substituted judgement effectively, many patients lack a well-informed surrogate. Combining individual-level behavioral data—inputs such as social media posts, church attendance, donations, travel records, and historical health care decisions—AI could learn what is important to patients and predict what they might choose in a specific circumstance.
In other words, they are supposing that the population at large should acquiesce to pervasive surveillance in order to feed algorithms for doctors to use should we be unable to communicate our own end-of-life wishes.
The authors correctly identify all of the problems with the idea of using technology in this way:
- The output might be wrong (just randomly, or because its inputs are outdated)
- Automation bias (doctors and family members putting too much credence in the system's output)
- Lack of transparency
- Disparate impact (model predictions being less accurate for marginalized populations)
- Model outputs being used as training for future model decisions
... and of course the core problem:
For many, the notion of incorporating AI into goals of care conversations will conjure nightmarish visions of a dystopian future wherein we entrust deeply human decisions to algorithms.
The conclusion, you'd think, would be obvious: this is both viscerally appalling and logically a terrible idea.
But this is an AI hype artifact, and so of course their reasoning has to take a sharp turn. They write that they "share the apprehensions" of the "many" in the quote above, but then continue:
However, our experience in geriatrics, palliative, and critical care reinforces how difficult it can be for families to make decisions for incapacitated patients. [...] Given these significant limitations, and the inexorable advancements in AI, it behooves us to consider how AI could be safely, ethically, and equitably deployed to help surrogates for individuals who are seriously ill. In this Viewpoint, we explore how AI could support surrogate decision-makers while addressing some of the attendant epistemic and moral challenges.
The sharp turn in their reasoning rests on the assumption of "the inexorable advancements in AI." The development from today's technologies sold as "AI" into something that matches what the phrase "artificial intelligence" evokes (and could provide the functionality they imagine) is not a foregone conclusion: there is no reason to believe that it is even possible, nor, if it were possible, that it is inevitable. But by assuming this falsehood, they set themselves up to arrive at whatever conclusion they want.
They also make unwarranted assumptions about public acceptance and normalization of surveillance technology:
Some may worry that such recordings violate the sanctity of the physician-patient relationship or infringe on patient privacy, especially in sensitive encounters, such as those pertaining to mental or reproductive health. However, the proliferation of digital scribes suggests that the practice of recording visits may soon become broadly accepted.
Of course, it's not just the patient being subjected to automated scribes, but all the healthcare workers involved, too. We learned a lot from Michelle Mahon, Director of Nursing Practice at National Nurses United, when we recorded Episode 37—coming soon!—about what it's like, from a healthcare worker's point of view, to have automation foisted upon you.
And, for some additional support, the JAMA authors throw in some automation bias:
Algorithms—with thousands, millions, or even billions of direct observations of a person’s behavior—might actually paint a more authentic portrait of the way a person has lived, compared with a surrogate whose impression is often colored by acute psychological, emotional, and existential stress.
This is a trope I see frequently, too—that scale is magical. That a machine with access to that much data surely will give the correct answer, and be unbiased, especially compared to people, who, being human, experience emotions. (As if wisdom could ever be divorced from emotion.)
The piece concludes with a reassertion of inevitability and then a positioning of the article as "just asking questions" or in fact encouraging the field as a whole to do so:
The use of AI in clinical care is inevitable and poses opportunities and risks. AI could enhance—but should not replace—human surrogates and clinicians. Ambient voice technology could give voice to the voiceless. [...] These, and other novel applications of health care AI, beget critical practical and normative questions. The time to consider such questions is now.
It is certainly timely to critique the suggested uses of automation in healthcare, especially automation that is used to justify increased surveillance and decenter people in decision making. But in order to do that critique effectively—in order to prevent harm—it must be grounded in an understanding not only of the actual functionality of the technologies in question but also of the possibility of refusal. Musings based instead on magical thinking and acquiescence to claims of inevitability should not be presented as scientific discourse.
So where were the editors in all of this? Unfortunately, they were also bedazzled by the technology the authors imagined and caught up in delusions of inevitability. The piece is accompanied by a short editor's note, which reads in part:
We published this Viewpoint because it is very interesting, somewhat scary, and probably inevitable.
In addition to, again, capitulating to unsubstantiated claims of inevitability, this editor's note highlights a vulnerability of the present moment. As authors and fans of speculative fiction have long known, tales exploring the consequences of imagined technology (or magic) can indeed be very interesting (and an element of fear or horror can heighten that effect). But the parlor trick of the present synthetic text extruding machines, together with all of the VC-charged hype around AI and the culture of passing around shoddy paper-shaped objects as if they were scholarly artifacts, leads people to confuse speculative fiction for scientific discourse. We all need to be on our guard for this, and that goes double for journal editors.