Recently on Death Panel we did a loose episode on AI in healthcare, inspired by the class action lawsuit against UnitedHealth over the use of its algorithmic tool "nH Predict" to deny coverage for post-acute care. What follows are my notes for that episode -- so if you listen to it, some of the points will be familiar. These really are just scratchy notes, not organized or edited at all, but perhaps interesting as I/we continue to develop critiques of so-called "artificial intelligence."
Distinct strands:
Overdose risk scoring/prescription drug monitoring programs (PDMPs)
UnitedHealth scandal and lawsuit
Other developments in health care
"The Gospel," the IDF's AI-enabled target-production system
What unites all of these?
Malthusian logic about scarcity.
An attempt to outsource ethical decisions to distributed computing networks -- of "lay people" or computer programs executing mathematical computations.
What do we need to know for context?
"AI" -- what it is (machine learning) and isn't (intelligent, autonomous).
Social context; the function of technology and technological development (Marx, Babbage, Matteo Pasquinelli's The Eye of the Master)
PDMPs and overdose risk scores -- the context is that they are law enforcement software.
NaviHealth: algorithms being developed and deployed in the context of the vertical integration of health insurers -- algorithms like nH Predict are tools that facilitate this integration by squeezing profits from expensive sectors. nH Predict is, I almost guarantee, just a statistical model. It's built to optimize some objective function -- a mathematical expression to maximize, minimize, or otherwise optimize using calculus or, more likely, something like least squares regression. That mathematical dressing adds to the technical and proprietary opacity of these systems. The function probably balances minimum acceptable parameters for something like "recovery time from a hip replacement" against some kind of cost term. The purpose of the algorithm is to get people out of expensive care settings as fast as possible, to save money -- so that's what it will do. The machine doesn't know anything the humans who built it don't also know. Social function: to aid this vertical integration push -- literally mathematically optimizing the profit that can be squeezed from sick people, à la the Babbage principle.
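To make that concrete, here's a minimal sketch of the kind of thing I mean -- this is not NaviHealth's actual model, which is proprietary; every feature, coefficient, and weight below is invented for illustration. The point is how little machinery it takes to dress cost-cutting up as "prediction": a least-squares fit for expected recovery days, plus a cost term that drags the recommendation below the model's own estimate.

```python
# Hypothetical sketch of what a tool like nH Predict might reduce to:
# a least-squares model predicting "expected recovery days," plus a cost
# term that pushes the coverage recommendation downward. All features,
# data, and weights here are fabricated.
import numpy as np

# Toy training data: each row is a patient (age, mobility score,
# comorbidity count); y is observed days of post-acute care.
X = np.array([
    [78, 2, 3],
    [65, 4, 1],
    [82, 1, 4],
    [71, 3, 2],
    [69, 5, 0],
], dtype=float)
y = np.array([28.0, 14.0, 35.0, 21.0, 10.0])

# Ordinary least squares: fit coefficients for expected recovery time.
X1 = np.hstack([X, np.ones((len(X), 1))])  # add intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)

def predicted_recovery_days(patient):
    """Expected days of care, per the fitted linear model."""
    return float(np.append(patient, 1.0) @ beta)

def recommended_days(patient, cost_per_day=500.0, penalty_weight=0.02):
    """Choose the number of covered days minimizing a combined objective:
    a quadratic 'clinical shortfall' penalty for discharge before the
    predicted recovery time, weighed against the insurer's daily cost.
    The weights are arbitrary -- the point is that the dial exists."""
    expected = predicted_recovery_days(patient)
    days = np.arange(1, 61)
    shortfall = np.maximum(expected - days, 0.0) ** 2  # clinical-risk proxy
    objective = penalty_weight * shortfall + cost_per_day * days / 1000.0
    return int(days[np.argmin(objective)])

patient = np.array([80.0, 2.0, 3.0])     # hypothetical hip-replacement patient
print(predicted_recovery_days(patient))  # what the model "expects"
print(recommended_days(patient))         # what the cost term cuts it to
```

With these (arbitrary) weights, the recommended coverage lands a couple of weeks short of the model's own predicted recovery time, by construction: turn up the cost term and the cutoff moves earlier. That's the whole trick.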
Now UnitedHealth is being sued, but this is extremely interesting: the objection is that the rationing is done by an automated, algorithmic system -- not that rationing care is bad in general. (Malthusian logic of scarcity and rationing, and of expending resources on those who will truly benefit from them by continuing to live and be productive.) I agree it's bad -- deploying these things without any kind of human override (or worse, instructing your staff not to deviate from the algorithmic recommendation on penalty of being fired) is asking for trouble -- but the logic of rationing is left intact.
What's left intact is the logic both of rationing and of outsourcing ethical debate to a distributed computing network. In the case of the Life and Death Committee, that network -- a committee of "lay people" -- was at least still deliberating on ethical grounds. When we outsource ethical decisions to actual computers that are just doing mathematical calculations, these become machines for evading responsibility.
Human beings are directing "The Gospel" to do what it does (accelerate "target production" in Gaza, a sickening phrase) at every single stage of its operation. But because the targets are "produced" by some mathematical process, the genocidal intent of the Israeli state can be hidden behind a veil of technical mumbo-jumbo, and everyone up the chain of command can claim plausible deniability -- it wasn't me who ordered the bombing of this school or hospital. How convenient that it wasn't anyone! It was simply the machine! These explanations are widely accepted despite how ridiculous they are on their face: the machines are obedient, they do nothing but what they are told to do, and it is humans who program, build, and deploy them.