Predictive AI is everywhere, and it doesn't work
A common type of AI is quietly influencing millions of lives every day, and not for the better. This is predictive AI, the software automating decision-making in fields like healthcare, education, employment, and criminal justice.
There's a good example in the new book AI Snake Oil by Arvind Narayanan and Sayash Kapoor. The authors explain that predictive AI is marketed as a fully automated decision-making tool. But when the software makes the wrong decision, whether through error or by design in pursuit of profit, there are often no humans authorized to act as guardrails:
In one extreme case, U.S. health insurance company UnitedHealth forced employees to agree with AI decisions even when the decisions were incorrect, under the threat of being fired if they disagreed with the AI too many times. It was later found that over 90 percent of the decisions made by AI were incorrect.
The UnitedHealth AI with a 90% error rate is covered in an Ars Technica story from November 2023, which gives a sense of the harm the algorithm caused:
In 2022, case managers were told to keep patients' stays in nursing homes to within 3 percent of the days projected by the algorithm, according to documents obtained by Stat. In 2023, the target was narrowed to 1 percent.
So the doctors say (accurately) that Granny needs to stay in the rehab center a little longer after her surgery, but the UnitedHealth AI – tuned for maximum profit – says (inaccurately) that Granny should go home, so it won't pay for another day. A ProPublica story from last month shows how UnitedHealth continues this automated cost-cutting in the delivery, or refusal, of mental health care.
Flawed and spreading
Predictive AI is not just affecting people's lives by refusing healthcare claims. As stated above, it's increasingly being used in hiring decisions, education, criminal justice, and more. This is despite the technology being inherently flawed.
As Narayanan and Kapoor write in AI Snake Oil, predictive AI systems are often trained on data that is not representative of the context where they'll be used. The book gives an example from a hospital: an AI found that pneumonia patients with asthma recovered faster, so it concluded that asthmatic pneumonia patients could, in the future, receive less care and be sent home early. This was a dangerously faulty conclusion: the training data had shown asthmatics recovering faster because the hospital had sent them to the ICU for more care. As often happens, the training data didn't represent what the AI would face in a real-world situation.
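To make that failure mode concrete, here's a minimal sketch in Python using made-up synthetic numbers (not the book's example data or any real hospital's records). Because asthmatic patients were routed to the ICU, their recorded outcomes look better, and a model trained only on the patient's condition learns that asthma lowers risk:

```python
# A minimal sketch (synthetic data, illustrative only) of how a confounder in
# the training data can teach a model the opposite of the truth. Asthmatic
# pneumonia patients got extra ICU care, so their recorded outcomes were
# better -- and a naive model concludes "asthma = low risk".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

asthma = rng.binomial(1, 0.2, n)        # 20% of patients have asthma
icu_care = asthma                        # hospital policy: asthmatics go to the ICU
# Assumed "true" risks: asthma raises the risk of a bad outcome,
# but ICU care lowers it by even more.
p_bad_outcome = 0.10 + 0.15 * asthma - 0.20 * icu_care
bad_outcome = rng.binomial(1, np.clip(p_bad_outcome, 0, 1))

# The model sees only the patient's asthma status, not the extra care received.
model = LogisticRegression().fit(asthma.reshape(-1, 1), bad_outcome)
print(model.coef_)   # negative coefficient: "asthma lowers risk" -- the wrong lesson
```

The specific numbers don't matter; the point is that the model never sees the extra care that produced the good outcomes, so deployed as a decision-maker it would confidently send the highest-risk patients home.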
Another example came from the use of predictive AI in hiring. An AI was used to predict who among several job candidates would be most likely to succeed at the company. The AI ended up choosing candidates who happened to have a bookcase in the background of their video interview. Systems like this are not only inaccurate but also vulnerable to users gaming the software – sitting in front of a bookcase, adding false qualifications to a resume in white-on-white type, and so on.
In yet other cases, predictive AI can have a discriminatory effect if it draws on training data that originated during a time of racist, sexist, or otherwise unfair policies. As these systems are rolled out, the authors write, "the first people they harm are often minorities and those already in poverty."
Exploiting the vulnerable for profit is the definition of an unethical system, yet predictive AI is making decisions for schools, courts, employers, and health insurers throughout the country. What's worse, citizens who are harmed may not even know that an AI is at fault. Even if they do, there's never an easy way to appeal, let alone change, an unfair decision. What's your plan if an AI mistreats, denies, or discriminates against you or a loved one?
Please join us.
Support my work on Creative Good by upgrading your subscription.
AI Snake Oil makes the strong case that the problems with this technology aren't caused by faulty implementation or bugs in the software. Instead, predictive AI is fundamentally and inherently flawed. The authors offer a stark conclusion:
Predictive AI not only does not work today but will likely never work.
This is especially troubling, given the widespread – and growing – use of the technology. Institutions everywhere are investing in predictive AI systems that won't, and can't, work. For example, the U.S. Marine Corps recently announced a new initiative focused on retention, using "artificial intelligence to predict whether a Marine recruit will complete their full term" (source). One can only guess how long the project will last before the generals realize their mistake.
In the meantime, predictive AI is spreading everywhere, and it doesn't work. The authors' definition of AI snake oil – "AI that does not and cannot work as advertised" – describes the problem perfectly.
The Techtonic interview
I'm happy to share my Techtonic interview with Arvind Narayanan about AI Snake Oil. Links below:
- Stream the interview
- See episode links (and scroll down for AI memes)
- Download the podcast
The interview covers more than just predictive AI, as snake oil is present in multiple areas of AI. (Narayanan adds that AI is working well in multiple ways, too, like spellcheck and GPS. As AI pioneer John McCarthy is reported to have said, "As soon as it works, no one calls it AI any more.")
I was especially pleased to see the authors devote a chapter to where the hype comes from. AI luminaries, while not the only guilty parties, share in the responsibility for overblown promises. AI pioneer Geoffrey Hinton, for instance, famously predicted a few years ago that radiologists would soon be out of work, given the inevitable advance of AI in radiology. The meme below describes well what happened next.
This is an important and quickly developing topic. If you're interested in learning more, there are a ton of resources on our Forum for Creative Good members:
- Dozens of AI-generated commercials and short films, allowing you to put on the creepiest film festival ever
- Problems with ChatGPT and generative AI, essays and news items documenting the bugs and underlying risks of large language models
- Discussing Brian Eno's reflection on creativity and AI
- AI is powered by manual labor, a look at the human workers hired to prop up futuristic-looking AI systems
- Full of slop until the bubble pops, my column from Sep 27, 2024 about genAI
To gain access to these and hundreds of other resources about tech and its effects, please join Creative Good. You'll also support my work on this newsletter.
Until next time, have a great holiday –
-mark
Mark Hurst, founder, Creative Good
Email: mark@creativegood.com
Podcast/radio show: techtonic.fm
Follow me on Bluesky or Mastodon
P.S. Your subscription is unpaid. Please upgrade by joining Creative Good. (You’ll also get access to our members-only Forum.) Thanks!