Calling All Mad Scientists: Reject "AI" as a Framing of Your Work
By Emily
tl;dr: Every time you describe your work involving statistical modeling as "AI", you are lending your power and credibility to Musk and DOGE.
Mad scientists at #HandsOff2025 protests
On Saturday, April 5, I joined approximately 25,000 other people at Seattle Center at our local #handsoff2025 protest. I was particularly taken with this sign:

The protest was inspiring. It was amazing to see how many other people (in Seattle, around the country, around the world) are ready to stand up and do something. Showing up is important, but so is what we do next. Thinking about the mad scientist sign led me to consider what we can do as scientists in particular, and in this post I have a suggestion. (To be clear, this is one action among many we can take, and not the be-all and end-all.)
Let's not pretend these things are the same
The term "AI" doesn't refer to a coherent set of technologies. But using the word tells the public that it does.
As a case in point, take this set of stories from the past few weeks:
- This article, from Nancy Joseph in the University of Washington's newsroom (March 4, 2025), was also circulated with the headline "A professor in the Department of Speech and Hearing Sciences is using AI to improve hearing aids." From the text of the article, it sounds like this professor is using some kind of statistical modeling to improve the experience of hearing aid users and to let people adjust which sounds are amplified more effectively. The piece doesn't say so directly, but it seems like the underlying data being modeled includes sound input, differential amplification settings, and user feedback.
(Other parts of the article discuss other applications of "AI", i.e. statistical modeling, in Prof. Shen's work. Most sound similarly grounded and beneficial, though I was particularly annoyed at the descriptor "just like an expert audiologist would." That kind of anthropomorphization not only misleads readers but also devalues the work that audiologists do, while dehumanizing them, to boot.)
- Inside DOGE’s AI Push at the Department of Veterans Affairs and DOGE Plans to Rebuild SSA Code Base in Months, Risking Benefits and System Collapse
Vittoria Elliott, writing in WIRED (April 4, 2025), documents how Sahil Lavingia (CEO of Gumroad, VC, and employee #2 at Pinterest) is committing code to the VA's GitHub that was generated with the "AI" tool OpenHands. Meanwhile, Makena Kelly, also writing in WIRED (April 4, 2025), describes how Elon Musk's DOGE is claiming it will rewrite the Social Security Administration's codebase in mere months, moving it away from COBOL to a more modern programming language. An earlier plan for this migration estimated that it would take five years. Kelly writes "DOGE would likely need to employ some form of generative artificial intelligence to help translate the millions of lines of code, sources tell WIRED." Of course, in neither case is there any reason to believe that DOGE and the people who make it up actually care about effectively serving the public. They're probably just as happy to have a system that is untested and frequently denies benefits.
WIRED is consistently doing excellent reporting in this space, and their coverage expresses only skepticism that "AI" could be at all helpful here. But unfortunately, our news media in general are nowhere near that careful. So the average news reader is likely to encounter simple platforming of Musk and his cronies claiming that "AI" can solve whatever problem is under discussion.
Cutting Musk and all the other AI bros loose
Imagine that that same average news reader has come across reporting on your good scientific work, also described as "AI", including some nice accounting of both the effectiveness of your methodology and the social benefits it brings. Mix this in with science fiction depictions (HAL, the Terminator, Lt. Commander Data, the operating system in Her, etc.), and it's easy to see how the average reader might think: "Wow, AIs are getting better and better. They can even help people adjust their hearing aids now!" And boom, you've just made Musk's claims that "AI" is good enough for government services that much more plausible.
I am sympathetic to the difficulties of getting research funding, especially in the climate of the past few years where it might seem like you have to sprinkle some "AI" claims into the proposal to even stand a chance of getting funded. But look at the past three months and what's happening to funding, even funding that has been awarded. Calling it "AI" is fast becoming yet another kind of anticipatory obedience.
If what you are doing is sensible and grounded science, there is undoubtedly a more precise way to describe it that bolsters rather than undermines the interest and value of your research. Statistical modeling of protein folding, weather patterns, hearing aid settings, etc. really has nothing in common with the large language models that are the primary focus of "AI". And even if you are doing work on large language models, again, if it's sensible, stating more precisely and directly what you are doing with them can only help. I urge you to articulate that more precise description and then get in the habit of using it, especially with the media.
Our book, The AI Con, is out on May 13, 2025, but you can pre-order it now wherever fine books are sold!