Resistance Isn't Denialism
By Emily
I've recently noticed a new tactic on the part of AI boosters: attempting to erase the work of people who are resisting the project of "AI" and calling out the harms done in its name. Or maybe the tactic isn't new; it's just become apparent to me recently in a pair of artifacts.
tl;dr: Labeling resistance to "AI" as "denialism" is an attempt to dismiss it out of hand. The narrative of the denialism frame is that those opposing "AI" are afraid, under-informed, and/or engaging in wishful thinking. None of that is true: The people who oppose the "AI" project are actively fighting back and refusing to accept the premises of tech bros and AI boosters.

The first of my pair of artifacts is a very silly piece in a publication called Big Think by Louis Rosenberg, the CEO of an AI company, entitled "The rise of AI denialism". As I wrote on Bluesky, the essay was shoddy and lazy throughout (arguing with unnamed and uncited "pundits", "influencers", and "voices"), but his central argument (such as it is) is that "denial" of the supposed fact of the imminent pervasive presence of "AI" will make it harder for us to prepare for the changes he believes are coming.

The second artifact is AI cheerleader (and business school professor) Ethan Mollick's appearance on Adam Conover's Factually! podcast. He refers to "critics" who claim that "AI is bad, it's going away, it's all scam" (timestamp 37:00) and asserts that these "critics" are, as a result, not participating in the conversation about "AI" and how to mitigate the negative impacts associated with "AI" development and use.
This struck me as a weird kind of erasure: We wrote a whole book (The AI Con, not The AI Scam, which would be a worse title, to be sure), not as a way to opt out of the conversation, but rather as a way to reject the framing shared by the AI boosters and AI doomers and redirect attention to the harms that arise in the production and use of systems sold as "AI".
But for Mollick and Rosenberg, people who don't accept their premises (that "AI" is a thing, that it's inevitable, that it has "promise", and that it will lead to an era of unlimited productivity) aren't participating in the conversation. Fortunately, they are not the only interlocutors, and they don't get to set the terms of debate. It's true that "AI" cheerleaders (and some of the doomers) are sitting on a lot of cash, and it's unfortunately true that that cash buys policymaker attention. But there are policymakers who listen to people who don't buy their way in.
Beyond that, there are people who make decisions every day about their own use of automation (personally and/or professionally) as well as about the use of automation in their organizations. There are activists who organize to oppose hyperscale data centers and labor organizers who are fighting against the devaluing of their work and the imposition of technologies without their say-so. There are educators who help their students navigate the marketing of the tech companies, artists who fight back against the theft of their work, and journalists who stand up for the integrity of their profession.
Framing the conversation as one that fits into the tech bros' imagination of the world centers both the project of "AI" and the perspective and goals of tech companies. Mollick calls on critics to join his cause, saying:
I would like more critics to try to make the maximalist approach of trying to make AI do stuff, right? In science, we call this that you actually want a really stringent test. You want to actually test the hardest possible case of trying to make AI work rather than walking away before it does. (timestamp: 1:07:56)
No, thank you. I am not at all interested in contributing to the project of "AI". It is not my job to set up test cases that tech companies can use to "prove" how useful their systems are, nor is it my job to do their beta testing for them. It is my job to use what I know, as a computational linguist and public scholar, about how these technologies work and why they seem so compelling, to help people make better decisions (personal and policy-wise) about this technology.
Alex and I, and others who are very much in this fight, are scoping the conversation on our own terms, terms that have much more to do with the political economy of the industry and with helping to shape data and labor regulatory regimes that bring these companies to heel, rather than accepting that LLMs (along with image generators and other kinds of data-hungry, sloppy, besparkle-emojied automation) are necessary in society and that we just have to deal with them.
Rosenberg's term "denialism" frames the discussion as being about whether or not "AI" is a thing. But that's not the conversation we need to be having. Undercutting the "thingness of 'AI'" (Suchman, 2023) is an unfortunately necessary step in clearing away the AI hype so that we can more clearly see the harms being done in the name of the project of "AI" and seek to prevent and/or remedy them.
