
Mystery AI Hype Theater 3000: The Newsletter

August 20, 2025

Friends Don't Let Friends Prompt

By Emily

Alex and I were at WorldCon (the World Science Fiction and Fantasy Convention) last week, talking about The AI Con with people whose livelihoods (and/or passion projects) are threatened by "AI" and people who are experts in exploring possible worlds through fiction. (Many folks, of course, are in both of these groups.) This was a wonderful and inspiring event, creating space for a wide range of conversations. In this post I want to highlight some discussions I had with people concerned about chatbot and image generator usage. This is in no way a recap of the event, but rather a set of reflections on a couple of conversations.

In the first, I was talking to a person who was in some distress over their inability to talk their family members out of using chatbots, despite all of the associated harms. They said their family members' primary use case was having ChatGPT write professional emails. These folks are not professional writers and have some insecurity about their ability to write professionally. So they write something unpolished/relatively incoherent but containing the things they need to say, then have ChatGPT polish it up. Since they then verify that it says what they want it to say before sending, they see no problem and won't be dissuaded. My interlocutor at WorldCon wanted to know what arguments they could try to get these people to change their minds.

The first thing I offered wasn't an argument but rather radical acceptance: We aren't going to convince everyone (despite the title of this newsletter post), as much as we would like to. So we have to let go of that as a success criterion. That said, here are the arguments we got to in that discussion (with some further polishing in retrospect):

  1. Opportunity cost: What are we missing out on when we turn to ChatGPT instead of approaching the problem in another way? Here, I think what's missed is the opportunity to learn the skill over time (writing professional emails); arguably, there's also the atrophying of whatever skills we have already built up. Another missed opportunity is one of connection with a coworker. What could grow if we make a habit of turning to each other for support?

  2. Individual voice: Large language models are averagers. They push everything towards some relatively polished but ultimately nondescript mean of language use. This might seem beneficial in workplace emails, but I would argue that it's not desirable, for either the writer or the reader. It's much easier as a reader to track a conversation and remember what's going on if different people write in their own voices. This is another way to say: It's rude to send people synthetic text, even if it's synthetic text that you've decided says what you want to say. Conversely, for a writer looking to develop their career, it's probably better to learn to project a strong, unique voice rather than to stay nondescript.

  3. Environmental cost: Many people seem to be carrying around the idea that the only environmental cost is in the training of the systems, and that once they're trained, the additional cost of inference is negligible. But while any individual query is obviously small compared to the training of a system, queries still represent energy usage, and of course, these uses all add up. Luccioni et al. (2024) estimate that ChatGPT's deployment costs exceeded its training costs within a matter of weeks or months. They also estimate that it takes 30 times as much energy to synthesize an answer (as in Google's "AI Overviews") as to simply extract similar text from a source.

My interlocutor thanked me for this wider range of arguments, while we also agreed that it's really hard to tell people that what they're doing is wrong. People don't want to hear that. It can help to validate the need. In many cases, people are reaching for ChatGPT or similar because of a legitimate need—and it's offered as an apparently "free" all-purpose solution, so why not?

In this context, I think the goal isn't a full ban, let alone anything like "just say no", but rather opening up the space for refusal, making "no" a viable option. Here, the work of Dr. Joy Buolamwini and the Algorithmic Justice League is an inspiring example. Through their #FreedomFlyers project, they remind us that the use of facial recognition technology by the TSA at US airports is optional—and that opting out, especially for those who feel comfortable doing so, is an important practice. I refuse every time, both to maintain that habit and to make it easier for me and others to refuse the next time.

The second conversation I want to share was after a panel that Alex and I were both on about "AI" and creativity. One of the audience members came up and asked me about their use case for image synthesis. They run a tabletop role-playing game (TTRPG) and find it helpful to have pictures of characters, but don't have the artistic ability to do the drawing themself. They used to spend hours using image search to find existing images that suited their purposes, but find that Midjourney (or similar, I forget which service) is a much faster way to meet this need. They aren't selling these images and wouldn't be commissioning them (i.e., paying an artist for their work) anyway.

Even though they specifically asked me for my counterarguments against this use case, this felt like an example of a type of conversation I have almost every time I do public speaking about "AI". Usually, someone in the audience wants me to bless their usage of the systems as acceptable. I'm not interested in playing that game, but I do try to find ways to connect. Sometimes it's validating the need that is being filled. In this case, though, I started by acknowledging that this use case sidesteps at least some of the pitfalls of "AI" usage: My interlocutor wasn't using these systems for information access and they (arguably) weren't displacing any paid work. However, I ran through a few counterarguments.

I don't remember exactly which counterarguments I tried first, but they probably included the theft argument (even though they weren't using it in place of work they would otherwise have paid for, it's still based on stolen art), the labor exploitation argument (data workers enduring terrible conditions to prevent end users from seeing horrific output), and the environmental argument (these things have an environmental cost), but none of those landed. They weren't ready to give up the convenience based on any of those.

I tried one more, though: I pointed out that every use of these systems builds the case for training the next one and building the next data center. And for this interlocutor, that did the trick!

I doubt that alone was what it took. This person had chosen to be in the audience for our panel after all, and had heard all of the arguments presented, but it was still interesting that this take stuck. Some further context of my WorldCon experience made it even more interesting to me: On another panel the previous day, I had talked about the amazing win by community organizers in Tucson who succeeded in blocking a proposed Amazon-affiliated data center (in the desert, ffs). In response, a co-panelist (who was very pro-"AI") said we shouldn't be eating beef or flying to conventions.

It's true that we aren't going to solve any part of the climate crisis by relying on everyone to just take individual responsibility for systemic and corporate actions. It's also true that that isn't what I was doing by lauding the activists' work in Tucson. What's interesting to me about the argument that worked is that it speaks to the power we have to shape the future we are heading towards by choosing either to acquiesce to or to resist corporate plans and narratives.

I've found myself frequently using the analogy of plastic: To try to live without using plastic now (at least in the US) is an extremely expensive endeavor, both in terms of money and in terms of time. Plastic is so deeply integrated into so many of our systems that it is very difficult to avoid. But we are at a moment with "AI" where things aren't so deeply integrated, though corporate interests are pushing for them to be. So I believe that every act of refusal is especially powerful and meaningful now, and we would do well to avail ourselves of that power as we can.

Image of plastic bags filled with plastic trash interspersed with unbagged plastic trash
Like plastic, LLMs might be convenient, but they are environmentally costly. Let's refuse while we still easily can. (CC0 image via Rawpixel)

Our book, The AI Con, is now available wherever fine books are sold!

The cover image of The AI Con, with text to the right, which reads in all uppercase, alternating with black and red: Available Now, thecon.ai.