Fresh AI Hell, Wrapped

By Alex Hanna, Decca Muldowney, and Emily M. Bender
We’re looking back on 2025 by sharing the episodes, published this year, that recorded the most listens. If you missed any, check them out! If you’re up to date on all of these, there are also 58 others produced to date (starting in 2022) for your listening pleasure 😄 (For those who prefer watching, you can find all of the episodes on our PeerTube.) Thanks to all of our listeners for taking this journey of ridicule as praxis with us, and especially to our livestream participants whose witty contributions you can hear in most episodes!
And now, we’re happy to present to you our most listened-to episodes of the year!
Top Ten Episodes of the Year

10. Ep. 57 - The “AI”-Enabled Immigration Panopticon (with Petra Molnar)
Recorded 2025-05-05 [Livestream, Podcast, Transcript]
Petra Molnar, a lawyer and anthropologist specializing in migration and human rights, joined us to discuss how “AI” technology is being developed for “border security” by US immigration enforcement in increasingly terrifying and dystopian ways, including the potential deployment of Department of Homeland Security robot dogs. Petra points out that the decisions made by ICE agents and other immigration officials are already opaque as it is, but the added layer of technology “adds this kind of veneer that people who are powerful like to hide behind.”

9. Ep. 59 - Et Tu, American Federation of Teachers? (with Charles Logan)
Recorded 2025-07-14 [Livestream, Podcast, Transcript]
Charles Logan, a former English teacher with a PhD in Learning Sciences from Northwestern University, joined us to discuss the false promise of ed tech, especially the drive to push “AI” into the classroom. We talked about ed tech marketing hype from OpenAI, a writing instructor who is missing the plot by using LLMs in her teaching, and the awful deal that the American Federation of Teachers made with Microsoft, Anthropic, and OpenAI for a “national academy for AI instruction.”

8. Ep. 54 - “AI” Agents, A Single Point of Failure (with Margaret Mitchell)
Recorded 2025-03-31 [Livestream, Podcast, Transcript]
We welcomed Margaret Mitchell from Hugging Face to the podcast to discuss agentic “AI.” What are “agents” and why won’t everyone stop talking about them? Is this the future of making travel plans and restaurant reservations? In a classic case of “nobody actually wants this,” we talk about the risks of giving “AI” agents access to all your information and why companies are ignoring requests from users, like those with low vision or cognitive decline, who could actually benefit from assistive technologies.

7. Ep. 65 - Crunching the Numbers (with Decca Muldowney)
Recorded 2025-10-20 [Livestream, Podcast, Transcript]
We had our very own Decca Muldowney on to talk about what happens when people claiming to do data journalism foist off part of the work onto synthetic text extruding machines. Looking into our artifacts, we found that the people who want this to be a good idea aren’t so much journalists as the bosses of journalists. We hope that readers of journalism will keep holding higher standards. Speaking of crunching the numbers: this episode was posted on October 30, and even with that short runway it still made the top 10 for the year!

6. Ep. 61 - Winning the Race to Hell (with Sarah Myers West and Kate Brennan)
Recorded 2025-08-04 [Livestream, Podcast, Transcript]
In this episode we took on Trump’s AI Action Plan, with the expert assistance of Sarah Myers West and Kate Brennan, Co-Executive Director and Associate Director of the AI Now Institute, respectively. The AI Action Plan is full of hype, Sinophobia, and gifts for Silicon Valley, and it is grounded neither in a realistic understanding of the technology in question nor in care for the needs of people. Fortunately, the AI Now Institute spearheaded a People’s AI Action Plan, with much better ideas.

5. Ep. 62 - The Robo-Therapist Will See You Now (with Maggie Harrison Dupré)
Recorded 2025-08-18 [Livestream, Podcast, Transcript]
2025 was marked by what felt like a steady drumbeat of news about people being led to self-harm and psychosis through chatbot interactions, and by every “AI” critic we know receiving emails from people distressed by their interactions in various ways. Against that background, it is stunningly atrocious to hear about people and companies putting forth chatbots as therapy replacements. We were fortunate to get to talk this all through with Futurism journalist Maggie Harrison Dupré, who has done first-rate reporting on the mental health consequences of chatbots.

4. Ep. 58 - “Like Magic Intelligence in the Cloud”
Recorded 2025-05-26 [Livestream, Podcast, Transcript]
In this episode we give Sam Altman and Jony Ive’s collaboration announcement video the full Mystery AI Hype Theater 3000 treatment. OpenAI’s Altman and Ive, the iPhone guy, wander romantically around San Francisco’s North Beach discussing how great they both are, all the while ignoring the fact that their industry is destroying the city they claim to love. As Alex points out in the episode, while Altman and Ive talk about attracting “talent” to the city to work on “AI,” the tech industry has drastically pushed up rents, displacing working-class San Franciscans from traditionally Black and Brown neighborhoods.

3. Ep. 60 - Vibe Coding: Four Security Nightmares in a Trenchcoat (with Susanna Cox)
Recorded 2025-07-21 [Livestream, Podcast, Transcript]
Security engineer Susanna Cox joined us to discuss the absolute nightmare that is “vibe coding,” that is, letting synthetic text extruding machines generate code for you. We took on credulous screeds against AI critics, and also a guy who had a very bad day indeed using Replit to create a minimum viable product, with disastrous results. Adding in “AI agents” and allowing them to cross-communicate opens up a Pandora’s box of security vulnerabilities. “You should never let this touch your code base.”

2. Ep. 56 - AGI: “Imminent”, “Inevitable”, and Inane
Recorded 2025-04-21 [Livestream, Podcast, Transcript]
It may be 2025, but the human race will be ending in 2030. At least that’s what the rationalist authors of the awful AI 2027 report suggest in their several-thousand-word dystopian science fiction. We went through their website, picking apart the most egregious of their claims. Like, did you know that the most important thing “OpenBrain” could be working on is AI agents that do science? And that those agents are going to pretend to be under our control, all the while plotting to wipe out humanity with biological agents? Despite how silly this all sounds, this document has influenced some policymakers and mainstream opinion writers at large legacy publications.

1. BOOK LAUNCH: Why “AI” is a Con: Our Book Launch! (with Vauhini Vara)
Recorded 2025-05-08 [Livestream, Podcast, Transcript]
And last, but certainly not least, we were joined by journalist and author Vauhini Vara to talk about our new book, The AI Con! We talked about the process of writing the book, the way that the podcast made writing the book much easier, and took a bunch of fantastic questions from long-time listeners of the pod.
Fresh AI Hell, Wrapped
All year long, Emily tries to keep up with the flood of Fresh AI Hell by dropping links to news stories in our big ol’ list of links spreadsheet. This represents only the links we collectively come across and drop into our group chat (including some contributed by listeners) and is far from comprehensive. Among other things, the vast majority are written in English; Fresh AI Hell is, alas, international. As of this writing, the spreadsheet has 2,154 rows, and 1,149 of those, or about 53%, point to items published in 2025. It’s not just that Emily was more systematic this year (in fact, at one point she cried uncle and just left some links from the group chat unharvested): the Fresh AI Hell seems to be flowing faster than ever. And we can barely keep up! We only discussed 190 (about 17%) of those items from 2025 on the show.

In addition to recording the link, Emily also attempts to maintain a rough topical categorization, associating each item with one or more tags. Some of the items she files under “Accountability/Good pushback” (potentially alongside other tags), and happily, this was the most common tag this year, at 133 items. The next four most common were Education (95), Info ecosystem (85), Policy/Law (77), and It’s Alive! (72).
With all that said, we bid you adieu, and look forward to you joining us in 2026 to take on the AI industry. We hope you’ll take some time to yourself, and emerge fresher than the AI Hell we’ll be encountering.
