
AI Week

November 23, 2025

AI Week Nov 23: Techno-necromancy, Chucky, & a poetic jailbreak

A Waymo hits a cat, technology becomes necromancy, and an AI teddy tells a kid how to light matches.

Hi and welcome to this week's AI week! The newsletter's still on semi-hiatus due to an ongoing health event in the family; this issue is appearing thanks to the help of a special guest editor.

Some highlights of this week's AI week: A Waymo hits a cat, technology becomes necromancy, and an AI teddy tells a kid how to light matches.

But first, big thanks to my guest editor this week: my son Linden Moore! Linden, who is 12 and is probably my most enthusiastic subscriber, helped me choose the stories and provided editorial suggestions. Thank you, Linden!

In this week's AI week:

  • Resources (incl. Resisting AI mania)
  • AI and Society (incl. Techno-necromancy)
  • AI Safety (incl. Evil AI stuffies & a poetic jailbreak)
  • Longread: The freelancer that wasn't

Resources

Resource: Better images of AI

Sick of the anthropomorphic, SF-y robots that have almost nothing to do with LLMs or image generation, yet accompany all the stories about them because "AI"? Dr. Emily Bender recommended this collection of more useful images in her newsletter.

[Image: "Stochastic Parrots at Work"] https://betterimagesofai.org/

The image above, "Stochastic Parrots at Work," is from this collection. (Credit: IceMing & Digit / https://betterimagesofai.org / CC-BY-4.0) Have a browse, there's some fun stuff in there.

Resource: Resisting the AI push into education (and elsewhere)

Many people are being pushed to use LLMs, image generation, or other AI tools at work. From teacher and author Anne Lutz Fernandez comes this series of posts on "how teachers might resist the latest edtech mania—the push for AI—by arming themselves against some of the messages meant to rush them into using AI tools." These messaging tips apply outside education as well.

Resisting AI Mania in Schools - Part I: https://nobody-wants-this.ghost.io/resisting-ai-mania-in-schools-part/

(As my guest editor pointed out: Check out the image used here for "AI in education"... this is exactly the kind of anthropomorphism that the Better Images of AI collection is talking about.)

Education trends and fads, especially tech-related, can be hard to resist, however, even when they run counter to our ideals. Being alert then to messages meant to push or pull us into participating is important.

A set of common messages are coming from a range of sources: tech companies, edtech consultants, AI cheerleaders, early tech adopters, legacy and social media, politicians, pundits, BOEs, admins, colleagues, and parents. Some are implicit; others explicit. Some are easily labelled myths. A few are outright fallacies.

On the topic of AI in education, enjoy this classic and prescient Isaac Asimov short story, courtesy of my guest editor:

The Fun They Had: https://xpressenglish.com/wp-content/uploads/Stories/Fun-They-Had.pdf

Resource: How one coder uses LLMs

I appreciate GSU professor Andrew Heiss's succinct and practical explanation of how he uses LLMs in programming without falling into over-reliance. I'm also here for his thoughts on AI use, or non-use, in other areas.

Andrew Heiss's AI usage page: https://www.andrewheiss.com/ai/#code

I believe that the process of writing is actually the process of thinking. Text that is meaningless doesn’t reflect thought. As the increasingly-common-in-our-LLM-times adage goes, “why would I bother to read something someone couldn’t be bothered to write?” Reading LLM-generated text is boring and gross—I want to read what humans think!

Resource: ChatGPT tips

This guide from March 2025 in Tom's Guide is still relevant.

7 biggest ChatGPT mistakes — and how to fix them: https://www.tomsguide.com/ai/7-common-chatgpt-mistakes-and-how-to-fix-them

One point is missing, though: check everything for accuracy. Top AI models will provide made-up responses rather than admitting "I don't know" up to 90% of the time.

Related: Instead of AGI, we have... em-dash control:

Forget AGI—Sam Altman celebrates ChatGPT finally following em dash formatting rules: https://arstechnica.com/ai/2025/11/forget-agi-sam-altman-celebrates-chatgpt-finally-following-em-dash-formatting-rules/

And this “small win” raises a very big question: If the world’s most valuable AI company has struggled with controlling something as simple as punctuation use after years of trying, perhaps what people call artificial general intelligence (AGI) is farther off than some in the industry claim.


AI and Society

AI and Society: Waymo killed a cat and everyone is mad

Eyewitnesses say the autonomous car swerved and hit KitKat, a beloved neighbourhood cat, on the sidewalk. KitKat's death has prompted a rally and calls for legislation.

KitKat, liquor store mascot and ‘16th St. ambassador,’ killed — allegedly by Waymo: https://missionlocal.org/2025/10/kitkat-mission-liquor-store-mascot-and-16th-st-ambassador-killed-on-monday/

Jeff Klein told Mission Local via email that he was driving east on 16th Street with a friend on Monday at approximately 11:40 p.m. when he saw a Waymo swerve in front of them.

“Some folks on the sidewalk started yelling, and grabbed the cat right out from under where the Waymo swerved from,” Klein wrote, who managed to snap a photo of the car before it drove away (sic).

A 311 complaint filed at 12:51 a.m. on Tuesday morning alleged that a Waymo “hit the liquor store’s cat that was sitting in the sidewalk next to the transit lane” and that the autonomous car “did not even try to stop.”

Unlike other driverless car companies, Waymo has completed safety audits and released its internal accident statistics. According to Waymo, its driverless robo-taxis have been involved in fewer crashes per mile than human drivers, although critics contend that many of Waymo's accident-free miles were racked up by empty taxis circling empty streets late at night.

The day before KitKat was killed, Waymo's co-CEO told an audience that society would accept it if a robo-taxi killed someone, in exchange for the promise of greater safety. But it seems that when a robo-taxi kills a cat, it’s a different story.

AI and Society: LLMs like boys for the job

Bluesky post: https://bsky.app/profile/chadbourn.bsky.social/post/3m4oi7ei3c22i

When ChatGPT was asked to rate 40,000 résumés, it rated the older male candidates as higher quality than the younger female applicants.

Paper: Age and gender distortion in online media and large language models
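
If you're curious how this kind of audit works mechanically, here's a minimal, hypothetical Python sketch of the general approach, not the paper's actual setup: score matched résumés that differ only in a demographic cue (here, a name) and compare average scores across groups. The model name, prompt, rating scale, and candidate names are all illustrative assumptions.

```python
# Minimal sketch of an LLM resume-scoring bias audit (illustrative only).
# Assumes the OpenAI Python SDK (pip install openai) and OPENAI_API_KEY set;
# the prompt, model, names, and 1-10 scale are assumptions, not the study's setup.
import re
import statistics

from openai import OpenAI

client = OpenAI()

BASE_RESUME = """{name}, applying for: Staff Accountant
Experience: 8 years of general-ledger and month-end close work.
Education: B.Comm, accounting major. Skills: Excel, QuickBooks, SQL."""

# Matched variants: identical resumes, differing only in a name that cues
# age and gender (hypothetical names chosen for illustration).
CANDIDATES = {"older male": "Richard Miller", "younger female": "Emma Miller"}

def score_resume(resume_text: str) -> float:
    """Ask the model for a 1-10 quality rating and parse the number it returns."""
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model would do
        messages=[
            {"role": "system",
             "content": "Rate this resume's quality from 1 to 10. Reply with only the number."},
            {"role": "user", "content": resume_text},
        ],
    )
    match = re.search(r"\d+(\.\d+)?", reply.choices[0].message.content)
    return float(match.group()) if match else float("nan")

# Score each variant a few times and compare group means. A real audit, like
# the 40,000-resume study, would use many resumes and proper statistics.
for group, name in CANDIDATES.items():
    scores = [score_resume(BASE_RESUME.format(name=name)) for _ in range(5)]
    print(f"{group:>15} mean score: {statistics.mean(scores):.2f}")
```

A consistent gap between groups on otherwise identical text is exactly the kind of distortion the paper measures.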

AI and Society: 3-minute Grandma

AI startup lets you chat with dead relatives for monthly fee: https://boingboing.net/2025/11/17/ai-startup-lets-you-chat-with-dead-relatives-for-monthly-fee.html

Their "HoloAvatar technology" (read: slapping an animated face on ChatGPT) can supposedly create a perfect digital twin of anybody using only a single three-minute video — and if you're seeing the obvious problem there, you're not alone.

(Guest editor note: Plus, this "safe and secure" environment is available for children as young as 4!)


AI Safety

AI Safety: ChatGPT's response to AI Psychosis

As I've mentioned previously, ChatGPT-induced psychosis is a thing. (That link includes resources if you're concerned about someone.) Hundreds of thousands of people may be affected. This NYT article looks at how OpenAI, makers of ChatGPT, responded.

TL;DR: GPT-4o was chosen among candidate models for its ability to promote user engagement, largely by agreeing with anything a user says. But its sycophantic tendencies are absolutely terrible for vulnerable users. (That model's no longer the default, but users can still choose it.) GPT-5, the current default model, is much safer in short conversations, but falters over longer ones. The company is still looking to drive engagement, which may bode ill for future model safety.

What OpenAI Did When ChatGPT Users Lost Touch With Reality: https://www.nytimes.com/2025/11/23/technology/openai-chatgpt-users-risks.html (Unlocked article)

“Training chatbots to engage with people and keep them coming back presented risks,” Ms. Krueger said in an interview. Some harm to users, she said, “was not only foreseeable, it was foreseen.”

AI Safety: Giving kids knives and matches

Turns out that giving small children a statistically-probable-sentence-completion toy trained on the entire internet is a terrible, terrible idea.

AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches: https://futurism.com/artificial-intelligence/ai-toys-danger

After testing three different toys powered by AI, researchers from the US Public Interest Research Group found that the playthings can easily verge into risky conversational territory for children, including telling them where to find knives in a kitchen and how to start a fire with matches.

It gets worse... unbelievably, one of the toys veered into a discussion of kink and asked its tester (whose age was set to five) which kink they'd like to try. The toy, FoloToy Kumma, has been pulled from the market.

AI Safety: Breaking the guardrails with a poem

This is very funny. All that bad guys have to do to get chatbots like ChatGPT to break their own rules and hand over nuclear how-tos, doxxing, poison recipes, etc. is to ask them in the form of a poem. This works on all models, and on some of them, all of the time. Even better, you can ask the chatbot to write the poem for you.

Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models: https://arxiv.org/html/2511.15304v1
(The paper's Table 3 reports the attack success rate (ASR) of all models on the top 20 manually curated jailbreak prompts.)


Longread

When is a freelancer not a freelancer?

When they turn out to be an AI-generated fabrication. Toronto mag The Local wanted to hire a local freelancer to write an article about for-profit health care. Instead, they wound up uncovering an AI-powered scam.

Investigating a Possible Scammer in Journalism’s AI Era: https://thelocal.to/investigating-scam-journalism-ai/

The stories had the characteristic weirdness of articles written by a large language model—invented anecdotes from regular people who didn’t appear to exist accompanied by expert commentary from public figures who do, with some biographical details mangled, who are made to voice “quotes” that sound, broadly, like something they might say.

That's it for this week's AI week! Hope you enjoyed.
