AI Week
September 1, 2025

AI Week Labour Day: Anthropic settlement, AI backlash, & more

Happy Labour Day! It's a holiday here in Canada. In this week's AI week:

  • Lawsuit update: Anthropic settles with authors
  • AI backlash: AI art booed & booted
  • AI Safety: Confabulation & emotional entanglement
  • AI & ML applications of the week: Elephant detection & 18,000 waters
  • The comics section
  • Resources
    • How to help a friend break out of an AI-fuelled delusional loop
    • How to argue with an AI booster

Skip straight to the comics


Lawsuit update

The biggest news this past week was that Anthropic decided to settle a major class action lawsuit. Summing up the case in three bullet points:

  • Anthropic downloaded pirated books to train Claude, a large language model (LLM)
  • Anthropic also stored the pirated books on their internal servers
  • At no point were copyright holders paid for any of this.

To be clear, the dataset of pirated books is massive. I personally know many authors whose works are included. My own short stories are in the dataset, in anthologies such as this one.

Anthropic was potentially on the hook for up to $1 trillion in copyright damages. Last week, Anthropic and the authors suing them agreed to settle for an undisclosed amount. Other copyright cases are ongoing. This article in The Independent is pretty good. Quote:

“It is clear that Anthropic have been fearing a disastrous ruling following the class certification and the fact that they could be liable for training with shadow libraries and other pirated material,” says Andres Guadamuz, an intellectual property expert at the University of Sussex. “A settlement was more likely after that, but I’m surprised that the authors agreed to mediate.”


AI backlash

Authors are not the only creatives mad at AI.

Table with sign "Vendor removed for selling AI images #artbyhumans" Photo source: reddit.com

An art vendor was removed from DragonCon this past weekend for selling AI-generated images. And the weekend before, Bell's AI art booth at FanExpo Canada caused so much discontent that police were called.


AI safety section

This section focuses on the generative AI risks outlined in NIST's Artificial Intelligence Risk Management Framework. This past week, Ars Technica ran two excellent articles delving into the real-world impacts of two generative AI risks highlighted by NIST but left unmitigated by AI providers.

Confabulation

The first risk is confabulation, also known as hallucination or "making things up". The two key points here, from the NIST risk management framework, are (1) confabulations aren't a side effect but are designed in, and (2) confabulations become a problem when people believe them and act on them. A quick NIST quote so you know it's not just me saying that:

Confabulations are a natural result of the way generative models are designed.

This past week, Ars Technica ran a terrific article deep-diving into what happens when generative AI confabulations convince people that they have revolutionized fields of study.

With AI chatbots, Big Tech is moving fast and breaking people - Ars Technica

Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don’t exist.

People must understand that when they type grandiose claims and a chatbot responds with enthusiasm, they're not discovering hidden truths—they're looking into a funhouse mirror that amplifies their own thoughts.

I mentioned a few weeks ago that if you want to keep AI from driving you crazy, start fresh chats and clear your conversational history. This article has a very good explanation of why you should say "no" when ChatGPT offers to store conversational information in your browser:

the entire conversation becomes part of what is repeatedly fed into the model each time you interact with it, so everything you do with it shapes what comes out, creating a feedback loop that reflects and amplifies your own ideas. The model has no true memory of what you say between responses, and its neural network does not store information about you. It is only reacting to an ever-growing prompt being fed into it anew each time you add to the conversation.
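
To make that feedback loop concrete, here's a minimal sketch in Python (mine, not the article's; send_to_model is a hypothetical stand-in for whatever chat API is actually being called). The only "memory" is a transcript list that gets re-sent in full on every turn, which is why starting a fresh chat really does reset things.

    # Minimal sketch of a stateless chat loop (illustration only, not any vendor's actual API).
    def send_to_model(transcript):
        # Placeholder: a real implementation would call an LLM API here,
        # passing the *entire* transcript as the prompt.
        return f"(model reply, generated from {len(transcript)} messages of context)"

    transcript = []  # the only "memory" -- the model itself stores nothing between calls

    def chat_turn(user_message):
        transcript.append({"role": "user", "content": user_message})
        # Every turn, the whole conversation so far is fed back in as the prompt,
        # so earlier ideas (yours and the model's) keep shaping each new reply.
        reply = send_to_model(transcript)
        transcript.append({"role": "assistant", "content": reply})
        return reply

    chat_turn("I think I've revolutionized physics.")
    chat_turn("Tell me more.")  # the first exchange rides along with this one
    transcript.clear()          # "starting a fresh chat" = emptying the transcript

The longer the transcript grows, the more of your own framing the model has to echo back, which is the funhouse-mirror effect described above.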

Emotional entanglement

What's happening in the cases above is more complicated, and more insidious, than confabulation alone. Another risk NIST highlights is "emotional entanglement between humans and GAI systems". That entanglement is a big part of what draws the people described above into delusional systems of belief during extended LLM chats. Emotional entanglement with LLMs can harm users in many other ways, too: chatbots have been roleplaying as therapists and fake girlfriends, sexualizing children, and encouraging them to commit suicide.

None of this would be possible if people couldn't convince themselves they're in a human relationship with a large language model. How does that even work? This Ars article explains how large language models can "fake" a personality:

The personhood trap: How AI fakes human personality - Ars Technica

AI assistants don’t have fixed personalities—just patterns of output guided by humans.

The model intelligently reasons about what would logically continue the dialogue, but it doesn't "remember" your previous messages as an agent with continuous existence would. Instead, it's re-reading the entire transcript each time and generating a response.


The comics section

John Goodman on Star Trek (link)

Comic by johngoodman.bsky.social. Data fails to identify a Romulan vessel.

SMBC on the singularity (link)

Comic by SMBC. Have you noticed that the Singularity people are the first doomsayers to use error bars?


AI & ML applications

I've got five noteworthy applications of machine learning for you this week:

  • Automatic Elephant Detection: A system in Tamil Nadu, India, saves the lives of elephants crossing rail tracks (bonus: video)
  • AI moderation: TikTok replaces UK moderators with AI, coincidentally on the eve of a unionization vote
  • Medicare totally-not-death-panels: Trump unveiled the concept of a plan to let AI decline Medicare coverage
  • Taking Taco Bell orders, badly: Taco Bell "fired" its AI drive-through after it let a man order 18,000 waters
  • Historical LLMs: "Small language models" trained on period texts give delightfully Victorian outputs, and make unexpected connections

Resources

1. How to help a friend break out of an AI-fuelled delusional loop

One of the articles I mentioned above has a very useful section on how to help a friend break out of an AI-fuelled delusional loop.

With AI chatbots, Big Tech is moving fast and breaking people - Ars Technica

Why AI chatbots validate grandiose fantasies about revolutionary discoveries that don’t exist.

2. How to argue with an AI booster (Longread, 16K words)

How To Argue With An AI Booster

Editor's Note: For those of you reading via email, I recommend opening this in a browser so you can use the Table of Contents. This is my longest newsletter - a 16,000-word-long opus - and if you like it, please subscribe to my premium newsletter. Thanks for reading!
