November 1, 2025

AI week Nov 1: How to turn off unwanted AI tools

Plus: AI and teen suicide - Boosting AI literacy - Regrettable AI products - & more

Hi and welcome to this week's AI week!

AI week is currently on semi-hiatus due to ongoing family health issues. I'm still sending out periodic linkdumps, like this one.

I've collected a handful of particularly good resources I'm excited to share, including two articles on turning off some "on by default" AI tools. You'll find these in the "Resources" section at the end of the newsletter.

In this week's AI week:

  • Mental health - AI and teen suicide
  • Regrettable AI products - Zombie celebs and Internet of 💩
  • Comic - SMBC
  • OpenAI's ChatGPT Browser - Atlas is a privacy nightmare
  • WTF? - Doritos are not a gun
  • Resources - How to turn off unwanted AI features; boosting AI literacy

AI & Mental health

How can AI be used ethically when it’s been linked to suicide?

David Perry at the Minnesota Star Tribune asks the big question: Is it morally OK to use a tool that we know actively hurts others?

https://www.startribune.com/adam-raine-chatgpt-lawsuit-teen-mental-health-education/601498318

I want you to imagine that, at any time, you could ask some dude to hop in his big, gas-guzzling car, drive over to your house and do your work for you. Need an English paper? Done. Need a marketing presentation? Done. None of the work is really good, but it is fast, and the dude promises you that someday the quality of the work will get better.

But what if I told you that when the dude left your house, he was going down the block to visit a troubled teenager in order to help them commit suicide?

I’ve been dismayed to see how the default position on generative AI throughout the educational landscape has been to ask how we might use it ethically, without considering that the answer to the question might be: “We can’t.”

Full disclosure: I'm a former philosophy major and the kind of person who's still boycotting Nestle. (Here are some reasons why.) So while I'm not saying David Perry is right, thinking about this kind of question is very much my jam.

Related: Character.ai to restrict under-18 chats

Multiple families are suing Character.ai, alleging that its chatbots contributed to their teenage children's deaths by suicide. (OpenAI is also facing a similar lawsuit over the death of 16-year-old Adam Raine.) Character.ai is moving to first restrict, and then block, chats with under-18s.

https://arstechnica.com/information-technology/2025/10/after-teen-death-lawsuits-character-ai-will-restrict-chats-for-under-18-users/

Over the next month, Character.AI says it will ramp down chatbot use among minors by identifying them and placing a two-hour daily limit on their chatbot access. The company plans to use technology to detect underage users based on conversations and interactions on the platform, as well as information from connected social media accounts. On November 25, those users will no longer be able to create or talk to chatbots, though they can still read previous conversations. The company said it is working to build alternative features for users under the age of 18, such as the ability to create videos, stories, and streams with AI characters.


Regrettable AI products

An AI Suzanne Somers

Zombie Suzanne Somers is going to give health advice on her website:

https://people.com/alan-hamel-suzanne-somers-ai-project-exclusive-11832986

Her husband is all in: "When you look at the finished one next to the real Suzanne, you can't tell the difference. It's amazing. And I mean, I've been with Suzanne for 55 years, so I know what her face looks like, and when I just look at the two of them side by side, I really can't tell which one is the real and which one is the AI."

AI-powered enshittification in the stands

"Enshittification" is the term author Cory Doctorow coined for the gradual degradation of a product or service over time in order to increase profits. It's particularly relevant for digital services (like X/Twitter). But real-life services can be enshittifed too.

Spoiler alert: deploying camera/AI recognition for everything isn't great.

https://a.wholelottanothing.org/bmo-stadium-in-la-added-ai-to-everything-and-what-they-got-was-a-worse-experience-for-everyone/

A year later visiting the same stadium, I got worse food, slower service, and a worse overall experience. On the bright side, the billionaire stadium owners probably got to reduce their staff in the process while maybe increasing profits.

Related: More AI may not reduce labor burden

I ran across an interesting paper posted earlier this year:

AI and the Extended Workday: Productivity, Contracting Efficiency, and Distribution of Rents (Feb 2025)

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5119118

TL;DR: from 2004-2023, the more AI you were exposed to at work, the longer your workday. Big caveat: LLM chatbot adoption wasn't at all widespread in 2023. I'd like to see a follow-up study on how this changed from 2023-2025.

Related: Giving “internet of shit” a new meaning

If you haven't run across the term, "Internet of Shit" is a decade-old derogatory term for IoT (i.e., internet-connected) devices that are, well, shitty. Things like smart fridges that show you ads, or smart beds that won't lie flat if their servers are down. (If you want more examples, there's a Twitter account, a Verge column, and a subreddit).

Well, Kohler's new internet-connected device really puts the 💩 in "Internet of Shit". It perches on your toilet bowl and scans your excreta with "powerful machine-learning algorithms". And of course, the Terms of Service give Kohler the right to use your personal bathroom info to train their AI.

https://boingboing.net/2025/10/23/kohler-toilet-camera-ready-to-observe-your-poop-and-train-ai.html

As far as I can tell, the supposed health advantages this device offers are that it tracks how often you go (and will nag you to drink if you don't pee enough) and tells you if there's blood in the bowl. It's good to know if there's blood in your poop, sure, but you can get a much, much, much more accurate test kit for $75.


Comic

SMBC

http://www.smbc-comics.com/comic/asteroid


OpenAI releases browser with baked-in ChatGPT

So OpenAI launched Atlas, a browser that's a bit like Chrome x ChatGPT. Instead of Google Search, you get ChatGPT. This might also be a regrettable AI product, to be honest.

https://openai.com/index/introducing-chatgpt-atlas/

It looks like everything you do, every site you visit, gets fed into your chat history & you can ask ChatGPT about it -- something like the experience of using a coding environment with an LLM built in, like Visual Studio. From OpenAI's blog post:

“During lectures, I like using practice questions and real-world examples to really understand the material. I used to switch between my slides and ChatGPT, taking screenshots just to ask a question. Now ChatGPT instantly understands what I’m looking at, helping me improve my knowledge checks as I go.” — Yogya Kalra, college student and early tester of ChatGPT Atlas

(Honestly, that sounds exhausting. Not sure I could listen and stay focused on a lecture while carrying on a second conversation with my web browser.)

Atlas is going to have a baked-in AI agent that will surf the web for you, so you don't have to read all those pesky web pages yourself, but the agent's in preview mode right now. Here's Ars Technica's review of this preview mode:

https://arstechnica.com/features/2025/10/we-let-openais-agent-mode-surf-the-web-for-us-heres-what-happened/

It's a pretty positive review given that the agent didn't finish any of the tasks it was given. The Washington Post reviewed the features that are fully working and concluded "Use it with caution":

https://www.washingtonpost.com/technology/2025/10/22/chatgpt-atlas-browser/

A test found that Atlas kept memories about registering for “sexual and reproductive health services via Planned Parenthood Direct,” according to Lena Cohen, a staff technologist at the Electronic Frontier Foundation. It also kept a memory about the name of a real doctor. “The extensive data collection in the Atlas browser could be a privacy nightmare for users,” she said.

As someone who is all “clear cookies” and “do not track”, I am just repulsed by this browser. OpenAI is losing money hand over fist, doesn’t charge most users, is building a deep and extremely personal history on its users, and has the lion’s share of a potential brand new untapped advertising vector. Hmmmm... where could this be going?


WTF

Sometimes you read a story and just go...

WTF, Kenwood High School?

Armed police swarmed a teen, Taki Allen, outside his school after an AI system mistook his bag of Doritos for a weapon.

Ceci n'est pas un fusil.

https://www.dexerto.com/entertainment/armed-police-swarm-student-after-ai-mistakes-bag-of-doritos-for-a-weapon-3273512/

Allen was handcuffed at gunpoint. Police later showed him the AI-captured image that triggered the alert. The crumpled Doritos bag in his pocket had been mistaken for a gun.

Algorithmic bias, in which AI systems echo and magnify the bias of their trainers and training data, is a thing. So, one guess as to this teen's race.

WTF, Albania?

To recap: First, the Albanian government website chatbot was nicknamed "Diella". Then the prime minister announced that "Diella" would become the Minister of Procurement. And now, "Diella" is "pregnant":

https://futurism.com/artificial-intelligence/rama-diella-albania-pregnant


Resources

How to turn off AI tools you don't want

https://www.consumerreports.org/electronics/artificial-intelligence/turn-off-ai-tools-gemini-apple-intelligence-copilot-and-more-a1156421356/

...in your Google apps specifically:

https://www.zdnet.com/article/how-to-turn-off-gemini-in-your-gmail-docs-photos-and-more-its-easy-to-opt-out/

This worked for my personal account. If your Google account is managed by your school or workplace, these settings may be controlled by your organization.

Elevate your AI literacy

with this program from the Poynter Institute (a nonprofit that trains journalists):

https://www.poynter.org/mediawise/programs/altignite-fuel-curiosity-elevate-your-ai-literacy/

Elevate your literacy about the AI boom

with these 16 charts from the Understanding AI newsletter.

https://www.understandingai.org/p/16-charts-that-explain-the-ai-boom

It’s not often that you get to deal with a quadrillion of something. But in October, Google CEO Sundar Pichai announced that the company was now processing 1.3 quadrillion tokens per month between their product integrations and API offerings. That’s equivalent to processing 160,000 tokens for every person on Earth. That’s more than the length of one Lord of the Rings book for every single person in the world, every month.
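
If you want a quick sanity check of that arithmetic, here's my own back-of-the-envelope sketch (not from the article; it assumes a world population of roughly 8.1 billion, which the article doesn't state):

# Back-of-the-envelope check of the quoted token figures (Python)
# Assumption (mine, not the article's): world population of ~8.1 billion.
tokens_per_month = 1.3e15        # 1.3 quadrillion tokens processed per month
world_population = 8.1e9         # approximate number of people on Earth

tokens_per_person = tokens_per_month / world_population
print(f"{tokens_per_person:,.0f} tokens per person per month")  # prints ~160,000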


That's it for this edition of AI week! Thanks for reading.

What did you think? Reply to this email or leave a comment on the web to let me know.
