
Creative Good

September 19, 2025

AI chatbots are not your friend, and neither is the Friend

I’m sorry to have to play Cassandra again, but my worries about chatbots are coming true. Several weeks ago I wrote The real threat posed by AI chatbots (June 28, 2025) with this conclusion:

In its newest incarnation as an AI chatbot, the machine’s predations are targeted at the vulnerable: the mentally ill, the young, the underprivileged. But the machine is still growing. Eventually, if we don’t stop it, the machine will come for you.

And so it is. If you’re married or otherwise in a committed relationship, and you or your partner uses ChatGPT, the machine is coming for you. From ChatGPT Is Blowing Up Marriages as Spouses Use AI to Attack Their Partners (by Maggie Harrison Dupré in Futurism, Sep 18, 2025):

one person in a couple becomes fixated on ChatGPT or another bot — for some combination of therapy, relationship advice, or spiritual wisdom — and ends up tearing the partnership down as the AI makes more and more radical interpersonal suggestions.

This is not exactly a new issue. ChatGPT and other LLMs have repeatedly been found to amplify mental health issues, and even induce “AI psychosis” – as mentioned in my June 28 column above.

More evidence comes from Maggie Harrison Dupré’s recent articles in Futurism. A scan of the headlines gives a sense of how pervasive the problem is:

  • People Are Becoming Obsessed with ChatGPT and Spiraling Into Severe Delusions (June 10)

  • People Are Being Involuntarily Committed, Jailed After Spiraling Into “ChatGPT Psychosis” (June 28)

  • Support Group Launches for People Suffering “AI Psychosis” (July 24)

I’d also recommend Matt Novak (past Techtonic guest), who writes in ChatGPT Users File Disturbing Mental Health Complaints (Aug 13, 2025) that a FOIA request turned up dozens of complaints about ChatGPT’s harmful effects on users’ mental health.

Now imagine if there were an AI chatbot surveilling your conversations all day. I don’t mean recording when you’re chatting online, I mean actually listening to whatever you say out loud. And then, being a chatbot, it would respond to you – using the same sort of algorithm that causes the “AI psychosis” mentioned above.

There is just such a product, the Friend, a wearable surveillance device resembling a small plastic disc. You’re supposed to wear it around your neck like a pendant, or an albatross, or a millstone.

The premise of the Friend, as best I can tell, is that what people need most today is the illusion of companionship. So lonely people can speak to their Friend pendant, after which they receive a text back from the chatbot. Or if the user has the unusual opportunity to speak with another human being, the Friend dutifully listens in and sends texts in response to that conversation.

Sound creepy? Take a look at the subway ad I spotted a few days ago here in Manhattan:

A New York City subway ad with the text "I'll ride the subway with you. friend.com" accompanied by a larger-than-life photo of the Friend surveillance pendant.
Photo by Mark Hurst

“I’ll ride the subway with you.” Is that a promise or a threat? Is there any difference, really, coming from the predatory surveillance-tech industry?

I’m glad that I’m not alone here. Wired ran a review, I Hate My Friend (Sep 8, 2025), that describes the device well:

The chatbot-enabled Friend necklace eavesdrops on your life and provides a running commentary that’s snarky and unhelpful. Worse, it can also make the people around you uneasy.

The journalists, Boone Ashworth and Kylie Robison, write about their experience wearing the device:

we both came away with the gut feeling that our new Friends were real bummers.

. . . “Sad? What’s making you sad? That’s definitely not what I’m aiming for,” my Friend responded after it heard me telling a friend (a human one) that the interaction was upsetting.

It is an incredibly antisocial device to wear.

The company unwittingly has shown how unattractive it looks to wear their AI surveillance puck. In the image below, taken from Friend’s launch video last year, a young couple sits on a roofdeck. The woman coyly remarks that she’s never been there with anyone else . . . except, of course, her Friend. She fingers the device lovingly. You expect the guy to stand up and say, nice meeting you, I have to go. But he’s being paid by the company, so he acts interested.

A young woman and young man sit on a rooftop on a sunny day. They are looking at each other. The woman is fingering a shiny pendant that she wears around her neck.
From Friend Reveal Trailer (July 2024)

Who would want to start a relationship with someone – and their AI device, listening in and commenting on everything? Yet many people don’t have a choice, when their existing partner falls into a spiral with an AI chatbot.

This will be the new normal, if Silicon Valley has its way: people relating to Big Tech-owned AIs first, and only secondarily to their friends, partners, and family. This isn’t just a new product, it’s the inauguration of a new era of loneliness and isolation.

And this is what really bothers me about the latest tech launches – whether the Friend surveillance puck, or the Facebook surveillance glasses, or any of the other devices that hold society and democracy in such contempt: their companies are attempting to normalize practices and habits that will, they hope, last for generations. (And that’s how you create a Butlerian jihad.)

Still, there is a way to build technology that enables, supports, and amplifies the good. I spoke with Faine Greenwood on Techtonic this week about the dangers of chatbots, as predicted 50 years ago by Joseph Weizenbaum, and an alternate mode of decentralized tech development:

  • See episode page (click “pop-up player” to listen)

  • Download podcast episode

If you’d like to join me in my quest to encourage the good in tech, please become a member of Creative Good. You’ll get access to our members-only Forum, where we post and discuss tech news, fun stuff, and the occasional game.

Until next time,

-mark

Mark Hurst, founder, Creative Good
Email: mark@creativegood.com
Podcast/radio show: techtonic.fm
Follow me on Bluesky or Mastodon
