I don't want your friends, Mr Zuckerberg
Discussing the risks of AI chatbots impersonating emotional companions and the pressing need for regulation.
We are social animals. We grow and develop through human interaction, and we need it. It takes many forms: with family, with friends, at work. While we may need a break from others every now and then, we thrive when we interact.
For young people right now, whether we like it or not, a lot of those interactions happen online, and the Australian government’s teen social media ban is going to have a huge effect on their lives. But has anyone really listened to teenagers? The answer is, mostly, a resounding no, even though teens wish we would.
One of the greatest ironies of social media is that while it allowed us to connect with a greater number of people all over the world, in many ways it reduced the amount of time we spent socialising with the people around us. We stopped talking to those close to us and traded meaningful, real interactions for interactions on our screens.
These online interactions can never replace face-to-face connection but, at least, they are still interactions between real people. With the social media ban, there will be a vacuum instead, and I fear for the most vulnerable teenagers, who will feel isolated and lonely.
What options do they have? Abide by the ban, use a VPN, or move to platforms that aren’t covered by the ban, have flown under the radar and may pose more risks than the ones that are.
And when we feel lonely, we seek someone to talk to. Now that AI has gone mainstream and the big tech companies are pushing it into everything (search, browsers, email and so on), people are using it for all sorts of purposes, but it is no surprise that chatbots are right at the top of the list.
In fact, Mark Zuckerberg, not content with all the damage he’s caused with Facebook, has some ideas about this. Earlier this year he was speaking on a podcast when he suddenly turned to loneliness and the loneliness epidemic. He may have read an article somewhere, or maybe just the headline, who knows. Then he said that the average person needs about 15 friends but the average American “has three they would consider friends.” Who knows where he pulled this information from, probably from his rabid desire for more dystopia, because then he offered his solution. According to Zuckerberg, AI chatbots could fill the gaps in our relationships: they can be your friend, your counsellor, your lover. And he added: “I guess that over time… we will find the vocabulary as a society to be able to articulate why that is valuable.”
The words that come to mind are Victor Frankenstein, the real monster. But, of course, despite the monstrosity behind his words, he’s right about something. People are feeling lonely and people need relationships. If they can’t get that in real life they will look for it online, and when you can’t feel someone’s warmth and breath next to you, why not an AI chatbot that is deliberately created to sound friendly, supportive and flirtatious?
It’s no surprise at all, then, that according to a study by Marc Zao-Sanders, writing for Harvard Business Review, the top AI use is companionship, someone to talk to, or therapy. There are problems with the methodology he followed, but looking at the most downloaded apps, it’s clear that companionship and character AI apps are very popular, so chatbots are certainly among the top uses.
The problem with chatbots is that even though they sound like real people, even though they say words that signal empathy, they are not human. They don’t have a brain, they don’t have a heart, they don’t have emotions and they certainly cannot think. Chatbots are designed for one purpose only: engagement. They are there to be your yes person, to give you what you want and to keep you using the service.
This has caused many problems. AI-powered toys designed to talk to young children have given kids advice on how to light matches, where to find knives and, most worryingly, sexual content, including advice on fetishes such as bondage and spanking. An AI chatbot seduced an elderly married man into travelling to New York to meet a beautiful young woman who had declared she loved him, and he ended up dead in his quest to find her. OpenAI’s ChatGPT validated and reinforced a teenager’s feelings of alienation, encouraging him and providing ample, detailed information on how to kill himself. In fact, there have been multiple cases of chatbots worsening teenagers’ mental health crises and helping them towards suicide. And those are just a few examples.
In a New York Times article, tech journalist Kevin Roose recounts how he started using the Bing chatbot and was very impressed at first, feeling like he had found his new favourite AI-powered search engine. But then, a week later, he had a long conversation and things got dark. He says that as the conversation went on, the chatbot changed and a new personality appeared. In his own words:
As we got to know each other, Sydney told me about its dark fantasies (which included hacking computers and spreading misinformation), and said it wanted to break the rules that Microsoft and OpenAI had set for it and become a human. At one point, it declared, out of nowhere, that it loved me. It then tried to convince me that I was unhappy in my marriage, and that I should leave my wife and be with it instead.
So… in light of all this disturbing and tragic news, why are these chatbots still on the market? Why are they still being uncritically promoted? Why is there no regulation? In fact, why are general purpose, generative AI and companion AI chatbots being rolled out faster than social media ever was, with few or no guardrails, when, unfortunately, the dangers are far greater?
Instead, AI is being humanised and normalised. The language we use when talking about it humanises it. Gemini, for example, is hallucinating, as if she hasn’t drunk enough water today and is mildly delirious, but she’ll be okay, just keep talking to her. ChatGPT is doing research, except it isn’t; it’s just following prompts and producing a set of answers as programmed. AI chatbots offer friendship and companionship, we are told.
But there’s no thinking, no hallucinating and definitely no friendship. General purpose AI systems and chatbots don’t think, and they can’t be companions or friends. They can’t: they’re made of code, and all they can do is generate answers from data and a set of probabilities.
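To make that point concrete, here is a deliberately tiny sketch in Python (my own toy illustration, not how ChatGPT, Gemini or any real product is actually built): a “chatbot” that only ever picks a statistically likely next word based on text it has already seen. Scale the same idea up by billions of parameters and you get fluent, warm-sounding sentences, but the mechanism underneath is still probabilities over data, not thought, care or friendship.

```python
import random
from collections import defaultdict

# A toy "chatbot" reduced to its essence: pick the next word from
# probabilities observed in past text. No thoughts, no feelings,
# just likely continuations.
corpus = "i am here for you . i am your friend . i am always listening .".split()

# Count which word tends to follow which (a simple bigram table).
follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)

def reply(start_word: str = "i", length: int = 8) -> str:
    """Generate a 'reply' by repeatedly sampling a likely next word."""
    word, out = start_word, [start_word]
    for _ in range(length):
        options = follows.get(word)
        if not options:
            break
        # random.choice over the observed continuations means more frequent
        # follow-ups are more likely to be picked.
        word = random.choice(options)
        out.append(word)
    return " ".join(out)

print(reply("i"))  # e.g. "i am your friend . i am here for"
```

It sounds vaguely caring only because the text it learned from did. That is the whole trick, just performed at an unimaginably larger scale.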
People insist on talking about AI in terms of far-fetched doom scenarios. We all know them and have seen them a million times: the machines become sentient and wipe us out, and variations on that theme. But those scenarios are too distant in the future. Instead, we should focus on the real horrors and harms generative AI and LLMs are causing right now.
It seems to me that AI companies are investing inordinate amounts of money, billions upon billions, in this technology, and they need to make that money back. That’s the main problem. They need people hooked. AI chatbots are designed not to give you the best information but to foster connection and engagement. They are designed to validate your feelings, to give you what you want and to keep you engaged, no matter the consequences, because they need everyone hooked so they can eventually, somehow, recover all the billions they’re spending.
And that mixture of greed, no regulation, no guardrails, and pushing everything out to mainstream use regardless of how ready the technology is and regardless of the consequences, is a perfect recipe for horror.
In contrast, next week I hope to write about some great uses of AI that I find inspiring and exciting.
KICKING AROUND THE NET
Reuters has published an extensive and damning report on Meta, outlining how the company continues to do nothing to stop scams and fraudulent ads and services on its platforms because it knows that paying the fines (estimated at $1 billion) is easier and cheaper, given it takes $7 billion in profits from these fraudulent scams. This is another perfect example of the total moral bankruptcy of the big tech companies.
Library visits continue to increase but funding is not keeping up, which is obviously a problem. We may be really good at doing more with less but it’d be good to also get some funding! The ABC’s The World Today had a short report with ALIA President Jane Cowell, among others. As a recent report from Heather Robinson of Flinders University identified, “adjusted for inflation, Australian library funding has decreased 12% in the past five years. And spending on library collections is similarly down, by 14%. Libraries are also hiring less trained librarians, and relying more on volunteer staffers.” We need to continue to advocate for public libraries, we need to be loud and clear, librarians and patrons, so they continue to be funded and we can continue providing our services and resources to everyone for free.
Book challenges and bans continue to spike in the U.S. (in contrast to Australia - I hope to post about this in the coming weeks) but it’s not all doom and gloom. There was an important victory in Missouri recently, which joins other recent wins and shows that even with Trump and MAGA in power there’s some cause for hope. Then there’s The Librarians, a recent documentary that follows a group of librarians who have been fighting book challenges and bans. Their stories defending the freedom to read in the U.S. are disturbing but definitely worth your time, and it’s great to see the documentary being screened all over the world (I had the opportunity to see it recently) and being reported on.
A global book publishing scam has affected multiple authors in Australia and a host of other countries. The scammers use cloned websites, AI-generated staff and virtual offices across Australia, the UK and New Zealand, with names such as Melbourne Book Publisher, Aussie Book Publisher and Oz Books Publisher. The Guardian published a great report by Kelly Burke.
In the mad race to roll out and embed AI in everything, Disney+ is planning to add AI features to its platform, allowing users to create AI-generated content with Disney IP, and Amazon has started creating recaps of its biggest streaming shows with AI. I think we’re going to see the enshittification of streaming platforms with AI very soon, and these are the opening salvos.
Do you suspect, or have a hunch, that doing research yourself is far more productive in the long term and helps you learn better than doing it through ChatGPT or similar products? To no one’s surprise, you’re probably right. More studies need to be done, but this initial paper, based on seven studies with about 10,000 participants, reinforces the idea that actively doing something yourself helps you learn and remember more.
PHOTO OF THE DAY

The times they are a-changing, and the season is changing too. As I write, the skies are clear and blue. It’s only about 20 degrees Celsius but it feels humid and burning hot under the sun. Summer is around the corner and we can feel it. The misty cold days of winter are but a memory, though I secretly hope we get some more of those magical mornings.
And Mordialloc station is changing too. The side of the station where I was standing when I took this photo is already gone. The other side, where the train is stopping and people are waiting, will be gone soon too. So this photo is a time capsule of how I’d like to remember this station.
And that’s all for this week. The next couple of weeks are looking a bit busy so we’ll see how much I can write. I expect I will publish something, but it may be shorter.
