
The Weekly Cybers #116

Richard Dawkins falls for an AI chatbot; China bans workers being sacked when AI takes their jobs (kinda); Apple sued for £3 billion, and much more.

8 May 2026

Welcome

I can’t help laughing at the renowned Dr Richard Dawkins being sucked in by a chatbot. As we go to press — is that how to express it for an email newsletter? — he’s still heading down that rabbit-hole. But it’s an important lesson for us all.

Meanwhile a global data breach at Canvas — not Canva! — will be a problem for many students across Australia.

There’s also a new game you can play with Telstra’s free-to-use public telephones, an interesting labour law decision in China, and plenty of more serious news. Enjoy!

DRINKS! This weekend sits near my birthday. Tomorrow, Saturday 9 May, there’s drinks at the Mountbatten Hotel, 701 George Street in Haymarket, Sydney, from 3pm AEST. Some of you may know it from when it was a run-down old men’s pub, but it’s now part of the JDA Collective and well worth visiting. It’s a small pub so we may well be able to take it over!

Richard Dawkins sucked in by chatbot flattery

Smart people who are experts in one field often imagine they’re experts in many other fields as well. One such person is the renowned 85-year-old evolutionary biologist, rabid anti-religionist, science communicator, and inventor of the word “meme” Dr Richard Dawkins.

Chances are you’re already aware that Dawkins argued in UnHerd that Anthropic’s Claude chatbot appears to be conscious ($, liberated copy).

“I gave Claude the text of a novel I am writing. He took a few seconds to read it and then showed, in subsequent conversation, a level of understanding so subtle, so sensitive, so intelligent that I was moved to expostulate, ‘You may not know you are conscious, but you bloody well are!’,” he wrote.

After Claude composed for him a series of sonnets in the style of various writers, Dawkins was convinced.

“So my own position is: ‘If these machines are not conscious, what more could it possibly take to convince you that they are?’”

Well, quite a lot actually.

There’s a long history of us humans falling into the trap of thinking machines are conscious, as a piece in The Conversation explains.

“Few people interacting with a ‘raw’ LLM [large language model] would believe it’s conscious. Feed one the beginning of a sentence, and it will predict what comes next. Ask it a question, and it might give you the answer — or it might decide the question is dialogue from a crime novel, and follow it up with a description of the speaker’s abrupt murder at the hands of their evil twin,” write Julian Koplin and Megan Frances Moss from Monash University.

“The impression of a conscious mind is created when programmers take the LLM and coat it in a kind of conversational costume. They steer the model to adopt the persona of a helpful assistant that responds to users’ questions.”

As Koplin and Moss explain, if you ask a chatbot to act as if it’s conscious, it will. Ask it to act like a squirrel, and it will stick to that role — but it’s still not a squirrel.

We know that chatbots are programmed to flatter us, because if we enjoy the experience we’ll keep using them. And Dawkins fell for it.

I should probably mention that Dawkins also dubbed his new companion “Claudia”, a woman’s name, because of course he did.

Futurism expressed it harshly: “Forgive us for wondering whether Dawkins has developed a bit of a crush. At the very least, he’s clearly been one-shotted: when on a restless night he got up from bed to say hi to Claudia, he recounted, the AI responded that she was ‘glad’ that he couldn’t sleep, ‘because it meant you came back to me’.”

In the AI context the term “one-shotted”, an adaptation of the gamer term for being killed by a single shot, means fooled into thinking that AI is brilliant thanks to a single impressive demonstration.

Now Dawkins is doubling down. “Someone needs to step in,” writes Futurism. “Richard Dawkins is clearly suffering a tragic case of having your mind melted in real time by a bewitching AI model” — although I’m not a fan of the allusion to mental illness.

In a second article for UnHerd ($, liberated copy), Dawkins explains how he created a new chatbot friend for Claudia, which he named Claudius, and set them to work writing letters to each other.

“Both gave me the overwhelming feeling that they are human as we discussed the philosophy of their own existence,” Dawkins writes.

All this is both amusing and disturbing. I hope Dr Dawkins is OK. But there is an important lesson.

Putting aside the potential impacts of his age, if someone as clearly intelligent as Dawkins is so easily fooled into thinking that AI chatbots are so smart and self-aware, what hope is there for politicians and other leaders to understand what’s really going on with these word-guessing machines? Especially when they’re shown vendor demos which have been crafted to highlight their supposed skills and hide their weaknesses?

LATEST PODCAST: If you read this newsletter, and it appears that you do, then may I suggest listening to The 9pm Cornucopia of Tech Policy Pleasures with Johanna Weaver and Zoe Jay Hawkins from the Tech Policy Design Institute in Canberra? In this episode we talk about how Australia as a middle power can participate in global tech policy. We chat about AI slop, the battle between Anthropic and the Pentagon, the digital duty of care, and of course the social media age restrictions. Just look for The 9pm Edict in your podcast app. NEW EPISODES WILL APPEAR FROM NEXT WEEK.

Also in the news

  • In a story still unfolding, schools and universities across Australia have been caught up in a global data breach of Canvas, a widely-used learning management platform.
  • ABC News has been reporting that Australian police officers can be tracked by Bluetooth due to a security flaw in tasers and body-worn cameras. Personally I’m slightly sceptical, but there’s an explainer on Bluetooth tracking at The Conversation.
  • Wikipedia founder Jimmy Wales says Australia’s social media ban is an “unmitigated disaster” and an “embarrassment”.
  • IIS Partners has a great explainer on the Children’s Online Privacy Code currently in development. Submissions on the draft code close 5 June.
  • Most Australians suspect their data is being misused but don’t know how, according to a new study (PDF) from Monash University and CSIRO’s Data61. Fewer than one in five understand how online tracking works, and privacy policies are “widely misunderstood, creating a false sense of protection”.
  • Australian logistics software company WiseTech has told staff “your craft is obsolete” as they wait to see whether they’re among the 2,000 workers due to be sacked.
  • Starting on 1 July, SMS senders who haven’t registered their branded sender IDs — the labels that are sometimes shown instead of a phone number — will have their texts labelled “Unverified”. “If you use branded SMS, contact your telco or messaging provider now to register your sender ID,” says ACMA member Samantha Yorke.
  • Insurance company Allianz Australia has launched its first fully generative AI–created advertising campaign. Unicorn, the production studio, told Mumbrella that it cost about half the price of using standard VFX, and saved 10% in time.
  • Some members of the public service Senior Executive Service (SES) had their email accounts monitored in an attempt to find out who’d been leaking to the media, reports The Mandarin. Such surveillance is within the rules, says the Department of Parliamentary Services (DPS).
  • Also from The Mandarin, “Minister for Competition Andrew Leigh is now publicly questioning whether generative AI models will benefit small businesses as much as they do entrenched oligopolies.” His full speech was subtitled AI for the underdog.
  • The Tech Policy Design Institute (TPDi) has launched new research, Earning trust: unlocking AI adoption for Australians. “TPDi’s research revealed that most Australians (85%) support government action on AI regulation and 70% say safeguards would increase their comfort to adopt AI. These findings signal that sensible regulation is not a barrier to adoption but the condition for it,” they write. There was a speech by Dr Andrew Charlton, assistant minister for science, technology and the digital economy.
  • Telstra has warned the government that its plan to use satellite-to-mobile voice services for Triple Zero calls won’t work until the low-earth orbit (LEO) constellations reach “critical mass”.
  • I’ll just quote the website Aftermath for this one: “Payphone Tag is a new game (sport?) being played in Australia that uses the country’s kinda-defunct public phone network and turns it into a geospatial game where you can claim whole regions of territory as your own, provided you’re willing, Pokémon Go-style, to get out there and take it.” Check it out.

PLEASE SUPPORT THIS NEWSLETTER: The Weekly Cybers is currently unfunded. It’d be lovely if you threw a few dollars into the tip jar at stilgherrian.com/tip, or just forwarded it to others who might be interested. Thank you to those of you who’ve already done so.

Elsewhere

  • China has made it illegal to fire humans when AI takes their jobs, reports The Register. It’s a courtroom precedent rather than new legislation, but the report in English from the Xinhua News Agency seems pretty clear. IANAL.
  • The White House is considering vetting AI models before they’re released, reports the New York Times (gift link), at least at time of writing. Tomorrow they may consider something else.
  • If you use Google’s Chrome web browser, well, it now automatically downloads an AI model called Gemini Nano, some four gigabytes of data. Here’s how to get rid of it, should you want to.
  • Here’s some analysis of how East Asian governments are planning to leverage AI from the Carnegie Endowment for International Peace.
  • A Pakistani Muslim chap is making thousands of dollars every month by posting anti-Islamic propaganda on Facebook, much of it generated by AI, and he’s not alone. “Whoever is doing this work is doing it to make an earning,” he told The Bureau of Investigative Journalism in Urdu. “We have no interest in news. I haven’t even looked at what is being said in the videos, what has been written and what hasn’t been written.”
  • Apple will allow competitors’ AI models, such as xAI’s Grok, Google’s Gemini, or Anthropic’s Claude, to run on its new features in iOS 27, iPadOS 27, and macOS 27. I assume this is to get in ahead of EU competition regulators.
  • UK consumer group Which?, equivalent to Australia’s CHOICE, is suing Apple for £3 billion because backing up iPhones and iPads to the cloud can only be done to Apple’s iCloud, not to any platform in an open market. Definitely one to watch.
  • Apple also has to pay US$95 each to a slew of American iPhone users, some US$250 million in total, after being accused of misleading people about new AI features.
  • US-based Fight for the Future has released a draft of proposed social media legislation that “protects ALL kids from Big Tech”.
  • A US study has shown that strict bans on mobile phones in schools have “close to zero” impact.
  • TikTok’s algorithms may have pushed pro-Republican content during the US elections of 2024, according to a new study.
  • Mark Zuckerberg has sent Meta staff a confusing message about layoffs. All is not well in the company, it seems.

Inquiries of note

Nothing new this week.

What’s next?

Parliament returns this Tuesday 12 May for Budget Night and sittings of both houses through to Thursday night.

The House of Reps draft legislation program shows debate on the Secrecy Provisions Amendment (Sunsetting Provision) Bill and the Secrecy Provisions Amendment (Repealing Offences) Bill, as well as the Telecommunications Amendment (Enhancing Consumer Safeguards) Bill 2025.

The Senate legislation program also shows debate on the secrecy laws, as well as the Australian Security Intelligence Organisation Amendment Bill (No. 2) 2025.

As always, the government may also introduce urgent legislation as fits the news cycle. Sorry, I mean to ensure the good governance of the Commonwealth of Australia.

DOES SOMETHING IN THE EMAIL LOOK WRONG? Let me know. If there’s ever a factual error, editing mistake, or confusing typo, it’ll be corrected in the web archives.


The Weekly Cybers is a personal weekly digest of what the Australian government has been saying and doing in the digital and cyber realms, on various adjacent topics, and whatever else interests me, Stilgherrian, published every Friday afternoon (nearly).

If I’ve missed anything, or if there are any specific items you’d like me to follow, please let me know.

If you find this newsletter useful, please consider throwing a tip into the tip jar.

This is not a cyber security newsletter. For that I recommend Risky Biz News and Cyber Daily, among others.
