The Weekly Cybers #93
The Australian government releases its AI plan, the big platforms will cop a “media bargaining incentive”, we continue the countdown to the social media age restrictions, and much more.
14 November 2025
Welcome
The Australian government is going all-in on AI, with every federal public servant to get AI training and access to the GovAI Chat chatbot. Hurrah!
They’re also having another go at wringing some money out of the big social media and search platforms with a “media bargaining incentive”.
And with just 26 days to go, there is of course more news relating to the social media age restrictions.
As usual there’s plenty of other news too.
I’ve also added in a few personal opinions this week. Do let me know whether you’d like more of that in the future, or not.
Platforms to cop “media bargaining incentive”
The big social media and search platforms could be hit with millions in fines if they don’t sign content deals with Australian news outlets.
Labor’s plan, first announced in December 2024, would apply to platforms with more than $250 million in Australian revenue, whether or not they carry news content.
The aim is to get some cash out of the platforms including Meta, which chose to opt out of the existing news media bargaining code put together under the Morrison government.
As The Conversation notes, the media bargaining incentive looks a lot like a digital services tax. Will Trump notice? And, I might add, calling it an “incentive” is kind of like calling a 20-year jail term an “incentive to not commit murder”.
Treasury has released a consultation paper. Submissions close 19 December.
Social media age restrictions: 26 days to go
As the magic date of 10 December approaches, there’s plenty of news as everyone tries to understand the implications.
Cam Wilson, who’s been doing some excellent work for Crikey on this topic, noticed something that Communications Minister Anika Wells said in a press conference last week.
“I have been enjoying some of the social media content by recalcitrants in the run-up to 10 December, talking online about how they are under 16, they don’t like the laws, this is how they’re going to get around it, therefore identifying themselves as someone who is under 16 and their accounts are going to need to be deactivated,” she said.
In any other context, what would we think about some unseen stranger eavesdropping on kids’ conversations to see what they’re talking about?
Social media is public, one might argue. But you try lurking in a shopping mall to listen to the kids walking past and see what happens.
eSafety Commissioner Julie Inman Grant repeated the line that the platforms already know so much about us that determining users’ ages shouldn’t be difficult. Your writer wonders why the question isn’t “Why do these platforms know so much about us and our kids, and is this really OK?”
Wilson has also outlined five ways to fix social media for teens. “It’s not a choice between a ban and doing nothing at all,” he wrote.
Plenty of platforms are ready to start deactivating kids’ accounts, Reuters reported on Wednesday in a solid update. They include “TikTok, Snapchat and Meta’s Facebook, Instagram and Threads”.
Amusingly, as the Sydney Morning Herald ($) reports, allowing 16-year-olds in may be the toughest part of the social media ban.
Government releases AI plan for public service
Finance Minister Katy Gallagher has released an AI Plan for the Australian Public Service, which sets out how the government will “harness artificial intelligence to deliver better services faster, for all Australians”. Such good words.
Every federal public servant will get access to generative AI tools, training and support, and “guidance on how to use these tools safely and responsibly”. There’ll also be new Chief AI Officers.
The plan will also:
- Expand the GovAI platform and establish an in-house GovAI Chat;
- Establish the AI Delivery and Enablement (AIDE) team to coordinate adoption and fast-track priority use cases;
- “Invest in our workforce to manage AI-driven changes in job design, skills, and mobility,” somehow; and
- Create an AI Review Committee to provide expert advice on higher-risk uses.
The government could even use AI for cabinet submissions, despite the security concerns.
The AI plan is built on three pillars, of course. Pillars are essential.
- Trust: transparency, ethics and governance
- People: capability building and engagement
- Tools: access, infrastructure and support
While the government has released the plan on a page (PDF), it’s worth looking at the full 30-page plan (PDF) for the intended 18-month timelines — four and a half months of which have already happened — and a schedule of deliverables.
National security chief and NDIA are already using AI
The Department of Home Affairs’ head of national security, Hamish Hansford, is already using AI to write speeches, as well as “personnel communications”, Crikey reported.
“This is the first time that FOI has been used to reveal how the government staff are already using the technology,” wrote Cam Wilson, having obtained Hansford’s actual prompts to Microsoft’s Copilot chatbot and the responses.
Meanwhile the Guardian reports that the National Disability Insurance Agency (NDIA) is using machine learning to help create draft plans for NDIS participants. That use began before the agency adopted Microsoft Copilot in January 2024, so it’s really “machine learning” rather than generative AI.
LATEST PODCAST: Space archaeologist Dr Alice Gorman aka Dr Space Junk, astrophysicist Rami Mandow, and I discuss the 25th anniversary of the International Space Station’s habitation, the colonisation of space, and much more in The 9pm Offworld Colonies with Dr Alice Gorman and Rami Mandow. Look for “The 9pm Edict” in your podcast app.
Also in the news
- ASIO boss Mike Burgess says Chinese hackers have probed our telecoms network and key infrastructure, but then of course they have.
- In further #ChineseSpies fears, could China shut down electric buses? As ABC News reported, “Norwegian transport operator Ruter published test results last week that showed bus-maker Yutong Group had access to buses’ control systems for software updates and diagnostics on the model they tested”. Sure. And Elon Musk can shut down your Tesla.
- The Australian Federal Police (AFP) and Monash University have joined forces to fight AI-generated crime, reports Cyber Daily. A data poisoning tool, Silverer, has been in development for 12 months and can disrupt the production of abusive deepfakes and child abuse material by skewing or corrupting the images.
- “A shocking new study released today [Thursday] by the Australian Institute of Criminology (AIC), unpacks the risks and experiences of underage adult-based platform use,” said the press release. Putting aside the tabloid “shocking”, which in your writer’s opinion is inappropriate for a government research institution, the study does show that some underage users were “exposed to risks both online and offline”, and puts some numbers on some of it.
- Mumbrella founder Tim Burrowes reviews a 7-year-old book to show how news sites are vulnerable to the algorithm. Remember BuzzFeed?
IF YOU FIND THIS NEWSLETTER HELPFUL, PLEASE SUPPORT IT: The Weekly Cybers is currently unfunded. It’d be lovely if you threw a few dollars into the tip jar at stilgherrian.com/tip. Please consider.
Elsewhere
- Meta makes billions from scam adverts, reports The Observer. An estimated 15 billion such ads are shown daily across Facebook, WhatsApp, and Instagram.
- Three of the world’s most widely used cybercriminal malware operations have been disrupted by Europol-led Operation Endgame: Rhadamanthys, VenomRAT, and the Elysium botnet.
- Court cases in the US will help answer the question of who pays when AI is wrong, the New York Times reports (gift link).
- There are more reports of chatbots allegedly encouraging suicide.
- Over the last two weeks Matt Bevan has produced a fantastic backgrounder on the AI boom and potential bust in his excellent weekly series, If You’re Listening. Watch them in order: Is the AI boom another Dot-Com bubble about to burst? and How much energy, water and money is the AI boom consuming?. If you prefer audio-only, look for If You’re Listening in your podcast app. I’ve put this under Elsewhere because it’s a global issue, but of course it’s made here in Australia for the ABC.
- Nature has published a paper which “shows how gender and age are jointly distorted throughout the internet and its mediating algorithms, thereby revealing critical challenges and opportunities in the fight against inequality”.
WHY NOT LISTEN TO OUR PODCAST ABOUT MUSIC? My good friend Snarky Platypus and I produce a music podcast, Another Untitled Music Podcast. Look for it in your podcast app.
Inquiries of note
Apart from the media bargaining incentive mentioned above, there’s nothing new.
A personal thought on “risk”
I was interested in the language used to launch the AIC study mentioned earlier, and more broadly in the policy discussions about the social media age restrictions and, indeed, when discussing online activities generally.
We keep hearing that people are “exposed to risks”. Sure, there are always risks, with everything. But are they being harmed by these risks?
I’m “exposed to risks” every time I cross the road, for example. But I mitigate those risks by looking both ways and paying attention. And as a result I’ve only been run down once — ironically when I was using a pedestrian crossing.
A risk in and of itself isn’t necessarily bad.
It’s worth noting that Australian parents are among the most risk-averse in the world, according to research published two years ago, surpassing New Zealand, Canada, and the UK.
One of the clearest illustrations of this is, amusingly, from the UK’s Daily Mail in 2007: How children lost the right to roam in four generations. It does go off into a reverie about the importance of “nature”, but that map showing how kids aren’t allowed to go anywhere anymore is telling.
See also the long-running blog Free-Range Kids.
Maybe the government needs to look at more nuanced risk mitigations. Crossing the road is “risky”, but we don’t just ban kids from crossing the road.
And maybe Australian parents need to HTFU and STFU. Or just take a chill pill.
What’s next?
Parliament returns on Monday 24 November, which is 10 days away, for what is currently scheduled to be the last sitting week of 2025.
DOES SOMETHING IN THE EMAIL LOOK WRONG? Let me know. If there’s ever a factual error, editing mistake, or confusing typo, it’ll be corrected in the web archives.
The Weekly Cybers is a personal weekly digest of what the Australian government has been saying and doing in the digital and cyber realms, on various adjacent topics, and whatever else interests me, Stilgherrian, published every Friday afternoon (nearly).
If I’ve missed anything, or if there are any specific items you’d like me to follow, please let me know.
If you find this newsletter useful, please consider throwing a tip into the tip jar.
This is not a cyber security newsletter. For that I recommend Risky Biz News and Cyber Daily, among others.