The Weekly Cybers #84
The social media age assurance tech trial delivered mixed results at best, robodebt victims to get another $475 million in compensation, FoI restrictions to be tightened, and much more.
5 September 2025
Welcome to a hectic week!
There’s quite a few important stories this week, so I’ll only be able to cover each one briefly — and link out to more detailed articles on each one.
Oh wait. That’s what I do every week.
Anyway, we finally have the full report from the Age Assurance Technology Trial, and it is not reassuring. Robodebt victims will likely share another $475 million in compensation. And the government is tightening freedom of information laws — on us, not them.
They’re the big stories. As always there’s much more.
Age assurance tech trial’s mixed messaging
The technology trial for Australia’s social media age restrictions and other online compliance uses released its final report this week, all 1,200 pages of it. The headlines say one thing, but the details say another.
“Age assurance can be done in Australia,” it enthuses.
“Our analysis of age assurance systems in the context of Australia demonstrates how they can be private, robust and effective.”
But as we noted two months ago, that really isn’t the case. We were told to wait for the report. Well, we’ve waited, and nothing’s changed.
As Josh Taylor reports at the Guardian, false negatives are “inevitable” with age estimation systems, which means a fall-back to age verification systems — presumably using an ID.
“The report found there is a two- to three-year ‘buffer zone’ for facial age estimation where it is likely that errors in correctly estimating a person’s age will increase, for example if a user is 17 and it estimates them to be under 16, or a 14-year-old told they are over 16,” he wrote.
“For people aged 16 and 17, false rejection rates remained ‘above acceptable levels’ for facial age estimation, the report found, at 8.5% and 2.6% respectively.”
The technology is least accurate at precisely the ages at which it needs to work: confirming that someone is 16 or older so they can open a social media account, or is 18 or older and can access “adult content”. Or, of more concern to parents, that someone is under 16 or under 18 and should be prevented from doing so.
To put that into context, this means it’s almost inevitable that in every senior high school classroom there will be one to three students judged wrongly. If they’re estimated to be older than they are, well, there’s access to the banned content.
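As a rough back-of-envelope check on that claim (the class size of 25 is an assumed figure for illustration; the false rejection rates are the ones from the report):

```python
# Rough check of the "one to three students per classroom" claim.
# A class size of 25 is an assumption for illustration; the 8.5% and
# 2.6% false rejection rates for 16- and 17-year-olds are from the
# trial report.
class_size = 25
false_rejection = {"16-year-olds": 0.085, "17-year-olds": 0.026}

for group, rate in false_rejection.items():
    expected = class_size * rate
    print(f"{group}: ~{expected:.1f} wrongly rejected per class of {class_size}")
```

A class of 16-year-olds comes out at roughly two students wrongly rejected, which is where the one-to-three range comes from once the age mix of a senior classroom varies.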
The thing is, the government has said that the platforms cannot require someone to provide ID, and they must provide alternative systems. If those systems have a high error rate, forcing users to provide ID anyway, are they really an alternative?
Remember, all adults will have to demonstrate their adulthood
As we keep having to point out, it’s not that the systems have to prove someone is underage to block them. It’s that adults have to show they’re of the required age to access the services they’re allowed to access.
Apart from age verification using a formal ID, and age assurance using some form of guessing — sorry, estimation — there’s also age inference.
“Age inference draws reasonable conclusions about age by analysing facts such as school enrolment, financial transactions, content barring settings, service usage or participation in age-specific activities,” the report said.
If someone has had a social media account for 15 years then they’re probably over 16. Fine. But if the system is cross-checking an email address with the subscribers to a whisky mailing list, sure, they’re probably over 18, but that implies a lot more data matching than is strictly needed — although the platforms are already doing this for their advert targeting.
“Age assurance cannot fail...”
“[The tech trial report] is surprisingly difficult to read, using many words to say very little, interspersed with advertorials — sorry, case studies — for various commercial products that the report never quite manages to cover in detail,” writes friend of the cybers Justin Warren in his newsletter The Crux.
“It might be possible to do a thing called age assurance, whatever that is, sometimes, under the right sorts of conditions. Mostly not, but that doesn’t matter, because the report exists to post hoc justify a decision the Australian government has already made. The policy will be implemented and it will be a success, regardless of what any evidence might actually show. The failures, whatever they end up being, will be acceptable collateral damage. The successes, no matter how few or far between, will absolve everyone for the failures,” he wrote.
“Age assurance cannot fail, it can only be failed.”
Warren spent some time on Mastodon making more critical comments.
Meanwhile, ABC News headlined the technology as laden with risks. And there’s lots more expert reaction at Scimex.
Robodebt victims to share $475 million compo
The government has agreed to pay $475 million in compensation to the victims of the Coalition government’s robodebt scandal, in which more than 450,000 Australians were harassed about alleged debts which had been calculated falsely and probably unlawfully.
That’s only a little over $1000 per person on average, although members of a class action will have the option of individualised assessments. So some get more, some less.
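As a quick sanity check on that average (figures as reported; individual payouts will differ):

```python
# Quick sanity check: $475 million shared among roughly 450,000 people.
total_compensation = 475_000_000
affected_people = 450_000  # "more than 450,000 Australians"

average = total_compensation / affected_people
print(f"~${average:,.0f} per person on average")
```

That lands just over $1,000 a head, matching the “little over $1000 per person” figure, before any individualised assessments shift money around.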
As Professor Peter Whiteford writes at the Guardian, “Despite more than a decade passing since robodebt was first devised, the National Anti-Corruption Commission is still considering investigating six individuals referred to it by the royal commission, reversing an earlier decision not to investigate”.
Meanwhile, as The Saturday Paper reported a year ago, only five people have been given access to the sealed section of the robodebt royal commission report which names those six people. Will we ever see this?
The final compensation figure is still subject to Federal Court approval.
Who’s to blame when AI makes mistakes?
This compensation payout, the biggest in Australian history, is significant in two ways, according to Associate Professor Michael Duffy from the Monash Business School.
“Firstly, it sends a message to those dealing with government, senior public servants, and others that undue reliance on an algorithm or AI may be problematic for all concerned if things go wrong,” Duffy said in a press release.
“Secondly, the increasing pleading in such cases of allegations of public office misfeasance mean that senior public servants must be careful, that in trying to please the executive government and politicians, they don't expose themselves to personal liability.”
Duffy wrote more about this last year in The rise of the “machine defendant”.
Government tightens freedom of information laws
Attorney-General Michelle Rowland says our FoI laws are “broken” and waste too much public service time on “frivolous” or spam requests.
The Freedom of Information Amendment Bill 2025 is lengthy, but here’s the guts of it.
There would be a blanket ban on any request that would take more than 40 hours to collate. There’d be a fee for any request that isn’t about the applicant’s personal information. And anonymous requests would be banned.
On that last point, the prime minister said “there's no way to determine whether a foreign agent or actor is putting in requests about information that are [sic] sensitive”. But surely this is a red herring? FoI requests can already be blocked on national security grounds.
Definition of “cabinet documents” to be expanded
“The existing exemption for documents related to cabinet decisions would be expanded and may include anything that has been brought to cabinet's attention or might inform something that is shown to cabinet in future,” ABC News reported.
“And any document in which ministers, public servants or other officials record their thoughts about policy could be deemed ‘deliberative material’ and also exempted, even ‘blue-sky thinking’ that might relate to some future policy deliberation.”
The Mandarin reports that the bill has raised alarm bells for transparency advocates, who argue it’s a tax on scrutiny, not a cure for inefficiency.
And at The Conversation, “Yes, freedom of information laws need updating, but not like the government is proposing.”
The bill has now been spun out for a Senate committee review. A closing date for submissions has not yet been set, but the committee is due to report back on 3 December.
Google dodges antitrust breakup, but it’s still operating an illegal monopoly
A US federal court has ruled that Google does not have to sell Chrome, the company’s web browser, avoiding the biggest penalties.
However, the company was found to have created an illegal monopoly in the search engine market.
As Information Age reports, “Google has, however, been ordered to provide search engine data, including search index and user interaction information, [to] its rivals and banned from entering exclusive deals with device makers.”
The Cato Institute has written about what the ruling means for antitrust, consumers, and innovation. And at The Conversation, there’s some analysis of what happens next.
IF YOU’VE FOUND THIS NEWSLETTER HELPFUL, PLEASE SUPPORT IT: The Weekly Cybers is currently unfunded. It’d be lovely if you threw a few dollars into the tip jar at stilgherrian.com/tip. Or you might like to support my current crowdfunding campaign, The 9pm Spring Series 2025.
Also in the news
- The Australian government has announced plans to ban “nudify” tools. How will that work?
- The Australian Academy of Science has predicted a science capability gap unless we make some changes, which they list in their full report. As just one example, “Jobs in artificial intelligence (AI) are expected to surge, yet only one in four Year 12 students is studying mathematics — the fundamental science discipline.”
- In an Australian first, a Victorian solicitor has been stripped of his ability to practise as a principal lawyer after being caught using AI-generated fake citations.
- The Therapeutic Goods Administration (TGA) says it’s “stepping up its efforts” to regulate digital scribes used by medicos, including those using AI.
- “AI-generated ‘boring history’ videos are flooding YouTube and drowning out real history,” reports 404 Media.
- Via The Conversation, “How we tricked AI chatbots into creating misinformation, despite ‘safety’ measures.”
- The Independent National Security Legislation Monitor (INSLM) says there are inadequate safeguards for the special cybercrime powers used by the Australian Federal Police and Australian Criminal Intelligence Commission. The full report makes 21 recommendations, so I guess we’ll see whether the government pays attention to them.
- “Households with a fibre to the node (FTTN) NBN connection are more likely to experience underperforming download speeds than any other fixed-line connection type,” reports the Australian Competition and Consumer Commission (ACCC). This is not about FTTN being slower than other methods, or at least not as such. It’s about it being worse than the others in delivering the advertised speed that customers are paying for.
- From iTnews, Atlassian will buy The Browser Company, a New York-based startup, for US$610 million (A$936 million) in cash, “moving into the fast‑crowding market for AI‑driven browsers”.
- Optus will sell around 340 mobile tower and rooftop sites to digital infrastructure operator Waveconn and rent them back.
- The superannuation industry will be running a sector-wide cyber exercise. Good luck!
- Finally, some legislation we’ve previously mentioned was passed: the Treasury Laws Amendment (Payments System Modernisation) Bill 2025, and the Australian Security Intelligence Organisation Amendment Bill (No. 1) 2025.
NEW PODCAST: Journalist Erin Cook and I discuss current events in The 9pm Usual Chaos in Indonesia and Thailand with Erin Cook. Look for “The 9pm Edict” in your podcast app of choice.
Elsewhere
- China says it wants to integrate AI into 90% of its economy by 2030, but this piece from the Carnegie Endowment for International Peace argues that it won’t work.
- Taco Bell is rethinking its AI drive-through after a man ordered 18,000 waters, and other disasters. “One clip on Instagram, which has been viewed over 21.5 million times, shows a man ordering ‘a large Mountain Dew’ and the AI voice continually replying ‘and what will you drink with that?’.”
- OpenAI’s ChatGPT will be implementing parental controls. As CNN reports, “The controls will include the option for parents to link their account with their teen’s account, manage how ChatGPT responds to teen users, disable features like memory and chat history and receive notifications when the system detects ‘a moment of acute distress’ during use.”
- Adult (cough!) websites that don’t comply with the UK’s age check laws have seen a surge of traffic.
Inquiries of note
Apart from the Freedom of Information inquiry mentioned above:
- The Parliamentary Joint Committee on Intelligence and Security (PJCIS) will review the Telecommunications and Other Legislation Amendment Bill 2025. Submissions close 22 September.
- The Parliamentary Joint Committee on Law Enforcement (PJCLE) is looking at Combatting Crime as a Service, which includes “these and other technology-driven advancements on criminal methodologies and activities, including the use of cryptocurrencies”. Submissions close 13 October.
What’s next?
Parliament is now on a break until 7 October, when the House of Representatives returns and Senate Estimates hearings are held.
DOES SOMETHING IN THE EMAIL LOOK WRONG? Let me know. If there’s ever a factual error, editing mistake, or confusing typo, it’ll be corrected in the web archives.
The Weekly Cybers is a personal look at what the Australian government has been saying and doing in the digital and cyber realms, on various adjacent topics, and whatever else interests me, Stilgherrian, published every Friday afternoon (nearly).
If I’ve missed anything, or if there’s any specific items you’d like me to follow, please let me know.
If you find this newsletter useful, please consider throwing a tip into the tip jar.
This is not specifically a cyber security newsletter. For that, I recommend Risky Biz News and Cyber Daily, among others.