The Weekly Cybers #56
ASIO warns of young people’s vulnerability to radicalisation, the eSafety Commissioner discovers that most young kids are already using social media, and AI continues to be rubbish when tested.
Welcome
The two biggest stories this week are about online safety, the first being that warning from ASIO. Young people are now at a “vulnerable age” as they enter a world of AI-generated misinformation.
Meanwhile the eSafety Commissioner has found that 80% of children aged 8–12 are using social media platforms, all under the minimum age the government has just legislated.
Read on, because there’s also the usual wide range of smaller items.
ASIO: Young people at “vulnerable age” as AI disinformation increases
People who’ve spent all their formative years online are now entering “a vulnerable age for radicalisation”, according to ASIO director-general Mike Burgess.
“For some, their sense of normality, identity, and community will be more influenced by the online world than the real world,” he said in his annual threat assessment speech on Wednesday.
“If technology continues its current trajectory, it will be easier to find extremist material, and AI-fuelled algorithms will make it easier for extremist material to find vulnerable adolescent minds that are searching for meaning and connection.”
Burgess notes that when extremism is addressed early, “vulnerable children can be diverted from the radicalisation path”, particularly when parents play an active role.
While much of the speech is beyond the scope of this humble newsletter — you can read summaries at the Guardian and ABC News and iTnews, for example, and analysis at The Conversation — Burgess also noted the impact of AI on other areas of ASIO’s work.
“Espionage and foreign interference will be enabled by advances in technology, particularly artificial intelligence and deeper online pools of personal data vulnerable to collection, exploitation, and analysis by foreign intelligence services. Artificial intelligence will enable disinformation and deep fakes that can promote false narratives, undermine factual information, and erode trust in institutions.”
“A hyper-connected world will allow political tensions or conflicts overseas to resonate quickly in Australia, spread by social media and online echo chambers, inflamed by mis- or disinformation,” he said.
eSafety: Underage social media use “widespread”
“Australian children are easily circumventing inadequate and poorly enforced minimum age rules employed by well-known social media services, with most only asking kids to self-declare their age at sign-up,” according to a new report from the eSafety Commissioner.
Some 80% of children aged 8–12 used one or more social media services in 2024, the report found. The most popular were YouTube (68% of those surveyed), TikTok (31%), and Snapchat (19%).
54% of those who had used social media used their parent’s or carer’s account(s). 36% of those who had used social media had their own account, with 77% of those saying they had help to set it up, mostly from parents or carers.
One might therefore wonder whether the parents’ concerns about social media use are as serious as the government makes out, given that a significant proportion of parents seem happy to create accounts for their kids.
That said, the government keeps saying that YouTube, the most popular platform for under-13s, will not be covered by the age restrictions — although as we’ve reported before, the legislation does not specify who’s in and who’s out. Such matters are up to the minister of the day, based on the vibes.
Unrestricted chatbots “threaten child development”
The eSafety Commissioner has also issued the first online safety advisory, warning against “AI chatbots and companions designed to simulate personal relationships”.
“Recent reports indicate some children and young people are using AI-driven chatbots for hours daily, with conversations often crossing into subjects such as sex and self-harm. Chatbots are not generally designed to have these conversations in supportive, age-appropriate and evidence-based ways, so they may say things that are harmful.”
While the advisory does list a range of potential harms — and of course the potential does exist, and there have been some tragic stories — there’s a disappointing lack of hard facts on the relative likelihood of harm occurring, which might have helped parents judge for themselves.
IF YOU’VE FOUND THIS NEWSLETTER HELPFUL, PLEASE SUPPORT IT: The Weekly Cybers is currently unfunded. It’d be lovely if you threw a few dollars into the tip jar at stilgherrian.com/tip.
Also in the news
- The National Anti-Corruption Commission (NACC) has decided that, OK sure, it will investigate the six unnamed robodebt people after an independent review into its initial decision not to.
- “The Australian Tax Office seeks real-time fraud detection technology to counter rising tax scams and unauthorised transactions,” reports The Mandarin.
- The Law Council of Australia says the newly warranted federal powers to investigate cyber-enabled crime “covertly” must be “carefully scrutinised”, reports Cyber Daily. Their comments came during public hearings held by the Independent National Security Legislation Monitor (INSLM) this week, which unfortunately I’d managed to miss, but there’s a transcript. The powers themselves, while apparently essential, are used relatively infrequently.
- Also from Cyber Daily, “The Australian Federal Police (AFP) has been appointed to lead the Virtual Global Taskforce (VGT), which fights child exploitation and child sexual abuse material, for the next three years.”
- Financial regulator AUSTRAC has recently taken action against 13 remittance and digital currency exchange providers, with more than 50 others still in its sights.
- From digital rights activist Samantha Floreani, “The tech industry has never been more powerful. How do the government’s policies stack up?”
- The Australian Institute of Criminology (AIC) has been developing a harm index for individual victims of cybercrime to measure the relative severity of each type of cybercrime. “Repeat victims who experienced multiple types of cybercrime are disproportionately impacted and should be prioritised for intervention,” they write.
- The Australian Communications Consumer Action Network (ACCAN) wants NBN Co to dump its slowest plans of 25/5Mbps, raising baseline speeds rather than focusing on its high-end fibre upgrades. But why not both?
Elsewhere
- The BBC has been testing AI, and of course it doesn’t work. “51% of all AI answers to questions about the news were judged to have significant issues of some form,” they report, among other terrible results.
- Last year researchers ran four major large language models (LLMs) through the Montreal Cognitive Assessment (MoCA) and additional tests and found that AI chatbots test as having dementia.
- The purported brain rot from spending too much time in front of screens may not be a thing.
Inquiries of note
Nothing new for us this week.
What’s next?
Parliament is currently on a break, although there are sessions of Senate Estimates in the coming week.
Both houses return for three days of sittings on Tuesday 25 March, then only the Senate returns, for two weeks starting Monday 7 April. Unless the election is called. I explained the potential timing for that last week.
DOES SOMETHING IN THE EMAIL LOOK WRONG? If there’s ever a factual error, editing mistake, or confusing typo, it’ll be corrected in the web archives.
The Weekly Cybers is a personal look at what the Australian government has been saying and doing in the digital and cyber realms, on various adjacent topics, and whatever else interests me, Stilgherrian, published every Friday afternoon (nearly).
If I’ve missed anything, or if there are any specific items you’d like me to follow, please let me know.
If you find this newsletter useful, please consider throwing a tip into the tip jar.
This is not specifically a cyber *security* newsletter. For that I recommend Risky Biz News and Cyber Daily, among others.