The Weekly Cybers #104
US court to decide whether social media is addictive, eSafety puts Roblox on notice over child safety, Elon Musk’s X found to have boosted far-right politics in Germany, and much more.
13 February 2026
Welcome
Social media is in the news again this week, but this time it's not Australia's teen ban. Instead, there's a landmark test case in the US over whether social media is addictive, and deliberately so.
There’s evidence to show that Elon Musk’s X deliberately boosted far-right messages ahead of Germany’s elections.
And inevitably, there’s far too much news about AI.
US court to answer: Is social media addictive?
In Los Angeles, a civil case has been launched against Meta and YouTube over whether the companies deliberately designed their platforms to be addictive to children.
The plaintiff is a 20-year-old California woman, identified as Kaley GM, who claims she suffered severe mental harm from social media addiction after starting to use YouTube at age six, Instagram at age 11, and Snapchat and TikTok a couple of years later.
The key allegation is that Meta knew its products were addictive but publicly downplayed the harms, reports TIME.
“The addictive nature of the company’s products wasn’t a secret internally,” they report. “‘Oh my gosh yall IG is a drug,’ one of the company’s user-experience researchers allegedly wrote to a colleague. ‘We’re basically pushers’.”
Meta employees had even said that the company’s tactics reminded them of tobacco companies.
According to the New York Times, Kaley’s lawyer Mark Lanier said: “They didn’t just build apps, they built traps... They didn’t want users, they wanted addicts” (gift link).
Other allegations, to be explored in this case or subsequent actions, include that Meta “lied to congress” about its knowledge of the harms, knew Instagram was letting adult strangers connect with teenagers, aggressively targeted young users, and set a high threshold for “sex trafficking” content while offering no way to report child sexual content.
Instagram pushed back. Obviously.
“Protecting minors over the long run is even good for the business and for profit,” said Instagram head Adam Mosseri. “I think it’s important to differentiate between clinical addiction and problematic use.”
Meta also tried to blame Kaley’s mental health problems on childhood experiences including abuse, abandonment, and other traumas.
Tech Policy Press has detailed what to look out for in this trial and, more broadly, what it all means for the tech industry.
Meta’s CEO Mark Zuckerberg is scheduled to appear on 18 February, and YouTube’s Neil Mohan the next day.
TikTok and Snapchat have already agreed to settle equivalent lawsuits made against them. The terms of the agreements have not been disclosed.
Meanwhile, a European Commission probe has determined that some TikTok features are addictive, including infinite scroll, autoplay, push notifications, and the personalised recommendation algorithm.
eSafety puts Roblox on notice over child safety
“eSafety notified Roblox last week of its intention to directly test the platform’s implementation and effectiveness of the nine safety commitments it made to Australia’s online safety regulator last year, amid growing concerns, including from the Australian Government, about online child grooming and sexual exploitation,” the agency wrote on Tuesday.
Roblox made these commitments in September 2025, and at the end of the year said the new safety measures were in place. But are they?
“We remain highly concerned by ongoing reports regarding the exploitation of children on the Roblox service, and exposure to harmful material,” said commissioner Julie Inman Grant.
HOW’S THAT AI BUBBLE GOING? This week’s podcast episode is The 9pm S-Bend of Technology with David Gerard. He’s the editor of the Pivot to AI newsletter, video essay, and podcast. Look for The 9pm Edict in your podcast app.
Also in the news
- Most Australian government agencies are failing to fully report cyber incidents to the Australian Signals Directorate (ASD), according to The Commonwealth Cyber Security Posture in 2025.
- “A pair of Chinese nationals have been charged with reckless foreign interference after allegedly covertly collecting information about a Buddhist association in Canberra,” reports Cyber Daily.
- FiiG Securities has been fined $2.5 million for cybersecurity failures which led to 385 gigabytes of confidential information being stolen. This is the first time an Australian Financial Services (AFS) licensee has been so penalised.
- Following the Administrative Review Tribunal (ART) decision to allow Bunnings to use face recognition tech, as we reported last week, Cyber Daily has some commentary.
- REA Group has launched Australia’s first ChatGPT property search app. As if all the AI- and human-altered property photos weren’t bad enough.
- Commercial radio stations will be required to disclose the use of AI voices under the new Commercial Radio Code of Practice (PDF) coming into force on 1 July. There are also new rules about “special care” being needed during school drop-off and pickup times, 8–9am and 3–4pm on school days.
- An evidence review on youth gambling in Australia, produced by the OurFutures Institute at the University of Sydney, is reportedly riddled with AI-generated errors. The institute says the errors can be blamed on a reference editing tool, although that wouldn’t fully explain the kinds of errors found.
- Telstra is outsourcing up to 442 tech jobs to Infosys. More jobs will supposedly be lost to AI because Accenture is magic. Or something.
- The Conversation has some analysis of the laws around non-consensual AI porn.
- Also from The Conversation, a discussion of the law relating to creating digital avatars after your death.
IF YOU’VE BEEN FINDING THIS NEWSLETTER HELPFUL, PLEASE SUPPORT IT: The Weekly Cybers is currently unfunded. It’d be lovely if you threw a few dollars into the tip jar at stilgherrian.com/tip. And thanks to those of you who’ve already done so. Much appreciated.
Elsewhere
- Elon Musk’s X boosted the voice of far-right party Alternative für Deutschland (AfD) in the lead-up to Germany’s 2025 elections, according to research from the University of Massachusetts Amherst.
- Google’s parent Alphabet is issuing a 100-year bond in British pounds as part of a US$20 billion bond sale, reports the Financial Times. Needless to say, bonds that don’t mature for a century are rare.
- None of this surprises me: “Press Gazette today names more than 50 apparently fake experts who have offered commentary to the British press in recent years and featured more than 1,000 times in newspapers, magazines and online titles.”
- “AI is making us work harder, not better,” reports Information Age, because we spend too much time checking its output.
- Governments are using AI to draft legislation. “What could possibly go wrong?” asks Tech Policy Press.
- Some US police forces are buying GeoSpy, an AI tool which can geolocate photos in seconds using clues such as architecture, soil, vegetation, and the spatial relationship of objects in the image. “Our AI can pinpoint locations in supported cities within 1-5 meters accuracy,” GeoSpy claims. Seems ambitious, but who knows?
Inquiries of note
- Is this related to digital policy? The ACCC is seeking views on Australia Post’s proposed stamp price increase. Submissions close 13 March.
What’s next?
Parliament is currently on a break and will return on Monday 2 March.
DOES SOMETHING IN THE EMAIL LOOK WRONG? Let me know. If there’s ever a factual error, editing mistake, or confusing typo, it’ll be corrected in the web archives.