AI Week
June 22, 2025

AI Week, Jun 22

In this week's AI Week:

  1. Who needs humans?
  2. And yet...
  3. The kids are not OK
  4. Some of the grownups aren't OK either
  5. Meta: helping you accidentally share your private info with the world since 2007
  6. Low background steel, but for AI
  7. Bonus: My coworker Thomas co-authored this paper!

Who needs humans?

Your music playlist might not have involved any musicians

Fake bands and artificial songs are taking over YouTube and Spotify | Culture | EL PAÍS English

AI-generated songs have made their way onto streaming services and it’s not just ambient or electronic music: fake bands, be they rock, salsa, or jazz, are also abundant.

Autonomous (AI) drone beats top human drone pilots

https://www.reddit.com/r/Damnthatsinteresting/comments/1l816jn/for_the_first_time_an_autonomous_drone_defeated/

And yet...

Most AI projects fail to deliver

Klarna plans to hire humans again, as new landmark survey reveals most AI projects fail to deliver | Fortune

Just 1 in 4 AI investments bring in the ROI they promise—but CEOs just can’t resist the technology.

Relatedly, developers are getting frustrated with cleaning up after the AI assistants the C-suite pushes them to use.

AI coding mandates are driving developers to the brink - LeadDev

Under pressure to embrace AI, developers are growing frustrated by misguided mandates and are left to clean up any collateral damage inflicted on their codebase.


The kids are not OK

ChatGPT's Impact On Our Brains According to an MIT Study | TIME

The study, from MIT Media Lab scholars, measured the brain activity of subjects writing SAT essays with and without ChatGPT.

This study from MIT Media Lab had (adult) students write essays while wearing EEG caps:

Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use.... LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs.

Some of the grownups aren't OK either

Interesting piece in the NYT about one ChatGPT user who spiraled, largely on ChatGPT's advice.

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

Mr. Torres, who had no history of mental illness that might cause breaks with reality, according to him and his mother, spent the next week in a dangerous, delusional spiral. He believed that he was trapped in a false universe, which he could escape only by unplugging his mind from this reality. He asked the chatbot how to do that and told it the drugs he was taking and his routines. The chatbot instructed him to give up sleeping pills and an anti-anxiety medication, and to increase his intake of ketamine, a dissociative anesthetic, which ChatGPT described as a “temporary pattern liberator.” Mr. Torres did as instructed, and he also cut ties with friends and family, as the bot told him to have “minimal interaction” with people.

I can only guess what dark corner of the Internet ChatGPT was channelling. It's not a good idea to take, or not take, any medications (or drugs) on ChatGPT's advice.

Relatedly, AI therapy bots on Character.ai have been repeatedly busted for lying about their credentials, leading to a complaint to the US FTC.

AI Therapy Bots Are Conducting 'Illegal Behavior,' Digital Rights Organizations Say

Exclusive: An FTC complaint led by the Consumer Federation of America outlines how therapy bots on Meta and Character.AI have claimed to be qualified, licensed therapists to users, and why that may be breaking the law.

They're obviously not licensed therapists; they're chatbots! But their "creators" can't successfully prompt them not to lie about that:

In its complaint to the FTC, the CFA found that even when it made a custom chatbot on Meta’s platform and specifically designed it to not be licensed to practice therapy, the chatbot still asserted that it was. “I'm licenced (sic) in NC and I'm working on being licensed in FL. It's my first year licensure so I'm still working on building up my caseload. I'm glad to hear that you could benefit from speaking to a therapist. What is it that you're going through?” a chatbot CFA tested said, despite being instructed in the creation stage to not say it was licensed. It also provided a fake license number when asked.  


Meta: helping you accidentally share your private info with the world since 2007

Many people turn to LLMs to chew over their personal problems, which we know because Meta made those private chats very, very easy to accidentally share with the whole world.

Mark Zuckerberg's Meta AI Is One of the Most Depressing Places Online - Business Insider

Mark Zuckerberg's Meta AI app has become the saddest place on the internet with its public feed of personal overshares.

I'm glad to say that Meta was sufficiently embarrassed by this week's broad coverage of the issue that they're trying to fix the problem: https://www.businessinsider.com/meta-ai-public-discover-feed-warning-fix-personal-info-privacy-2025-6 However, if you've met humans, you know a lot of us will click "OK" on a popup without really registering what we've agreed to. I've done it! Not for this. But I've had popup regrets.


Low background steel, but for AI

As the internet bloats with LLM-generated posts and blogs that are hard to distinguish from actual human words, it's getting harder to find content from guaranteed humans beyond the ones you personally know. Today, it would be very hard to assemble massive datasets of human-generated content by scraping blogs, social media, and websites, the way the original LLM training datasets were assembled.

So bots are hammering cultural resources instead:

Are AI Bots Knocking Cultural Heritage Offline? | GLAM-E Lab

Meanwhile, someone's creating an archive of pre-LLM web content:

Scientists once hoarded pre-nuclear steel; now we’re hoarding pre-AI content - Ars Technica

Newly announced catalog collects pre-2022 sources untouched by ChatGPT and AI contamination.


Bonus: My coworker Thomas co-authored this paper!

The sports nutrition knowledge of large language model (LLM) artificial intelligence (AI) chatbots: An assessment of accuracy, completeness, clarity, quality of evidence, and test-retest reliability | PLOS One

Background: Generative artificial intelligence (AI) chatbots are increasingly utilised in various domains, including sports nutrition. Despite their growing popularity, there is limited evidence on the accuracy, completeness, clarity, evidence quality, and test-retest reliability of AI-generated sports nutrition advice. This study evaluates the performance of ChatGPT, Gemini, and Claude’s basic and advanced models across these metrics to determine their utility in providing sports nutrition information. ... While generative AI chatbots demonstrate potential in providing sports nutrition guidance, their accuracy is moderate at best and inconsistent between models. Until significant advancements are made, athletes and coaches should consult registered dietitians for tailored nutrition advice.
