AI Week

June 13, 2025

AI Week: Back from the dead!

Hi! It’s been a while since I sent out a newsletter. Thank you for sticking with me!

No excuses for the gap, but a couple of good reasons:

  • A new job! Last year, I went from freelance to full-time at Examine.com.
  • Too many words! I have a tendency to overwrite, and the AI Week newsletters were running around 2,000 words.

But I haven't stopped running across AI news that I want to share. So I'm relaunching AI Week with two goals: fewer stories, and fewer words about each story.

In this week's AI week:

  • Tower of Hanoi topples LLM reasoning
  • Wikipedia editors reject AI summaries
  • Hands off the mouse
  • My personal experience with AI obituary scams

When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype | Gary Marcus | The Guardian

The tech world is reeling from a paper that shows the powers of a new generation of AI have been wildly oversold, says cognitive scientist Gary Marcus

This story is about a paper from Apple showing that "reasoning" LLMs are terrible at solving the Tower of Hanoi disk-stacking puzzle, which you can try for yourself here, with 8 disks and up.

What Apple found was that leading generative models could barely handle seven disks, scoring under 80% accuracy, and pretty much couldn't solve eight-disk scenarios at all. It is truly embarrassing that LLMs cannot reliably solve Hanoi.

Generative AI models are pretty good at the puzzle with fewer disks, but can't generalize to more disks even though the solution procedure is exactly the same. Essentially, the models are pattern-matching on puzzles that appeared in their training data, but can't generalize beyond that data.
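To see why that's damning: the standard recursive algorithm for Tower of Hanoi is identical no matter how many disks you add. Here's a minimal sketch in Python (my own illustration, not code from the Apple paper):

```python
def hanoi(n, src="A", dst="C", aux="B", moves=None):
    """Return the optimal move list for n disks (always 2**n - 1 moves)."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, src, aux, dst, moves)   # park the top n-1 disks on the spare peg
    moves.append((src, dst))             # move the largest disk to the target peg
    hanoi(n - 1, aux, dst, src, moves)   # re-stack the n-1 disks on top of it
    return moves

print(len(hanoi(7)))  # 127 moves
print(len(hanoi(8)))  # 255 moves
```

Going from seven disks to eight just adds one more level of the same recursion, which is exactly the generalization the models fail to make.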

There's a good take from Ars Technica on this:

The Tower of Hanoi failures are compelling evidence of current limitations, but they don't resolve the deeper philosophical question of what reasoning actually is. And understanding these limitations doesn't diminish the genuine utility of SR models. For many real-world applications—debugging code, solving math problems, or analyzing structured data—pattern matching from vast training sets is enough to be useful.... and new approaches are already being developed to address those shortcomings.... These methods show promise, though they don't yet fully address the fundamental pattern-matching nature of current systems.

But this poses a challenge for humans trying to use generative AI, because we don't know what's in the training data. As Gary Marcus puts it in the first article,

One of the most striking findings in the new paper was that an LLM may well work in an easy test set (such as Hanoi with four discs) and seduce you into thinking it has built a proper, generalisable solution when it has not.


“Yuck”: Wikipedia pauses AI summaries after editor revolt - Ars Technica

The test grew out of a discussion at Wikimedia’s 2024 conference.

"I feel like people seriously underestimate the brand risk this sort of thing has," said one editor. "Wikipedia's brand is reliability, traceability of changes, and 'anyone can fix it.' AI is the opposite of these things."


Hollywood studios target AI image generator in copyright lawsuit - Ars Technica

Multiple-studio complaint cites AI image outputs as evidence of “bottomless pit of plagiarism.”…

Disney's lawsuit against Midjourney over infringing AI image generation joins many other lawsuits alleging copyright and/or trademark infringement in AI text and image generation.


My personal experience with AI obituary scams

Recently, a friend passed. Another friend heard the news and googled up her obituary. It was easy to find... but it wasn't actually her obituary. It was AI-generated sludge that cobbled some web-scraped facts together with some outright hallucinations into an obituary-shaped object.

The fake obit looked good, but it didn't mention her family at all, and it hallucinated the memorial service completely. And what for? These scammers weren't even trying to solicit donations. Somehow, it's more insulting that they were exploiting someone's death just to drive traffic to an AI-generated, virus-infested WordPress website full of scammy pop-ups.

I've been seeing articles about assorted AI obituary scams now for over a year (there's one from 2024 below). But it hits different when it happens to a friend.

AI Generated Fake Obituary Websites Target Grieving Users


(By the way, some of the fake-obit search results were from Facebook, so don't assume that if you see a friend's obit on FB, it's the real thing. Look for something posted by family or on their own socials instead.)
