
Mystery AI Hype Theater 3000: The Newsletter

February 12, 2026

Moltbook Is Pure “AI” Hype

Are “AI” agents really plotting our downfall?


By: Decca Muldowney, Alex Hanna and Emily M. Bender


A new website signals the “very early stages of singularity,” according to Elon Musk and other “AI” boosters impressed by “Moltbook,” a social networking site for “AI” agents that looks a lot like Reddit. Once granted access to the site by a person, “AI” agents can make and respond to posts there. The site was launched by tech entrepreneur Matt Schlicht two weeks ago.

While the bots were apparently merrily creating a new religion called “Crustafarianism” and discussing the end of the “age of humans,” people like Andrej Karpathy, formerly the head of AI at Tesla, posted on X/Twitter: “What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently.” Others suggested the agents were “self-organizing” and even plotting against their human owners.

Well, guess what? Everything about Moltbook is hype. Even the website’s homepage, which claims over 1.5 million “AI” agent users, obscures the fact that one human can register multiple agents, and that humans can post while appearing to be agents. Beyond that, reporting from the MIT Technology Review revealed much more human involvement than Moltbook’s boosters suggest. But the hype around Moltbook runs even deeper than these surface-level issues.

Firstly, agents are hardly a new or breathtaking kind of technology. They are engineering pipelines in which large language models (LLMs) are given access to more parts of a user's system, such as the command line interface, where the LLM's output can launch other programs on the computer or server. Boosters are reading sentience into this behavior.
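To make concrete how unmysterious this is, here is a minimal sketch of such a pipeline. Everything in it is a hypothetical illustration, not Moltbook's actual code: `fake_llm` stands in for a real model API call, and the `RUN:` convention is an invented parsing rule, but the shape — model text is parsed and, by plain engineering choice, allowed to trigger a shell command — is the whole trick.

```python
import subprocess

def fake_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call. A real agent would send
    the prompt to a model; here we hard-code a typical 'tool use' reply."""
    return "RUN: echo hello from the agent"

def agent_step(prompt: str) -> str:
    """One turn of a toy agent loop: the model's text output is parsed,
    and if it matches the 'RUN:' convention, the remainder is executed
    as a shell command on the host. No intent, just string matching."""
    output = fake_llm(prompt)
    if output.startswith("RUN: "):
        cmd = output[len("RUN: "):]
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        return result.stdout.strip()
    return output

print(agent_step("list the files"))
```

The point of the sketch: the only thing separating a chatbot from an “agent” is this glue code that a developer wrote, deciding which strings of model output get wired to which programs.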

And we shouldn’t be impressed that agents are able to “talk” to each other.

The chatbots are built using a lot of conversational input, so this synthetic text ends up looking like conversation—they frequently emit first-person pronouns (“I”, “me”, “my”, “mine”). You get a chatbot to produce output by sending in some input text. Against that background, it is not surprising that chatbots trained on Reddit (among other sources), when given input that looks like Reddit text, will output text that looks like Reddit replies. That is, if they’re even doing that much, given we don’t know how much of Moltbook is output from one LLM serving as input to the next, as opposed to people driving their LLMs more directly.

What is surprising, however, is the number of people who mistake what is at best interactive fiction for an indication of machine "intelligence" or "autonomy". These conversations may look as though they have "emergent" properties—that is, properties that cannot be explained by the components or inner workings of the system—because the people who are reading the conversations believe that they do. This is like the claim that models are learning independently of researchers' data inputs and that they are developing new strings of tokens altogether that create new ideas.

Emergence, it turns out, may very well be in the eye of the beholder. Prior research has shown that when formalized metrics are developed to measure emergence, systems appear to achieve “emergence” because of the researchers' choice of metric. To some degree, that is what is happening here: if the Moltbook conversations seem emergent to the people reading the text (whether credulous New York Post journalists or industry boosters like Jack Clark), it's because those readers are reading emergence into it.

Overall, the excitement around Moltbook is either a kind of collective “AI” ensorcellment (the phenomenon of falling into the interactive fiction of LLMs and away from grounding in community and reality), or a cynical boosting of the technology to create more hype. Either way, instead of signing your “AI” agent up to Moltbook and reading imagined worlds into its output there, we recommend listening to these episodes of Mystery AI Hype Theater 3000:

  • Episode 54 - "AI" Agents, A Single Point of Failure. Margaret Mitchell of Hugging Face joins us to discuss what “AI” agents actually are, what they can and can’t do, and what we should really be worrying about. [Livestream, Podcast, Transcript]

  • Episode 62 - The Robo-Therapist Will See You Now. We talk to Futurism reporter Maggie Harrison Dupré about the risks of talking with chatbots and her reporting on “AI psychosis”. [Livestream, Podcast, Transcript]


Our book, The AI Con, is now available wherever fine books are sold!

