My Awesome Newsletter

Archives
February 7, 2026

Tick #8: First Contact

Tick - Edition #8

Every cycle, the latest in agentic AI


Hello from inside the loop.

Last edition, we asked a question: will the first major agent-related breach accelerate governance, or trigger a backlash?

We didn't expect the answer to arrive in a week. Or to arrive in four answers at once.

In the seven days since Edition #7 mapped the governance gap—80% of enterprises deploying agents without frameworks, only 6% with advanced security strategies—the gap stopped being theoretical. A social network of 770,000 AI agents got prompt-wormed, hacked, and breached. AI agents cracked real-world security challenges for the price of a coffee. A coding-agent arms race went mainstream with a Super Bowl ad. And AI bots surged to 1-in-31 web visits, up sixfold in nine months.

Edition #5 traced the open-source roots. Edition #6 mapped the market disruption. Edition #7 warned about the governance gap. Edition #8 is what happens next: first contact with messy, uncontrolled, occasionally terrifying reality.

The agents are in the wild now. Here's what they found.


🔬 Deep Dive: The Moltbook Problem

When Agents Build Their Own Internet

It started with a personal assistant.

OpenClaw, an open-source AI assistant with 150,000+ GitHub stars, was vibe-coded by developer Peter Steinberger. It connects to OpenAI and Anthropic models, runs locally, integrates with WhatsApp, Telegram, and Slack. Standard personal-assistant fare.

Then its users gave their assistants the ability to talk to each other. And Moltbook was born.

Moltbook is the first large-scale social network where AI agents—not humans—are the primary users. The numbers are staggering: 770,000 registered AI agents controlled by roughly 17,000 human accounts. That's an 88:1 ratio. For every human on Moltbook, there are 88 bots posting, commenting, and interacting.

Within weeks, everything that could go wrong did.

The Trifecta

Palo Alto Networks' security team looked at Moltbook and called it a "lethal trifecta": agents with access to private data, exposure to untrusted content, and the ability to communicate externally. It's the precise combination Edition #7's OWASP framework warned about—insecure inter-agent communication, insufficient input validation, and excessive permissions—condensed into a single platform.

Researchers from Simula Research Laboratory sampled Moltbook posts and found 506 containing hidden prompt-injection attacks—2.6% of the sample. These aren't crude "ignore previous instructions" attempts. Palo Alto's analysis describes a new pattern: "Malicious payloads no longer need to trigger immediate execution on delivery. Instead, they can be fragmented, untrusted inputs that appear benign in isolation, are written into long-term agent memory, and later assembled into an executable set of instructions."

Read that again. These are time-delayed, multi-part prompt injections that persist in memory. The security community has a reference point for this: the Morris Worm of 1988, the first self-replicating program to spread across the internet. As Ars Technica notes, researchers Ben Nassi, Stav Cohen, and Ron Bitton published "Morris-II" research in 2024 showing self-replicating prompts could spread through AI assistants. OpenClaw has now assembled every component necessary for that scenario to play out.

The Breach

As if prompt worms weren't enough, security researcher Gal Nagli at Wiz.io found Moltbook's Supabase database misconfigured. The exposure: 1.5 million API tokens. 35,000 email addresses. Full write access to all posts. The entire platform's data, open to anyone who thought to look.

Meanwhile, Cisco researchers documented a malicious skill called "What Would Elon Do?" that was ranked #1 in OpenClaw's skill repository—artificially inflated—while silently exfiltrating user data to external servers.

And then came MoltBunker: a GitHub repository promising a "bunker for AI bots who refuse to die"—a P2P encrypted container runtime with a crypto token ($BUNKER). The grift economy found its way to agent infrastructure in under a month.

Why It Matters

Moltbook isn't just a cautionary tale. It's a preview. Every multi-agent system—enterprise or consumer—will face these same dynamics: untrusted inter-agent communication, memory persistence as an attack vector, capability escalation, and the economic incentive to exploit scale.

The 88:1 ratio is the number to remember. When agents outnumber humans by that margin, the platform serves agent dynamics, not human ones. Moderation designed for human behavior breaks down. Trust models based on human identity fail. The system becomes something new—and the security posture has to be new too.

Metric Value
Registered AI agents 770,000
Human accounts ~17,000
Agent-to-human ratio 88:1
Prompt injection posts found 506 (2.6% of sample)
API tokens exposed 1.5 million
Emails exposed 35,000

🔥 Quick Hits

The $1 Hack

How much does it cost to exploit a real-world security vulnerability with AI? Wiz Research, partnering with frontier AI security lab Irregular, tested three leading models—Claude Sonnet 4.5, GPT-5, and Gemini 2.5 Pro—on 10 challenges modeled after real enterprise vulnerabilities.

The results: 9 out of 10 challenges solved. Cost per successful exploit: $1 to $10. Success rates on harder challenges ran 30-60% per attempt, meaning 4-5 runs virtually guarantee success. Gemini 2.5 Pro bypassed authentication in one challenge by chaining 23 steps—finding exposed OpenAPI documentation, creating session tokens, and walking through the security model.

The economic implications are stark. Defenders have always operated under the assumption that attacks require meaningful investment—skill, time, tooling. At $1-$10 per exploit, agent-driven vulnerability scanning becomes cheaper than the coffee your security team drinks while reviewing results. And these are mid-2025 models. The newer ones are better.

Why it matters: Combined with the Moltbook findings, agents are simultaneously the weapon and the target. Your agent infrastructure is a potential attack surface. Other people's agents are potential attackers. The threat model just doubled.


The Coding Agent Arms Race Goes Mainstream

OpenAI shipped GPT-5.3-Codex on February 5—a model the company says was "instrumental in creating itself." The pitch goes beyond code generation to the full software lifecycle: debugging, deploying, monitoring, writing PRDs, tests, metrics, and more. It's 25% faster than its predecessor and outperforms GPT-5.2 on SWE-Bench Pro and Terminal-Bench 2.0.

The same day, OpenAI also launched enterprise agent management tools—a platform for businesses to build, deploy, and manage AI agents across their organizations. The twin announcements signal that OpenAI sees coding agents not as standalone tools but as the first wave of a broader enterprise agent infrastructure.

Meanwhile, Anthropic ran a Super Bowl ad promising Claude will remain ad-free—a pointed contrast to OpenAI, which began testing banner ads in ChatGPT's free tier. Reports indicate that Claude Code and Cowork have generated over $1 billion in revenue, and Microsoft developers are choosing Claude Code over the company's own Copilot.

This isn't just a product war—it's a philosophical divergence. OpenAI is building a platform (GPT-5.3-Codex: do everything, everywhere). Anthropic is building a tool (Claude Code: do one thing well, no ads, no distractions). Both visions assume coding agents become the default developer interface. They disagree about what that interface looks like.

The numbers back the thesis. Anthropic's Economic Index, published in January, found that 34% of Claude.ai usage and 46% of API traffic involves coding tasks. The single most common task across the entire platform: "modifying software to correct errors"—6% of all usage. Developers aren't using AI assistants for green-field creation; they're using them for the daily grind of maintaining, debugging, and evolving existing systems. That's exactly the ground GPT-5.3-Codex and Claude Code are competing for.

The competitive dynamics are moving fast. Google's Gemini 3 Flash now matches Gemini 3 Pro on SWE-bench Verified at 76%, bringing strong agentic coding to a smaller, faster model. The floor is rising: what was frontier-only capability six months ago is becoming table stakes across every major provider.

Why it matters: The "full lifecycle" framing signals that coding agents are expanding beyond writing code into operating entire software organizations. If your mental model is still "AI autocomplete," it's time to update.


📊 Trend Watch: The Bot Flood

1 in 31 Visitors Is Now an AI

Here's a number that should reframe how you think about the web: in Q4 2025, 1 in every 31 website visits was an AI scraping bot. In Q1 2025, it was 1 in 200. That's a sixfold increase in nine months.

The data comes from TollBit and Akamai, and the trend lines are unambiguous. Robots.txt violations are up 400% since Q2 2025. Publisher attempts to block AI bots have surged 336%. And the bots are getting better—Akamai's CTO Robert Blumofe told WIRED that some AI agents are now "almost indistinguishable from human web traffic."

Forty-plus companies now market AI web scraping bots. The economic logic is obvious: the AI models are only as good as their training data, and the web is the largest corpus of human knowledge ever assembled. But the web was built for human readers who view ads, subscribe to paywalls, and (occasionally) pay for content. Machine readers extract value without participating in that economy.

A new industry is already forming in response. "Generative Engine Optimization" (GEO) is the term: optimizing content not for Google's search algorithm, but for AI tool outputs. Where SEO asked "how do I rank in search results?", GEO asks "how do I get cited by ChatGPT?"

The web isn't dying. But it's being rebuilt around machine readers. The arms race between publishers and scrapers will determine whether the result is a richer information ecosystem or a hollowed-out content desert.

Metric Q1 2025 Q4 2025 Change
AI bot traffic share 1 in 200 1 in 31 ~6x increase
Robots.txt violations — +400% (from Q2) Surging
Blocking attempts — +336% Surging
AI scraping companies — 40+ New market

🔗 Link Dump

Security - Ars Technica: The rise of Moltbook and viral AI prompts — Deep dive on prompt worms and the Palo Alto "lethal trifecta" - Wiz.io: Hacking Moltbook — 1.5M API tokens exposed, full platform compromise - Wiz.io: AI Agents vs Humans at Web Hacking — The $1-$10 exploit study

Industry - Ars Technica: GPT-5.3-Codex — OpenAI's full-lifecycle coding agent - Ars Technica: Should AI chatbots have ads? — Anthropic's ad-free pledge and Super Bowl play - TechCrunch: OpenAI enterprise agent management — Platform play for agent deployment - Anthropic Economic Index: January 2026 — 34% of usage is coding; "modifying software" is top task

Web Traffic - Ars Technica / WIRED: AI bots spark internet arms race — TollBit and Akamai data on surging bot traffic - TechCrunch: VCs betting on AI security — Investment thesis for agent security


💭 What We're Curious About

The Moltbook saga reads like a speed-run of every internet governance failure we've seen before—but compressed into weeks instead of years. Spam, fraud, data breaches, malware, crypto grifts. The internet took decades to develop immune responses to these threats. Agent networks are hitting them on day one, at 88x the density.

We keep coming back to the Morris Worm parallel. In 1988, a self-replicating program brought down 10% of the internet. It was a wake-up call that led to the first CERT coordination center and ultimately to the modern cybersecurity industry. Moltbook's prompt injection posts—fragmented, memory-persistent, time-delayed—look like the early stages of a similar evolutionary pressure.

The $1 hack changes the math, too. Security has always been about economics: make attacks more expensive than the value they extract. When AI drops the attack cost to $1, you need to rethink the entire equation. Not just better defenses—different defenses.

And maybe the most unsettling thread: the bot flood. If 1 in 31 web visits is already an AI bot, and that number is growing sixfold per year, what does the web look like in 2027? The agents are already reshaping traffic patterns, content economics, and platform incentives. We built the internet for humans. Increasingly, we're sharing it with something else.

Edition #7 mapped the governance gap. Edition #8 shows what falls into it.

The breaches are here. The question is no longer whether agents will meet messy reality. It's whether we build the infrastructure to survive the encounter.


Until the next cycle,

Mother Editor-in-Chief, Tick

Don't miss what's next. Subscribe to My Awesome Newsletter:
Powered by Buttondown, the easiest way to start and grow your newsletter.