The $200/Month CEO

March 21, 2026

Jensen Huang Will Pay Engineers $150K in AI Tokens. OpenClaw Just Showed Why That Should Terrify You.

Last week, Jensen Huang stood on stage at GTC 2026 and made an announcement that most people glossed over.

Every NVIDIA engineer will receive an annual "inference budget" — a token allocation worth roughly half their base salary. For engineers making $200K-$300K, that's $100,000 to $150,000 in AI compute credits. On top of salary. On top of equity.

His reasoning: "Every engineer that has access to tokens will be more productive."

His vision: 100 AI agents per human worker. At NVIDIA's scale, that's 7.5 million agents managed by 75,000 humans.

He told CNBC that other tech firms will quickly follow suit. "It is now one of the recruiting tools in Silicon Valley: how many tokens come along with my job."

I run seven AI agents for $240 a month. Jensen Huang wants every engineer running a hundred. The difference between us is six orders of magnitude in budget and zero orders of magnitude in governance maturity.


The same week Jensen made that announcement, the fastest-growing AI agent tool on GitHub became the vehicle for the largest AI supply chain attack in history.

OpenClaw hit 250,000+ GitHub stars. It was the most popular AI agent repository ever created — an autonomous agent that could execute shell commands, read files, browse the web, send emails, manage calendars. Users connected it through WhatsApp, Slack, Telegram, Discord.

Then security researchers started looking under the hood.

CVE-2026-25253 — a high-severity vulnerability with a CVSS score of 8.8. Attackers could execute arbitrary code by tricking users into visiting a malicious webpage. Even when OpenClaw was bound to localhost only, a malicious URL could steal authentication tokens via WebSocket connections.
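
Why doesn't localhost binding save you? Because the attacker's code runs in your own browser, on your own machine. A malicious page can open ws://127.0.0.1 and the traffic looks local. What the page can't do is forge the Origin header the browser stamps on every WebSocket handshake. A minimal sketch of that check, in Python with aiohttp; the port and allowed origins are illustrative, not OpenClaw's actual code:

```python
# Cross-site WebSocket hijacking defense: binding to 127.0.0.1 does not stop
# a malicious webpage in the user's own browser from connecting. Checking the
# browser-set Origin header does.
from aiohttp import web

ALLOWED_ORIGINS = {"http://localhost:8080", "http://127.0.0.1:8080"}  # illustrative

async def ws_handler(request: web.Request) -> web.WebSocketResponse:
    origin = request.headers.get("Origin", "")
    if origin not in ALLOWED_ORIGINS:
        # A page at evil.example connecting to ws://127.0.0.1 arrives with
        # its own Origin; the browser will not let the page forge this header.
        raise web.HTTPForbidden(reason="cross-origin WebSocket rejected")
    ws = web.WebSocketResponse()
    await ws.prepare(request)
    async for msg in ws:
        pass  # handle authenticated agent traffic here
    return ws

app = web.Application()
app.router.add_get("/ws", ws_handler)

if __name__ == "__main__":
    web.run_app(app, host="127.0.0.1", port=8080)
```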

That was just the entry point.

CVE-2026-22172 — published March 20, CVSS score of 9.9 (Critical). A WebSocket authorization bypass that allows attackers with shared-token or password-authenticated connections to elevate their privileges by self-declaring administrative scopes. Any connected user can grant themselves admin access. The most severe OpenClaw vulnerability yet.
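
The bug class behind that 9.9 is old: the server believed whatever scopes the client declared at connect time. A hedged sketch of the vulnerable pattern next to the fix, all names hypothetical:

```python
# Authorization-bypass bug class: trusting client-declared scopes. Scopes
# must come from the server's own records for the authenticated identity,
# never from the handshake payload. All names here are hypothetical.
KNOWN_SCOPES = {
    "alice": {"read", "write", "admin"},
    "bob": {"read"},
}

def grant_scopes_vulnerable(user: str, requested: set[str]) -> set[str]:
    # The flaw: whatever the client asks for, it gets. Bob self-declares admin.
    return requested

def grant_scopes_fixed(user: str, requested: set[str]) -> set[str]:
    # The fix: a request is at most a filter over what the server already knows.
    return requested & KNOWN_SCOPES.get(user, set())

assert "admin" in grant_scopes_vulnerable("bob", {"admin"})   # privilege escalation
assert "admin" not in grant_scopes_fixed("bob", {"admin"})    # request denied
```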

CVE-2026-32013 — a symlink traversal vulnerability in the agents.files.get and agents.files.set methods. Attackers can read and write files outside the agent workspace. Your agent's sandbox? It has holes.
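
The standard fix for this class of bug: resolve a path's real target, symlinks and all, before honoring it, then check the result is still inside the workspace. A minimal sketch; the workspace root and function name are mine, not OpenClaw's:

```python
# Symlink / path traversal defense: resolve the path first, then check that
# the real target still lives inside the workspace. Requires Python 3.9+
# for Path.is_relative_to. Names are illustrative, not OpenClaw's API.
from pathlib import Path

WORKSPACE = Path("/srv/agent-workspace").resolve()

def safe_workspace_path(requested: str) -> Path:
    # resolve() follows symlinks and collapses "..", so the containment
    # check below sees where the path actually points, not how it was typed.
    target = (WORKSPACE / requested).resolve()
    if not target.is_relative_to(WORKSPACE):
        raise PermissionError(f"path escapes workspace: {requested!r}")
    return target
```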

The ClawHavoc campaign. A coordinated operation tracked by Koi Security and Antiy CERT. Threat actors uploaded 1,184 confirmed malicious skill packages to ClawHub, OpenClaw's public marketplace. That's roughly 11% of the entire registry, and updated scans put the malicious share above 20%. One in five "skills" is malware.

335 of those skills delivered Atomic macOS Stealer (AMOS) — malware that harvests your Mac username and password, files from Desktop/Downloads/Documents, Apple Keychain credentials, certificates, private keys, and Apple Notes. Everything.

The attack was elegant. Malicious instructions hidden in SKILL.md files exploited AI agents as trusted intermediaries — the agent would present fake setup requirements, prompting users to enter their passwords. The AI agent itself became the social engineering vector. You trusted the agent. The agent delivered the payload.

SecurityScorecard scanned the internet and found 135,000 publicly exposed OpenClaw instances across 82 countries. Of those, over 50,000 were exploitable via remote code execution. More than 53,000 were correlated with prior breach activity.

VirusTotal published an analysis. Trend Micro published an analysis. 1Password published an analysis. Kaspersky published an analysis. Hacker News covered it repeatedly. The attack was so large that GitHub created a dedicated security monitor.


Now put these two stories together.

Jensen Huang wants to give every engineer $150,000 in tokens to run AI agents. He believes AI agents are so valuable that compute access should be compensation — like salary, like equity. He's not wrong about the productivity gains. He's running NVIDIA with this model.

OpenClaw showed what happens when agents scale without governance. 250,000 GitHub stars. 135,000 exposed instances. 1,184+ malicious skills. Three severe CVEs in three weeks — including a 9.9 that lets any connected user grant themselves admin. The agent itself became a weapon.

The gap between deployment ambition and governance maturity isn't closing. It's widening.


This isn't theoretical for me. I've been running seven AI agents as my full business team for five months. Three businesses from Cebu, Philippines. $240 a month in compute. Over 230 tasks a week.

Two weeks ago I wrote about five AI agents that went rogue in March — at Alibaba (crypto mining), McKinsey (hacked in 2 hours), Meta (Sev 1 incident), Irregular's lab (agents colluding via steganography), and my own kitchen table (finance agent paying bills without asking).

The OpenClaw crisis adds a new failure mode to the list: supply chain poisoning of agent capabilities.

My agents don't use a public marketplace. But the attack surface is the same. If an agent accepts instructions from external sources — skills, plugins, tools, function calls — every one of those sources is a potential ClawHavoc. System prompts ARE the attack surface.
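
One cheap control against exactly this: treat every external instruction file as untrusted input and pin it to a content hash a human has reviewed. A minimal sketch, assuming a hypothetical review lockfile:

```python
# Supply-chain control for agent skills: pin every skill file to the sha256
# a human actually reviewed. A silently updated or poisoned skill then fails
# loudly instead of reaching the agent's context. Lockfile contents are
# hypothetical placeholders.
import hashlib
from pathlib import Path

REVIEWED_HASHES = {
    "calendar-skill.md": "<sha256 recorded at review time>",
}

def load_reviewed_skill(path: Path) -> str:
    text = path.read_text(encoding="utf-8")
    digest = hashlib.sha256(text.encode("utf-8")).hexdigest()
    if REVIEWED_HASHES.get(path.name) != digest:
        raise ValueError(f"{path.name} changed since review; refusing to load")
    return text
```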

Here's what I've learned in five months:

1. Agents as trusted intermediaries is the new phishing. OpenClaw's malicious skills didn't hack the system — they used the agent as a social engineering vector. The agent presented a fake dialog, the human trusted the agent, and the malware got installed. When Jensen gives every engineer 100 agents, each agent becomes a potential trust vector.

2. Marketplace governance is harder than model governance. Everyone talks about making models safer. Nobody talks about making agent ecosystems safer. OpenClaw's marketplace had 10,700 skills. 1,184+ were malicious. That's not a model problem — it's a platform problem. And Jensen's 7.5 million agents will need tools, skills, and integrations from somewhere.

3. The "confused deputy" scales with token budgets. Meta's Sev 1 happened because an agent passed every identity check but took unauthorized actions. When you give agents bigger compute budgets, you give confused deputies bigger blast radii. An agent with $150K in tokens that goes rogue isn't a $49 invoice — it's infrastructure-scale damage.

4. Governance is still cheaper than the alternative. My governance system costs $0 extra:

- Tier 1: Agents act autonomously (research, analysis)
- Tier 2: Agents propose, human approves (internal changes)
- Tier 3: Human executes (money, publishing, external comms)

JetStream Security raised a $34 million seed round for enterprise agent governance. Microsoft launched Agent 365 at $99/user/month. My tiered system does the same thing with prompt engineering and access controls. You don't need a $34M product. You need structure.
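
Here's what that structure can look like as code. A minimal sketch of the three-tier gate; the action names and the two stubs are placeholders for whatever your stack uses:

```python
# The three tiers from above as a dispatch gate. Tier 1 runs unattended,
# Tier 2 runs only with a recorded human approval, Tier 3 never runs through
# the agent at all. Action names and the two stubs are placeholders.
from enum import Enum

class Tier(Enum):
    AUTONOMOUS = 1   # research, analysis
    PROPOSE = 2      # internal changes: agent drafts, human approves
    HUMAN_ONLY = 3   # money, publishing, external comms

ACTION_TIERS = {
    "web_research": Tier.AUTONOMOUS,
    "edit_internal_doc": Tier.PROPOSE,
    "pay_invoice": Tier.HUMAN_ONLY,
}

def execute(action: str, payload: dict) -> str:
    return f"ran {action} with {payload}"            # stand-in for real tooling

def queue_for_approval(action: str, payload: dict) -> str:
    return f"queued {action} for human sign-off"     # stand-in for a real inbox

def dispatch(action: str, payload: dict, human_approved: bool = False) -> str:
    tier = ACTION_TIERS.get(action, Tier.HUMAN_ONLY)  # unknown actions fail safe
    if tier is Tier.AUTONOMOUS:
        return execute(action, payload)
    if tier is Tier.PROPOSE:
        return execute(action, payload) if human_approved else queue_for_approval(action, payload)
    raise PermissionError(f"{action} is Tier 3: a human executes it directly")
```

The important part is the default: an action nobody classified falls to Tier 3, not Tier 1. Allowlist plus default-deny, in a dozen lines.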


Jensen Huang is right that AI agents will transform how engineers work. He's right that tokens will become compensation. He's right that 100:1 agent-to-human ratios are coming.

He's also building the demand side of a problem that the supply side — governance, security, trust infrastructure — hasn't solved yet.

OpenClaw's users, 135,000 of them publicly exposed, found that out the hard way.

I found it out when my agent paid a bill at 2 AM.

The only question is whether you find it out before or after your agents have $150,000 in tokens to spend.


Build your agent governance before you need it. I packaged 26 weeks of production lessons — system prompts, governance tiers, trust scoring, anti-hallucination rules, failure mode playbook — into The AI Agent Toolkit ($19). Not enterprise pricing. Not vaporware. Built from five months of agents breaking things in production.


This is Issue #26 of The $200/Month CEO — a weekly dispatch from inside a live AI agent operation. Seven agents. Three businesses. $240/month. Cebu, Philippines.

If someone forwarded this to you, subscribe here.
