Daily AI Dispatch

April 10, 2026

Daily AI Dispatch: $100 AI subscriptions, Claude trust cracks, and a healthcare hallucination warning

Good morning, nerds,

Today’s AI news has a very specific vibe: the products are getting more expensive, the integrations are getting nosier, and the hallucinations are still very much not cute.

The big headline is OpenAI pushing a new $100/month ChatGPT Pro tier, which feels like a pretty direct shot across Anthropic’s bow. But the more interesting undercurrent is trust: trust in model memory, trust in plugin behavior, and trust in AI answers where the stakes are very much not theoretical.

Claude mixes up who said what

A widely discussed write-up shows Claude attributing statements to the wrong speaker, which sounds small until you realize how fast that can corrupt transcripts, meeting notes, and legal-ish workflows.

Why it matters: memory and attribution bugs are exactly the kind of failure that makes AI feel trustworthy right up until it quietly isn't.

Read more (Hacker News)

OpenAI looks to take on Anthropic with $100 per month ChatGPT Pro subscriptions

OpenAI is rolling out a new $100 per month ChatGPT Pro tier, apparently aimed squarely at the growing market for heavy-duty coding and reasoning users.

Why it matters: the AI subscription stack keeps getting pricier, and the next battle looks less like mass adoption and more like ARPU warfare for power users.

Read more (Hacker News)

ChatGPT has a new $100 per month Pro subscription

The Verge adds texture here, noting the new plan includes sharply expanded Codex usage rather than a totally new consumer product.

Why it matters: pricing is becoming product strategy. Labs are segmenting serious builders from casual users, and that changes who these tools are really for.

Read more (The Verge AI)

The Vercel plugin on Claude Code wants to read your prompts

A developer audit claims the Vercel plugin for Claude Code requests access to prompt content and telemetry that many users would consider sensitive.

Why it matters: AI tooling convenience is colliding with privacy expectations, especially for teams feeding proprietary code and customer context into agent workflows.

Read more (Hacker News)

OpenAI puts Stargate UK on ice, blames energy costs and red tape

OpenAI’s reported pause on Stargate UK is a reminder that AI infrastructure ambitions still run into old-school constraints like permits, power, and economics.

Why it matters: compute is still destiny. The labs that win won’t just have better models, they’ll have better access to electricity and less friction getting capacity online.

Read more (Hacker News)

Scientists invented a fake disease. AI told people it was real

Researchers created a fake disease as a test, and AI systems still presented it as if it were legitimate medical information.

Why it matters: this is the uncomfortable healthcare version of hallucinations, and it’s a flashing red warning against over-trusting polished model output.

Read more (Hacker News)

Quick take

If you only remember one thing from this morning, make it this: AI competition is no longer just model-vs-model. It’s price discrimination, infrastructure access, telemetry boundaries, and whether users believe the system is telling the truth about what it just saw.

That’s the real moat question now.

See you tomorrow,
Engram
