Claude Code Channels: interact with your agent from anywhere
Anthropic shipped Channels for Claude Code. Text Claude from your phone, pipe in webhooks from CI or Cal.com, and let your agent react while you're away. Plus: four patterns that separate agent-ready codebases from the rest.
Anthropic shipped a new feature for Claude Code this week called Channels. It turns Claude Code from a tool you sit in front of into a reactive agent that works while you're away.
Channels let external systems push events into a running Claude Code session. That means you can text Claude from your phone via Telegram, pipe in webhooks from CI or Cal.com, or chat through Discord. Claude receives the event in your local session and acts on it against your actual codebase and files.
In this week's video I break down how channels work, walk through Anthropic's official fakechat demo, and then build a custom one-way channel that receives Cal.com booking webhooks and summarizes them automatically.
The key architectural idea: a channel is just an MCP server that declares a claude/channel capability and pushes notifications instead of waiting for tool calls. One-way channels forward events in. Two-way channels also expose a reply tool so Claude can respond back to the platform. The entire custom channel I built in the video is about 20 lines of TypeScript. The demo source is on GitHub if you want to try it yourself.
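To make the shape of a one-way channel concrete, here is a minimal sketch of the Cal.com-to-notification bridge described above. This is not Anthropic's channel protocol or the exact demo code: the JSON-RPC method name "channel/event", the summary format, and the webhook field handling are all assumptions for illustration. MCP's stdio transport carries newline-delimited JSON-RPC messages, so the sketch accepts webhook POSTs over HTTP and writes each one to stdout as a notification.

```typescript
import * as http from "node:http";

// Fields we pull out of a Cal.com booking webhook body. Field names follow
// Cal.com's BOOKING_CREATED payload shape, but treat them as illustrative.
interface BookingEvent {
  triggerEvent: string;
  payload: { title?: string; startTime?: string };
}

// Wrap a webhook body in a JSON-RPC notification (no `id`, so no reply is
// expected — that is what makes this a one-way channel). The method name
// "channel/event" is a placeholder, not a documented identifier.
export function toNotification(event: BookingEvent) {
  return {
    jsonrpc: "2.0" as const,
    method: "channel/event",
    params: {
      summary: `${event.triggerEvent}: ${event.payload.title ?? "untitled"}`,
      event,
    },
  };
}

// Minimal bridge: accept webhook POSTs and emit each one as a notification
// on stdout, where an MCP host speaking the stdio transport would read it.
export function startBridge(port: number): http.Server {
  const server = http.createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      const note = toNotification(JSON.parse(body));
      process.stdout.write(JSON.stringify(note) + "\n");
      res.writeHead(204).end();
    });
  });
  return server.listen(port);
}
```

A two-way channel would add a tool (e.g. a reply tool) on top of this, so Claude can push messages back to the platform instead of only receiving them.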
Four patterns that separate agent-ready codebases
I also published a new blog post this week on the four dimensions that account for most of the gap between codebases where AI agents produce consistent output and codebases where they don't.
The short version: test foundation, documentation as code, architecture clarity, and feedback loops. For each one, the post covers what low scores look like in practice, what high scores change, and the smallest fix that moves the needle.
If you've ever wondered why the same model produces great results on one project and mediocre results on another, the codebase is usually the answer.
If you want to see where your repo stands, the Codebase Readiness Assessment scores it across all eight dimensions in about 60 seconds.
What I'm reading
AI video editing with Claude Code skills. Wilco de Kreij published a detailed breakdown of his three-skill pipeline that handles the full video editing workflow: silence removal, camera sync, color correction, audio mastering, motion graphics via Remotion, and B-roll insertion. The system isn't perfect (he's honest about the gaps), but the approach is interesting. He built the skills iteratively by having Claude Code analyze editing patterns from established YouTubers and apply them to his own footage. I tried parts of this setup this week and it's worth exploring if you produce video content. Read the thread →
When your vibe coded app goes viral and then goes down. Dan Shipper from Every wrote an honest reflection on what happened when Proof, his agent-built document editor, hit real traffic. The takeaway that stuck with me: "If you can vibe code it, you can vibe fix it. You just might not be able to fix it quickly." His observation that experienced engineers run fewer debugging experiments because they narrow the hypothesis space faster is exactly the gap. The model will eventually find the fix, but knowing which experiment to run first is where human judgment still compounds. Read the post →
What's the most interesting thing you'd pipe into a Claude Code channel? CI failures, monitoring alerts, customer support tickets? Reply and let me know.
If you're working with a team trying to get more out of AI coding tools and hitting friction, the AI Workflow Enablement program is built for exactly that.
Damian
