Damian Galarza | AI Engineering
May 13, 2026

Your AI team doesn't need more people. It needs agents.

Stripe merges 1,300 agent-written PRs per week. Coinbase is restructuring around AI-native pods. This week: what 'supported by a fleet of agents' actually means in practice, plus Codex for Claude Code users and OpenAI's $4B deployment bet.

This week's blog post

Stripe merges 1,300 agent-written PRs per week. Every one is human-reviewed. None contain human-written code. Stripe published the number on their engineering blog earlier this year. What makes it land is the implication: the linear headcount-to-output relationship engineering leaders have planned around for two decades is no longer reliable.

This week's post is a take I've been chewing on for a while, finally written down. What "supported by a fleet of agents" actually means in practice. Which tasks automate well, which don't, where the ROI compounds, and where it breaks down. Evidence from Stripe, Coinbase, Ramp, and Shopify, plus the three metrics (MTTV, AI-CFR, Interaction Churn) that separate AI-native teams from AI-curious ones.

Read the full post →

One thing the post lands on hard: before you add agents to a team's workflow, audit what they'll inherit. Agents amplify whatever they find. Messy codebase, flaky CI, thin test coverage, undocumented business logic: all of it gets reproduced at agent speed. If you want a structured starting point, the Codebase Readiness Assessment runs in Claude Code and scores your codebase across the dimensions that matter most for agent leverage.


This week's video: Codex for Claude Code Users

I've been on Claude Code as my daily driver for twelve months. A couple of weeks ago I added Codex (GPT-5.5) to the stack. I'm still running both, for different tasks.

I Use Both Codex and Claude Code. Here's the Rule.

Watch the video →

The video covers what changed my mind (a codebase audit on Emma and a vague-intent feature build), how to share an existing setup across both tools (AGENTS.md ↔ CLAUDE.md, plus npx skills add to install skills into both .claude/ and .agents/), and the rule I run today: Codex for production agent code, where edge-case completeness matters; Claude Code for editorial and writing work, where collaborative iteration matters more. It also walks through the auto-mode config that actually enables a long autonomous run.

The migration cost is smaller than the discourse implies. If you've already invested in CLAUDE.md, skills, hooks, and MCP servers, most of that work carries straight over. The video shows you which pieces map cleanly and which need a small adjustment.
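If you want the simplest version of that carry-over, one pattern is to keep a single canonical instructions file and point the other tool's filename at it with a symlink. This is a minimal sketch of my own, not something either vendor mandates; it assumes each tool simply reads its respective file (CLAUDE.md or AGENTS.md) from the repo root, and `demo-repo` stands in for your project.

```shell
# Hedged sketch: keep CLAUDE.md as the canonical instructions file and
# expose the same content to Codex under the AGENTS.md name.
set -e
mkdir -p demo-repo
printf '# Project instructions\nRun tests before committing.\n' > demo-repo/CLAUDE.md
# Relative symlink, so it still resolves after the repo is cloned elsewhere
ln -sf CLAUDE.md demo-repo/AGENTS.md
```

If either tool turns out not to follow symlinks, a copy step in a pre-commit hook gets you the same single-source-of-truth effect.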


Curated links

OpenAI launches the OpenAI Deployment Company. OpenAI just put $4B behind a standalone company designed to embed Forward Deployed Engineers into enterprises and turn AI capability into operational change. They're acquiring Tomoro to seed the bench with 150 FDEs on day one and partnering with TPG, Bain, McKinsey, Capgemini, and a dozen other investment and consulting firms. Take it as $4B of validation that "Forward Deployed Engineer" is a real category, not a job-title experiment. The F500 segment now has a name-brand option for it. Mid-market and growth-stage companies still need the model-agnostic, founder-direct version of the same work, which is most of what I do as an independent. The category just got a much louder argument.

Lenny's podcast with Eric Ries: Incorruptible. Eric Ries, the author of Lean Startup, on his new book about how successful companies are destroyed by failing to protect what made them valuable. I've watched this happen up close at a company I cared about, where the ethos that made the early work great quietly got priced out by the ethos that made the next phase efficient. Most of the way it happened was structural, not personal. If you're building something you want to last, or you're inside something you can feel slipping, this conversation is worth your time.

HoolaHoop's AI-Native Team Topology. Leigh Newsome documents the team-shape pattern showing up across HoolaHoop's coaching engagements: three to five senior engineers, supported by agents, matching or exceeding what eight-to-twelve-person teams shipped a year ago. Calm, specific, and grounded in real client observations rather than thought-leader speculation. It's the coaching-side view of the same shift this week's post covers.


One quick note. Practical AI is the small, invite-only Slack I run for builders putting AI into real work. Same tone as this newsletter, faster format. The "where do agents actually fit" conversation in this issue is exactly the kind of thing the room works through. If that sounds like a room you'd want to be in, request an invite →.


If your team is in the middle of figuring out where agents fit, where humans still need to own the judgment, and what your codebase has to change to ship reliably at the new leverage ratio, reply to this email or book an intro call. Always happy to trade notes.

Damian
