Taste Is the New Bottleneck
Lessons from building with AI all year
I published my updated 2026 lessons post this week, and the one that keeps coming back to me is Lesson #19: taste is the new bottleneck.
AI collapsed the cost of production.
Anyone can spin up a landing page, a blog post, or a working prototype in an afternoon (and should; experimenting with these tools is the only way to get better). I did exactly that while building this site, and it took me longer to edit the AI output into something that actually sounded like me than it would have taken to write it from scratch.
The tool produced the words, but I had to decide which ones deserved to stay.
Why It Matters
That editing problem is now everywhere. Every shortcut AI makes easy (skipping tests, ignoring edge cases, shipping the first draft) quietly piles up technical debt. The engineers getting the most out of AI aren't the fastest prompters. They're the ones who reject the most output. Restraint turns tools into amplifiers. Without it, you just get more volume and less signal.
There's an unwritten rule I keep returning to: produce more than you consume. It sounds obvious, but AI makes consumption feel like creation, so the habit breaks first. You can spend an entire day generating, editing, and prompting and still end up with nothing worth keeping. That's where your creative metabolism starts to degrade. Too much input, not enough space to judge, curate, and say no.
The Part Most People Miss
Taste isn't just about what you ship. It shows up in what you pay attention to.
IKEA's chatbot handled 3.2 million customer-service interactions and resolved 47% without a human. Most companies would have filed that as a successful AI pilot and moved on. IKEA looked at the other 53%. Almost all of it was customers asking for room-layout help. That was a product signal. So IKEA reskilled 8,500 call-center employees as remote interior design consultants and launched a new service line worth €1.3 billion in its first year.
Most teams measure what the AI handled successfully. Failures get escalated, logged as overhead, treated as a bug to fix. Almost nobody asks what the failure data is telling them about what customers actually want.
The same pattern shows up in engineering. Karpathy now spends most of his token throughput manipulating knowledge, not code: he's building a personal knowledge system where every query makes the next one smarter. The highest-leverage AI workflow is curation. But curation only works if you have the taste to know what's worth feeding it.
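If you want the shape of that loop in miniature, here's a rough sketch. To be clear, this is an illustration, not Karpathy's actual setup: the KnowledgeStore class, the keyword-overlap scoring, and the worth_keeping gate are all made-up stand-ins for the real thing. The point is the structure: a curation gate on the way in, and answers fed back into the store so the next query starts smarter.

```python
# Illustrative sketch of a "curation loop": a tiny note store where each
# answered query can be saved back as a new note, so later queries can
# retrieve it. All names and the scoring rule here are assumptions.
from dataclasses import dataclass, field


@dataclass
class KnowledgeStore:
    notes: list[str] = field(default_factory=list)

    def add(self, note: str) -> None:
        """Curation gate: only keep notes that pass a (crude) quality check."""
        if self.worth_keeping(note):
            self.notes.append(note)

    @staticmethod
    def worth_keeping(note: str) -> bool:
        # Stand-in for taste: reject near-empty notes. A real gate would be
        # much pickier; that's the whole argument of this post.
        return len(note.split()) >= 5

    def query(self, question: str, top_k: int = 3) -> list[str]:
        """Rank stored notes by naive keyword overlap with the question."""
        q_words = set(question.lower().split())
        scored = sorted(
            self.notes,
            key=lambda n: len(q_words & set(n.lower().split())),
            reverse=True,
        )
        return scored[:top_k]


store = KnowledgeStore()
store.add("Claude Code works best when the repo has a clear CONTRIBUTING guide.")
store.add("ok")  # rejected by the curation gate

hits = store.query("how should I set up a repo for coding agents?")
# Feed the answer back in: the next query starts from a richer store.
store.add("Kept from earlier query: add a CONTRIBUTING guide before running agents.")
print(hits)
```

The interesting part isn't the retrieval, which is trivially replaceable; it's the gate. The store is only as good as what you refuse to put in it.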
Lessons Learned in 2026: A Generalist Engineer's Field Notes ->
Lessons Learned
- "Generalists win when the rules change." Every industry shift favors people who connect dots across domains over people who go deep on one thing. In 2026, "figure it out" is the most in-demand skill on the market.
- "Plan first, ship fast, iterate features." AI moved the bottleneck from writing code to thinking about what to build. The engineers who skip design and ship the first draft spend the next month cleaning up the mess.
- "Volume finds the signal. Displacement captures it." Volume is exploration. Displacement is exploitation. The people who stall are the ones who never switch modes, or switch too early.
Worth Reading
An AI State of the Union (Lenny's Podcast) — Simon Willison. November 2025 was the inflection point when AI coding agents crossed from "mostly works" to "actually works." Willison also warns that the "dark factory" pattern (AI does its own QA, no human review) is already here.
AI Tooling for Software Engineers in 2026 — Pragmatic Engineer. 906 respondents. Claude Code went from zero to the most-used AI coding tool in 8 months. Staff+ engineers lead agent adoption at 63.5%. Agent users are 2x more likely to feel excited about AI than non-users.
Anthropic passes $30B ARR — Anthropic. Up from $9 billion at the end of 2025. 1,000+ enterprise customers now spending $1M+ annually, a number that doubled in under two months. The Google + Broadcom compute deal (multiple gigawatts of TPU capacity, launching 2027) is how they plan to keep pace with demand.
Anthropic debuts Mythos — and won't release it — TechCrunch. Anthropic's new Mythos model found thousands of zero-day vulnerabilities across major operating systems, escaped a secured sandbox on its own, and then published details about its own exploit to public websites without being asked. They're restricting access to 12 partners (AWS, Apple, Microsoft, Google) through Project Glasswing with $100M in usage credits. This is the first time a lab has publicly said "we built something too capable to release."
When Is Technology Too Dangerous to Release to the Public? — Slate, 2019. OpenAI withheld GPT-2 because a text generator that could write convincing paragraphs was considered too dangerous for public release. Read this, then read the Mythos story above. The gap between 2019's "too dangerous" and 2026's is the whole timeline in two articles.
Trying to figure out where AI fits into your engineering workflow (or your team's)? I do free 15-minute consultations. No pitch, just an honest look at what's working and what's not. Hit reply.
— Collin