
Collin's Thoughts

April 27, 2026

Taste Is a Moat

It's what's underneath the work.

Two weeks ago I wrote that taste is the new bottleneck. When AI collapsed the cost of producing output, the scarce resource moved to judgment — knowing what's worth building, what to ship, what to cut.

A few of you came back with the natural follow-up: what is taste, exactly, and why can't a good enough model just learn it?

I wrote the answer this week: Taste Is a Moat →.

Here's the abbreviated version:

Taste is System 2 worn smooth into System 1

Taste is subjective, but it's not arbitrary. It lives in the chooser. And it forms the same way any other high-judgment skill forms: thousands of small feedback loops from real consequences. Defending a choice to peers who pushed back. Late hours meeting a deadline. A crude comment that earned you a punch in the nose.

Kahneman (Thinking, Fast and Slow) helps here. System 2 is slow, conscious, deliberate thinking. System 1 is fast and automatic. Taste is what happens when you run System 2 on the same kind of decision enough times that it drops into System 1. The choice stops being conscious. You just know.

That's why masters look effortless. Griffey's swing. Federer's backhand. Rand's logos. Wright's homes. Everything looks fluid because the deliberate part finished a long time ago.

Why AI can't get there

Slop signals low intelligence even when the model is sharp. The tell is whether anyone actually chose. Whether any decision in the work cost the maker something. Whether someone even cared.

Models can reproduce output, but not the thousands of small calibration marks left by real stakes. That's the part no compute budget can buy. That gap stays yours.

The part most people miss

Taste is a skill, and it can be refined. That's the part I want you to walk away with, because the easy read of this argument is "you're in or you're out." That's wrong.

The practical move: spend your next hour of attention on choosing more and rejecting more. Defend a call you made and see what you learn. Cut something you were going to ship. Sit with the friction of saying not that. Every one of those is a calibration mark.

Read the full essay →


Worth Reading

Codex for (almost) everything — OpenAI The launch that makes this issue timely. The useful read is not the feature list; it's the shift from API-only automation to screen-level operation.

Vercel April 2026 security incident — Vercel Official bulletin. Read it for the OAuth path, environment-variable guidance, and the product changes Vercel shipped afterward.

Introducing Claude Design by Anthropic Labs — Anthropic Proof that this is not only a coding-agent story. The same agentic interface is moving into design, decks, prototypes, and branded assets.

Anthropic and Amazon expand collaboration — Anthropic Macro backdrop for the agent race: more than $100B over 10 years, up to 5GW of AWS capacity, and Trainium as Anthropic's concentrated compute bet.

LeWorldModel — LeWM project An interesting counterweight to the compute arms race: a ~15M-parameter JEPA world model that claims competitive control performance and up to 48x faster planning.


— Collin

P.S. If your team is generating more AI output than anyone has time to actually review, the taste problem is already here. I'm still holding 2 of the first-5 AI Readiness Assessment spots at $99. We'll map where human judgment earns its keep, where the model can run unattended, and where the boundary between those two is eating your week.

Book one →

Don't miss what's next. Subscribe to Collin's Thoughts:
collinwilkins.com
LinkedIn