De-risking the Wrong Thing
The conversation that quietly ended one version of my career and began another.
"Why would we ship something we know is broken?"
That sentence lived in me like a reflex. My training. My identity. My responsibility as an engineer. I had written flight control software. I had built agricultural machine control systems. I had engineered low-latency fraud detection: domains where a mistake didn't cost money; it cost trust. In those environments, certainty is not a preference. It is a moral requirement.
So when Ryan turned to me, I did not see a reframe coming.
"Why would we build something unless we know it's valuable?"
The floor didn't drop. It shifted, quietly, the way the ground moves before you realize it was moving. I didn't just have an objection; the sentence felt completely unmoored from my reality. "Ryan didn't hear me," I told myself. He couldn't possibly want me to ship a feature I knew was incomplete. But objections aren't answers. Somewhere in the part of my mind that has always known the difference, I felt that.
A new feature came up: adding audio to our AI story game. I did what senior engineers do: mapped the risks, identified what needed attention before we could ship, broke it into components, estimated timelines. A week, maybe a little more.
Ryan asked two questions.
"How long to build the simple version?"
A day.
"How long to de-risk each of the risky pieces?"
A couple of days each.
I heard him nod over the phone. "Build the simple version and ship it. If people complain about the rough edges, we know the feature is valuable — and we ship the fixes the next day. We show our users something better than a polished feature: we show them we're engaged. We're listening. We're with them. But if no one complains? That's signal. The feature isn't valuable. And then de-risking the implementation would have been a waste of our time."
Why would we ship something we know is broken?
I heard my own words from the other side of his question. I had been asking whether the code was ready. He had been asking whether the code mattered.
These are not the same question.
The understanding came in layers.
The first was logical. A user complaint is cheaper validation than engineering certainty. If I spend a week hardening an implementation for a feature no one uses, I haven't protected my team. I've wasted them. I optimized against the wrong risk.
The second landed differently. There is something an engineer can do that no amount of polished code can replicate: show up. Ship a rough feature on Tuesday. Hear from users on Wednesday. Ship exactly what they need on Thursday. That is not a failure of quality control — it is the highest form of engagement with the people you are building for. You are in dialogue.
The third keeps this from becoming reckless:
There are times when you should not ship something broken.
When a feature's value is proven — when users depend on it, when trust is extended — and the implementation has sharp edges, take the time. Shore it up. Protect your users.
The question is not "Is this ready?"
The question is "What are we de-risking, and for whom?"
When value is unproven → de-risk the value. Let users tell you whether it matters before you make it perfect.
When value is proven and implementation has sharp edges → de-risk the implementation. You know it matters. Protect the people it matters to.
The rigor does not disappear in either branch. The question is what it is aimed at — and how much correctness the moment, and the market, is actually asking for. Best for what? is a harder question than Is it ready? But it is the one the moment is asking.
The engineer who cannot tell the difference is optimizing for best without asking best for what. That question — best for proving value, or best for protecting the people who already depend on it — is the threshold, and it does not require a title to ask.
I built my career in aerospace software, agricultural machine control, fraud detection at scale. That gave me rigor. The ability to see where failure hides before anyone in the room notices the risk. A professional love for the systems I built.
But it also gave me a belief I didn't know I was carrying: that I built to build. That the code's correctness was the endpoint.
What I had to lose that day — and it was a loss, even if it looks like growth in the retelling — was that belief. Not the rigor. The rigor stays. But the belief that building was the purpose had to die.
The identity forged in aerospace and machine control and fraud detection was not wrong. It was a form of professional love. But it was love directed at the instrument, the code itself. And the instrument is not the music.
I don't build to build. I build to connect — to connect people, to connect business, to connect purpose.
Look back. Find the day, the conversation, the offhand question from someone who saw something you hadn't. Find the moment someone showed you something outside your role boundary, when you heard something from someone you respect that was so far outside your experience it seemed crazy. Those are the deep learning moments. Those are the times to slow down. You are likely still the expert in the room on the domains that got you there, but you are about to expand your domain. Slow down and listen.
What comes after, if you allow it, is a deeper understanding of your role and, more importantly, the recognition that your output is just one step along the value chain. The boundary you thought defined you was a starting place.
The day you meet that boundary and feel the resistance is the day something is trying to begin.