Trust as a bottleneck to growing teams quickly (and more)
Trust as a bottleneck to growing teams quickly
I am a big believer in “moving at the speed of trust” with teams. You cannot shortcut the work to build strong relationships, and I’m afraid there is no roadmap or deadline for that. Sometimes it’s easy, sometimes it takes longer. But don’t skip this work. Move at the speed of trust.
Ben Kuhn shares some good tips around this in Trust as a bottleneck to growing teams quickly. I particularly like these two:
- Overcommunicate status. This helps in two ways: first, it gives stakeholders more confidence that if something goes off the rails they’ll know quickly. And second, it gives them more data and helps them build a higher-fidelity model of how you operate.
- Proactively own up when something isn’t going well. Arguably a special case of overcommunicating, but one that’s especially important to get right: if you can be relied on to ask for help when you need it, it’s a lot less risky for people to “try you out” on stuff at the edge of what they trust you on.
And speaking of communication… Also see Arne Kittler’s Part 4: Clear Communication, part of a series on “Clarity for Product Managers”:
Lengthy texts dilute your message or even discourage your counterparts from dealing with them in the first place. Focus on the main points you want to make and provide the context that’s necessary to understand them as quickly as possible. When asking for information or a decision, be clear about what’s unclear.
How we got here (it’s not a “root cause”, it’s the system)
Lorin Hochstein shares a characteristically solid systems-thinking take in CrowdStrike: how did we get here?:
Systems reach the current state that they’re in because, in the past, people within the system made rational decisions based on the information they had at the time, and the constraints that they were operating under. The only way to understand how incidents happen is to try and reconstruct the path that the system took to get here, and that means trying, as best as you can, to recreate the context that people were operating under when they made those decisions.
The “no root cause” concept is something I’ve been thinking about a lot as I’m working on a particularly complex project at work. Somehow we constantly forget that things usually are the way they are not because of a single “mistake”, but because of the culmination of a bunch of legitimate reasons.
Systems get the way they are because of decisions made in good faith based on the data available at the time. And the worst thing you can do as a new person coming in to improve things is to hunt for a single “root cause” to fix. That’s just not how software (or people!) work. So take the time to understand Chesterton’s fence. Go ahead and draw boxes and arrows until no one disagrees any more about how the system works. And then figure out which parts can be improved, and in which order.
PS. Also see How Complex Systems Fail:
Because overt failure requires multiple faults, there is no isolated “cause” of an accident. There are multiple contributors to accidents. Each of these is necessarily insufficient in itself to create an accident. Only jointly are these causes sufficient to create an accident. Indeed, it is the linking of these causes together that creates the circumstances required for the accident. Thus, no isolation of the “root cause” of an accident is possible. The evaluations based on such reasoning as “root cause” do not reflect a technical understanding of the nature of failure but rather the social, cultural need to blame specific, localized forces or events for outcomes.
Thanks for reading Elezea! If you find these resources useful, I’d be grateful if you could share the blog with someone you like.
Got feedback? Send me an email.
PS. You look nice today.