Why outcome-based performance management (still) doesn't work
About four years ago, I interviewed Dr. Toby Lowe for what was meant to become a podcast (it never took off, but here's a link to that episode). The topic was results-based, or outcome-based, performance management (OBPM) and why it doesn't work. This week, conversations in our foundation reminded me of the insights Toby and I discussed and of their continued relevance, especially as we reflect on how we manage projects, define success, and relate as funders to our partners.
I also wrote a blog post about the episode, emphasising key arguments Toby made. Here, I’ll briefly synthesise two critical points before discussing how we’re moving forward.
One: Outcomes Attempt to Collapse Complex Realities into Simple Measures.
Change initiatives aimed at improving lives run into the problem of reducing a 'better life' to measurable outcomes. What counts as improvement varies greatly with context, timing, and personal preference. Compressing these aspects into a single measure, or even several, inevitably loses context and nuance. Worse, an initiative might achieve real improvements yet be deemed a failure because it misses predefined measures. Conversely, positive reports might mask a lack of fundamental change in people's lives.
A significant issue here is the power dynamic OBPM creates. As Toby puts it: "Performance management is a control perspective where someone with authority dictates terms and checks compliance." For funders like us, who aim to work relationally and collaboratively, it’s essential to acknowledge that our partners often understand local realities better than we do.
Two: Setting Quantitative Targets Leads to Gaming Behaviour.
The recent push for "concrete numbers" ignores the extensive research on the gaming behaviour that quantitative targets invite. Gaming is not cheating; it is a natural response to pressure to deliver specific numbers. Toby identified four common gaming strategies:
- "Teach to the test" – Focusing only on what's measured.
- "Cherry-pick" – Helping those easiest to help, ignoring others.
- "Reclassify" – Altering classifications to meet targets.
- "Make stuff up" – In extreme cases, fabricating data.
Given these issues, how do we act on the realisation that OBPM doesn't work? What was once considered 'best practice' in performance management has proven flawed.
Firstly, while we still include specific outcomes and targets in our grant agreements, we treat these flexibly, accommodating change requests from partners. Payments are not tied strictly to achieving specific outcomes, especially if new information indicates these outcomes are no longer relevant.
Secondly, we aim to eliminate targets and logical frameworks altogether, focusing instead on collaboratively formulated outcomes with our partners. These outcomes serve as indicators of desirable changes, not as control tools. They articulate what we believe needs to change to move towards our mission, allowing for adaptability and learning.
To avoid losing context, we base these outcomes on existing evidence and theories, formulating them broadly to allow for local specificity and ongoing refinement. These outcomes are tools for shared learning and exploration:
- Do we see signs of change?
- Is the change desirable?
- What mechanisms and contexts are at play?
- Are there consistent patterns or unexpected pathways?
Our goal is to transition from outcome-based performance management to outcome-based learning.
There remains a desire to assess the performance of our grants. We're testing an instrument that assesses grants along six dimensions, such as contextual fit and youth engagement; the dimensions may evolve as we refine the instrument. The aim is to assess our collaborative efforts rather than to control partners, though we acknowledge the power dynamics inherent in any such assessment.
(GPT helped me edit this post)
The Paper Museum
Here is a quote from Nora Bateson that I picked up on LinkedIn, but I cannot pinpoint the original post.
The kind of change that we're looking for, should not be familiar.
Why have I added this to my Paper Museum? Mainly because it is an important warning for people working in change and transformation. We often think we can imagine what the change should look like and how things should be different. The danger is that if we create something that feels familiar, even though it gives the impression of real change, it will inevitably be shallow rather than truly transformational. What comes after a real transformation is necessarily unfamiliar and new.