Flipping the Script on MEL: Why Learning Must Come First
In development, philanthropy, and social change, being “evidence-driven” has become a mantra. Programmes are judged not by their ability to create tangible change, but by how rigorously they generate data on pre-defined indicators meant to prove that a desired change has occurred. Monitoring, Evaluation, and Learning (MEL) frameworks reflect this, treating learning as an afterthought—something that happens only after meticulous monitoring and rigorous evaluation.
This approach is backwards. In complex social change, learning must come first. Instead of locking ourselves into rigid metrics and performance indicators, we should start with deep curiosity about what is actually happening and how we can influence it in a meaningful way. Only then should we decide what to monitor and how to evaluate progress.
Flipping MEL into Learning, Monitoring, and Evaluation (LME) recognises that change is emergent and unpredictable. Rather than chasing exhaustive data, we focus on insights that inform decisions. Instead of proving impact, we prioritise adaptive change. Monitoring and evaluation move away from compliance-driven exercises and become tools that support learning and adaptation.
The Evidence Trap: When MEL Becomes Self-Referential
For decades, development (and management more broadly) has operated on the assumption that only what can be measured can be managed. The logic is simple: define indicators, track them meticulously, and use rigorous evaluation to assess impact. This works well for predictable, technical challenges—rolling out vaccination campaigns, building infrastructure—where success can be clearly quantified.
In complex systems, however, rigid indicators can be more harmful than helpful. They create a false sense of precision and distort behaviour as people optimise for what's being measured rather than what actually matters. They limit adaptation, making it difficult to adjust course when reality unfolds differently than expected.
Complex social change isn't predictable or linear. The challenges we face—climate change, poverty reduction, economic transformation, inclusive governance—are messy, dynamic, and emergent. Success isn’t about hitting predefined metrics but about shifting conditions and relationships in ways that can’t always be forecast in advance.
I am increasingly under the impression that, rather than adapting to this complexity, the Monitoring & Evaluation (M&E) industry has become self-referential. It measures its own effectiveness not by how much change it enables, but by how much “better” evidence it produces. The race to refine methodologies, improve indicators, and generate ever more precise numbers has become an end in itself rather than a means to better decisions. This is true even though much of the field has adopted the language of systems and complexity.
Instead of asking whether an intervention is working, much of M&E today is focused on proving that it has worked—according to predefined criteria. Practitioners spend more time generating numbers than using them to reflect, adapt, and improve. Learning is reduced to a compliance exercise rather than a genuine process of discovery. Complexity gets flattened into simplistic, linear causal chains, and success is reduced to whatever can be quantified along these chains.
Beyond Indicators: Seeing Patterns Instead of Chasing Numbers
If rigid indicators don’t work well in complex systems, what should we do instead? Real change often emerges in unexpected ways, so we need tools that help us detect patterns and respond to them.
For example, outcome harvesting helps us identify what actually changed and work backward to understand why. Typologies allow us to categorise changes into meaningful clusters. Causal hotspots help us explore the conditions under which specific interventions work, allowing for more generalisable insights.
The goal is not to eliminate measurement but to make it meaningful—to shift from rigid tracking to dynamic sense-making, freeing ourselves from measurement constraints and creating space for genuine learning and adaptation.
The Myth of Neutral Data: Recognising Bias in Measurement
A core assumption behind data-driven decision-making is that numbers don’t lie. But this is an illusion.
As this article highlights, data is never neutral. What gets measured, how it is framed, and what is left out all shape the story data tells. We tend to measure what is easy to quantify rather than what actually matters, creating an illusion of objectivity while reinforcing hidden biases.
The presence of data can create a false sense of certainty, making us overconfident in conclusions that may be incomplete or misleading. Worse, when numbers drive decision-making without critical reflection, they can lead us to optimise for measurement rather than meaningful change.
Instead of treating data as absolute truth, we must see it as one facet of reality. We need to be careful with data collected by others—what did they choose to ignore? Learning requires more than numbers—it demands context, critical thinking, and an openness to the unexpected.
The Role of Human Judgement in a Learning-Driven Approach
If data is never neutral and predefined indicators can distort reality, what do we rely on instead? The answer isn’t to abandon measurement but to recognise the essential role of sense-making, joint deliberation, and human judgement.
In complex systems, change is rarely obvious or linear. No dataset, no matter how sophisticated, can tell us exactly what is happening or what to do next. Practitioners, communities, and those closest to the ground often have deep, experience-based insights that no set of indicators can fully capture.
Deliberating over diverse forms of evidence—including lived experience—helps us see different facets of reality and reflect not just on the data itself, but on how we relate to it and to each other. This relational approach lays the ground for a joint judgement call on what to do next.
Rather than seeking definitive answers, we should frame our interpretations as hypotheses to be tested. What patterns are emerging? What might explain them? What signals should we watch for next? This shift from certainty to inquiry creates space for adaptation, creativity, and responsiveness.
Conclusion: Embracing Good Enough Monitoring & Utilisation-Focused Evaluation
MEL has too often become an industry that serves itself rather than the work it is meant to support. We refine methodologies and collect more rigorous data, not because it improves decision-making, but because the system demands it.
By putting learning first, we break free from this trap. We move from rigid measurement to sense-making, adaptation, and joint deliberation based on diverse types of evidence and including different perspectives. We embrace good enough monitoring and utilisation-focused evaluation—not for compliance, but for learning.
This isn’t about rejecting evidence. It’s about making evidence work for us. When we replace control with curiosity, we open the door to real learning, real adaptation, and real change.
What do you think? How does this resonate with your experience in monitoring, evaluation, and learning? Have you seen examples where shifting the focus from proving to improving made a difference? What strategies have you used to prioritise learning over rigid measurement?
Let’s continue the conversation—share your thoughts in the comments or by replying to this email!