Organisational level monitoring, evaluation and learning – what works and what doesn’t
Bond’s working groups are made up of individuals from across the international development sector, who come together to network, learn and share with their peers.
The Monitoring, Evaluation and Learning (MEL) group is one of the largest working groups, with over 450 members, ranging from dedicated MEL professionals to those with a more peripheral involvement or interest. The group meets quarterly online or in person to provide space and opportunity for learning, networking, and sharing best practice.
In June, the MEL group held a face-to-face event, hosted by The King’s Trust International and attended by 22 members from 18 different organisations. The objective was to discuss experiences of designing, conducting, validating, and synthesising MEL activities at organisational level. We heard from Tiphaine Valois (Ethical Tea Partnership) and Anna Downie (Frontline AIDS), who shared valuable insights into their organisations’ practices, innovations, and challenges through presentations and Q&A. Discussions covered whether participants’ current efforts were geared towards improving their impact or merely proving it, and a brief World Café explored organisational learning cultures, data management tips, and alternative methodologies such as outcome mapping and harvesting.
These are issues with no universal “right answers”, but the opportunity to exchange experiences was invaluable for understanding how different organisations have navigated them.
Here are the highlights from our discussions:
Why does organisational MEL exist, and why is it difficult?
Within individual programmes, the primary purpose of MEL is to allow all stakeholders to know whether the programme is doing the right things (evaluation) and whether it is doing them right (monitoring) – and so, ultimately, whether it is achieving its aims. At organisational level, the purpose is similar: to explore progress towards a higher-level ambition, typically articulated in a mission statement or strategic vision.
However, these broader strategic ambitions rarely have the geographic constraints, thematic parameters, or time limits that bound individual programmes. A single organisation may need to bring together information on the number of people receiving agricultural equipment in sub-Saharan Africa, case studies from an after-school education programme in southeast Asia, statistical change in the maternal mortality rate in a hospital in Central America, and the extent of a governmental response to a policy-advocacy initiative in Europe.
The audience for MEL also shifts at organisational level, with senior management or colleagues in fundraising and communications often prioritising good-news stories over nuanced examples of learning. Progress towards a strategic vision tends to be featured in public-facing annual reports, which are read by audiences less familiar with the sector than the typical recipients of programme reports, and which have much less space for describing context.
One of the key challenges in managing information across multiple programmes and contexts is therefore finding the right balance between granularity and accessibility.
What does (or should) good organisational MEL look like?
From our discussions, three features emerged as common to most organisations’ interpretations of “good practice”:
1. Focusing leadership and culture on impact. Organisational MEL practitioners must ensure that every level of our organisations remains focused on the transformational changes we are aiming to achieve in people’s lives.
It is vital that boards and executive teams do not perceive data from programmes as merely a component of some “bigger picture”. Instead, other KPIs and capacity metrics should be recognised as pathways to fulfilling our core mission, which is our shared corporate raison d’être. It can be tempting for those based in HQs to cite resource constraints or donor requirements as unfortunate “realities” that need attention, but a smallholder farmer losing a day’s livelihood to attend an ineffective programme event is just as real, and just as significant.
2. Balancing nuance and clarity. When designing organisational systems and frameworks, it is essential to determine what data to simplify and what to amplify, and for whom. The goal is to communicate the most meaningful data effectively, without needing a guidebook to explain it.
Many participants agreed that, although nearly a decade old, Nigel Simister’s paper Summarising portfolio change: results frameworks at organisational level (INTRAC, 2016) remains the go-to reference here.
3. Responding to localisation. A key point of discussion was how an organisation’s structure and history shape or constrain its organisational results framework.
Much of current practice has been about getting the right data flowing “up” from programmes to decision-makers, so that decisions can more closely reflect programme realities. That’s not a bad start, but ultimately we should be advocating for decisions to be made closer to programmes, so that they can be based on lived experience rather than curated data – however carefully packaged. Of course, as INGOs evolve from traditional, centralised models to ones that distribute power across multiple centres, the complexity of tracking and managing results – both what is measured and whose outcomes count – will intensify.
How can we get there?
Group members identified several strategies, including:
- Engage constructively with colleagues. Meet people where they are, communicate through stories rather than technicalities, and focus on what truly matters to them.
- Invest time. Be prepared to spend considerable time, potentially months or even years, aligning expectations and defining parameters for new systems or frameworks.
- Understand the real needs behind requests. Even if stakeholders’ demands of MEL seem excessive, they can still benefit from receiving something “a bit better” than what they currently have. If you think people are focusing on the wrong metrics, show them something better rather than just pointing out flaws. Remember that when people say they are “evidence driven”, they usually – even in the best cases – mean they want data to help them optimise actions they’ve already decided to take.
- Build a network of champions. Cultivate a community of practice that promotes active reflection, adaptive management, and meaningful metrics. Identify and train individuals from across teams and functions to act as advocates. Including influential figures, such as your CEO, will certainly help.
- Promote impact-focused dialogue. Make use of existing organisational meetings and communication channels to talk about impact. If colleagues announce success stories that don’t go beyond activities or outputs, prompt them to go further. Repeatedly asking “so what?” will encourage answers that relate to your organisation’s goal. You may even wish to go further and link organisational goals to external and cross-sectoral ambitions, such as the Sustainable Development Goals.
- Embrace methodologies that work for your organisation. Aggregating and visualising results from across countries and themes often seems easiest when data is quantitative, and boards are often most comfortable with numerical targets. However, you need to be careful about adding apples and oranges. As at programme level, some of the richest and most compelling data is qualitative, and it should not all be lost in the drive for simplicity.
Methodologies such as Outcome Harvesting and Most Significant Change have the potential to work at organisational level, despite coming with their own challenges – which often relate to the time needed to gather, validate, and summarise outcomes. For example, the Ethical Tea Partnership maintain quantitative data aggregation as a key component of their approach, but supplement it with outcome harvesting and framing indicators to create richer, more contextualised findings. They then use sensemaking workshops to explore harvested outcomes before compiling their global impact dashboards (a minimal sketch of this kind of like-for-like aggregation appears below).
Outcome Mapping was also discussed, although it was noted that at programme level this requires careful planning before the intervention begins (unlike the previous two methodologies), so it would likewise need advance consideration at organisational level. It’s also important to ensure that everyone understands the approach – we heard of one organisation whose long list of progress markers had been interpreted as indicators by some colleagues, essentially giving them 130 KPIs.
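For those who manage the underlying data, the “apples and oranges” caution can be made concrete. The sketch below is purely illustrative – the record structure, indicator names, and figures are invented, not any organisation’s actual framework. It sums quantitative results only where both indicator and unit match, and keeps harvested qualitative outcomes alongside the totals rather than converting them into scores.

```python
from collections import defaultdict
from dataclasses import dataclass

# Hypothetical programme result records; names, units and values are illustrative only.
@dataclass
class Result:
    programme: str
    indicator: str   # e.g. "farmers_trained"
    unit: str        # e.g. "people" -- only like units are ever summed
    value: float

@dataclass
class HarvestedOutcome:
    programme: str
    description: str  # qualitative outcome kept verbatim, not turned into a number

def aggregate(results):
    """Sum quantitative results only where indicator AND unit match,
    so apples are never added to oranges."""
    totals = defaultdict(float)
    for r in results:
        totals[(r.indicator, r.unit)] += r.value
    return dict(totals)

results = [
    Result("Programme A", "farmers_trained", "people", 1200),
    Result("Programme B", "farmers_trained", "people", 800),
    Result("Programme C", "policy_changes", "count", 2),  # different unit: kept separate
]

outcomes = [
    HarvestedOutcome("Programme C", "Ministry adopted revised school-feeding guidelines."),
]

print(aggregate(results))
# {('farmers_trained', 'people'): 2000.0, ('policy_changes', 'count'): 2.0}
for o in outcomes:
    print(f"[{o.programme}] {o.description}")
```

The design choice here is simply that qualitative outcomes travel alongside, rather than inside, the aggregated totals – preserving nuance while still giving boards a clear headline figure.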
Thank you to Diletta Morinello of King’s Trust International for hosting the event. The Bond MEL group is currently chaired by Natalya Williams (World Vision), Daniel Burwood (Integrity Action), and Alastair Spray (INTRAC), with a steering committee comprising Veronica Fletcher (All We Can), Paska Moore (Salvation Army), Angela Keenan (Link Education), and Lucia Solda (SD Direct).
You can join the MEL group page to make sure you don’t miss out on the next event. On the group page, you can also access MEL resources and presentations from past events.