Principles for Measurement, Evaluation and Learning (MEL): Round 1 reflections
A few weeks ago, I posted a draft set of principles for Measurement, Evaluation and Learning (MEL). I was staggered by the amount of engagement and comments – thanks so much to everyone who contributed.
My main reflections:
A key point is that good MEL principles should be developed for your context, but I harbour a hope that this set may be a useful starter pack!
I’m continuing to develop the principles through engagement with groups in the MEL field to incorporate additional perspectives and expertise. I’ll share the next version with all of you again so we can continue our conversations.
Here are the main themes that I’ve been digesting and looking further into:
1. Mutual accountability
Most people agree that MEL should prioritise learning and adaptation, but many added that accountability should get a mention. Pablo Vidueira suggested the idea of mutual accountability, and Yonas Dare and others suggested the concept of downward accountability to the community/stakeholders, as well as transparency. Accountability for learning was also mentioned.
2. Drawing from other similar sets of principles
Many suggested that the development of MEL principles could usefully draw from other sets of principles, and several were suggested in the comments.
3. Definition of MEL
This made me reflect on how MEL is different from evaluation, monitoring & evaluation, empowerment evaluation, results-based accountability and so forth. Perhaps it's worth defining MEL more fully, as suggested by Donna Loveridge.
4. What are principles?
Several people questioned the use of principles and how they differ from criteria or standards. Kate McKegg and others suggested that the MEL principles be phrased according to the GUIDE framework offered by Michael Patton in Principles-Focused Evaluation. I think this is an excellent idea and am adopting that framing.
5. Decolonising MEL / locally led practices
Michael Patton called on us to be reflexive and understand that evaluation processes and results are affected by who the evaluator is. Anna Gibert suggested an additional principle about decolonisation and locally led approaches.
There were also comments about how MEL should be done BY and AS First Nations people, as advocated by Nan Wehipeihana, incorporating their ways of knowing such as Kaupapa Māori evaluation. A great resource from Daniel Ticehurst: https://evalpartners.org/sites/default/files/10Qs4evaluatorsUSletter.pdf.
Data sovereignty as a concept was also explored in the comments.
6. Inclusion
There were comments about more explicitly calling out an inclusive approach. Kate Wilson suggested a more explicit principle around gender equality and social inclusion/diversity, including measuring "do no harm", disaggregating data and outcomes, using gender and social analysis, and including diverse voices.
7. Impact driven / outcomes focused
These dual terms in the principles were a bit confusing for people (based on the reflections and questions in the comments). I am exploring how to navigate this, such as dropping the term "impact" altogether, or pairing these terms with a more detailed explanation. Others in the comments noted that emergent outcomes should be included.
The comments on your last post were fascinating, thank you for initiating the conversation. We learned a lot from you and your audience and appreciate this reflection document.
Thoughtful Monitoring | Reluctant Evaluator | Organisational Learning | Programme Design | Partnerships | Indigenous Evaluation | Sustainable Agriculture | Livelihoods | Private Sector Development |
2y
Dear Jess Dart, many thanks for pulling together the responses to your draft "MEL" principles. On reading through your reflections and emerging themes, two things struck me. 1. It is perhaps ironic that systems supposedly designed to learn from experience have been so slow or reluctant to learn from their own. It is evident that some of these themes repeat those from the past: Rapid Rural Appraisal in the mid-1980s, with its emphasis on significance, though not as a statistician would define it; being as accountable to aid's ultimate clients, who legitimise aid as much as if not more than those who fund it, the rationale for "beneficiary assessments" in the late 1980s, with its emphasis on listening; and aid workers learning about indigenous evaluation through Participatory and Community-Based M&E in the 1990s. 2. Adding letters to M&E, such as L, A & R, appears to be more for affect than effect, and reflects the experiences of those who put them there and who consider this either revealing or helpful. I may be in a minority, but adding letters to remind us why we M and E doesn't resolve why we may not. It seems contrived. Don't we need to find out why, beyond an unlettered assertion: "unlike traditional ....."? Thanks again. Daniel
Evaluation and systems change consultancy, research, and training
2y
Thanks for keeping up this interesting discussion, and for sharing your reflections, Jess Dart. Deeply appreciated, and very much looking forward to the next iterations.