How To Gather Evidence for a Program Learning Workshop
Florence Randari
Empowering development teams to drive sustainable change through Learning and Adaptive Management | Founder: The Learn Adapt Manage (LAM) Network
Welcome to my newsletter. I'll share biweekly insights on how you can generate, translate, or use evidence for program learning and adaptive management. To get future LinkedIn issues, sign up by clicking the Subscribe button in the upper right corner.
Most of us are open to using data and evidence for program learning and adaptive management. However, some people need help identifying suitable sources and types of evidence that can be used during program learning workshops. In this article, I share tips to help curate evidence for a program learning workshop.
What is a Program Learning Workshop?
To avoid doubt, I will start by defining a program learning workshop.
A program learning workshop is a meeting where a program implementation team comes together to review their work and the context of operation to adapt or scale interventions based on available evidence.
Learning and adaptive management happen throughout the program's design and implementation cycle. The frequency of the workshops varies with the length of implementation, but most programs conduct reviews quarterly or semi-annually. Now that we are all on the same page, let's discuss the types and sources of evidence that can effectively support learning during the workshop.
What are the Suitable Evidence Types and Sources for Program Learning?
Before I share the list, note that a solid monitoring system is a prerequisite for effective program learning and adaptive management. Most programs, whether to meet a donor or an organizational requirement, will already have a basic monitoring and evaluation system. If your program lacks a theory of change, a logical framework, an indicator tracking system, or anything close to these, I recommend setting that up first.
1) Indicator monitoring data. Depending on how frequently data is collected, most programs will have output- and outcome-level indicator data available at least every quarter. The key is to ensure that the indicator monitoring data is disaggregated by relevant variables to promote helpful conversations during the workshop. For example, if a program is tracking the number of farmers trained in agricultural practices, reporting that 500/800 farmers were trained helps the team think about why it did not meet the aggregated target, but reporting that 100/400 female and 400/400 male farmers were trained immediately brings a gender perspective into the discussion (see the short illustration after this list).
2) Field monitoring/visit reports. In addition to the frontline staff who frequently engage with program participants, other staff, e.g., the monitoring and evaluation team, technical managers, and the program leadership team, can generate valuable evidence through field visits. The key is to have a standardized way of documenting evidence during these visits. For example, a team can put together a guidance note or report template with questions that individuals reflect on during their interactions with participants.
3) Technical checklists/assessments. Pre- and post-training assessment results can help a team identify technical issues during implementation. You can learn more here about how my program uses technical checklists to assess whether the immediate results of an intervention are met. Alongside indicator monitoring data, this data can be used to review the performance of specific interventions.
4) Context monitoring data. Context indicators measure conditions relevant to the program's performance, e.g., economic, social, and political conditions, as well as the critical assumptions in the program's theory of change. A program should implement a mechanism for context monitoring, which can include relying on context data from other sources such as World Bank data, research institutions, etc.
5) Evaluation data. Teams can rely on evaluation data if a program conducts an evaluation during the design or implementation phase, e.g., a baseline or midline. In addition to these more common types of evaluations, a program can, as part of its #CLA strategy, schedule and budget for evaluations specifically for learning and adaptive management purposes.
6) Research data. Internal or external research data can be valuable in providing both intervention performance and context information. For programs with a research partner in the consortium, aligning the research questions to the program's needs is essential.
7) Program participants. The participant's/beneficiary's voice is crucial. As a program decides whom to include, it is essential to consider the different participant groups and how each group might experience the effects of the interventions differently. For instance, if a program works with farmers in a specific location, the experience of a male farmer versus a female farmer might be significantly different. Some programs prefer to interview program participants and use that information during the learning workshop, while others have at least one participant in the room during the workshop. Listening directly to participants describe how interventions affect them is the most effective way of understanding the impact of your work.
8) Other program stakeholders. In addition to program participants or beneficiaries, other individuals or groups are directly or indirectly affected by the program's interventions. These include regional implementation partners, local government officials, community elders, religious leaders, private sector actors, etc. A program should consider data from any stakeholder that might affect, or be affected by, its interventions, positively or negatively.
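To make the disaggregation point in item 1 concrete, here is a minimal sketch in Python using the illustrative numbers from that example. The data structure and group labels are assumptions for illustration only; your own monitoring system will have its own formats and tools.

```python
# Minimal sketch (illustrative numbers from item 1): tabulate a training indicator
# by disaggregation group so the workshop discussion starts from group-level
# achievement, not just the aggregate figure.

indicator = {
    "female farmers": {"trained": 100, "target": 400},
    "male farmers":   {"trained": 400, "target": 400},
}

# Aggregate achievement against the overall target.
total_trained = sum(group["trained"] for group in indicator.values())
total_target = sum(group["target"] for group in indicator.values())
print(f"Overall: {total_trained}/{total_target} farmers trained "
      f"({total_trained / total_target:.0%} of target)")

# Achievement for each disaggregation group.
for name, group in indicator.items():
    print(f"  {name}: {group['trained']}/{group['target']} "
          f"({group['trained'] / group['target']:.0%} of target)")
```

Run as written, this prints an overall achievement of roughly 62% of the target alongside 25% for female farmers and 100% for male farmers, which is exactly the kind of contrast that can open up a gender-focused conversation during the workshop.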
Thank you for reading! Please leave your feedback in the comments section and share this biweekly newsletter with others you think would benefit.
Would you like to learn more about program learning workshops? Please read my previous LinkedIn article, 'How to lead a successful multi-sectoral program learning workshop.'
Florence Randari is a Monitoring, Evaluation, and Learning (MEL) professional who seeks to provide evidence-based guidance to international development actors so that they can achieve sustainable development. She is also a Collaboration, Learning, and Adaptation (CLA) practitioner seeking to empower all implementers with the knowledge and skills required to apply CLA principles in their day-to-day work.