Monitoring progress in a local government technical assistance programme: Some notes from experience
PDG Development Consultants
PDG is a public sector consulting firm that supports decision-making in the public interest
Author: Cara Hartley - PDG Senior Consultant
This piece describes the evolution of an approach to monitoring whether technical assistance projects to city governments are on track to deliver the intended benefits. City environments are complex, and lasting benefit from technical assistance requires that any product, decision-making tool, model, or research findings be adopted into the city institution. To know whether we were progressing towards this goal, we found it necessary to move beyond the conventional monitoring of pre-determined, standardised outputs and outcomes. What follows is our emerging practice of identifying and measuring indicators of progress on a quarter-by-quarter basis. It is aimed at fellow M&E practitioners as well as funders of technical assistance.
Introduction
Since I started working full-time in monitoring and evaluation (M&E) around 2014, most of my M&E experience has been external. I have mainly served as an external evaluator, or as the designer of M&E frameworks for my clients to apply.
After working mainly externally, I now have an internal M&E role. In November 2020, I became the M&E lead for a three-year programme consisting of a package of technical assistance projects to several city governments. As M&E lead, I get to design as well as implement the M&E tools – including monitoring tools.
In this role, I work with several technical assistance project teams across several cities. Together we make sense of what they are trying to achieve – and whether they are on track to achieve it.
This brief post shares one example from this experience, which we have dubbed “short-term progress monitoring”.
Moving beyond standard output monitoring
Cities are complex environments, and city governments are complex institutions; it takes wisdom and savvy to navigate them effectively. In addition, it’s been a particularly challenging three years for South African cities: COVID-19 struck mere months into this programme’s implementation. Subsequently our cities have navigated the July 2021 unrest; the uncertainty of the local government elections; and the recent flooding in KwaZulu-Natal. All these events affect the institutional environment within which we are seeking to support changes in policy, planning and management.
Interventions in complex environments require more, not less, monitoring. Consider the difference between trying to run 3 000 metres on an athletics track and trying to run 3 000 metres along an unknown footpath in a nature reserve: on the track the route is fixed and marked, so you simply count laps; on the footpath you have to keep checking your bearings to confirm you are still headed in the right direction.
In the world of M&E, it is well known that your understanding of the intervention will shift over time. It is therefore recommended that theories of change are updated regularly (e.g. every year) and that course corrections are made to the intervention accordingly. I had not previously considered what this shifting understanding would mean for monitoring.
Thinking about “institutionalisation”
On a quarterly basis, we needed to report to the funder on whether our projects were “on track”. The mere accumulation of standard outputs – such as the number of workshops held or reports produced – could demonstrate whether things were going according to our original plan. But it could not really answer the question of whether the city was likely to benefit from our work in a lasting way. In a complex environment, it is quite possible to deliver predetermined outputs (meeting contractual obligations), but for them to sit on the shelf, and for this only to become apparent through an annual survey or by city counterparts volunteering the information. We could not wait for this; we needed a more regular indication of whether our work was “landing” in the city.
Over time, our programme management team started using the language of “institutionalisation”. We recognised the need to “institutionalise” our work if it was to have lasting benefit for the cities. By this we meant integrating the product, decision-making tool, model, research findings, newly established committee, etc., into the city in such a way that it remains in use. This was closely related to our growing emphasis on “value for money”, and particularly its “effectiveness” dimension.
We were already drawing on an implicit understanding of what was needed to “institutionalise” our work. When discussing project risks, reporting on project progress, or reflecting on whether we were delivering value to the cities, we would often touch on indications that the work was still relevant; that it was synergising with internal city processes and would slot into these well; and we would reflect on whether the appropriate stakeholders were engaging with the processes. However, this was unsystematic and typically retrospective. We relied heavily on what the project leads chose to highlight from quarter to quarter. Some important processes might be stalling, and project teams might be trying their best to adapt to changing circumstances, but because these issues were not linked to any of our predefined outputs, they were not necessarily picked up unless they became severe enough to be flagged in the risk register or to necessitate a change to contractual commitments.
Emerging adaptive monitoring approach
It became clear that we needed a more systematic means of monitoring whether projects were headed in the right direction for institutionalisation. We therefore took the following steps.
The characteristics of our short-term progress monitoring indicators are:
Reflections
We started introducing these M&E enhancements towards the end of 2021, and our approach is still evolving from quarter to quarter. Unfortunately, just as we are getting into a good working rhythm, the programme will conclude at the end of 2022. However, there have been clear benefits, and the experience has affirmed the value of remaining open to changing our M&E approach. These are my key take-aways so far:
Conclusion
The introduction of short-term progress monitoring indicators has notably strengthened the meaningfulness of our M&E work. We are able to show progress toward institutionalisation in a structured way. If I have the opportunity to take on similar work in future, I would include some form of short-term progress monitoring in the M&E plan from the outset, and ensure that resources are set aside to implement it.
Much remains to be refined:
As we move toward the end of the programme, many products have already been handed over. We hope to check whether they are being used as intended, but also to take more of an “outcome mapping” approach. We are already seeing that we could not have predicted all the ways in which our products would be used; the role of the M&E portfolio in the final stretch will therefore not only be one of checking for what was expected, but also one of exploring and observing how the system has interacted with these new elements, regardless of what was intended.
Appendix: Client engagement scale
One short-term progress monitoring approach that is emerging as useful across several projects is the monitoring of client engagement. For a technical assistance product (such as a report, model, or framework) to be used, it needs to be understood and bought into by the intended users. In the public sector, the official adoption of the product by the relevant authorising structure is also crucial – and this requires them to see the value and fit of the product within their broader strategy. To check in on this, we developed the following approach:
When setting up the indicators, we identify the most critical 3-4 units, decision-making structures, or senior managers in the city who need to be engaging with the technical assistance.
At the start of each quarter, we identify the level of engagement that is required from each unit (in the upcoming quarter) in order to ensure the uptake and institutionalisation of the product once we exit.
At the end of each quarter, we assign a rating and motivate for that rating. If a higher or lower level of engagement is needed for the coming quarter, we adjust the target.
Client engagement rating scale
Example
If we are developing a software solution to support better maintenance of the facilities at public parks (hypothetical example – not a real project), we may identify the unit responsible for the management of these facilities and the IT unit. In addition, we may have a senior management team who need to endorse the work and consider whether it should be replicated elsewhere in the city. For each of these, we set a target on the rating scale and then measure their actual level of engagement at the end of the quarter, as sketched below.
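To make this concrete, the sketch below shows one way such a quarterly record could be structured in code. It is purely illustrative: the engagement levels, stakeholder names and ratings are hypothetical, and our actual rating scale and tracking templates may differ.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical engagement scale for illustration only;
# the programme's actual rating scale may use different levels.
class Engagement(IntEnum):
    UNAWARE = 0        # not aware of the work
    INFORMED = 1       # receives updates but is not actively involved
    CONSULTED = 2      # provides input when asked
    COLLABORATING = 3  # actively co-develops the product
    OWNING = 4         # drives the work and plans for its continued use

@dataclass
class QuarterlyEngagementRecord:
    stakeholder: str    # unit, decision-making structure, or senior manager
    target: Engagement  # level needed this quarter to support institutionalisation
    actual: Engagement  # rating assigned at the end of the quarter
    motivation: str     # evidence supporting the rating

    def on_track(self) -> bool:
        return self.actual >= self.target

# Hypothetical ratings for the public-parks example above.
records = [
    QuarterlyEngagementRecord(
        "Facilities management unit", Engagement.COLLABORATING, Engagement.CONSULTED,
        "Attended design workshops but has not yet tested the tool."),
    QuarterlyEngagementRecord(
        "IT unit", Engagement.CONSULTED, Engagement.CONSULTED,
        "Reviewed hosting and security requirements."),
    QuarterlyEngagementRecord(
        "Senior management team", Engagement.INFORMED, Engagement.UNAWARE,
        "Briefing deferred from the last two management meetings."),
]

for record in records:
    status = "on track" if record.on_track() else "needs attention"
    print(f"{record.stakeholder}: target {record.target.name}, "
          f"actual {record.actual.name} ({status})")
```

Presented this way, each quarter's report can show at a glance which stakeholders are engaging at the level the project needs, while the motivation field keeps the evidence behind each rating.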
In a recent quarterly report, this approach brought the low level of engagement of a specific senior manager into sharp focus. We were able to report on it with clear evidence (rather than anecdotes) and a clear articulation of why this low level of engagement was a concern in light of our intended pathway to institutionalisation. This could then prompt a conversation with the funder about how it might be addressed.