Monitoring progress in a local government technical assistance programme: Some notes from experience

Author: Cara Hartley - PDG Senior Consultant

This piece describes the evolution of an approach to monitoring whether technical assistance projects to city governments are on track to deliver the intended benefits. City environments are complex, and lasting benefit from the technical assistance requires the adoption of any product, decision-making tool, model, or research findings into the city institution. To know whether we were progressing towards this goal, we found it necessary to move beyond the conventional monitoring of pre-determined, standardised outputs and outcomes. The piece describes our emerging practice of identifying and measuring indicators of progress on a quarter-by-quarter basis, and is aimed at fellow M&E practitioners as well as funders of technical assistance.

Introduction

Since I started working full-time in monitoring and evaluation (M&E) around 2014, most of my M&E experience has been external. I have mainly served as an external evaluator, or as the designer of M&E frameworks for application by my clients.

After working mainly externally, I now have an internal M&E role. In November 2020, I became the M&E lead for a three-year programme consisting of a package of technical assistance projects to several city governments. As M&E lead, I get to design as well as implement the M&E tools – including monitoring tools.

In this role, I work with several technical assistance project teams across several cities. Together we make sense of what they are trying to achieve – and whether they are on track to achieve it.

This brief post shares one example from this experience, which we have dubbed “short-term progress monitoring”.

Moving beyond standard output monitoring

Cities are complex environments. Any city government is a complex institution, and it takes wisdom and savvy to navigate it effectively. In addition, it’s been a particularly challenging three years for South African cities: COVID-19 struck mere months into this programme’s implementation. Subsequently our cities have navigated the July 2021 unrest; the uncertainty of the local government elections; and the recent flooding in KwaZulu-Natal. All these events impact on the institutional environment within which we are seeking to support changes in policy, planning and management.

Interventions in complex environments require more, not less, monitoring. Consider the difference between trying to run 3,000 metres on an athletics track and trying to run 3,000 metres along an unknown footpath in a nature reserve:

  • On an athletics track, your environment is relatively contained and predictable. Bends in the track can be seen from far away, and the surface is level. You pay attention mainly to factors such as your pace and levels of fatigue, and perhaps the position of other runners on the track.
  • On a nature trail, if you wish to achieve a relatively good time over your 3,000 metres, you need to be looking out for unexpected twists and turns; be mindful of uneven terrain; and perhaps even look out for hikers, bicycles, or animals. Depending on the condition of the path, you may need to stop to clear plant material out of your way or circumvent puddles. You need to be monitoring far more variables, constantly, and adjusting your path accordingly.

In the world of M&E, it is well known that your understanding of the intervention will shift over time. It is therefore recommended that theories of change are updated regularly (e.g. every year) and that course corrections are made to the intervention accordingly. I had not previously considered what this shifting understanding would mean for monitoring.

Thinking about “institutionalisation”

On a quarterly basis, we needed to report to the funder on whether our projects were “on track”. The mere accumulation of standard outputs – such as the number of workshops held or reports produced – could demonstrate whether things were going according to our original plan. But it could not really answer the question of whether the city was likely to benefit from our work in a lasting way. In a complex environment, it is quite possible to deliver predetermined outputs (meeting contractual obligations), but for them to sit on the shelf, and for this only to become apparent through an annual survey or by city counterparts volunteering the information. We could not wait for this; we needed a more regular indication of whether our work was “landing” in the city.

Over time, our programme management team started using the language of “institutionalisation”. We recognised the need to “institutionalise” our work, if it was to have lasting benefit for the cities. By this we meant integrating the product, decision-making tool, model, research findings, newly established committee, etc., into the city in such a way that it remains in use. This was closely related to our growing emphasis on “value for money”, and particularly the “effectiveness” dimension of this.

We were already drawing on an implicit understanding of what was needed to “institutionalise” our work. When discussing project risks, reporting on project progress, or reflecting on whether we were delivering value to the cities, we would often touch on indications that the work was still relevant; that it was synergising with internal city processes and would slot into these well; and we would reflect on whether the appropriate stakeholders were engaging with the processes. However, this was unsystematic and typically retrospective. We relied heavily on what the project leads chose to highlight from quarter to quarter. Some important processes might be stalling, and project teams might be trying their best to adapt to changing circumstances, but because these developments were not linked to any of our predefined outputs, they were not necessarily picked up unless they were severe enough to be flagged in the risk register or to necessitate a change in contractual commitments.

Emerging adaptive monitoring approach

It therefore became clear that we needed a more systematic means of monitoring whether projects were headed in the right direction for institutionalisation. We took the following steps:

  1. We developed project-specific theories of change for each project in our programme (previously there was only a generic, cross-cutting theory of change).
  2. We identified the most important milestones in the project (including but not limited to the predefined outputs) and how these are intended to contribute to the theory of change.
  3. We introduced short-term progress monitoring indicators linked to these milestones.

Our short-term progress monitoring indicators have the following characteristics (a brief, hypothetical sketch of how one might be recorded follows this list):

  • They are very specific and unique to the project. For instance, they can include specific managers (by name) who need to take specific steps.
  • They are never at the output level. Because they need to give us information about how our project is “landing” in the city institution, we never have full control over them. They tend to sit at the level of immediate outcome, causal assumption, external assumption, or risk.
  • They can be designed for either once-off or repeat monitoring. For instance, a new policy needs ongoing close engagement by our city counterparts for the 18 months of its development – so we will monitor their engagement on a quarterly basis. But that new policy then needs to be endorsed only once by the city council – so we will monitor / track whether that takes place in the intended quarter and stop tracking it once it does.
  • The format of the indicator can vary. The following are common: simple count; rating on a scale; yes/no; percentage. (At the end of this piece, I share one of the rating scales that has proven useful across more than one project.)
  • A quarterly target is attached to each indicator, and we check on the target monthly (soft) and quarterly (hard; with reporting to the funder).
  • Targets are set for the medium term, but constantly reviewed. We seek to define and set targets on indicators for the upcoming 3 to 4 quarters, but there is the opportunity to change them as circumstances change – including the creation of new targets if new areas of work or areas of concern arise.
  • We collect evidence against our indicators, but try to keep the reporting burden minimal. Project leads need to show evidence (such as meeting minutes) substantiating what they report on an indicator each quarter. If there is no non-burdensome means of generating evidence, we may accept a clearly reasoned motivation for the value being assigned. This provides transparency, if not verification.
  • We use our judgment on how widely we share our targets and achievements. We use these indicators mainly to help our programme team (internally) and our funder to understand progress. In some cases, the short-term progress monitoring indicators and targets are shared with our city counterparts as well. In other cases, where we deem them to be a bit sensitive or where we simply don’t have a clear platform for discussion of them, we do not share them.

Reflections

We started introducing these M&E enhancements towards the end of 2021, and our approach is still evolving from quarter to quarter. Unfortunately, just as we are getting into a good working rhythm, the programme will conclude at the end of 2022. However, there have been clear benefits to this approach and it has really affirmed the value of remaining open to change in our M&E approach. These are my key take-aways so far:

  • The achievement of short-term progress targets is much more meaningful than the mere accumulation of predefined outputs over time. Together they tell a fuller story, which can then be supplemented with our other, annual monitoring and evaluation tools. (I have not touched on these other tools here – perhaps the subject for another piece.)
  • This approach allows a much more transparent discussion of project progress. It has strengthened our reporting and made it easier to follow; for instance, our quarterly reports now have more continuity in the goals, milestones and challenges that they cover (rather than an ad hoc sharing of highlights and challenges).
  • Most project leaders find the setting and monitoring of short-term progress indicators to be a useful exercise that helps them reflect on where their work is headed. It certainly also gives me a much richer understanding of their work and its likely outcomes.
  • While each project does have long-term objectives which were committed to at the start, these can seem a bit abstract in a long-term project in a complex environment. It is easier and more concrete for project leaders to commit to short-term indicators (while not losing sight of their long-term objectives as “beacons”).
  • In some projects, the approach appears to sharpen the project leaders’ focus on fostering the institutionalisation of their work – the adage that what gets measured gets done remains true.
  • Because what gets measured, gets done – it is extremely important to have the flexibility to change what we choose to measure if circumstances change.
  • Where many different workstreams run concurrently, it is challenging to decide where to focus our M&E resources.
  • It is worth noting that in terms of what our funder requires from us, these short-term progress monitoring indicators are an add-on. We integrate them into our narrative discussion of progress. In terms of the funder’s framework, one could say that we have taken a more structured approach to monitoring short-term causal assumptions.

Conclusion

The introduction of short-term progress monitoring indicators has notably strengthened the meaningfulness of our M&E work. We are able to show progress toward institutionalisation in a structured way. If I have the opportunity to take on similar work in future, I will include some form of short-term progress monitoring in the M&E plan from the outset, and ensure the resources to implement it.

Much remains to be refined:

  • Whether or not to share these monitoring targets and results with our city counterparts;
  • The appropriate level of granularity at which to track, especially in projects with multiple concurrent workstreams;
  • Whether to track risks in this way, or leave that for the risk register and find some other means of better integrating M&E with risk management;
  • How best to keep tabs on work that has already been handed over but still needs to be fully institutionalised by the client while we move on to other streams of work. Projects differ greatly in the timing of delivery and handover of products, with some only handing over a major product after 2 years and others delivering many independently functioning products in different workstreams over the same period.

As we move toward the end of the project, many products have already been handed over. We hope to check on whether they are being used as intended, but also to take more of an “outcome mapping” approach. We are already seeing that we could not have predicted all the ways in which our products would be used; therefore the role of the M&E portfolio in the final stretch of the project will not only be one of checking for what was expected, but also of exploring and observing how the system interacts with these new elements, regardless of what was intended.

Appendix: Client engagement scale

One of the short-term progress monitoring approaches that is emerging as useful across several projects is the monitoring of client engagement. For a technical assistance product (such as a report, model, or framework) to be used, it needs to be understood and bought into by the intended users. In the public sector, the official adoption of the product by the relevant authorising structure is also crucial – and this requires them to see the value and fit of the product within their broader strategy. To check in on this, we developed the following approach:

When setting up the indicators, we identify the most critical 3-4 units, decision-making structures, or senior managers in the city who need to be engaging with the technical assistance.

At the start of each quarter, we identify the level of engagement that is required from each unit (in the upcoming quarter) in order to ensure the uptake and institutionalisation of the product once we exit.

At the end of each quarter, we assign a rating and motivate for that rating. If a higher or lower level of engagement is needed for the coming quarter, we adjust the target.

Client engagement rating scale

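The scale orders the levels of engagement we expect from a unit, and each quarter we set a target level and then rate actual engagement against it. As a stand-in for the scale itself, here is a minimal Python sketch of how such an ordered scale and the quarter-by-quarter targets and ratings could be recorded. The level names are inferred from the worked example below and are an assumption, not the programme’s official scale.

```python
from dataclasses import dataclass, field
from enum import IntEnum
from typing import Dict, Optional


class Engagement(IntEnum):
    """Hypothetical ordered engagement levels, inferred from the worked
    example below; not the programme's official rating scale."""
    NOT_ENGAGED = 0
    KEPT_UP_TO_DATE = 1
    ENGAGED_AND_RESPONSIVE = 2
    CO_CREATING = 3
    INDEPENDENT_USE = 4
    INDEPENDENT_ENHANCEMENT = 5


@dataclass
class QuarterRecord:
    target: Engagement                   # set at the start of the quarter
    rating: Optional[Engagement] = None  # assigned at the end of the quarter
    motivation: str = ""                 # why this rating was assigned


@dataclass
class ClientUnit:
    """A unit, decision-making structure, or named senior manager."""
    name: str
    quarters: Dict[str, QuarterRecord] = field(default_factory=dict)

    def set_target(self, quarter: str, target: Engagement) -> None:
        self.quarters[quarter] = QuarterRecord(target=target)

    def rate(self, quarter: str, rating: Engagement, motivation: str) -> None:
        record = self.quarters[quarter]
        record.rating, record.motivation = rating, motivation

    def on_track(self, quarter: str) -> Optional[bool]:
        record = self.quarters.get(quarter)
        if record is None or record.rating is None:
            return None
        return record.rating >= record.target
```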

Example

If we are developing a software solution to support better maintenance of the facilities at public parks (hypothetical example – not a real project), we may identify the unit responsible for the management of these facilities and the IT unit. In addition, we may have a senior management team who need to endorse the work and consider whether it should be replicated elsewhere in the city. We set a rating-scale target for each and then measure their level of engagement (a short usage sketch follows this list). We may indicate that:

  • The parks service unit needs to be co-creating the solution, and eventually shift to independent use.
  • The IT unit also needs to be co-creating the solution, and eventually shift beyond independent use to independent enhancement.
  • The senior management team needs to be kept up to date, except at key moments where they need to be engaged and responsive.
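Building on the illustrative classes sketched above (again, an entirely hypothetical example), a quarter’s targets and a quarter-end rating for this parks project might then be recorded like this:

```python
# Hypothetical quarterly targets for the parks maintenance example
parks_unit = ClientUnit("Parks facilities management unit")
it_unit = ClientUnit("IT unit")
senior_team = ClientUnit("Senior management team")

parks_unit.set_target("2022-Q3", Engagement.CO_CREATING)
it_unit.set_target("2022-Q3", Engagement.CO_CREATING)
senior_team.set_target("2022-Q3", Engagement.KEPT_UP_TO_DATE)

# End of the quarter: assign ratings with motivations, backed by evidence
# such as meeting minutes or attendance registers
senior_team.rate("2022-Q3", Engagement.NOT_ENGAGED,
                 "No senior management briefing could be secured this quarter.")

print(senior_team.on_track("2022-Q3"))  # False, so the shortfall is flagged in the quarterly report
```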

In a recent quarterly report, this approach brought the low level of engagement of a specific senior manager into sharp focus. We were able to report on it with clear evidence (rather than anecdotes) and a clear articulation of why this low level of engagement was a concern in light of our intended pathway to institutionalisation. This could then prompt a conversation with the funder about how it can be addressed.
