Triple Loop Learning in Collective Impact Evaluation
Mona Jones-Romansic
I help collective impact initiatives, collaborations, and organizations solve complex social and environmental problems.
It can be easy to get bogged down in the details of an evaluation process: from deciding which evaluation method best fits your activities and answers your questions, to determining the channels for data capture, to developing accurate and measurable indicators. However, collective impact evaluation offers a unique opportunity for grantmakers to evaluate how they show up in the larger system they are working to change. Engaging in collective impact exposes participants to different perspectives on the issue and different points of entry to the solutions. In doing so it should, and does, expand each participant’s thinking about the problem. As a grantmaker engaged in collective impact, you may find yourself asking questions about your organization’s theory of change, wondering if your organization is asking the right questions, and rethinking how and where you make grants. This post explores how grantmakers can apply triple loop learning evaluation to their collective impact work and gain new understandings that can improve their overall organizational impact.
First, a look at how evaluation is typically managed in collective impact. Because collective impact brings together a diverse group of stakeholders, often with different levels of experience in evaluation, different areas of focus within the broad topic, and initially different definitions of the problem, it can be difficult to land on an evaluation process that fits all needs. It is key to remember that collective impact asks the stakeholder group to collectively identify a common agenda and a shared measure of success. As participants work towards establishing these shared definitions of success, it is also vitally important that they remember the difference between working on one part of the system, as they may be used to doing, and working on the system as a whole. Working on solving systemic problems means there is no clear pathway or solution that can be identified from the outset. Each intervention introduced into the system initiates a ripple effect of changes. These changes may uncover new solutions and/or make it clear that an intervention needs to be adjusted or abandoned.
In the rapidly shifting context of systemic change, traditional evaluation methods, with their rigidly pre-determined outcomes, can fall short. For this reason, FSG recommends utilizing developmental evaluation alongside traditional methods such as formative or summative evaluation. Formative evaluation is useful when looking at the progress a collective is making towards its shorter-term goals or indicators. Summative evaluation, by definition, is best used later in a collective impact endeavor to measure the “sum” of its progress towards longer-term outcomes. However, each of these methods utilizes fixed outcomes and is typically performed infrequently. Developmental evaluation, by contrast, happens much more frequently, is responsive to a rapidly changing environment, and is designed to allow for innovation, both in how the group is working together and in what it is trying to accomplish. For more information about evaluation in collective impact settings, see FSG’s three-part publication on the topic.
As a collective’s work progresses, the context or landscape that participants are working in may start to look very different to them. This can prompt questions about how they do their work and their place in the larger system. How can grantmakers utilize the evaluation of the collective to answer these questions? Triple loop learning evaluation offers a framework that grantmakers can employ to look at a changing system and reflect on how this shifting context influences their internal and external decisions.

To understand the triple loop model, let’s look at the formative and summative evaluation methods mentioned above. Formative evaluation can roughly be compared with single loop learning. Used early in an initiative, it asks the question, “are we doing things right?” In other words, are we implementing outlined activities and reaching short-term outcomes such as changes in beliefs or attitudes? Summative evaluation can roughly be compared with double loop learning. This method is appropriate later in an initiative to ask, “are we doing the right things?” For example, are our activities producing the desired longer-term outcomes, such as real behavior changes? Triple loop learning can be employed throughout the life of a collective, as part of developmental evaluation, to ask, “how do we determine what is right?” This contextual question supports a cycle of continuous learning, adaptation, innovation, and what Kania and Kramer describe as the “collective vigilance” required for collective impact endeavors to be successful. Olive Grove’s triple loop learning tool illustrates the learning cycle and provides a sample of relevant triple loop learning evaluation questions that can facilitate a grantmaker’s reflection and increased impact.
About Mona Jones-Romansic
As Senior Consulting Director at Olive Grove, Mona is known for being highly intuitive and insightful, with a systems-focused approach that surfaces the fundamental obstacles facing her clients rather than simply addressing symptomatic issues. Her facilitation style creates a safe space for her clients to reflect, turn towards challenges honestly and openly, and leverage their unrealized strengths for greater impact.