"Real" process tracing: part 1 - context
Mark Rothko, Black on Maroon


When asserting the value of theory-based methods, you often hear words like "black boxes" and "causal mechanisms." These are commonly uttered to sell methods such as Contribution Analysis (CA), Process Tracing (PT) and Realist Evaluation (RE). Most commonly, the sales pitch presents these as an alternative to (rather than a complement to) experimental designs such as Randomised Controlled Trials (RCTs) - see here for why simply critiquing RCTs gets us nowhere, and why, instead, theory-based and participatory methods need to talk to one another.

As discussed in a previous blog series, method bricolage can add significant value to the evaluation field. Some efforts have been made to combine theory-based methods before. John Mayne and Barbara Befani (2014) helpfully outlined how to combine Contribution Analysis and Process Tracing. Realist Evaluation and Process Tracing also have considerable potential complementarities. But these overlaps and complementarities have rarely been acknowledged. Only yesterday, I discovered that INTRAC and Christian Aid had, in fact, developed some guidance for realist-inspired process tracing. Yet, it appears this largely fell on deaf ears. As I wrote this blog back in February, I shall play Alfred Russel Wallace to their Charles Darwin and make the argument for why RE and PT need to talk.

Notwithstanding the odd passing reference to realist philosopher Roy Bhaskar and to Alexander George and Andrew Bennett’s work on case studies, seven canonical books on Realist Evaluation and Process Tracing published since Realistic Evaluation appeared two decades ago don't cite each other’s work (Pawson, 2006; Pawson, 2013; Pawson, 2018 for realist evaluation; George and Bennett, 2005; Bennett and Checkel, 2014; Beach and Pedersen, 2016; 2019 for process tracing). This is perhaps best illustrated by (Realist) Ray Pawson’s mention of “other disciplines:”

[Image: quotation from Pawson]

In my view, Realist Evaluation and Process Tracing have a lot more in common than methodologists are willing to admit. Methods new and old are looking to position themselves as if they are the solution, but in practice, we have plenty to learn from one another. In the table below I highlight some of the similarities and some potential differences between RE and PT we will discuss in the blog series: 

[Table: similarities and potential differences between RE and PT]

Context, Mechanism, and Outcome statements (or CMOs) are the building blocks of Realist Evaluation (Pawson and Tilley, 1997). So, we can consider many of these similarities and differences by looking at how context and mechanisms are understood, and then reflect upon some of the practical implications of potentially reconciling differences.


Context is all

As Huey Chen (2015) has argued, "we should judge a programme not only by its result but also by its context." Evaluating the merit of programmes requires an appreciation of context in any explanation of achievement. To this point, Chen emphasises that programme interventions are "open systems" (as in biology) rather than closed systems. This means they are affected by culture, social norms, economic conditions and various other contextual factors. Much as randomistas may protest, these are often very hard (if not impossible) to control for. Chen uses the term "ecological context" for the context which directly interacts with the programme: the social, psychological, and material supports required for service users to use a service, for example. As we will see, both the RE and PT literatures argue that contextual factors trigger mechanisms.

As realist guru Gill Westhorp notes, the contexts in which programmes are embedded make a difference to the outcomes that are generated. For example, tennis balls don’t bounce the same way on a tennis court as in space or under water. Contexts are most commonly the salient aspects of circumstances, situations, or groups. These are generally (but not always) visible phenomena. They may play a causal role (i.e. they may be necessary), but they do not directly cause the outcome. You have to add something else for it to make sense as a causal process.


In process tracing, just as in realist evaluation, you should specify the contextual conditions that must be present for a mechanism to be triggered and for an outcome to happen. Unlike Christian Aid's paper, I believe this is where we should start when combining PT and RE.

And while context is less explicit in most PT, the most eloquent of process tracers, Derek Beach and Rasmus Brun Pedersen (2016), do highlight the importance of context. For instance, they argue that a car is a mechanism that transfers causal forces from a cause (the burning of fuel) to the outcome (forward movement), but it needs oxygen to do this (Beach and Pedersen, 2016). If the oxygen is necessary, then it plays a causal role, even though it is formally considered context. The same mechanism in a different context might produce a different outcome, or no outcome at all (Falleti and Lynch, 2009). Thus, the takeaway is that mechanisms only operate sometimes. You should be able to imagine circumstances in which one won’t work. Think about tennis balls underwater or cars in space.

Next time, we will open some black boxes.

Thanks Thomas Aston for yet another great blog that has stimulated rich discussion. Critical to which interpretation of mechanism one is using, and so which exact bricolage works, is your last point about whether you are theory building or theory testing. When we methodological geeks get into these discussions, we very often miss out details of the type of evaluation, the position of the evaluator as embedded or not, and, as Cathy SHUTT notes, the politics. Look forward to the next blog on mechanisms!

Reply
Steve Powell

causalmap.app. Mad about causal mapping & evaluation.

4 yr

Thanks for another great post, Tom. I agree there are plenty of informal similarities between RE and PT, and in your table you’ve made a good case for that. But I think the challenge is that we don’t have, and will probably never get, a consensus definition of key Realist terms, including especially “Mechanism”, in any formal way. I think your Chris Lysy cartoon sums it up perfectly. Won’t you have to keep adding caveats like “Mechanism as defined by Pawson on p. 44 but not as defined by Pawson on p. 45”? – e.g. whether or not, as Rick points out, a Mechanism is essentially driven by human decisions.

Reply

I personally enjoy evaluation theory and spent a great deal of time digging into some of these issues about 7 years ago, triggered partly by the Stern et al. 2012 report, which I found illuminating. However, several years on, after very mixed success in trying to apply different aspects of theory to praxis, I wonder whether 'ideal type' designs are found much in practice? In my albeit limited experience, what you can do implicitly and explicitly in terms of evaluation design and practice is influenced by evaluation commissioners, managers, and the experience of evaluation teams, which may be assembled in great haste. There may be some situations where expert evaluation teams schooled in evaluation theory have time to engage clients as well as themselves and really talk through the implications of fine differences in designs. Finding time and amicable ways to surface the different ontological and epistemological assumptions held by different individuals within the evaluation and commissioning team is a huge challenge in and of itself, and rarely done in my experience. [I seem to remember Bill Walker had some fascinating anecdotes on how differences between expert evaluators and commissioners influenced the illuminating Westhorp et al. realist review process.] In ideal and well-resourced circumstances, there may be opportunity to work through some of the above issues within an evaluability assessment and rewrite evaluation TORs. However, in reality I suspect many of us have to muddle through to an extent and see if and how we can apply some of what we feel are the most important theoretical contributions from different thinkers and theory to our praxis. With this in mind, I am interested in why you are critical of 'realist interviews'; I didn't know there was such a clearly defined data collection method? I am all for popularising realist and/or causal mechanisms thinking.
Therefore I think there could be value in developing tools that help evaluation teams, which sometimes comprise individuals more accustomed to results-based thinking and contribution assessments focused on intervention activities, to unpack different kinds of causal mechanisms, including those triggered by interventions and those that are not.

Jess Price

Systems thinking | Collaborative approaches | Future of Food

4 yr

Clare Cummings, this might be of interest.

Reply
Patricia Rogers

Better ways of generating and using evidence for people and the planet. Former Professor Public Sector Evaluation. Advisor and researcher.

4 yr

According to Gill Westhorp, a mechanism in realist terms is about the interaction between resources and reasoning: "it is the interaction between what the programme provides and the reasoning of its intended target population that causes the outcomes. This interaction, therefore, constitutes a ‘programme mechanism’. The short-hand for this in realist circles is ‘reasoning and resources’. The implication is that the evaluator needs to identify what resources, opportunities or constraints were in fact provided, and to whom; and what ‘reasoning’ was prompted in response, generating what changes in behaviour, which in turn generate what outcomes" https://www.betterevaluation.org/en/resources/realist-impact-evaluation-introduction
