What Usually Frustrates Me Whenever I Review Chunks of Project Reports: Our ‘Grasping at Straws’ Habit in Establishing Cause-and-Effect Relationships
‘The number of protection cases decreased due to the awareness-raising activities conducted’, ‘Increase in girls’ enrollment rate as a result of extensive sensitization campaigns organized’, ‘Enhanced academic achievement of students because of the remedial education support provided’, ‘Attitude of participants changed because of the comprehensive five-day training facilitated’, ‘Capacity of community-based structures strengthened because of the non-food items distributed’, ‘The number of grave violations against girls and women reduced due to the enforced legal framework’, ‘Hygiene and sanitation promoted in a given district as a result of the latrines constructed’, ‘The quality of education enhanced in project target primary schools because of the school blocks constructed’, and so on. These are the common slogans we never hesitate to incorporate in our project reports, depending on the thematic areas our respective projects work in.
I would say this is a ‘grasping at straws’ habit we project holders struggle with. In most circumstances nowadays, I feel there is a troubling incongruence between the achievements we fantasize about incorporating in our project reports and the reality happening at field level. Let me be straightforward: a cause-and-effect relationship between variables is not something we can claim spontaneously, as if it were taken for granted. I can assure you that I have been one of the front-runners committing this substantial empirical mistake in almost all the project reports I have compiled. However, today is the right time to start demonstrating the quality of the services we provide to the vulnerable, and I am convinced that applying genuine statistical measurement in our project reports is one of the essential parameters of quality project delivery. Through this article, I will challenge, as exhaustively as I can, our practice of establishing causal relationships in our reports out of thin air. I expect you to challenge me, too, for any misunderstandings I may create along the way.
*****************************************************
Am I saying that establishing a cause-and-effect relationship is not permitted?
Please do not get me wrong, I am not saying that. We can certainly use a causative relationship between the variables we are measuring in our reports. What I am arguing is that we need to be curious enough when using such complex measurements. To acquire that curiosity, it is essential to ask the following questions: What makes you decide to use that methodological dimension? What justification would you offer for the relevance of a cause-and-effect measurement given the nature of your project indicators/variables as depicted in the project’s logical framework? What specific statistical method do you apply to measure the variables? Are you pretending to measure a causal relationship when in fact you are identifying an association? How would you collect, analyze, and interpret the data? All these and other fundamental questions need to be rigorously asked and answered before establishing a cause-and-effect relationship in a project report.
What qualifying conditions should we consider before claiming a cause-and-effect relationship?
Latent vs. observed variables: The capability of a project holder to delineate the characteristics of the observed variables he or she intends to measure really matters. More specifically, we need to be very careful whenever there is a potential for a ‘latent factor’ to sit between the two observed variables for which we aim to establish a cause-and-effect relationship. To expand on this a little, I will borrow the explanation made by Judea Pearl and Dana Mackenzie (2018) in their book ‘The Book of Why: The New Science of Cause and Effect’. The authors discuss the well-known example of the relationship between smoking cigarettes and lung cancer. They highlight the importance of considering the genetic factor, which they call a latent factor, before concluding that lung cancer is caused by smoking cigarettes. This is because a genetic factor could potentially cause both lung cancer and the desire to smoke. In a nutshell, the authors argue that under such circumstances it is not rigorous enough to simply draw a linear cause-and-effect line between smoking cigarettes and lung cancer. To put it differently, apart from cigarette smoking, there may be a latent factor that is also responsible for lung cancer. To be clear, I am not here to encourage smoking; I am trying to explain that we cannot simply claim causality. This leads us to another important methodological territory, ‘causal modeling’ and, where feasible, the ‘Randomized Controlled Trial’ (RCT), which involve a detailed analysis of the relationships among the observed variables and the latent variables. (If you are interested in diving deeper into causal modeling, please refer to the book mentioned above.) Now, let me ask a question: did we do that kind of systematic exercise when we claimed a cause-and-effect relationship in our reports? How was the data collected and analyzed? Take time to reflect, and keep the answer for your own assimilation as appropriate.
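To make the latent-factor point concrete, here is a minimal simulation sketch in Python. The data and variable names (a latent factor Z driving both observed variables X and Y) are entirely hypothetical and are my own illustration, not code from Pearl and Mackenzie: the two observed variables end up clearly correlated even though neither one causes the other, and the apparent relationship disappears once the latent factor is accounted for.

```python
# Minimal sketch (hypothetical data): a latent factor Z drives both the
# candidate "cause" X and the candidate "effect" Y, so X and Y look related
# even though Y is generated with NO contribution from X at all.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

z = rng.normal(size=n)              # latent factor (e.g., the "genetic factor")
x = 0.8 * z + rng.normal(size=n)    # observed variable we might call the cause
y = 0.8 * z + rng.normal(size=n)    # observed variable we might call the effect

print("Correlation of X and Y:", round(np.corrcoef(x, y)[0, 1], 2))  # clearly positive (~0.4)

# Removing Z's contribution dissolves the apparent relationship. Here we can
# subtract the known contribution of Z only because we generated the data
# ourselves; with real data this is the job of a proper statistical model.
resid_x = x - 0.8 * z
resid_y = y - 0.8 * z
print("Correlation after accounting for Z:", round(np.corrcoef(resid_x, resid_y)[0, 1], 2))  # ~0
```

The sketch is not a causal analysis in itself; it only shows how easily a latent factor can manufacture a relationship that a report might then misread as cause and effect.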
Would temporality be sufficient ground? I agree with the notion of time as one of the essential assets in the process of establishing a cause-and-effect relationship. In particular, the temporal order in which variables occur is a helpful component in identifying cause and effect. However, I have learned over time that the prior occurrence of the variable we claim as a cause, before the variable we claim as an effect, cannot be sufficient evidence to conclude a causal relationship. Relevant to this, I would like to point you to an interesting metaphysical illustration by Richard L. Epstein (2011) in his book ‘Cause and Effect, Conditionals, Explanations: Essays on Logic as the Art of Reasoning Well’. Richard conceptualizes the role of temporality in distinguishing cause from effect along two dimensions:
1st dimension - in circumstances where the cause occurs first and the effect materializes only later, after some time, temporality is an important parameter in determining a causal relationship.
2nd dimension - Richard highlights scenarios where the cause and the effect happen simultaneously. He illustrates this in his book with the practical example (p. 58) of writing on paper. Whenever we grasp our pen and try to write something, both movements (that of our hand and that of the pen) happen at the same time, whereas from a causation perspective it is obvious that what makes the pen move is our hand movement. According to Richard, in such a context temporality by itself cannot determine cause and effect unless another dimension is considered as well.
When I return to the concern of our project reports, I can mention some practical examples. In some of the education projects I have managed, there was a controversial issue of providing remedial education support for adolescent girls to boost their academic achievement. Assume that at the end of a one-year remedial education program you are expected to conduct an assessment. In that assessment, you decide to compare the academic scores of the adolescent girls at the beginning of the year, before the remedial program started, and at the end of the academic year, immediately after the remedial support ended. Suppose the outcome of your assessment indicates an improvement in the students’ academic scores between the two measurement periods. Can you imagine? We used to absurdly state in our reports that the improvement in academic scores was due to the remedial education support provided. This is entirely unacceptable, and it is a manifestation of our tendency to ‘grasp at straws’ to claim a causal relationship. The problems with the above scenario are fundamentally three: 1st, we are ignoring some of the ‘latent factors’ that could potentially contribute to the students’ academic scores; 2nd, we are not certain whether the improvement in those students’ scores happened long before the remedial education program wound up. In other words, what if the improvement in the students’ academic performance started within the first two weeks of the program? Would that temporality-based logic be sufficient to determine causality between the two? 3rd, and above all, we did not conduct any statistical test or empirical procedure beyond a layperson’s comparison (a small sketch of this problem follows below). This brings me to my final qualifying aspect for claiming a cause-and-effect relationship.
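Before moving on, here is a hedged sketch of the remedial-education scenario above, with entirely made-up scores rather than data from any real project. It shows why the naive before/after gain absorbs every background improvement, and how comparing participants’ gains against a comparable non-participant group (a simple difference-in-differences logic, one common quasi-experimental option) comes much closer to the program’s actual contribution.

```python
# Hedged sketch with invented numbers: everyone improves by ~5 points over the
# year for reasons unrelated to the program (a background trend), and the
# program itself adds ~3 points on top for participants.
import numpy as np

rng = np.random.default_rng(0)
n = 200

baseline_participants = rng.normal(50, 10, n)   # scores before the program
baseline_comparison   = rng.normal(50, 10, n)   # similar girls, no program
endline_participants  = baseline_participants + 5 + 3 + rng.normal(0, 2, n)
endline_comparison    = baseline_comparison   + 5     + rng.normal(0, 2, n)

# Naive before/after gain: mixes the program effect with the background trend.
naive_gain = (endline_participants - baseline_participants).mean()

# Difference-in-differences: subtract the gain a comparable non-participant
# group achieved anyway, isolating something closer to the program's effect.
did_estimate = naive_gain - (endline_comparison - baseline_comparison).mean()

print(f"Naive before/after gain:        {naive_gain:.1f} points")   # ~8, overstated
print(f"Difference-in-differences gain: {did_estimate:.1f} points") # ~3, closer to truth
```

The design choice matters more than the arithmetic here: without a credible comparison group (or a stronger design such as randomization), the before/after number simply cannot tell us what the program caused.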
Are we measuring ‘association’ or ‘causation’?
As project holders, we need to be sensitive enough to distinguish ‘association’ from ‘causation’. Both the intention behind and the methods used to measure ‘association’ and ‘causation’ are very different. This is a visible confusion we create when we compile our project reports. I believe that, measured against reality, the right claim to make in the contents of our reports is usually ‘association’, not ‘causation’.
For your reference, I have included the tabular presentation below to illustrate the delineation between ‘association’ and ‘causation’ (NB: the table is not mine; I just grabbed it from an open internet source, but it is very helpful!).
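If the honest claim is an association, we can at least quantify and word it transparently. Below is a minimal sketch with hypothetical numbers (the indicator names, the figures, and the use of scipy are my own assumptions for illustration): it reports a correlation coefficient and p-value, and the suggested report wording stays at the level of association rather than causation.

```python
# Minimal sketch of reporting an *association* honestly (hypothetical data).
# A correlation coefficient and p-value describe how two indicators move
# together; on their own they say nothing about which one causes the other.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
sensitization_sessions = rng.poisson(4, size=60)   # sessions held per community (invented)
enrolment_rate = 60 + 2 * sensitization_sessions + rng.normal(0, 8, 60)  # girls' enrolment, % (invented)

r, p = stats.pearsonr(sensitization_sessions, enrolment_rate)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# Honest wording for the report: "communities with more sensitization sessions
# tended to report higher girls' enrolment (r = ..., p = ...)". That is an
# association; claiming the sessions caused the increase would need a design
# built for causal inference (randomization or a credible quasi-experiment).
```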
Call to action!
I firmly believe that the role we exercise as project holders is not there simply for maintaining an occupational identity or for replenishing our egoistic needs. It is also a big commitment to protect vulnerable children, young people, elders, and others in adverse situations, and that commitment requires quality outcomes from our respective projects. One of the determinants of quality is generating empirically valid data and evidence that inform realistic programming. Hence, let us cease the ‘grasping at straws’ habit of establishing claims without empirical evidence.
In case of any interest, the books I reviewed are: Judea Pearl and Dana Mackenzie (2018), The Book of Why: The New Science of Cause and Effect; and Richard L. Epstein (2011), Cause and Effect, Conditionals, Explanations: Essays on Logic as the Art of Reasoning Well.
MHPSSO | Personal Psychotherapist | Researcher | Mental Health Enthusiast | Social Media Evaluator | CPiE
I totally agree with you Fisehatsion Afework. But I believe that the shortcoming of these projects is not just the common slogans we incorporate but do not implement, but also the mindset of the individual at the field level taking advantage of the loopholes within the MEAL component. Well-thought-out and tighter M&E components surely play an important role. But most of all, I believe that the mentality of individuals at the field level, and their ability to comprehend what they are doing, has a far more lasting effect than what we see today. If we can build a generation that understands the consequence of every action, our reports would be fitting for what is actually being done in the field.
Master of Development Studies specialized in DRM and Food Security and Master of Public Health from Gambella University and Addis Continental Institute of Public Health respectively
Totally agree with this perspective of understanding, Fisho
Humanitarian Practitioner
Your views and observations are spot on. In my opinion, the problem emanates from: a) inadequacy of the indicators and means of verification during the project planning phase; b) the relative difficulty of expressing the change which is always expected after the investment of resources. The way forward is to create realistic measurement tools and apply them properly, including baseline and endline surveys, and to use the findings properly during report writing rather than writing based on simple generalizations. Thank you for bringing up such an interesting issue.
Senior Communication & Knowledge Management Specialist
Thank you, Fish! This article serves as a compelling wake-up call, urging us to move beyond the 'grasping at straws' habit and embrace a commitment to empirical evidence, ultimately elevating the quality and impact of projects for vulnerable communities.