Consultants only publish positive results [Phil of Science series]

This is a taster from the upcoming series of blog posts "The True, The False, and The Useful" on philosophy of science applied to theories of change and consulting work. Consider it a single released before the album.

What theories do you use to make sense of the world and act on it as you consult for your clients? How do you know whether a consulting intervention has been successful? Will that give you any increased predictability in the next intervention? What would it look like to have a new, stronger approach or theory in our consulting work, better than the one we previously used? And how could we tell?

Over the last few years, as I grow my consulting practice, I have been mulling over questions of philosophy of science and epistemology. In conversations with friends and colleagues, as well as in reflections upon our work, the issue of how we know what we know is gaining prominence. As a self-declared carpenter-of-the-intellect with a background in social sciences (before my MSc in sustainability science), I am all too aware of the risk of being pulled into a black hole of skepticism, extreme relativism, and ultimately even nihilism: the philosophical questions may open up too many doubts without providing practical guidance that could help us make better sense of the world around us. Aware of this, we will use philosophy of science as a tool for reflection, always keeping practical applications in mind and trying to think like a carpenter, that is, in a utilitarian way: does it help me build a better shelf after all?

I read "Emergent Strategy" a few weeks ago, which is a lovely little book. The moment I stopped reading it as a scientific exploration of complexity theories and shifted to seeing it as a spiritual manifesto, it made all the difference and I could deeply relax into the book's insights and beauty. Obviously, a few reflections came up. The more predicable questions were about on the theoretical side. No one has anything near a good theory of causality in complex adaptive systems such as large social structures. Nobody. Still, there are some good resources, ideas, and theoretical approaches that give some type advice that is sufficiently workable and that -people who use them report- at times harnesses good results. Theories of How Change Happens; theories of Socio-Technical Transitions; theories of Diffusion of Innovation and tipping points; theories that see social systems as complex systems; etc. etc. These approaches make sense, are often reported to be very useful, but still can't be seen as anywhere near a unified theory of causality in complex social systems. As frames of reference, they are still helpful, but not necessarily true. In later post we will explore at length this apparent paradox: how could a theory be helpful today, even though we see clearly some holes in it and if one day down the road new (better) theoretical frames will turn out to be much more accurate?

Think of a cherished theory that guides your approach to consulting: a coaching approach, a theory of complexity, a theory of diffusion of innovation, you name it. Chances are this theory gives you a useful map of the territory in your consulting interventions, or you would not be using it in the first place. Chances are, also, that if the theory is applicable to the social realm, it is founded on hard-to-prove assumptions, an often unspoken paradigm, and a selective use of data. I am not saying this in a pejorative way: social sciences have a different way of collecting evidence, and the decision as to what constitutes evidence in the first place is heavily driven by the theory. There is no equivalent of the experimental knowledge of the 'hard' sciences the way Deborah Mayo and many others characterize the process of scientific inquiry in physics. In social sciences the data (from the Latin datum, 'given') is not so given after all. Chances are that a newer understanding will take the place of the old frames of reference, because that is what often happens in science, and the new theory will better help us make sense, predict, control, and act in the world.

That all sounds good and easy to agree with. But it is not that simple, as it raises a few questions. From the perspective of today, and to the best of our current knowledge, which theories serve us best (or at least better than others)? Which theories should be abandoned, and when can we discern it is time to drop them? And whether a theory is better or worse, what is the benefit of working with a client to solve a social challenge versus the risk of doing unnecessary harm through unintended consequences? Actions generate side effects. Sweeping the floor both cleans it and lifts up dust. How would we know when the entropy of our acting is greater than its intended positive effects? Are we exposing ourselves to risks?

Let's trace this back and narrow down to one question, while we will use an entire series of posts to explore them all. Imagine a consultant who criticizes an approach to working with groups as "erroneous", "wrong", "inaccurate". On what basis could they do that?

  1. Logical fallacies in the theory itself: the new theory is better than the competing one. Say that an earlier approach to working with a team has helped in the past, but over time more and more logical inconsistencies have shown up. Anything today that relies on theories such as the "rational decision maker" has been proven wrong by Kahneman; theories that argue for "dynamical equilibrium" in economics are rendered obsolete by ecological observations where such equilibrium states do not appear even in simple conditions; and so on. The criticism is legitimate, of course, but the moment you point to inconsistencies you still have to acknowledge that in the past people managed to do good work even though the theory was eventually proven wrong.
  2. But successful application eats philosophical questions for breakfast, doesn't it? You can always argue that people in the past did good work from a weaker theory until a better understanding came along. In a conversation with prof Don Huising earlier this year, I asked him what epistemological stance we should take when we use a theory that is not about prediction and control (past and present) but rather about working with an uncertain future (present and future). How could we tell if the theory works? He quipped: "The proof of the pudding is in the eating", that is, if the approach helps make progress towards a goal deemed worthy, the theory is useful/true. Another conversation, with a friend who works with action research in her PhD, confirmed that the perceived satisfaction, measured against agreed-upon goals, of a "client" who receives the intervention is enough of a measure of success. In action research, rather than "is it true / is it false?", a better question becomes "is it useful?"
  3. Can we be rigorous and infer causality, then, and try to gauge what happened because of the consulting intervention? Would something have happened anyway? Case in point: the newness factor often takes the credit for some progress in a client setting. The argument goes like this: the mere act of bringing in a new consulting approach creates some change, often positive, and that increases productivity and ameliorates conditions in the short term. But how can we have rigorous evaluation? I suggest a few questions to help us investigate appropriate causality.

If you really want to gauge the strength of a new theory, a first approach could be a rigorous evaluation. To be rigorous about it, you would build a taxonomy of:

  1. all the interventions where theory X that informs your consulting has been applied and was successful;
  2. all the interventions where your theory X was used and yet was NOT successful;
  3. all the interventions where another theory that informs your consulting work was used to solve a comparable problem, and it was successful;
  4. all the times another theory was used and yet did not succeed.

Can you find appropriate causality, ceteris paribus (all other things being equal)? What would have happened without the application of theory X to inform your consulting intervention? Clearly this is very difficult and calls for some creative criteria for assessment, such as counterfactuals, stories both quantitative and qualitative, and approaches such as developmental evaluation. But so far I don't see much rigor, possibly because we don't have such a taxonomy or such rigorous reporting. It is highly likely that consultants are acting like the proverbial researchers who only publish positive results. Why tell my prospective clients about the times when an intervention did not work?
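To make the taxonomy concrete, here is a minimal sketch in Python of what it could look like once tallied. The counts are entirely hypothetical, and Fisher's exact test is just one possible sanity check, not something the argument above prescribes, on whether a difference in success rates between theory X and other theories could be mere chance:

```python
# A minimal sketch of the four-cell taxonomy described above, with
# entirely hypothetical counts invented for illustration.
from scipy.stats import fisher_exact

# Hypothetical tallies from a consultant's (honestly kept) records:
#                   successful   not successful
# theory X               12             8
# other theories          9            11
taxonomy = [[12, 8],
            [9, 11]]

rate_x = taxonomy[0][0] / sum(taxonomy[0])
rate_other = taxonomy[1][0] / sum(taxonomy[1])
print(f"Success rate with theory X: {rate_x:.0%}")
print(f"Success rate with other theories: {rate_other:.0%}")

# Fisher's exact test: with samples this small, the p-value is large,
# i.e. the apparent advantage of theory X could easily be noise.
odds_ratio, p_value = fisher_exact(taxonomy)
print(f"Odds ratio: {odds_ratio:.2f}, p-value: {p_value:.2f}")
```

Even a toy tally like this makes the publication-bias problem visible: without rows 2 and 4, the honest failures, the comparison cannot even be attempted.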

I see at least three levels here:

1) how scientifically sound the theory is, and how useful it is for solving problems (much more on that in later posts, in a whole series);

2) what external conditions and other intervening factors could have affected the interventions. Imagine a company that has hired 20 consultants, so there are 20 different approaches and interventions going on at the same time in an organization to solve a certain wicked problem. Say that at some point things magically get better; my bet is that each of the 20 consultants will try to claim causality for their own intervention as the deciding factor that tilted the situation towards improvement, even though the real driver could have been timing, external factors, another consulting approach, and so on. Taleb's Fooled by Randomness gives many examples of how we attribute causality (and claim credit for success) when there is little more than mere chance at play (see the toy simulation after this list);

3) the skills of the consultants / community of practice. Elinor Ostrom argued in her book Working Together that a theory and its community of practice should be judged separately: a theory may be strong while its practitioners are sloppy, and a failed application may reflect careless or simply unskilled practice rather than an unsound theory. I contend, though, that while this distinction is relevant in academia, a theory that informs consulting and the community of practice that uses it are tied together to the point where you can hardly distinguish one from the other. Consulting approaches rise and fall together with the communities of practice that use them.
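To make Taleb's point in 2) concrete, here is a toy Python simulation under an assumption the post only implies: the company's performance drifts entirely at random and no consultant has any effect at all. Even so, roughly half of the consultants will see improvement after their start date and could claim credit. All names and numbers are invented for illustration:

```python
# Toy simulation: 20 consultants, a company whose performance is a pure
# random walk (no intervention has any effect), repeated over many trials.
import random

random.seed(42)
N_MONTHS, N_CONSULTANTS, N_TRIALS = 36, 20, 1000

credit_claims = 0
for _ in range(N_TRIALS):
    # Performance as a driftless random walk: nothing anyone does matters.
    performance = [0.0]
    for _ in range(N_MONTHS - 1):
        performance.append(performance[-1] + random.gauss(0, 1))
    for _ in range(N_CONSULTANTS):
        # Each consultant starts at a random month and later compares
        # average performance after their start to before it.
        start = random.randint(6, N_MONTHS - 6)
        before = sum(performance[:start]) / start
        after = sum(performance[start:]) / (N_MONTHS - start)
        if after > before:  # "things got better after my intervention"
            credit_claims += 1

share = credit_claims / (N_TRIALS * N_CONSULTANTS)
print(f"Consultants who could claim credit by pure chance: {share:.0%}")
```

By the symmetry of the random walk, the share comes out near 50%: half the consultants get a success story to tell without having caused anything, which is exactly the attribution trap described above.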

The next blog posts will dig much deeper into the philosophical aspects, even examining some history and philosophy of science in detail, to come out on the other side with greater clarity on which theories can better help us make sense of the world and act in it. Stay tuned.

___________________________________________________________

Some resources that have informed this piece.

Alan Chalmers: What is this thing called Science?

Enzo Campelli: Da un luogo comune. Elementi di metodologia per le scienze sociali (From a commonplace: elements of methodology for the social sciences)
