The Complicated and the Complex: how do different theories differentiate these worlds?
Marco Valente
Consultant and member of the executive team at Cultivating Leadership
In this blog I provide a small selection from the vast literature on how complicated vs complex problems have been defined. This can help you see parallels, notice where the definitions converge or diverge, and it gives you a map of the territory with a few signposts on where to find more.
Beware of two things in the following paragraphs.
1) There is a lot of repetition in the ways the categories have been constructed across different theories. I will try to avoid repeating the same points and focus on what is unique or different in each.
2) There are differences that make each categorization unique enough that the same system you have in mind may be considered “complex” by one schema and “complicated” by another. That is to be expected. What matters most to me is to give you a broad picture and let you see which schema can help you make sense of which types of systems.
When you venture into the complexity literature (as I have done over the last few years, as a natural evolution of my earlier curiosity about system dynamics) you may go on a hunt for a clear, unambiguous definition of what complexity is. I will spoil it for you, to spare you some misery: you won’t find one. Let’s be more precise: you are likely to find many definitions of complexity, yet there is no universally accepted definition around which there is strong consensus. The many definitions of complexity deserve a blog post of their own. Melanie Mitchell’s book “Complexity: A Guided Tour” makes a very strong case for why this lack of a universal definition should not be seen as a weakness. This blog is mostly a guide to the landscape of how complex problems have been defined, in their relational opposition to something else, by various theories. You will get enough of a basic idea to be curious to explore the original sources yourself.
Tame Problems vs Wicked problems
Main reference: Rittel and Webber (1973), “Dilemmas in a General Theory of Planning”
The gist of it: Some problems that science and engineering deal with are “tame” or docile because the mission is clear, the definition is unambiguous, and there is consensus on whether the problem has been solved. Other problems, like those of social planning, are “wicked” because there is no such clarity, and they have to be treated as wholly different.
The real difference
The seminal article by Rittel and Webber walks through ten dimensions that can help us see whether a problem is “wicked”. A problem is wicked if:
1. It has no definitive formulation, meaning that there is debate over the very definition of the matter.
2. It has no clear stopping rule (imagine if a Turing machine could clearly know when the job is done).
3. Its solutions are not true-or-false but involve value judgments of good-or-bad, which are bound to differ across people and opinions.
4. There is no immediate test that a solution is final, because even that solution will create other unintended consequences.
5. Every attempt at a solution creates a certain path-dependency that leaves traces in the complex system we are trying to influence.
6. There is no enumerable set of possible solutions; the option space is not finite like that of a board game.
7. It is essentially unique, which implies that solutions must be context-aware and context-dependent.
8. It can be considered a symptom of other problems, and the levels of analysis and of solutions will always be multi-layered.
9. The reasoning behind analyzing it and suggesting solutions is richer (and looser) than that of a strict scientific discourse, with its clear-cut confirmations or refutations of hypotheses given conclusive evidence.
10. The planner has no right to be ‘wrong’, meaning that, as per the point above, we should not aim for scientific truth (or final refutation) but rather for improving the state of affairs, even given partial definitions.
Personal comment: This classic article is a gem that articulates why social problems cannot be solved in satisfactory ways by an engineering approach that treats them as mathematical puzzles.
Ontology of Complicated vs Complex problems
Main reference: Dave Snowden: Multi-ontology sensemaking.
The gist of it: Complicated and Complex problems are different in the nature of the way causality is at play.
What is the real difference?
Complicated problems can be “cracked” through the lens of linear causality, or multi-linear causality if needed. In complex problems the lens of linear causality cannot give you a reliable answer. Complex systems are dispositional, not causal. This means that you can figure out the predispositions, the propensities of a complex system (what it tends to do, what type of results it tends to produce), but in no way can you predict or accurately map the way a set of causes will lead to a particular effect.
Here Dave Snowden explains why it is essential to consider the ontological differences between the complicated and complex domains. While we need an appropriate lens (epistemology) to look at a problem depending on its nature, complexity is not solely in the eye of the beholder: there is an irreducible level of unknowability in the way causality is at play (ontology). A major risk lies in wanting to explain away a truly complex, intractable problem with lenses that reduce it.
Personal comment: I find this definition particularly rich, in that it helps us see how the very nature of causality in complexity is different.
Kind vs wicked learning environments
Main reference: Hogarth, Lejarraga, and Soyer: The two settings of kind and wicked learning environments.
The gist of it:
In simple games and situations where you get an immediate response to your actions, the learning environment is “kind” to you. Complex problems present you with an environment that is unkind to your learning, or “wicked”, because experience cannot be a reliable teacher.
What is the real difference, they say?
Here it is crucial to look at complex problems as learning environments. In them, we make sense of what we see through pattern recognition, and the timeliness and accuracy of the responses we get back from the system is the key discriminant. Imagine you have been trained extensively in a field such as chess or firefighting, where pattern recognition is crucial and the feedback you get is immediate and accurate; in these cases the learning environment is “kind” to you. This is the realm of deliberate practice, the ten-thousand-hours rule, and specialization. If, instead, pattern recognition is not immediately rewarded for being a good match with the problem at hand, the rules of the game are unclear or incomplete, and feedback is often delayed, inaccurate, or both, then the learning environment is “wicked”. The problem arises when we are under the illusion of learning the right lessons from experience, when in reality we see “through a glass, darkly”. Think of a leader who got promoted for having allegedly contributed to improvements on the bottom line, even though the successes were due to market trends while he showed poor judgment. He was promoted despite his inability, but believes he was promoted because of skills he does not have. Causality was not visible, so he believes he has learned from experience, but he has not. Worse still, he has gained confidence in his poor skills!
From a seminal article by Hogarth and Soyer, we can define “[wicked learning environments] as situations in which feedback in the form of outcomes of actions or observations is poor, misleading, or even missing. In contrast, in kind learning environments, feedback links outcomes directly to the appropriate actions or judgments and is both accurate and plentiful. In determining when people’s intuitions are likely to be accurate, this framework emphasizes the importance of the conditions under which learning has taken place. Kind learning environments are a necessary condition for accurate intuitive judgments, whereas intuitions acquired in wicked environments are likely to be mistaken”.
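To make the contrast concrete, here is a toy simulation (my own illustration, not from Hogarth and Soyer’s paper) of an observer estimating a success rate from feedback. In the kind setting the feedback is accurate; in the wicked setting the signal is inverted 40% of the time, and the lesson drawn from “experience” drifts away from reality.

```python
import random

def learn_from_feedback(true_p, flip_prob, trials=10_000, seed=0):
    """Estimate a success probability from observed feedback.

    flip_prob is the chance that the feedback we observe is inverted:
    0.0 models a 'kind' environment (accurate, immediate feedback),
    0.4 models a 'wicked' one (feedback misleads almost half the time).
    """
    rng = random.Random(seed)
    observed_successes = 0
    for _ in range(trials):
        outcome = rng.random() < true_p      # what actually happened
        if rng.random() < flip_prob:         # wicked: the signal gets corrupted
            outcome = not outcome
        observed_successes += outcome
    return observed_successes / trials

kind_estimate = learn_from_feedback(true_p=0.8, flip_prob=0.0)
wicked_estimate = learn_from_feedback(true_p=0.8, flip_prob=0.4)
print(f"kind:   {kind_estimate:.2f}")    # close to the true 0.8
print(f"wicked: {wicked_estimate:.2f}")  # pulled toward 0.5: experience misleads
```

In the kind case the estimate converges to the truth; in the wicked case no amount of extra “experience” fixes the bias, because the feedback itself is corrupted, which is exactly the point the authors make about intuitions acquired in wicked environments.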
I learned about this concept from Epstein’s book Range, which I found really good; the concept itself was originally expressed in an article by Hogarth. If you want to read more about being fooled into thinking we master wicked environments when we don’t, there is a vast literature on cognitive biases around this “illusion of skill”, for instance by Kahneman and many others.
Isolated Problems vs Messes
Main reference: Russell Ackoff: The Future of Operational Research is Past.
The gist of it: In relation to management theory in particular, Russell L. Ackoff, a Wharton emeritus professor of management, said, “managers don’t solve simple, isolated problems; they manage messes.”
What is the real difference?
Ackoff defined a mess as “a system of constantly changing, highly interconnected problems, none of which is independent of the other problems that constitute the entire mess. As a result, no problem that is part of a mess can be defined and solved independently of the other problems”.
The real difference then is between problems that could be treated as if they were in isolation from the rest, and problems that are deeply entangled. I will let this quote by Ackoff speak for itself.
"Managers are not confronted with problems that are independent of each other, but with dynamic situations that consist of complex systems of changing problems that interact with each other. I call such situations *messes*. Problems are abstractions extracted from messes by analysis; they are to messes as atoms are to tables and chairs. We experience messes, tables, and chairs; not problems and atoms. Because messes are systems of problems, the sum of the optimal solutions to each component problem taken separately is *not* an optimal solution to the mess. The behaviour of a mess depends more on how the solutions to its parts interact than on how they act independently of each other. But the unit in OR [Operational Research] is a problem, not a mess. Managers do not solve problems; they manage messes.”
The main advantage of the problems-versus-messes distinction is that it separates problems that can be solved in isolation from those that resemble tangled webs where all the strings are interconnected. At times it can be safe to assume that some problems are isolated from the rest and can be optimized by themselves, for instance when there are clear boundaries around them and relative isolation from neighboring systems.
Sailboats vs kayaks
Main reference: Ann Pendleton-Jullian, Design Unbound
The gist of it: Navigating a sailboat is a completely different game than paddling with a kayak through whitewater.
What is the real difference?
Ann Pendleton-Jullian shares much of her definition with Snowden’s complicated-versus-complex distinction of Cynefin fame, and adds her own flavor with a compelling metaphor to describe the difference between the two. Tackling complicated problems is a lot like navigating a steamboat or a sailboat, whereas tackling highly complex problems is like paddling a kayak amidst the turbulence of whitewater. A sailboat can set its direction in advance and stay close to the route, perhaps needing some adjustments along the way. Adaptation to a new sea is not that big a deal, on the assumption of calm winds.
But the kayak’s world is different: it is radically contingent, everything is connected to everything, the environment is rapidly changing, everything we need to pay attention to is happening in the here and now, and context is everything. While we can set a course for a sailboat, the sense of direction that keeps a kayak afloat is completely emergent.
At minute 8:50 of this video the author shares her view.
Personal comment: I love the metaphor of the kayak versus sailboat, because stories and analogies can give a good grasp of complexity without resorting to scientific explanations.
Technical challenges vs Adaptive Challenges
Main reference: Ron Heifetz, Leadership without easy answers.
The gist of it: there are technical challenges that can be solved by the power of analysis and expertise, and there are adaptive challenges that work on us personally, challenge us emotionally, and involve our sense of identity.
The main difference?
When I explain this in my coaching work I use this shorthand. Technical challenge: you work on it. Adaptive challenge: it works on you.
Imagine a set of challenges that can be “solved”, or that we can make progress on, using our technical expertise alone. Imagine a different set of problems that challenge us personally, because we don’t know how to show up, or how to be, when we face them. Heifetz pioneered the notion of adaptive work in the context of his leadership studies, and it makes a lot of sense to see problems through this prism. Note that there is no simple demarcation between the two worlds: the same challenge can include both a technical aspect and a deeper layer of adaptive challenge. Imagine an organizational change in which, for efficiency purposes, HR reshuffles office allocations: some people move to new cubicles, some get new desks, and so on. While it is straightforward to look at the technical layer, a host of adaptive challenges can come with such a change. Some people had their sense of identity tied to having a nice view (am I being demoted now that I have a worse allocation?), others may suffer from having fewer opportunities to interact with colleagues, and so on.
The notion of technical versus adaptive challenges is a very helpful one every time we do consulting, leadership development, or coaching work, because there are often problems whose “technical” solution is already at hand but which hide a human, adaptive dimension that we benefit from uncovering. Also, it is common for people or cultures that want to engineer their way to solutions to dismiss the adaptive component of certain challenges.
Mediocristan and Extremistan (Gauss curves versus Power Laws)
Main reference: Nassim Taleb’s collection of books, the Incerto
The gist of it: There is a world that follows the predictability of a Gauss curve distribution, and a world that follows a power law distribution.
Imagine that, without any prior background information, you schedule a meeting with a Dutch entrepreneur at a restaurant. You want to guess how tall he is. Say you make a bet with your colleague, and for each centimeter you get wrong you will lose 1 EUR on your bet. There are actuarial tables where you can see the distribution of heights for different populations. You know that Dutch men are among the tallest in Europe, so you take the average or median height and place your bet there. Rutger will be 181 cm: that is your estimate.
Now imagine the same situation, but this time your bet is to estimate how much Rutger earns annually. Entrepreneurs vary a lot in their income, and even though you can check the average or median salaries, you have no idea whether Rutger is struggling to stay profitable or founded a unicorn startup that was just sold to a tech giant. Height distributions follow a bell-shaped curve, meaning that you can predictably find a high concentration in the middle of the distribution and thin tails at the edges (most Dutch men of our guest’s age will be between 165 and 190 cm, and predictably few outliers will be a lot taller or a lot shorter). The distribution of wealth follows a power law, which also implies that it is impossible to guess how rich the richest Dutchman sitting in the restaurant is, even if you had data about the average wealth of other countrymen like him.
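A quick simulation can make the two regimes tangible. This is my own sketch, not Taleb’s: it draws heights from a Gaussian and “wealth” from a Pareto (power-law) distribution, then asks how much of the total the single largest observation accounts for. In Mediocristan no single observation moves the aggregate; in Extremistan one outlier can dominate it.

```python
import random

def share_of_largest(sample):
    """Fraction of the sample total contributed by the single largest value."""
    return max(sample) / sum(sample)

rng = random.Random(42)
n = 100_000

# Mediocristan: heights in cm, roughly Gaussian -- thin tails
heights = [rng.gauss(181, 7) for _ in range(n)]

# Extremistan: wealth as a power law (Pareto tail index of 1.2 is an
# illustrative choice, not a calibrated one)
wealth = [rng.paretovariate(1.2) for _ in range(n)]

print(f"tallest person's share of total height: {share_of_largest(heights):.6f}")
print(f"richest person's share of total wealth: {share_of_largest(wealth):.3f}")
```

Run it and the tallest person contributes a vanishing fraction of the summed heights, while the richest observation grabs a visible chunk of all the wealth: averages are informative in the first world and treacherous in the second.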
Trader and philosopher Nassim Taleb popularized the notions of Mediocristan and Extremistan, but he was not the first to conceptualize the difference between Gaussian distributions and power laws. The distinction matters a lot, insofar as we tend to make estimates and models based on the assumptions that a) the world we model is predictable and b) some events are extremely rare, because our Gaussian assumptions dismiss them as n-sigma events. A case in point: prior to 2007 the expert consensus on the highest-rated stocks on the market was that each of them had a 0.012% probability of defaulting, when in reality about a fourth of them collapsed in the intervening months.
This distinction between Mediocristan and Extremistan gives us a different lens through which to think about the risks of events that are seen as highly improbable. There are boundary conditions under which we can assume a platonic world, one that follows the rules of geometry and mathematical prediction (for instance a casino, where even though we face risks in our roulette gambles, they are calculable). I like this differentiation in particular because, if the system at hand shows clear signs of not lending itself to Gaussian predictions, we should approach our scenarios with more humility and try to spot early warning signs of systemic fragility. We should also be aware of an irreducible causal opacity that would not go away even if we had more data.
Overall reflection.
You might have noticed there are a lot of parallels and some distinctions, to the point where I would argue that, while 1) not every classification’s version of “complicated” vs “complex” lands exactly where all the others do, at the same time 2) you can easily see a lot of overlap. This suggests that the way these two concepts have been treated makes sense to enough people, across a variety of disciplines, that the categories hold up. Seeing these differences in the nature of problems and systems helps both sense-making and decision-making.
As a personal comment, I use an eclectic approach to define the nature of the problems at play, depending on what seems most important to see and act upon. For instance, I really like the approach by Hogarth and Soyer when we need to think about our own learning stance, because it highlights our capacity to learn from experience, and the many times when we do not see clearly that experience is not a good teacher. I have started to admit how often my past expertise is not a reliable guide in complexity, which is both refreshing and uncomfortable. Another fruitful aspect of this definition is that thinking about learning environments links the objective complexity of the opaque, non-linear causality at play in a system (out there, so to speak) with the meaning-making capacities of the individual, highlighting the interplay between the system and the observer. I love the definitions by Snowden and Taleb because they point to this layer of “irreducible uncertainty” at the level of ontology: the laws of causality are just different, and we cannot predict what will happen even with perfect information, which we never have anyhow. Taleb’s notion brings forth the ethics of risk mitigation and fragility, which I find essential.