Complexity theories and Systems Thinking: parallels and differences

This post is for facilitators, consultants, academics, and everyone who is working with complexity theories or has just begun exploring and reading about this field. Perhaps you have come to complexity theories from a previous background in systems thinking, as I did. Or you have a vague notion about both terms but are struggling with the difference, for instance while applying traditional system dynamics tools to complex problems and feeling that some things don't really make full sense just yet.

Out of my own need to make sense of their common roots and departure points, over the last months, in my readings and work, I have tried to understand what is similar and what sets them apart. Note: this is the second of a series of blog posts on the subject, and each of these paragraphs would deserve a more in-depth exploration. More blog posts to come in the following weeks.

If nothing else, my blog post is well referenced. So if this adds more confusion than it intends to clarify, at least I provide articles that you can read to draw your own conclusions. Please don't believe a word I am saying: look up the references and research by yourself.

But first things first, a few definitions to outline what we are talking about.

By systems thinking I refer to the school of thought that started with Jay Forrester and has expanded system dynamics into other fields such as management: Peter Senge's Learning Organizations, Donella Meadows, David Stroh (a colleague of Senge), etc. Let's call this ST, knowing though that in practice we are referring to system dynamics plus some of its most popular applications to management.

By complexity theories I refer to a variegated school of thought, especially in its applications to management through the Cynefin framework (Dave Snowden) and its leadership applications (Berger and Johnston have written the best book so far on leading in complexity; you should look it up). I also refer to the Santa Fe Institute's Melanie Mitchell for her very informative computational angle, though not much of its wisdom can (or should) be extended to human systems, unless with a lot of caveats. Let's call them CAS (complex adaptive systems theories), and it will be mostly Cynefin plus the relevant background theories that have informed it.

So let's get started.

1) A quest for certainty? Ordered vs un-ordered ontology

In his paper on Multi-Ontology Sense-making (opens a pdf), Snowden draws the difference between a complicated and a complex state: a complicated system follows an ordered ontology, while a complex one shows an un-ordered ontology. Ontology is the branch of philosophy that investigates the nature of things, often in contrast to how we know things (epistemology). The nature of the system itself is different, and its behavior and laws of causality work in very different ways. The core assumption that CAS makes (especially Cynefin) is that ST gives adequate lenses to make sense of, and act in, complicated systems (because they are ordered in nature) but does not have sufficient explanatory power to explain how a complex system actually behaves (because complex systems are un-ordered instead). In a complex system, CAS would say, we don't even know what we don't know (as Donald Rumsfeld famously put it in his elusive answer to a journalist): you interact with the system before figuring out a handbook of sorts, because only by poking and nudging will the system begin revealing itself.

While Donella Meadows, Senge, and others were well aware that some systems can show chaotic behavior (for instance the type of chaos known as "extreme sensitivity to initial conditions" of the proverbial butterfly, which so many quote without having seen the actual math behind it), their tools often seek certainty. And even in the chaotic case, the core assumption is that you could mathematically predict the behavior of that chaotic pattern if only you knew the initial conditions with enough precision.
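To give a flavor of that "actual math", here is a minimal sketch in plain Python. The logistic map is a standard textbook example of deterministic chaos (not a model of any real organization); two trajectories of the same simple rule, started a hair apart, quickly become unrecognizably different.

```python
# Logistic map x_{n+1} = r * x_n * (1 - x_n), a textbook chaotic system at r = 4.
def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the map 'steps' times and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # same rule, perturbed in the 10th decimal

# Early on the two runs are practically indistinguishable...
print(abs(a[5] - b[5]))
# ...but the tiny gap roughly doubles at every step, so within ~50 steps
# the trajectories bear no resemblance to each other.
print(max(abs(x - y) for x, y in zip(a[40:], b[40:])))
```

Note that the rule itself is trivially simple and fully deterministic; the unpredictability comes entirely from the sensitivity to the starting point, which is the formal content behind the butterfly metaphor.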

You could ask yourself

  • Does this thing I am dealing with resemble more a spaceship or a teenager? If I had all the right knowledge in place, could I predict and control its future development (like a spacecraft)? Or does it resemble a living thing whose future direction I cannot control, but which I can still do my best, in an ever-changing relationship, to positively influence towards an uncertain future (like raising a teenager)?
  • Could I hope to figure out a steady state for this system (an equilibrium of sorts), or am I more likely to see constant co-evolution, in an endless flux?


2) Causality vs dispositionality of systems

It follows from the above that if a system obeys some type of order, having leverage over certain variables will get us the results we want, often by pushing the system's acupuncture points towards an ideal state that we have predetermined as desirable. This is a common assumption in ST, and it would work if the system did indeed obey such causal laws, as it does in the complicated realm. ST contributed in major ways to our understanding of some systems, moving from linear to circular causality, which can help us see technical and to some extent natural systems in the complicated domain. Links between cause and effect are either already identified or in any case identifiable. In CAS, the assumption is instead that truly complex systems are dispositional, not causal. What this means is that while you could figure out some general patterns of behavior and interaction, causality can only be inferred in retrospect, and we cannot predict where the system will be in the future, not even if we had full knowledge of the initial state, the key driving forces at play, the strength of correlation between variables, and so on. Having an un-ordered nature, a complex system will not obey causal laws the way a complicated system does. An inclination of the system can be thought of as a predisposition, a very common habit; but when the system is indeed un-ordered, we need to forgo our hopes of predictive accuracy about what the system will do next. So long Laplace, and thanks for everything.

We need to forgo our hopes of predictive accuracy about what the system will do next

3) Where is information ‘located’?

Can you actually get the “full picture”? Can you effectively centralize information so that at any one point in time a clear and accurate map of the system is available? Over this question there is a visible disagreement between the two schools of thought.

In various articles and books, ST suggests quite explicitly that yes, you can have a full picture of everything that is happening in the system: “Get the whole system in the room”, “get the full picture”, “let the system be known to itself” are commonplace in ST literature and practice.

At least since the times of economist Hayek, the angle from CAS postulates that in complex systems there is no agent in it who has access to the entire picture. They use the compelling example of markets: no single actor has access to all shifting prices and market exchanges at any given time. Information is localized and cannot be fully centralized.

This has important implications for, on the one hand, ST's aim to "get the full picture" and, on the other, CAS and Cynefin's approach to distributed cognition: tap into the wisdom of the crowds and their micro-narratives while avoiding the pitfalls of groupthink and other biases (I realized just how many such biases exist when a colleague and I prepared for a workshop on the subject). From the perspective of CAS, there is in fact a distributed ecology of sense-making within an organization that can be captured by harnessing stories at a granular level and borrowing from ethnographic approaches.

“We work with the distributed nature and partial perception of human sense-making to our advantage” (Snowden, in his blog)

To be fair, I am sure that ST in its current interpretations is very much aware of this earlier shortcoming, since a natural evolution of ST has been to adopt (and at times even create) dialogic methods for participatory decision making that do assume that knowledge is distributed and cannot be centralized (the suite of methods and overarching approach of the Art of Hosting Meaningful Conversations is testament to that, as is this more recent piece by Senge et al). But still, any approach that suggests we could "get the whole system in the room", and assumes or hopes to objectively see all the relevant data centralized in one place, must be coming from a perspective that does not consider the crucial difficulty of actually doing it.

Ask yourself:

  • How much knowledge can we get about the state of the system in order to act?
  • How is information centralized vs decentralized? Is the current situation conducive to coherent and desirable behavior? And would the locus of knowledge about the system make a difference?
  • If the system is truly in a complex space, how could you distribute cognition at the appropriate scale?


4) Approaches to the future

A major evolution in working with future scenarios, heavily informed by systems thinking, has been the development of backcasting from desirable futures (not just likely ones: see my series of blog posts on this). While a major development at the time, it has drawn criticism: as times become more volatile and uncertain, it makes less and less sense to begin with a fixed future in mind. To sum up the two schools of thought here:

For ST, a vision of the future is set in advance, an assessment of the present informs you about the current state, and the gap between future vision and current reality generates a creative tension (I am using Senge’s language here) which inspires action. Strategic planning is about acting in that temporal gap to fill it, often beginning by acting on leverage points, and setting milestones in between.

CAS works relatively much more with the evolutionary potential of the present, and not much with a future vision or with understanding past behavior and root causes. The point is that a vision of the future (especially an exact picture) does not work, unless you hold it lightly to provide a sense of directionality rather than clear targets. Snowden often calls this a "more stories like these, fewer stories like those" approach, while Chris Corrigan uses the metaphor of a vector, which I find intuitive. It is still a vision (here it becomes a matter of semantics), but it is not predetermined in exact accomplishments, and even less so in how people will behave in the organization. Additionally, while recognizing the importance of setting a clear vision, Berger and Johnston make a compelling case for why traditional strategic planning made of milestones can even be counterproductive when the scenario is rapidly changing and uncertain. For instance, it can blind us to the fast-changing landscape around us and make us miss emerging opportunities. They summarize a vision in a complex world in the following way:

“In a complex world, a vision is not a photograph of a future destination, and a strategy isn’t the map that charts the course. A complex vision is a compass that points towards a future direction, and a complex strategy is a set of safety guardrails inside which people can innovate and learn.”

Ask yourself:

  • Is this system operating under an ordered or an un-ordered ontology? (You would then ask: "how would I know?", to which you can begin answering by asking, again:)
  • For this system, can I create a vision of the future over which I have sufficient agency and control of some key variables?
  • What boundaries and 'guardrails' do I need to put in place to demarcate the lines that must not be crossed and an overall sense of direction for my system's future vision?
  • If not: can I set a vision that has at least enough of a sense of direction? One which is both directional and open to the emerging potential of the possible rather than the fixed certainty of the probable? (But how to craft a compelling vision when so much about the future is uncertain? A whole post will later be dedicated to summarizing Berger and Johnston's great book.)

5) The rational decision maker (or so we dreamed)

Imagine you are looking at a system through the lens of ST. Since one of the end goals has always been to support informed decision making (e.g. the models developed by Jay Forrester, Meadows et al.'s report for the Club of Rome, etc.), you have to make assumptions about the actors, what they know about the system, and how they will decide the best course of action. Here ST acknowledges bounded rationality, but on the assumption that once we get the whole system in the room, informed by a shared vision, you can start from the leverage points (which are assumed to be the best, most rational decisions).

CAS takes as one starting point the postulate that humans are driven just as much by irrationality. First of all, it begins with an acknowledgment that our narratives are creating the system just as much, adding an extra layer to the complexity of the system, made of its agents' stories, whether or not they are factually true (for this, see Midgley's reflections on different levels of complexity). In addition, recent insights from psychology and behavioral economics have added a lot of scientific depth as to how and why we are so easily deceived in our perception of the world (see Kahneman, Ariely, the Invisible Gorilla, etc.).

Historically, ST does not really borrow much from the social sciences, nor was it informed at the time by behavioral economics, beyond the work on mental models and organizational learning contributed by the likes of Schön and Argyris. And even then, after removing the obstacles to learning and the resistance to failure, the assumption of rationality would still survive. As far as I am aware, Snowden's anthro-complexity has been an early prominent voice in problematizing the assumption of the rational decision maker. To be fair, given that Senge, Meadows, and Ackoff were writing their pieces in the '80s and '90s, this is to a large extent understandable.

6) Core beliefs about models and their usefulness:

After consideration and a lot of reading, I have come to see this as a major difference between ST and CAS. One of the core tenets of ST is (or was) to build a model representing how the system works: this is visible especially in CLDs (Causal Loop Diagrams) and in Systems Archetypes. A CLD represents how a system behaves in terms of feedback loops (with stock-and-flow diagrams making the accumulations and rates explicit); a systems archetype is a model describing basic patterns of how a system could behave over time, and the assumption in the ST literature (I am referring especially to Senge and Donella Meadows) is that many systems' behaviors can be understood as instances of roughly a dozen basic archetypes. Some of them are, for instance, "escalation" (the spiraling up of a trend, as in an arms race), the "tragedy of the commons" (individual self-serving behaviors damaging a collective resource), and "success to the successful" (the rich-get-richer trap).

While there is undeniable usefulness in applying a CLD to understand some behaviors of a physical system, the approaches of ST and CAS to models are completely different. 1) ST does not problematize epistemology at all: it assumes that the models are accurate representations of the way the system behaves and can adequately represent its nature, which is a big assumption that philosophy of science would call "naive realism". 2) Using systems archetypes can have limited explanatory power when the system is indeed showing a pattern that can be represented by a 'classic', clichéd behavior (like the three mentioned above); but, just as with the point above, the risk is to believe that the archetype satisfies our need for understanding, blinding us to what is actually going on. Which leads to 3) CAS starts from the postulate that the only valid model of a complex adaptive system is the system itself. There is still explanatory power in using maps that represent what the system is inclined to do, but with the awareness that the map is a heuristic to help navigate the territory: it does not claim to explain the entire behavior of the system (also because you cannot get full, centralized knowledge of the system at hand).
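To make the contrast concrete, here is a minimal sketch of the kind of fully specified model that a CLD or stock-and-flow diagram formalizes: one stock, a reinforcing loop, and a balancing loop, integrated with simple Euler steps. All names and numbers are my own invention, purely illustrative; the point is that the ST assumption is baked in, since given the structure and the initial conditions the trajectory is fully determined.

```python
# A toy stock-and-flow model in the ST spirit: one stock ("adopters"),
# a reinforcing loop (word of mouth) and a balancing loop (market saturation),
# integrated with simple Euler steps. Names and numbers are invented
# for illustration only.
def simulate(stock=10.0, market=1000.0, contact_rate=0.5, dt=0.25, steps=80):
    history = [stock]
    for _ in range(steps):
        # Inflow: the reinforcing loop, damped by the balancing saturation term.
        inflow = contact_rate * stock * (1.0 - stock / market)
        stock += inflow * dt
        history.append(stock)
    return history

h = simulate()
# The classic "limits to growth" S-curve: fast early growth, then a
# levelling-off as the balancing loop takes over near the market size.
print(h[0], h[-1])
```

Nothing in this model can surprise you: run it twice and you get the same curve. That determinism is exactly what CAS claims breaks down once a system is truly complex.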

7) Systems evolve. A lot.

ST assumes that systems do show (or better: can show) regularity of behavior, at least within the bounds of a pattern that is consistent over time. Through that lens, we can go with the assumption of a fixed set of variables and actors. This has proven problematic for at least two reasons.

1) For one thing, systems rarely seem to be in anything like a state of equilibrium. A steady state seems to be the exception rather than the rule. If systems were indeed ordered, equilibrium dynamics would reign in most cases, an assumption that recent insights from biology challenge as deeply problematic (this video explains it very well, and this link collects resources if you want to go deeper).

2) Some types of systems (especially social and natural ones) actually evolve and create completely new variables and new actors. There are newborn agents, states, and variables altogether. This adds a layer of newness: the system evolves and adapts itself to the new conditions, generating behaviors that could not be predicted nor controlled from an initial set of actors, variables, and rules of interaction. Reflecting upon a journey that began many years ago with Systems Dynamics, Peter Allen recently put it this way:

“The key step that complexity added was to recognize that the ‘system’ itself could potentially redefine itself, evolve and change – qualitatively – creating new variables, new mechanisms and new emergent features and characteristics.

Any system at a given moment has emerged from a past in which it was not what it is now. Complexity is about evolutionary emergence of structure and form. This involves ‘learning’ and ‘forgetting’ not just functioning – recognizing changed features and elements, requiring perhaps changed values, aims and goals. Complexity admits that the ‘functional structure’ may change – and life is not just a mechanical system running forward in time!” 

--------------------------------

About me: I am the founder of Plecter, a consultancy based in Malmö, Sweden, that designs and applies dialogue-based tools in support of decision-making. I draw from research on complexity theories, collective intelligence, and ecological sustainability to support decision-making in the face of complex challenges and to better navigate our uncertain times. I only work with people who have the intention to make the world a better place, and I am well aware that everyone has their own definition of what that even means.

-------------------------------

To read more about it: my post starts from a few sources, linked throughout the text.

(Question for the few readers who actually might want to dig this up: Does it add value if I add all the references of the hyperlinks, as if this were an academic paper?)
