Safety and Complexity


Light behaving strangely

Shine a beam of light through a soap bubble and it can behave in an unexpected way: the light may split into branches like a tree. "No one predicted this to happen, it was a complete surprise in the lab." The soap membrane had random variations in its thickness, so the researchers expected the laser beam to break up into disordered speckles.

(Read more: https://www.newscientist.com/article/2247500-soap-bubbles-can-split-light-into-otherworldly-branching-streams/#ixzz6R1S25e6P)

A recent reference to an engineering group set up to look at SAFER COMPLEX SYSTEMS intrigued me, as it immediately conjured up more curiosity than comprehension. Safer than what? How complicated is "complex", and why "systems"? This triggered a need to put sensible answers into some sort of context. In a way, this contextualisation illustrates well the historical journey we have travelled in the evolution of safety thinking, which I personally found quite helpful and share below in case it is of interest to fellow seekers after safety truth.

What are systems?

Our brains have evolved to deal promptly with things that happen in our external environment. To do this in a timely fashion, we have to compensate for the inherent delay in processing sensory signals: we unconsciously update and correct theoretical (prior) predictions, in real time, against actual observations. When "systems" were relatively simple, the ancients eventually "knew" how to predict the not-so-simple behaviour of solar eclipses from learned rules. This became more and more sophisticated over the following centuries, eventually forming the basis of a scientific approach that produced experimentally calibrated predictions as received wisdom and Laws.
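As a toy illustration (my own, not drawn from the article's sources), this unconscious predict-then-correct loop can be sketched as a prior estimate repeatedly nudged towards delayed observations; the weighting and the numbers below are entirely hypothetical:

```python
# Minimal predict-then-correct loop: a prior prediction is blended with each
# noisy, delayed observation, weighted by how much we trust the observation.
# All values are hypothetical and purely illustrative.

def correct(prediction: float, observation: float, trust: float) -> float:
    """Blend a prior prediction with an observation (trust: 0 = ignore, 1 = adopt)."""
    return prediction + trust * (observation - prediction)

estimate = 10.0                      # prior prediction (e.g. expected position)
for obs in [10.8, 11.5, 12.1]:       # successive delayed sensory observations
    estimate = correct(estimate, obs, trust=0.5)
    print(round(estimate, 2))        # 10.4, 10.95, 11.53 -- tracks reality with a lag
```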


These classic Newtonian-type Laws were thus invaluable and accepted as inviolate; discrepancies detected in their application (as Simple systems) were interpreted as due to imperfections in observation. The Victorian industrial revolution was built on more ambitious applications of these universal laws: machines that used this understanding to produce predictably and efficiently. Although intricate and extensive (Complicated?), these machines could still be described and understood with simple rules and standards, based on explicit laws and diagrams detailing the individual, sequentially coupled components. One could argue that even nuclear power plants and NASA's moon shots were achieved with very complicated machines, although the prevalence of unwanted behaviours later forced a rethink, to include "human factors" and to think less deterministically. This recognised that our (now sociotechnical) systems had become "complex".

What is complexity?

There are several approaches to discussing complexity in systems which I have found helpful. Cynefin (because of my Welsh background) and VUCA (with its practical relevance) are helpful visualisations. A third, now more notorious (but equally insightful?), is the Rumsfeld "unknowns". Cynefin describes a spectrum of complexity from simple to chaotic, quantised into four reference states (see below). VUCA describes systems in terms of four challenges to control or prediction: Volatility (rate of change), Uncertainty, Complexity and Ambiguity. The problems posed in managing or operating in each of these regimes can then be categorised as needing increasing sophistication in the approaches taken to understanding and predicting their behaviour.

[Figure: the four Cynefin reference states and the VUCA dimensions]

All of these illustrate the way our ability to comprehend and predict behaviours varies with the inherent properties of the systems. They also suggest the different approaches that are necessary, or more likely to be helpful, in addressing and designing what is "safer" in the operation of such systems.
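As a side note (my own illustration, not part of the original frameworks' tooling), the Cynefin reference states and the kind of response each calls for, in Snowden's standard formulation, can be written down as a simple lookup:

```python
# The four Cynefin reference states mapped to the response sequence and the
# kind of practice each calls for (Snowden's standard formulation).

CYNEFIN = {
    "simple":      ("sense -> categorise -> respond", "best practice"),
    "complicated": ("sense -> analyse -> respond",    "good practice (experts)"),
    "complex":     ("probe -> sense -> respond",      "emergent practice"),
    "chaotic":     ("act -> sense -> respond",        "novel practice"),
}

def approach_for(domain: str) -> str:
    """Return the response sequence and practice appropriate to a Cynefin domain."""
    sequence, practice = CYNEFIN[domain.lower()]
    return f"{domain}: {sequence} ({practice})"

print(approach_for("complex"))   # complex: probe -> sense -> respond (emergent practice)
```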

What is Safer?

This is an intriguing question, as there are at least two dimensions to take into account. Over the years our ability to design and use tools has evolved from flints to Hadron colliders, whereas our tolerance of the "risk" of using them has decreased dramatically. Paradoxically, the understanding needed to predict these risks has become almost impossible to achieve, with modern systems being combinations of complex and now artificially intelligent machines that verge on the chaotic. So over time the sophistication of the approaches developed and employed to reduce these risks (to make things safer?) has had to evolve to keep pace.

Common sense and learned practices may have sufficed for simple systems, but as systems became more complicated and societies / users more sophisticated, we required more formal and diligent attention to occupational and societal exposure. New rules and regulations at the end of the 19th century therefore started to specify barriers and protections against misuse or malfunction. For the system itself, the emphasis was on efficiency and reliability, and on reducing human exposure and operating errors. Thus, in the early 20th century "safety" assessments focussed on eliminating the "root causes" of unplanned upsets. Even for the more complicated space and nuclear systems, the emphasis was on using checklists and logic trees to predict the reliabilities of components (FMEA) and systems (FTA).

The step change after World War II was the recognition of uncertainty and the expression of risks in terms of probability of occurrence. In this quantitative era it became possible to discuss "safer" options in terms of lottery likelihoods, although the belated realisation that "zero" risk is unattainable quickly moved the debate on to tolerability, acceptability and cost-benefit ratios.
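To make the quantitative (FMEA / FTA) step concrete, a fault-tree calculation simply combines component failure probabilities through AND and OR gates. The sketch below uses made-up numbers and hypothetical component names, and is not drawn from any real plant data:

```python
# Illustrative fault-tree (FTA) arithmetic: independent component failure
# probabilities combined through AND and OR gates. All numbers are made up.

def and_gate(*probs):
    """Top event needs ALL inputs to fail: multiply the probabilities."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def or_gate(*probs):
    """Top event needs ANY input to fail: 1 minus the product of survivals."""
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

pump_fails   = 1e-3          # hypothetical per-demand failure probabilities
backup_fails = 1e-2
sensor_fails = 5e-4

cooling_lost = and_gate(pump_fails, backup_fails)    # both pumps must fail: 1e-5
top_event    = or_gate(cooling_lost, sensor_fails)   # either path defeats the protection
print(f"{top_event:.2e}")                            # ~5.10e-04
```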

Today, as we can see in the current COVID-19 crisis, we have almost given up on trying to base estimates of safety on anything more than individual opinions (expert or otherwise). How safe is a 2 metre separation? Well, it's safer than 1 metre! Is it? On what basis? So in a way we have come full circle: the environment has simply become too complex to predict with any degree of confidence, and we are back to stone-age "suck it and see". Clearly, we lack the tools to give us the insight and confidence to make credible attempts to define and design "safer" systems, and to provide reassuring support for difficult political decisions.

What methodologies do we have?

It is interesting to note that a number of the more successful approaches to understanding how complex systems work decided that a degree of abstraction was necessary in describing them. For example, in the late 1980s software engineers developed the idea of a linked series of "Structured Analysis" boxes to describe what was happening. This simplified the more conventional Process Flow, Piping and Instrumentation and circuit diagrams normally used to specify systems, and provided a way of looking at how the various subroutines combined in increasingly complex software programs. This abstraction of entities / tasks / actions into "boxes" has also been employed in Rasmussen's Accimap and Leveson's STAMP control-loop approaches, to understand how different functions interact (or not!) in complex systems.

While useful for complex systems which obey predetermined rules in set sequences, the complexities of current realities (pandemics, healthcare, real-world operations) require an element which can accommodate and recognise the effect of real-world variability; not just in the operating conditions, but in the way systems are modified in practice, in real time, such that predetermined responses and designed sequences are impossible to predict "a priori" and the shape of the system "emerges" to adapt to the current situation. This is often referred to as "work as done" rather than "work as imagined". At the moment there are few methods available (such as FRAM) that can satisfy this need.

In the current crisis, the use of simple (albeit multiple-parameter) spreadsheet models has left decision makers inevitably "behind the curve", as such models can only be calibrated and updated from observation and lagging indicators. This is only exacerbated if the spreadsheet is built on parameters from previous pandemics. Similar failings in Wall Street spreadsheets led to the failure to spot the fallacies behind the 2008 financial crash. As shown in the diagram below, the more complex the system, the more error is inevitable, and potentially catastrophic, with conventional models.

[Figure: model error increasing with system complexity]
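FRAM, mentioned above as one of the few methods that can represent this variability, describes a system as a set of functions, each characterised by six aspects (Input, Output, Precondition, Resource, Time and Control), with couplings arising wherever one function's output feeds another function's aspects. The sketch below is a minimal, hypothetical data-structure illustration of that idea, with invented function names; it is not Hollnagel's method or the FRAM Model Visualiser:

```python
# Minimal sketch of FRAM-style functions: each function has six aspects, and
# couplings arise wherever one function's Output appears in another's aspects.
# Illustrative structure only, with hypothetical healthcare functions.

from dataclasses import dataclass, field

@dataclass
class FramFunction:
    name: str
    inputs: list = field(default_factory=list)        # what the function acts on
    outputs: list = field(default_factory=list)       # what it produces
    preconditions: list = field(default_factory=list) # what must hold before it starts
    resources: list = field(default_factory=list)     # what it consumes while running
    time: list = field(default_factory=list)          # temporal constraints
    control: list = field(default_factory=list)       # what supervises or regulates it

triage = FramFunction("Triage patient", inputs=["patient arrives"],
                      outputs=["priority assigned"], resources=["triage nurse"])
treat  = FramFunction("Treat patient", inputs=["priority assigned"],
                      outputs=["patient treated"], control=["clinical protocol"])

def couplings(funcs):
    """List where one function's output appears as another function's aspect."""
    links = []
    for up in funcs:
        for down in funcs:
            for aspect in ("inputs", "preconditions", "resources", "time", "control"):
                shared = set(up.outputs) & set(getattr(down, aspect))
                links += [(up.name, down.name, aspect, s) for s in shared]
    return links

print(couplings([triage, treat]))
# [('Triage patient', 'Treat patient', 'inputs', 'priority assigned')]
```

In a full FRAM analysis one would then assess the potential variability of each function's output and how it propagates (resonates) through these couplings; the sketch above only finds the couplings themselves.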

Paradoxically, the rules of the simpler systems, typified by Lord Kelvin's dictum that what you can't measure, you can't manage, seem counterintuitive in complex systems. The more confidence you show in presenting a precise quantitative prediction (a la Newton), designed to reassure the public, the less "honest" you can be in caveating that this is still only a hypothesis, albeit possibly the best estimate we have at the time!

Numbers based on sound science are desirable; numbers based on outdated assumptions can be positively dangerous. We need the humility to acknowledge uncertainty, and not be surprised when the laser light behaves in a completely different way from what the current "science" would predict (and not follow it blindly?).

Work is continuing to develop more realistic, open-minded, functional models, such as FRAM, which, although currently still qualitative, allow a more realistic (more candid) representation of variability and emergence in predictions of the behaviour of complex systems. Current work with them has shown they can improve the operation of high-hazard, high-stress operations in healthcare and elsewhere.

To achieve "Safer Complex Systems", then, requires an understanding of safety, of complexity, and of how our increasingly incomprehensible systems work. Unfortunately, few approaches are currently available that can help us do this. I hope that this initiative to develop safer complex systems will include helping us develop these more advanced "modelling" approaches further, as they are clearly urgently required.

DS 2/7/2020

Gilles Savary

Head of Flight Academy for New Air Mobility | Professional Speaker | Risk Awareness | Decision Making


Growing complexity ultimately leads us to a paradox: the safer our present (statistically) is, the more uncertain we may feel about our future. Thank you David Slater for sharing this.

Stephen van Dijck

Specialist ATO for UPRT and Aerobatics. Keynote speaker.


Nicely put into perspective and eloquently written David. Thank you. I will share this.

Don Hodkinson

Work Health and Safety and Risk Manager retired


Wow, I think this says what I have long time thought, but only far more eloquently and intelligently. Thanks. :-) :-)

Syamsul Arifin

Safety professional at integrated energy company | PhD student | author/writer | speaker


Nice sharing David

Dave C.

Trying to make sense of how humans really discern risk and make decisions on how best to deal with it.


Very interesting thoughts! I hope the BBS zealots don’t stumble across this article - might make their methodology look a little silly
