Visualising safety: strengths and weaknesses
Jean-Christophe Le Coze
Author of ‘Post Normal Accident’ | Head of research on Human & Organisational Factors
Swiss Cheese Model and trade-offs (James Reason)
It seems unnecessary to introduce Reason’s contribution here: anyone working in the field, and likely to read this post, knows the approach fairly well (see Larouzée and Le Coze, 2020, for a presentation and discussion; figure 1).
Figure 1. Swiss Cheese Model (Reason)
Reason made additional representations available in his articles and books, specifying some of the elements underlying this central idea. Several of them attempt to capture the dynamic trade-off between safety and production; see, for example, figure 2, in which companies performing hazardous operations are represented as navigating between bankruptcy (too much emphasis on safety) and catastrophe (too much emphasis on production). Following an accident, safety is focussed upon for a while; attention then declines, reaching a state where incidents and accidents are again more likely, and eventually occur; a new cycle then begins.
Figure 2. Safety as a cyclical trade-off (Reason)
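As an illustration only (and not Reason’s formal model), the cycle of figure 2 can be caricatured as a very small simulation in which safety attention decays under production pressure, the probability of an incident grows as attention declines, and an incident resets attention. All names and parameter values below are my own assumptions, chosen purely for the sketch.

```python
import random

# A minimal caricature of the safety/production trade-off cycle (figure 2).
# 'attention', DECAY and SCALE are hypothetical quantities assumed for this sketch.
random.seed(1)

attention = 1.0   # assumed "safety attention" level, between 0 and 1
DECAY = 0.97      # assumed weekly erosion of attention under production pressure
SCALE = 0.05      # assumed scaling from inattention to incident probability

for week in range(200):
    attention *= DECAY                      # safety focus erodes over time
    p_incident = SCALE * (1.0 - attention)  # the lower the attention, the likelier an event
    if random.random() < p_incident:
        print(f"week {week:3d}: incident, attention had declined to {attention:.2f}")
        attention = 1.0                     # the event refocuses the organisation on safety
```

Running it prints quiet periods punctuated by incidents that reset attention, which is the cyclical dynamic the figure suggests.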
One last important comment: in Reason’s own words, his approach is systemic in the sense that it does not target individuals but rather organisational factors, or latent failures. As indicated above, the production of a systemic model such as this one marks a shift in the author’s interest away from the topic of human error.
Based on this introduction of the model, I now turn to what I consider to be some of the strengths and weaknesses of Reason’s contribution. This is based on my personal experience of using the approach for accident investigations but also in the study of daily operations. Critiques have already been offered by various authors in articles and books (e.g. Turner and Pidgeon, 1997; Dekker, 2002; Shorrock et al., 2004; Hollnagel, 2004), which in itself demonstrates the popularity of the approach. But first, I want to stress some of the strengths of the model:
- The model provokes an immediate, intuitive understanding and draws a very clear parallel with the technical approach of ‘defence in depth’, which translates very well, metaphorically, from technology to organisation;
- It expresses and reduces the complexity of the problem of accidents by indicating the many potential (but unfortunate, or ‘normal’) combinations of holes that characterise an accident sequence (see the sketch after this list);
- It allows users to see practical recommendations that can be derived from the model, by targeting and improving selected defences;
- It indicates a distance from the targets (the damages), so that incidents can be expressed by their level of proximity to a catastrophe, and therefore offers, in principle, the possibility of a normative assessment;
- It distinguishes between proximal and remote individuals, who play different roles in the genesis of accidents, and is, in this respect, systemic rather than individualistic.
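To make the ‘combinations of holes’ point from the list above concrete, here is a minimal Monte Carlo sketch. It deliberately treats each slice as an independent barrier with a fixed probability of presenting a hole; this independence is an assumption of the sketch, not of Reason’s model, and the number of layers, hole probability and trial count are made up for illustration.

```python
import random

# Monte Carlo sketch: an accident requires the holes in every defensive
# layer to line up during a single hazard challenge. All parameters are
# hypothetical; real holes are neither fixed nor independent.
random.seed(0)

N_LAYERS = 4       # assumed number of defensive layers (slices)
P_HOLE = 0.1       # assumed chance that a given layer has a hole when challenged
TRIALS = 100_000   # number of simulated hazard challenges

accidents = sum(
    all(random.random() < P_HOLE for _ in range(N_LAYERS))
    for _ in range(TRIALS)
)

print(f"estimated accident probability: {accidents / TRIALS:.5f}")
print(f"analytic value for independent layers: {P_HOLE ** N_LAYERS:.5f}")
```

The independence assumption is precisely the kind of simplification the model leaves implicit.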
However, the approach is weak on certain aspects:
- It does not explain clearly what the holes are in reality – users are left to translate this for themselves; it is suggestive rather than analytical;
- It does not indicate how holes are likely to align, although some of Reason’s other representations introduce the notion of a trade-off cycle between safety and production (e.g. figure 2), providing a kind of dynamic illustration while remaining at a suggestive level;
- It relies on an underlying philosophy of failure and errors (whether ‘latent’ or ‘active’), introducing the notion of blame at the level of either proximal or remote actors;
- It is not explicit, or is insufficiently specific, about the slices or planes (the defences), although they are to be associated with different scientific fields (psychology, management, sociology, etc.);
- It does not explicitly relate safety management activities (risk analysis, learning from experience, management of change) to the slices, although such activities constitute the most common approach to safety in companies;
- It therefore leaves a lot of room for interpretation about how slices are to be considered (e.g. functions, actors, procedures) and how far back in space and time the slices should go;
- It offers a linear and sequential view of accident trajectories, as a sequence of events following each other over time, and cannot account for multiple and/or circular causalities with different time spans.
Of course, no approach can claim to be without limits, and limits are not intrinsic to a model but to its contexts of use and the background of its users (something that, in any case, is always relative rather than absolute). These limits also reflect the historical periods during which models were produced and the experience of their authors (e.g. psychology, aviation, healthcare, oil and gas for Reason). Let us now discuss Jens Rasmussen’s contribution.
Migration and sociotechnical systems (Jens Rasmussen)
As one of the most popular and influential authors in the field of safety (see Le Coze, 2015), Jens Rasmussen’s insights and associated graphical representations are well known, although they are probably better known in academic circles than in industry, in comparison with the Swiss Cheese Model (Stanton et al., 2011). Among his many models, two stand out, in my view, as safety (accident) models: the first is the model of migration, and the second is the sociotechnical system (STS) view.
Figure 3. Migration and Sociotechnical System (Rasmussen)
Here again, based on my experience as a user of these models, their strengths are that:
- They indicate the importance of taking an interdisciplinary and functional (or vertical) approach to thinking about industrial safety problems.
- They characterise the notion of variability and adaptation for an organisation in its environment, in relation to the idea of exploration at the borders of safe performance (see the sketch after this list).
- They emphasise that socio-technical systems are dynamic and change under external constraints.
- They replace the notion of error and failure with the notion of variability and adaptation both for operators and managers (accidents can be seen as ‘expected’ or even ‘normal’ due to the explorative nature of systems as an adaptation of individuals to external constraints).
- They introduce the idea of self-organisation, which allows a very clear understanding of how situations can potentially drift outside the centralised control of organisations.
- They show the distributed nature of the problem of safety and accidents by locating different actors in time and space (especially through the Accimap representation deriving from the STS view).
- They suggest an intuitive and appealing analogy between cognition and organisation, which is very illustrative.
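To illustrate the migration idea from the list above, here is a minimal random-walk sketch in which everyday variability combines with a systematic drift induced by production pressure until a boundary is crossed. The boundary, the pressure and the noise level are all assumptions of mine; as discussed below, whether such a boundary can be defined or known in advance is itself questionable.

```python
import random

# Random-walk sketch of the migration idea: ordinary adaptation
# (drift plus variability) can carry practice across a safety boundary
# without any 'error' occurring. All quantities are hypothetical.
random.seed(2)

position = 0.0    # assumed distance drifted from the initial way of working
BOUNDARY = 10.0   # assumed safety boundary (whether it is knowable is debated below)
PRESSURE = 0.08   # assumed systematic drift from cost/effort gradients
NOISE = 1.0       # assumed everyday performance variability around the drift

for step in range(500):
    position += PRESSURE + random.gauss(0.0, NOISE)  # drift plus variability
    position = max(position, 0.0)                    # do not drift below the starting point
    if position >= BOUNDARY:
        print(f"step {step}: boundary crossed through ordinary adaptation")
        break
else:
    print("boundary not crossed within the simulated horizon")
```

No single step in this walk looks like a failure, which is the sense in which accidents can appear ‘expected’ or even ‘normal’ in this view.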
But this combination of models is obviously not without its weaknesses:
- It implies a hierarchical vision of the STS, with authorities at the top, who seem to be in a position of top-down control over companies, which seems doubtful these days to say the least.
- It leaves implicit the idea that flows of information are rather cybernetic in principle and sequential in nature: first, without any apparent possibility of direct communication between the lower and higher levels; second, without indicating any filters between the levels of the column.
- It implies that there are real safe performance boundaries that can in principle be defined, since they are represented as lines, while leaving open whether they can be known in advance (this remains indeed a strong assumption).
- It does not explain how to combine the scientific disciplines, how they interact with each other, or whether, for example, one discipline can or should be reduced to another.
- It does not provide any information about the safety management functions of an industrial system, which can be a problem when translating it into the most common frameworks.
- It suggests, rather than analytically distinguishes, the dimensions that should be concretely monitored or observed in the explorative behaviour of organisations.
These two lists of strengths and weaknesses again reflect my own perspective on the models; Rasmussen’s models have so far been praised more than criticised, being much more often regarded as classics of the field than examined for their potential drawbacks. But both Rasmussen’s and Reason’s approaches reflect a certain state of knowledge during the 1990s, based on the investigation reports on Chernobyl, Challenger, Clapham Junction or Piper Alpha. They reflect the concepts available at that time, focused on human error, even if both authors distance themselves, in different ways, from the notion of ‘error’. Reason moves from active failures to include latent failures, a systemic shift targeting actors other than frontline ones, but nevertheless remains within the ‘human error’ paradigm. Rasmussen introduces the idea of variability, which contributes to the shift, taken in the 21st century, towards a more positive view of operators, away from the negative ‘human error’ perspective.
To find out more about the strengths and weaknesses of visualisations in safety, and possible avenues for alternatives, see:
https://www.academia.edu/4777946/New_models_for_new_times_An_anti_dualist_move
References
Dekker, S.W.A. (2002). Reconstructing human contributions to accidents: The new view on error and performance. Journal of Safety Research, 33, 371-385.
Hollnagel, E. (2004). Barriers and Accident Prevention. Aldershot, UK: Ashgate.
Larouzée, J., Le Coze, J.C. (2020). Good and bad reasons: The Swiss Cheese Model and its critics. Safety Science. https://www.academia.edu/42190472/Good_and_bad_reasons_The_Swiss_Cheese_Model_and_its_critics
Le Coze, J.C. (2015). Reflecting on Jens Rasmussen’s legacy: A strong program for a hard problem. Safety Science, 71, 123-141. https://www.academia.edu/4799799/Reflecting_on_Jens_Ramussens_legacy_A_strong_program_for_a_hard_problem
Shorrock, S., Young, M., Faulkner, J. (2004). Who moved my (Swiss) cheese? The (r)evolution of human factors in transport safety investigation. In: ISASI 2004 Proceedings.
Turner, B.A., Pidgeon, N. (1997). Man-Made Disasters: The Failure of Foresight. Butterworth-Heinemann.