To Causal Code or Not to Causal Code
Carsten Busch
Safety Mythologist and Historian. The "Indiana Jones of Safety". Grumpy Old Safety Professional.
An apology to all Shakespeare fans out there for abusing one of the most famous quotes of their favourite bard. The subject of this blog spun out of a lunch meeting with the brilliant Kristian Gould. At some point we spoke about causal categories in our organisations’ incident management systems. A good opportunity to share some experiences and reflections, and perhaps initiate some discussion.
When my organisation was implementing a new incident management system, one of the questions was: what taxonomy of causes should we use? Should we go with the standard that came with the software package, or design one of our own? That question put me in a conflicted position at the time. I definitely did not want the standard version. However, for me the question was not so much “what taxonomy”, but rather more fundamental: did I want a causal taxonomy at all?
Taxonomies (lists of categories) are used in a database when it is important that an item is identified in an identical way each time you come across it. For example, if you want to identify the geographical place where something happened, you do not want to be the victim of spelling quirks (e.g. Londen instead of London) or of information that is too specific or too abstract (e.g. Soho or England), both of which may happen if you just leave the field as free text. Provided that users select the correct item from the taxonomy, you will get good accuracy when doing statistics from the database on the number of occurrences in a certain geographical location.
So, why would you want to use a causal taxonomy? After the above, the answer might sound simple: of course so that you can count how many times a certain causal category is selected. When you look at that information over a certain period then you can see what the weak points are in your organisation (namely the causal categories named most frequently), and you might be able to identify some good actions for improvement. Sounds like common sense, doesn’t it? Some commercial investigation tools, e.g. Tripod, even employ this approach, enabling organisations to create “profiles” that show their weaknesses and give them opportunities to benchmark themselves against others – or units internally.
Leaving the issue of benchmarking (yuck!) aside for the moment, the main problem is that you cannot count causes in any meaningful way. I have written about this before, both in the Safety Myths book and in the book on measuring safety. It boils down to the fact that causes are not phenomena that can be identified (and thus counted) in any objective way. They are constructs, figments of our minds, used to make sense of things. What we call a cause depends on what we select from a broad possible choice of factors (for a simple example, see this clip). How “many” causes we select depends on what we choose to include, on the depth and breadth of our investigation, and on how we choose to identify them. In short, causes are all about the choices we make. If we were to do statistics on this, it would not say much about the weak points of our safety management system (or even “safety culture”). It would mainly tell us about the choices we have made when thinking about causes.
This led me to conclude that, for the sake of registration, there was no need to have a causal taxonomy in our database. I would be perfectly happy with just a free text field where people could write down their assessment of what happened and leave it at that. I would never want to do statistics on that part of the database – nor would I want to tempt others to do so just because the data is there.
In the end, I decided that we needed a causal taxonomy after all. As it turned out, my organisation had little or no tradition for thinking about causes or why things happened. Previously, in the old incident management system, they would go directly from report to action. In many cases this led to very superficial actions that at best addressed symptoms, and sometimes to misguided actions that wasted resources and time or even had counterproductive effects. There was little thinking about underlying issues. Instead, people jumped to conclusions based on superficial information, or simply made assumptions about what usually happened instead of thinking about what actually happened.
Remember, besides substantive or functional uses of taxonomies (like reporting categories consistently for the sake of statistics), we can also think of other uses. Having a causal taxonomy can also have a symbolic use. It signifies that causal factors are something we should pay attention to when handling cases. The presence of the categories serves as a “nudge” to do so. And, given that there was little tradition and awareness of this in the organisation, the taxonomy might also serve as a set of examples of how to approach the issue.
Having come to this decision, the questions were “what” and “how”. The larger your taxonomy, the more overwhelming it becomes. Remember that most users are not experts and that this is not their everyday work. If you do choose a causal taxonomy, it is best done in rather broad categories that can then be specified in free text (such that people can explain what they were thinking when they selected them). The categories should be specific enough to get people thinking, but abstract enough not to pigeonhole them. All in all, it may be advisable to have a limited set of categories (unlike this recent paper that suggests no fewer than 113). In our case, we went for a simple Man – Technical – Organisation taxonomy with a few items under each. Most things, be it success or failure, happen through the interaction of these factors. Also, thinking in terms of MTO was familiar to many of the users.
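To make the design concrete, here is a minimal sketch of how such a broad MTO taxonomy could be paired with a mandatory free-text field. Note that the category names under each area are my illustrative assumptions, not the actual ones implemented:

```python
# Sketch of a broad MTO-style causal taxonomy. The items under each
# area are hypothetical examples for illustration only.
MTO_TAXONOMY = {
    "Man": ["Competence", "Communication", "Workload"],
    "Technical": ["Equipment condition", "Design", "Maintenance"],
    "Organisation": ["Procedures", "Resources", "Planning"],
}

def record_causal_factor(area, category, free_text):
    """Validate a taxonomy selection and require the free-text
    explanation of what the user was actually thinking."""
    if area not in MTO_TAXONOMY:
        raise ValueError(f"Unknown area: {area!r}")
    if category not in MTO_TAXONOMY[area]:
        raise ValueError(f"Unknown category {category!r} in area {area!r}")
    if not free_text.strip():
        raise ValueError("A free-text explanation is required")
    return {"area": area, "category": category, "explanation": free_text}
```

The point of the free-text requirement is that the broad category only nudges the user to think; the explanation is where the actual reflection gets documented.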
A final question: should it be mandatory to register “causes” for every case in the database? I think not. If you have a database that gathers many thousands of reports, many of them do not need thorough handling. You may want them for following trends in your statistics and so on, but you do not want to handle and investigate each of them individually. In the end, we decided to make the field mandatory after all. Again, this was a choice driven by a largely symbolic intention: to remind users that it is important to think about the factors that led to the reported event or situation. However, we also offered users a way out of this: if they did not deem the case “worthy” of full handling, we prepared a couple of “get out of jail free” categories for them to satisfy the system and document their decision.
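That compromise can be sketched in a few lines of validation logic. The escape-category names below are hypothetical, chosen only to illustrate the idea:

```python
# Hypothetical "get out of jail free" categories that satisfy the
# mandatory causal field while documenting a deliberate decision
# not to analyse the case in depth.
ESCAPE_CATEGORIES = {
    "Registered for trending only",
    "Minor event, not investigated",
}

def validate_case(causal_entries, escape_category=None):
    """The causal field is mandatory: a case passes if it has at least
    one causal entry, or an explicit escape category instead."""
    if causal_entries:
        return True
    if escape_category in ESCAPE_CATEGORIES:
        return True
    raise ValueError(
        "Provide at least one causal factor, or an escape category "
        "documenting the decision not to handle the case fully."
    )
```

The design choice here is that the way out is itself a recorded decision, not an empty field, so even a skipped analysis leaves a trace of deliberate thought.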
This just shows that things have several sides, and that sometimes we do things contrary to what we really want because we can see a benefit in doing them after all. At least for a while, because when people have learned to think further, we might very well remove the categories… Let’s see what the future brings!
- - -
In case you want to read more about stuff related to what many safety professionals tend to call “causes”, check out my books on Safety Culture, Measuring Safety and Safety Myths:
Safety Culture, Human Performance, and Safety Science Manager: Process & Operational Safety- HSE&C at bp
Appreciate the pragmatic approach here! Definitely fits with my experiences. We may not always be crazy about a particular process or tool, but sometimes they serve a purpose for a while as organisations develop their skills.
Advisor to Senior Executives on Safety and Organisational Culture
I’m completely in the camp of ‘not to causal code’. What I see more often than not is that the goal is to fill in the codes correctly rather than finding the root causes of the incident. Which brings me to a second point: they often try to find THE root cause, whereas we should be looking for all (as in multiple) contributing factors.
Driving Organisational Learning to Improve Safety @ the DLR??
Thanks for the article, Carsten. Very insightful and something I think about (less articulately) quite often. In my organisation, under our Learning Review process, I used various sources (including Tom McDaniel) to create a Performance Influencing Factors (PIFs) taxonomy based on the categories of Work, Worker, Workplace (Cultural) and Workplace (Physical). Whilst I consider this taxonomy to be less problematic than a causal one, since influences are considered contributory only, it still suffers from the same challenge of being a socially constructed simplification of a complex reality. My feeling is we need something to help us with thematic analysis, but perhaps generative text-based AIs that review rich narrative reports are the best way forward, rather than distilling a 2000-word event narrative into 5-6 influencing factors and then only trending on them.
Safety Mythologist and Historian. The "Indiana Jones of Safety". Grumpy Old Safety Professional.
The second issue that I picked up from Kristian's contribution is "most only assess to a direct cause level". From experience, I can confirm this. Which makes it even more useless to do statistics on (please see my critique on Heinrich's 88-10-2 ratio). So we did NOT provide a separate taxonomy for direct causes (I've worked in organisations that required both a direct and an underlying/root cause for each case, but IMHO that's just nonsense).
Safety Mythologist and Historian. The "Indiana Jones of Safety". Grumpy Old Safety Professional.
Kristian brings (at least) two important subjects to the table. First, the "Do not harm" bit, which I think is a lovely way of thinking about our tools (see also: https://www.dhirubhai.net/pulse/tools-fault-carsten-busch/). In the case of the cause taxonomy that we implemented, we have striven as far as possible to avoid normative language in the descriptions (it was hard to do the same in the examples we provide as guidance, but I think we managed quite nicely with the categories).