Learning From Normal Work: How to Proactively Reduce Risk When Nothing Goes Wrong
An interesting article from Marcin Nazaruk, exploring learning from everyday work.
Skipping a bit, so check out the full article.
First, he says that while it’s important to learn from failure, “it is too late”. For one, diminishing incident rates “can no longer accurately reflect safety performance … and simply focusing on behaviours and unsafe conditions is not enough to further reduce risk”.
It’s argued that a common assumption is that completing a task without incident is a success. But this doesn’t mean the task was executed entirely as expected. Since the majority of tasks are completed without incident, a belief grows that no further improvement is needed.
When incidents do occur, “it feels natural to believe that it was a result of something going wrong”, like a ‘failure’ to follow a procedure. But when the task is completed without incident, “it is often assumed that all procedures were followed”.
He notes that this assumption is misplaced, since “researchers who study work performed without incidents (i.e., normal work) find the same factors that are identified in incident investigation reports”.
Work occurs under many challenges, like missing or inadequate tools, time, staffing and more. People adapt to these challenges by finding ways to accomplish their work in an acceptable manner. Adaptations usually allow the work to be completed, and usually without failure.
“In other words, things go wrong for the same reason that things go right” [*** I’d add that this works better at an abstract level. While it does have evidence to support it, other research also contradicts this statement…so in my view, it’s one of those ‘it depends’ moments]
Next Marcin unpacks the differences between hazards and constraints. Hazards are typically in the context of something with the potential to cause harm – like physical objects or hazardous energies.
But the “level of risk in any given task is not limited to how well these physical energies are controlled”. Other factors, like incorrect procedures, insufficient time or unfamiliar situations, also shape risk, and these are said not to be hazards themselves.
Instead, these factors can be called constraints, error traps and performance-shaping factors. Importantly according to the author, these facets “are rarely identified and addressed as part of risk assessment”.
Some examples of these constraints from the paper include incorrect procedures, insufficient time and unfamiliar situations.
While constraints must also be addressed, the hierarchy of controls doesn’t necessarily target them. For example, if a procedure is out of date, then assigning procedures as an administrative control in the hierarchy won’t address the quality of those procedures.
Unsafe Act, Adaptation or Both? Which Lens Is Most Helpful?
Next Marcin covers the perspectives of unsafe acts vs adaptation. He says dealing with constraints often requires adaptations.
People work around inaccurate procedures, fabricate or modify their own tools, and more. In retrospect, these adaptations may be called ‘unsafe acts’ or procedural violations, with a belief that they “must be eradicated without giving deeper thought to what prompted them in the first place”.
But the same behaviour can be interpreted differently depending on our lens. He gives an example: if a leader sees adaptations as violations, then they’re more likely to use punitive consequences. This may impact learning.
Learning From Normal Work: An Example
Next, examples are given about learning from normal work. In one, an operator was crouched on a large lathe machine, exposing them to a machinery hazard. I’ve skipped heaps here.
Marcin provides a brief comparison of how one could approach this work after it’s stopped: questions like whether the operator knows how they can get injured, understands the hazards, or knows the rules. But “this approach will not aid in understanding what this person was adapting to”.
These questions may also be loaded and put the worker on the defensive. Another perspective is viewing the task as a form of adaptation – and then ask what it is an adaptation to. This may reveal several constraints facing the worker, like equipment design etc.
Skipped a bit more, but it’s said that using a different lens may reveal new insights, since in this case the hazards may already be known by the operator.
Next a couple of tools are discussed.
Refreshing Safety Conversation
This focuses on listening to the challenges faced by operators and their needs, more than emphasising rote rule following.
People can partner up and solve the constraints together.
Walk-Through/Talk-Through
This tool involves the break-down of the task into steps, and then each step is discussed to explore the constraints and what makes each step difficult.
Marcin observes that while breaking down tasks is typical of JSAs, “the focus of WTTT is not on identifying hazards, but rather on constraints that contribute to risk”.
The WTTT “is a simple but powerful technique”, and may aid in the verification of tools.
Learning Teams
Learning Teams are also briefly discussed, where it’s argued that they can give “more insight than a simple conversation and a WTTT”.
Some examples of learning teams are given, which I’ve skipped.
But he points out that the findings from one learning team are likely the same things that would have been found in an investigation, had an incident occurred.
And hence, “the conditions leading to an incident do not unexpectedly materialize seconds before the event but are present most of the time” and “the conditions that will create the next incident exist today”.
Further, it’s said that trying to change the behaviour of operators without changing how the work is set up will have limited impact.
Moreover, the improvements that would eliminate the risk of an incident in the case study were under managerial control anyway, not the operators’.
And much of the stuff that was identified in the learning team in the case study “could not be categorized as hazards and therefore would not show up in the risk assessment”. [*** or at least, how they’re typically configured and used in health and safety; process & systems safety do have techniques which focus on constraints, shaping factors, feedback/forward & control mechanisms etc.]
Risk Assessment/Job Safety Analysis
Next, the task-level risk assessment / JSA is briefly discussed. It’s argued that adding another column for constraints can enhance the process. The article provides an example of what the JSA could look like.
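As a rough sketch of that idea (my own illustration, not the article’s actual template — the field names and content here are hypothetical), a JSA row extended with a constraints column could be represented like this:

```python
# Hypothetical sketch of a JSA row with an added "constraints" field,
# illustrating the article's suggestion. Step, hazards, controls and
# constraints below are invented examples, not from the article.
jsa = [
    {
        "step": "Position workpiece on lathe",
        "hazards": ["rotating parts", "pinch points"],
        "controls": ["chuck guard", "two-person lift"],
        # The new column: what makes this step difficult or error-prone
        "constraints": [
            "awkward access requires crouching on the machine",
            "lifting aid not available at this workstation",
        ],
    },
]

# A constraint-focused review asks about difficulty, not just hazards:
for row in jsa:
    for constraint in row["constraints"]:
        print(f"Step '{row['step']}' - adapting to: {constraint}")
```

Keeping constraints as a separate field from hazards makes the “what is the worker adapting to?” question harder to skip, rather than folding it into the existing hazard column.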
A focus on learning from normal work is said to integrate well with behavioural programs too, as it addresses constraints and other factors that existing behavioural approaches may not.
It also integrates well with leadership conversations/visits and incident investigations. On the latter, it allows a focus on the things that make work difficult that could contribute to another incident but not necessarily this one.
It’s said this is because of an effect called “outcome equivalence”: the same outcome can happen through different combinations of conditions.
Marcin then covers some steps to systematically implement this approach – I’ve skipped this but check out the article if you want to see.
In concluding, it’s argued:
1. Having zero incidents doesn’t mean the risk is sufficiently managed; conversely, having fewer incidents might mean fewer insights about what is going on
2. Zero incidents doesn’t tell leaders how well they’re managing risks
3. Safety is co-created by different people, and the overall risk
Ref: Nazaruk, M. (2023). Learning from Normal Work: How to Proactively Reduce Risk When Nothing Goes Wrong. Professional Safety, 68(11), 14-21.
Specialised coaching as a means to improve reliability of critical controls and prevent workplace fatalities.
I was implementing this in practice 25 years ago, with great success. Unfortunately, the term we used instead of ‘constraints’ was ‘barriers’, which was confusing. We would explore these constraints with the workforce in real time at the working interface, so they could be discussed literally, not abstractly at a meeting at a later date. Very powerful.
Vice President of Operations | Dedicated to helping companies save lives, prevent injuries and protect clients from harm | Delivers Client Excellence | Board and Advisory Roles
Ben - as always, a much appreciated summary. The premise of “how do you learn from the absence of incidents/accidents” is very interesting. I was just reviewing claims data from a large organization; in most cases their incidents have very low severity, but about once or twice a year something that seems really benign results in a severe outcome for one individual... On the face of it, it all seems very random... I like these suggestions, and if we put some effort into understanding precursor events that have serious-injury potential, it could have a real impact on the severity of incident/accident outcomes. Thanks for sharing!
Leadership and Organisation Development consultant
Ben Hutchinson Marcin Nazaruk Thanks for the article. I see a strong link to procedural drift and the importance of having ‘drift conversations’, using questions similar to the article’s, designed to reduce defensiveness: the person asking positions this as seeking advice on the procedure, not trying to find out why the individual is failing to follow it. Questions such as: Talk me through the procedure(s) for this activity as you understand it. Which steps do you always follow, and why? Which steps haven’t you followed, even on the odd occasion, and why? In your opinion, does any part of the procedure(s) need to be added to or amended? Are there any times you feel unsafe when carrying out this activity? When this happens, what do you do to keep yourself safe? And of course, the person asking is trying to identify drift to inform continuous improvement efforts. PS. An alternative approach I’ve found to work here, if you need a deeper dive, is the Critical Incident Technique.
I help you improve Human and Operational Reliability Management, Human Factors, HOP, SBC/BBS, Maintenance Management and Asset Management in your organization. We break the myths around Human Error.
Hello Ben Hutchinson. Good material from Marcin Nazaruk that you present here. The first lines remind me of the Macondo accident report, where the findings identified gaps and poor management in technical, human, organizational and regulatory factors: “they must actively work to close the gap between the work imagined (WAI) in the drilling program, as defined by designers, managers or even regulatory authorities, and the work performed (WAD) by the well operations team”. That is to say, since nothing happened, it was believed that the procedures were being followed. We all know what happened.