Nothing Like Newly Generated Safety Terms to Make Your Day Safe?
Mike Allocco, Emeritus Fellow ISSS
System Safety Engineering and Management of Complex Systems; Risk Management Advisor...Complex System Risks
Not Invented Here?
It is so sad that safety people don't recognize the rebranding of safety terms... As if the new buzzwords create some magical thoughts and capabilities... There is nothing new or magical with Safety I, II, III, Hoping, Agile, Lean, Differently, Zero, Black Swans, Gray Rhinos, Quantum Risk, Safety Culture, Just Culture, Safety Whatever, on and on… Just read any book on safety from the 1960s onward... After all, we need to create such new stuff because we have the solution that is new to the author. As if the solution has not been apparent to some who understand complex risk thinking?
What… just fix risks?
What is going on these days with safety sound bites from experts with no experience working complex risks? Nothing is a substitute for research skills, knowledge, and experience... Don't be distracted from doing the work of identifying, eliminating, or controlling risks to acceptable levels…
Many do not enjoy research in the context of safety. One may think safety is easy, so let us create a new term… a wondrous movement, and sell books. You know, we need to modernize, since things may be more confusing and complex to the author. We need to create new terms in other areas, outside of existing knowledge, dealing with big-system stuff in different venues… Such new thinking is in vogue, especially with people who do not have real-life experience dealing with complexity. We may as well recall all the high-risk complex systems because we have social or operational context to noodle?
Risk acceptability?
Eventually some may understand the concept of risk acceptability. I am hopeful since some of my colleagues do understand.
A risk is acceptable or not given the validation and verification (V&V) of the risk controls within the adverse sequences under review (system risks). What does this mean? Controls equate to specific latent or real-time hazards and/or security vulnerabilities, contributors, and primary hazards. These controls can do many things: control abnormal energy release, abate the adverse progression, stop the progression before the point of no return, even mitigate human error or poor decisions. The operation is a go or not based upon continuous validation and verification of controls throughout the system and adverse-progression life cycles, including V&V of all the associated processes, analyses, references, and documentation.
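The go/no-go logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the `Control` structure and field names are my assumptions, not the author's method): the operation is a go only when every control in the adverse sequence has been both validated and verified.

```python
# Hypothetical sketch: risk acceptability as a go/no-go decision on
# continuous V&V of every control in the adverse sequence.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    validated: bool   # validated: the control addresses the right hazard
    verified: bool    # verified: the implemented control actually works

def operation_is_go(controls: list[Control]) -> bool:
    """Go only when every control in the sequence is validated AND verified."""
    return all(c.validated and c.verified for c in controls)

controls = [
    Control("pressure relief valve", validated=True, verified=True),
    Control("operator abort procedure", validated=True, verified=False),
]
# One unverified control makes the whole operation a no-go.
print(operation_is_go(controls))  # False
```

The point of the sketch is the `all(...)`: a single control that has not been verified makes the entire risk unacceptable, regardless of how well the other controls perform.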
Further, there are system monitoring requirements that enable early detection, isolation, correction, contingency, recovery, and re-stabilization... based upon the defined and decomposed system... Safety metrics are designed as an output of inclusive system hazard, threat, and vulnerability analysis... They are part of the engineering and administrative controls that enable system monitoring, observation, early detection, isolation, correction, contingency, and recovery. They are risk-based proactive performance indicators that are tracked and trended applying real stochastic methods. This MONITORING is designed to enable continuously acceptable levels of risk... All other non-risk-based efforts are moot...
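Tracking and trending a risk-based performance indicator can be done with a simple stochastic method such as an exponentially weighted moving average (EWMA). The indicator, smoothing factor, and control limit below are illustrative assumptions, not values from the text; the sketch only shows how trending surfaces drift early, before a limit is breached.

```python
# Hypothetical sketch: trend a risk-based performance indicator with an
# EWMA and flag the first sample at which the trend exceeds a limit.
def ewma_trend(samples, lam=0.3, baseline=0.0, limit=1.0):
    """Return (ewma_values, index of first limit exceedance or None)."""
    z = baseline
    values = []
    alarm_at = None
    for i, x in enumerate(samples):
        z = lam * x + (1 - lam) * z   # exponentially weighted update
        values.append(z)
        if alarm_at is None and z > limit:
            alarm_at = i
    return values, alarm_at

# e.g., weekly counts of detected control anomalies (assumed data)
samples = [0, 0, 1, 0, 2, 3, 4, 5]
values, alarm_at = ewma_trend(samples, limit=1.0)
```

With these assumed numbers the EWMA crosses the limit at the sixth sample, while the underlying trend has been climbing for several weeks: the trended indicator supports early detection and correction rather than after-the-fact counting.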
Really design-in safety...
Do you know how to design the entity (system, process, procedure, method, task, operation, and links) to accommodate humans: our mistakes, bad decisions, errors, and other stressors? Or design a human-friendly entity proactively to accommodate the human. Lastly, don't attempt to modify the human to accommodate a poorly designed entity.... Proactively conduct inclusive analysis and risk assessment, which enables the identification, elimination, or control of system risks throughout the life cycle.... Provide system monitoring associated with performance indicators as an output of analysis... Enable early detection, isolation, correction, contingency, recovery, and re-stabilization throughout the adverse-sequence life cycle...
Understand progressions…
During analyses one perturbs the system, and adverse progressions form based upon physics, natural and unnatural order, possible system states, and abnormal energy release... Based upon this logic, such sequences present system risks and require elimination or control to acceptable levels of risk... This is regardless of how likely one thinks the progression will occur....
All known risks need to be addressed regardless of birds and rhinos and data confusion... Sometimes safety can be complicated, and all the buzz will not help...
And it is never just about workers and management... It is about how humans integrate with things like inclusive complex systems…
Misplaced safety resources?
It is very distracting and misleading when existing terms are changed... We can take note in this media how many experts and published authors indicate new terms for well-established existing practices, as if they created a novel concept: Safety I, II, III, Differently and HOP, on and on.... It becomes apparent that the "not invented here" mindset is a distraction... Further, consider the lack of international research and the knowledge bias.
So, there is high-tech?
There is much that can go wrong with any high-tech design, considering not many understand system safety and system assurance, and it is highly apparent given system accidents, large losses, and major recalls... How many rockets were vaporized? Lessons learned via expensive destruction... Does anyone have any idea what the learning curve is with high-tech systems? We see outdated attempts at safety analysis of the kind conducted back in the 1970s.... Sometimes we get lucky and find a system safety engineer who understands concepts way past single hazards and single functions and the worker dealing with a system mess.
Single and simple mindsets...
Here lies a problem: single and simple mindsets associated with safety standards, different safety approaches, nonintegrated safety approaches, single hazards and controls, single-issue thinking, and myopic views... If one does not understand the following, there will be uncontrolled system risks:
- How to replicate the system accident or other adverse processes, which can be very complex;
- How to design system accidents, adverse processes, and intentional adverse propagations;
- Defining consistent rules or axioms so as not to enable adverse process design;
- How to think in various ways to understand system risks;
- How to model adverse progressions;
- How not to rely on probability and stochastic processes;
- Apply temporal logic and understand system and adverse-process life cycles;
- Apply multiple non-linear propagations;
- Integrate hardware, software, firmware, logic, the human, and the environment, and the associated adverse interactions or inactions;
- Apply decision analysis, noting that decisions will adversely affect the system throughout the life cycle;
- Apply and integrate many analytical techniques to solve complex risks;
- Design for early detection, isolation, correction, contingency, and recovery;
- Enable systems to fail safe;
- Keep humans in the loop to monitor systems;
- Know the system state and mode status at all times;
- Analyze human-system links and mitigate human factors-related risks;
- Understand human limitations;
- Allow humans to be human;
- How to apply system assurance criteria to enable acceptable levels of risk;
- Exclude illogic and disruptive thinking;
- Assure that the human maintains control over the system;
On and on... No big deal?
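Several of the items above (design for early detection, isolation, and recovery; enable systems to fail safe; keep state and mode status known) can be illustrated with a toy mode-transition sketch. The states, events, and transition rules here are my assumptions for illustration only, not a prescribed design:

```python
# Hypothetical sketch: a system whose mode status is always explicit and
# which fails safe on anomaly escalation or loss of monitoring.
from enum import Enum, auto

class Mode(Enum):
    NORMAL = auto()
    ISOLATING = auto()   # anomaly detected early; isolate and correct
    SAFE = auto()        # fail-safe: halt before the point of no return

def next_mode(mode: Mode, anomaly: bool, monitor_healthy: bool) -> Mode:
    """Advance the mode given detection inputs; default to failing safe."""
    if not monitor_healthy:
        return Mode.SAFE             # loss of monitoring => fail safe
    if mode is Mode.NORMAL and anomaly:
        return Mode.ISOLATING        # early detection and isolation
    if mode is Mode.ISOLATING:
        # recover if the anomaly clears, otherwise halt in a safe state
        return Mode.NORMAL if not anomaly else Mode.SAFE
    return mode

m = Mode.NORMAL
m = next_mode(m, anomaly=True, monitor_healthy=True)   # -> ISOLATING
m = next_mode(m, anomaly=False, monitor_healthy=True)  # -> NORMAL (recovered)
```

The design choice worth noting is that the safe state is the default whenever monitoring itself is lost: the system never operates in an unknown mode, which is the "system state and mode status must be known" point in code form.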