Do Regulatory Bodies reduce the probability of major accidents?
It is becoming increasingly apparent, across many industry sectors, that the role played by National and International Regulatory Bodies in driving industry to improve its management of major accident hazards is being called into question by the workforce and the public affected by those accidents.
As a former member of a pre-eminent Regulatory Body, the UK HSE’s Offshore Safety Division, and before that the UK Department of Energy’s Petroleum Inspectorate, I took part in numerous safety and regulatory inspections and developed transformational design and risk assessment processes, particularly in the area of fire and explosion prevention and consequence mitigation. During my time with the Department of Energy, I was involved in the follow-up to the Piper Alpha disaster: I conducted inspections of the two drilling rigs attempting to remotely kill the flowing wells and was on board the lifting vessel that recovered the accommodation module. It was a sobering experience to have the true meaning of accountability for safe operations through regulatory intervention brought home to me personally by the workforce affected at the time – we were considered by the industry and the public as jointly culpable for failing to make the required changes to the industry. It is only after a major accident that a true understanding of the deficiencies in your organisation (and in you as a Regulator) crystallises, as opposed to remaining a list of broad improvement measures that are rarely fully implemented, due to political and economic constraints.
My fellow Inspectors, prior to Piper, were faced with unsuitable industry practices – such as placing production above safety – coupled with a lack of design tools to evaluate accidental hazards. It may be quite shocking for younger safety professionals to realise that in the mid-1980s Fire and Explosion Risk Analyses and Emergency Escape and Evacuation Analyses were not design pre-requisites, and the ALARP principle had not been implemented in any meaningful way. Most installation designs relied on code-based standards (API, DnV, BS) – addressing operational conditions with an element of extreme loading, but with limited assessment of, for example, confined space gas explosions, jet fires, escalation analyses and non-linear, time-based finite element response analysis.
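To make the ALARP point concrete, the sketch below shows the kind of “gross disproportion” test that a meaningful implementation of the principle implies: a risk-reduction measure is adopted unless its cost is grossly disproportionate to the safety benefit. The monetary value, disproportion factor and example figures are my own illustrative assumptions, not HSE guidance or real project data.

```python
# A minimal sketch of the "gross disproportion" test implied by ALARP.
# All figures are illustrative assumptions, not HSE guidance or project data.

VALUE_OF_PREVENTING_FATALITY_GBP = 2_000_000  # assumed benchmark value
DISPROPORTION_FACTOR = 10                     # assumed factor for high-hazard plant

def adopt_under_alarp(measure_cost_gbp, annual_fatality_risk_reduction, plant_life_years):
    """Adopt the risk-reduction measure unless its cost is grossly
    disproportionate to the safety benefit it delivers."""
    safety_benefit_gbp = (annual_fatality_risk_reduction
                          * plant_life_years
                          * VALUE_OF_PREVENTING_FATALITY_GBP)
    return measure_cost_gbp <= DISPROPORTION_FACTOR * safety_benefit_gbp

# Hypothetical example: a £400k fire/blast protection upgrade that reduces the
# expected fatality rate by 1e-3 per year over a 25-year field life.
# Benefit = 1e-3 * 25 * £2m = £50k; threshold = 10 * £50k = £500k, so adopt.
print(adopt_under_alarp(400_000, 1e-3, 25))  # True
```

In the mid-1980s even this simple style of reasoning was rarely applied systematically to accidental loads; the codes answered a different question.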
The lack of suitable accidental-loading design tools was all the more worrying given the industry’s risk appetite prior to Piper. The week before Piper Alpha, I was the Department of Energy’s Duty Inspector (a 24/7 accident response role) and had to deal with a major gas explosion and fire in another operator’s gas compression module. Luckily the corrugated steel weather cladding failed before a major overpressure could develop – otherwise we could have had a multiple-fatality event the week before Piper. I remember handing over to the Duty Inspector covering the week of Piper and hoping his week would be quieter.
The UK offshore sector today is far removed from that of the 1980s, thanks to the physical plant changes agreed jointly with industry after Piper Alpha, which lessened the potential for a fire or explosion with consequent uncontrolled escalation. Equally, the industry’s move away from a production-biased culture (the Department’s Petroleum Inspectorate also approved economic field development plans) contributed greatly to the change. However, the UK offshore industry will always be dealing with changing, dynamic risks posed by, for example, ageing plant, industrial disputes over working-time rotas, fatigue management and corporate memory loss.
What is concerning, though, is the recent plethora of major accidents around the world, which suggests a weakening of the collective approach to managing major accident hazards. I am of the opinion that some events are partly attributable to National Regulators failing to act on weak signals from incidents and, in some cases, to their inability to act even after major accidents.
In this article, I will briefly discuss a number of well-publicised recent major accidents across industry sectors. The key questions to be answered are:
· Should we consider ineffective regulatory intervention as a root cause of a major accident? And if so,
· What are the underlying causes of this root cause? And therefore,
· What should we recommend as effective preventive actions?
The first example I wish to discuss is the Grenfell Tower fire tragedy, which happened in the UK in 2017. The investigation revealed a fragmented regulatory regime, with no effective implementation of controls despite similar events having happened in other parts of the world on multiple occasions before the Grenfell fire.
The most striking example is the Lacrosse building fire in Melbourne, which was reportedly sparked by a cigarette on an eighth-floor balcony and raced up 13 floors to the roof of the 21-storey building in 11 minutes. The rapid fire spread was blamed on flammable aluminium composite cladding that lined the exterior concrete walls – the same type of cladding installed on Grenfell Tower in 2016 as part of a £10m renovation. Of the 170 buildings audited by the state building authority after the accident, 85 were not compliant with Australian building codes, which require that external cladding minimise the risk of fire.
The learnings from this event were well known in the construction industry, yet they did not lead to the adoption of inherently safer designs and materials in the UK, and the UK regulators could have intervened and applied a precautionary principle to this risk. Implementation costs, competency and/or resource constraints within the approval bodies, and a lack of scrutiny of prototype testing can all be cited as underlying causes of the accident, but a root cause was the collective failure of the regulatory regime to critically examine the basis of the design, the major accident analyses and the consequence modelling. Had an effective national regulatory regime been in force in the UK, a modification to the cladding and construction details could have prevented the initial localised event from tearing through the building.
Also concerning were the mitigation actions applied by the fire response teams, who advised people not to evacuate on the basis that traditional buildings of concrete/steel construction would prevent rapid escalation – an assumption that was never challenged or, if it was, the message was not communicated across the country. Watching a building burn and still advising the people exposed to stay in their apartments was an error of judgement, but an understandable one, given the lack of effective risk management displayed by the Regulatory Bodies.
A second example is the Notre Dame fire. Any basic risk assessment using a most probable worst-case fire event could easily have identified a loss (in emotional and reputational terms) above the most catastrophic limits of any risk matrix. Had the regulators insisted on such an approach, it would almost certainly have determined that a 24/7 fire watch – with a readily available discharge system – was warranted. The additional cost over the duration of the construction work pales into insignificance against the multi-billion rebuild and the irretrievable loss of history – and the fire-watch personnel could also have doubled as night-time security! Where was the joined-up thinking? What risk standards are applied to historic buildings? It is not as if we have not seen similar events before – the Windsor Castle fire of 1992 being a case in point.
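As a sketch of the kind of screening I have in mind, the snippet below scores a worst-case roof fire on a simple 5×5 likelihood/consequence matrix. The scales, the threshold and the scores assigned to the example event are illustrative assumptions on my part, not a real assessment of the Notre Dame works.

```python
# Minimal 5x5 risk-matrix screening -- illustrative only; the scales and the
# scores assigned to the example event are assumptions, not a real assessment.

LIKELIHOOD = {"remote": 1, "unlikely": 2, "possible": 3, "likely": 4, "frequent": 5}
CONSEQUENCE = {"minor": 1, "moderate": 2, "major": 3, "severe": 4, "catastrophic": 5}
INTOLERABLE_SCORE = 15  # assumed threshold above which the risk must be reduced

def screen(likelihood, consequence):
    """Return the matrix score and whether it sits in the intolerable region."""
    score = LIKELIHOOD[likelihood] * CONSEQUENCE[consequence]
    return score, "intolerable" if score >= INTOLERABLE_SCORE else "tolerable/ALARP"

# Worst-case fire in the roof void during renovation hot work: even a "possible"
# likelihood combined with a "catastrophic" (irreplaceable heritage) consequence
# lands in the intolerable region, pointing to controls such as a 24/7 fire watch.
print(screen("possible", "catastrophic"))  # (15, 'intolerable')
```

The point is not the arithmetic but that even the crudest screening puts such an event beyond what any matrix would tolerate without additional controls.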
Another major accident example, probably more extreme in demonstrating the lack of effective intervention by the various aviation safety regulators – in particular the US Federal Aviation Administration – occurred in the weeks after the first Boeing 737 Max crash in Indonesia. Below are extracts from media reports on the incidents:
“In the Oct. 29 crash of a Lion Air 737 Max off the coast of Indonesia, a malfunctioning angle-of-attack sensor that had just been installed sent erroneous signals indicating the plane’s nose was pointed too high relative to the oncoming air. That prompted MCAS to push the nose down more than 20 times until pilots lost control and it plunged into the Java Sea, killing all 189 people aboard.
On March 10, the same safety system on a 737 Max operated by Ethiopian Airlines was activated after an angle-of-attack sensor on the jet failed suddenly at liftoff. After about six minutes in which MCAS pushed the nose down several times, the plane went into a steep dive and crashed at high speed with 157 passengers and crew aboard.
In both accidents, there were steps pilots could have taken to avert a crash, but they failed to do so, according to preliminary reports. One possible reason was that the erroneous angle-of-attack readings triggered numerous alerts and warnings that may have been distracting.”
The design weaknesses were known from earlier incidents involving angle-of-attack sensors, and earlier designs allowed – indeed expected – manual intervention to override the instrumentation signals. Why was the decision made to rely almost exclusively on equipment with a guaranteed probability of failure and catastrophic consequence potential? These systems should have been fault-tolerant and fail-safe – a basic principle of safety instrumentation and control.
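The sketch below illustrates that principle; it is emphatically not Boeing’s actual MCAS logic. With redundant angle-of-attack inputs, a disagreement between sensors should disable the automatic function and hand control back to the pilots (fail-safe) rather than acting on a single faulty reading. The sensor count, disagreement threshold and mid-value selection are my own assumed illustration.

```python
# Illustrative fail-safe voting on redundant angle-of-attack (AoA) sensors.
# NOT the actual MCAS implementation -- just a sketch of the fault-tolerance
# principle discussed above. The threshold is an assumed value.

from statistics import median

DISAGREEMENT_LIMIT_DEG = 5.0  # assumed maximum allowed spread between sensors

def aoa_for_automation(readings_deg):
    """Return a validated AoA value, or None if the automatic function
    must disengage (fail-safe) because the sensors disagree."""
    if len(readings_deg) < 2:
        return None  # a single source is not enough for a flight-critical input
    if max(readings_deg) - min(readings_deg) > DISAGREEMENT_LIMIT_DEG:
        return None  # sensors disagree: disengage and alert the crew
    return median(readings_deg)  # mid-value select on agreeing sensors

print(aoa_for_automation([4.8, 5.1, 4.9]))   # 4.9 -> automation may act
print(aoa_for_automation([4.8, 22.5, 4.9]))  # None -> disengage, pilot flies
```

Cross-checking redundant inputs and defaulting to a safe state on disagreement is routine in safety instrumented systems in the process industries; the concern is that neither the designer nor the regulator insisted on it here.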
This incident highlights issues with the design, but of equal concern was the failure of the aviation regulators to recognise the design flaws and intervene after the first accident. The conclusions in this article point towards several underlying causes, but the closeness of government to industry is a common theme, recently reinforced by Donald Trump’s suggestion that one action Boeing should consider is to re-brand the Boeing 737 Max – give it a new name. Well, that would certainly enhance safety?
Finally, and without going into specifics, I have presented several Masterclasses on HSE and process safety and used many excellent US Chemical Safety Board accident videos. In one particular video, the US Chemical Safety Board presented a major explosion event together with a summary of at least three similar events that the industry and the company had failed to deal with. It occurred to me then that a root cause was surely the lack of effective regulation: there was a series of similar smaller events that provided enough information upon which the regulator could possibly have intervened to withdraw licences if companies did not change their design and operating practices.
In conclusion, I propose the following issues for discussion with my fellow safety professionals and regulators, as matters that could impair the effective regulation of major accident hazards and that should be considered by any nation state when determining whether to change the regulatory regime and intervention approach in its country of operation:
· Dealing with companies that have a dominant market position and/or a business of National importance
· Regulating state-controlled enterprises, which typically results in a more informal application of state compliance requirements and in the exchange of individuals between the enterprise and the regulator
· Resource constraints placed upon Regulators by National Governments
· Snapshot inspections and audits, which have limited impact on high-risk situations
· Structure of the Regulators and the clear separation of economic and HSE regulation
· Risk blindness – it happened elsewhere but won’t happen here
· Regulator investigation recommendations – slowness in implementing changes while the report is finalised, combined with ineffective follow-up
· Legacy issues with existing hazardous facilities
· Increasing complexity of safe operation – understanding the performance-versus-risk balance
· Sharing and learning – legal constraints across companies/countries impacting open reporting by the Regulator
When regulators conduct their own investigations, the regulatory regime itself should always be considered as a potential root cause – so perhaps such investigations should include independent team members who can identify the issues for change that the public expects. As safety professionals we should challenge the regulatory regime at every opportunity – after all, we are the public!
Quality Control and tech service specialist (3 years ago): I had an experience with the Piper Alpha disaster. On the afternoon of the disaster I received a telephone call from the coastguard asking if they could use Ciba-Geigy dyestuffs in the North Sea to mark the oil line. Being a 25-year-old lab technician, I decided it was too big a shout for me, given the environmental concerns, and passed them quickly on to the Health and Safety manager, who also politely declined.
Corporate Process Safety Leader & EHS Manager - Hexcel Corporation (5 years ago): The strength of regulator enforcement will continually grow and ebb, affected by the occurrence, or lack, of major events. It is only as strong as the public outcry that drives it. It is subject to the same budget balancing as anything else on a government's agenda and will rapidly diminish once the public outcry dies down or the effects of the event fade. Perhaps internationally standardised and agreed regulations, such as in the maritime sector, would be a step in the right direction. To answer the question directly, though: yes. Regulator presence might not always be enough, and it is all too easy to pinpoint failures in hindsight, but every level of assurance helps. Regulation itself is what drove companies to employ dedicated safety professionals, so in a way we are the "internal regulators", far more accountable because of our greater involvement with our company's operations than the regulatory body. And we are certainly just as fallible to most of the same flaws you have placed upon the regulator, but, again, we play our part in reducing events, as do the site workers and everyone in between. Everyone is responsible for safety, even if placing a single point of accountability is a lot tougher.
HAIKAI (5 years ago): Highly informative, Roland Martland. I would suggest that one can leverage the regulatory system into a liability mode. If so, the regulatory system would be exposed, like any other element, to being traced in a root cause analysis. If you force the regulatory system to be liable for its national and international safety monitoring/auditing oversights, you could open the door to a "risk share" approach within the industry-regulatory complex: accident/incident effects and liabilities shared between regulators and industries according to the principle of risk shares. The problem remains that the mother of all this safety war is led by politics and finance, not safety ontology.
Process (Chemical) Engineering Specialist & Advisor (5 years ago): Thought provoking, Roland Martland.
Professional Process Safety Engineer (5 years ago): An adage that has stuck with me is that 'people will do what you inspect, not what you expect of them'. While there are companies that will design and operate to the best safety standards regardless, there are others who need to be 'nudged' by the knowledge that the regulator will inspect them and enforce regulations.