Situation awareness in outage work – A study of events occurring in U.S. nuclear power plants between 2016 and 2020

This study explored human performance in the context of scheduled outage work in nuclear power plants using a situation awareness (SA) taxonomy.

Most SA research has focused on individual information processing, knowledge states, decision making and resulting outcomes.

However, there are also frameworks that focus on “team situation awareness”, “shared situation awareness” or “distributed situation awareness” (the latter being similar to distributed cognition). These frameworks focus more on how different team members hold different situational information and how information is exchanged within a system. [** I’ll soon be posting a summary of a distributed situation awareness study using a systems view, which better highlights how it’s a system that loses situational awareness and not an individual.]

The authors analysed 58 nuclear plant events, drawing on Endsley's three-level SA framework, shown below:

[Figure: Endsley's three-level situation awareness framework]

Level 1 is about a person perceiving relevant cues in their environment. Perceiving a cue isn't enough on its own; one must also comprehend what the cue means (level 2). Finally, level 3 is the ability to project future states and actions based on those cues.
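As an aside (mine, not the paper's), here is a minimal sketch of how events could be coded against this taxonomy. All names, fields and the example record are hypothetical illustrations to make the three levels concrete; they are not the authors' coding scheme.

```python
from dataclasses import dataclass
from enum import Enum

class SALevel(Enum):
    """Endsley's three levels of situation awareness."""
    PERCEPTION = 1      # Level 1: perceiving relevant cues in the environment
    COMPREHENSION = 2   # Level 2: understanding what those cues mean
    PROJECTION = 3      # Level 3: projecting future states from those cues

@dataclass
class CodedEvent:
    """A single outage event coded against the SA taxonomy (hypothetical structure)."""
    event_id: str
    sa_level: SALevel          # dominant SA level at which the breakdown occurred
    latent: bool               # whether the SA error was a latent condition committed earlier
    contributing_factor: str   # e.g. "inability to monitor/observe", "mismatched mental model"

# Hypothetical example record, purely to illustrate the structure
example = CodedEvent(
    event_id="EV-001",
    sa_level=SALevel.COMPREHENSION,
    latent=False,
    contributing_factor="lacking or insufficient work procedure",
)
print(example.sa_level.name, "-", example.contributing_factor)
```

Coding events in something like this form is what allows counts and percentages to be reported per level, as in the results further down.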

Issues at level 1 can arise when relevant cues aren't readily available or visible in the environment, such as when "relevant data is not signalled or communicated to the people involved, perhaps because of a lack of available system indicators (e.g., failure warnings) or interpersonal communication failures" (p3). Cues may also be impossible to observe (hidden within the plant and the like) or difficult to detect or discriminate, e.g. due to poor physical conditions such as poor visibility, poor lighting, obscured lines of sight or high noise levels.

Further, the inability to observe or monitor data in the environment can also stem from distraction, stress due to high workload, or being too narrowly focused on a given task; alternatively, the data may be misperceived (leading to disorientation) or simply forgotten.

For level 2, cues may not be appropriately comprehended or given salience due to a range of factors, including a mismatched mental model, a mental model being defined here as "a person's generic knowledge about how something is/how something works" (p3). Drawing on a mental model ill-suited to the particular task can contribute to a level 2 comprehension issue, such as relying on a "mental model for a task that worked well in another situation that is not well-suited for the situation it is applied to, because they have not recognized that key parameters relevant for the safe execution of the task have changed" (p3).

For level 3, challenges to an individual's ability to project future states based on cues are said to be more difficult to study than the prior levels. However, these can again include mismatched mental models, among other factors.

Finally, a host of personal and contextual factors also influence SA. For one, maintaining SA across the levels requires significant cognitive resources, and those demands vary with a person's degree of expertise.

Note: This paper uses "error" to describe human performance variability. Out of pure laziness I'll be doing the same. Interpreting the data is also heavily influenced by hindsight and outcome bias, so try to consider the findings from the perspective of the operator(s) caught up in the moment rather than that of the investigator after the event.

Results

Key findings included:

· 14 events (24% of the total) were classified as involving largely level 1 SA errors, one of which was a latent condition

· 30 events (52%) were of a level 2 category, six of which were latent

· 14 events (24%) were level 3, nine of which were latent (see the quick check of these figures below)
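A trivial sketch (my own, not from the paper) that simply re-derives those percentages from the reported counts, assuming the 58 analysed events as the denominator:

```python
# Quick check of the reported counts and percentages (58 events total)
counts = {"Level 1": 14, "Level 2": 30, "Level 3": 14}
latent = {"Level 1": 1, "Level 2": 6, "Level 3": 9}

total = sum(counts.values())
assert total == 58  # matches the number of analysed events

for level, n in counts.items():
    print(f"{level}: {n} events ({n / total:.0%}), of which {latent[level]} latent")
# Level 1: 14 events (24%), of which 1 latent
# Level 2: 30 events (52%), of which 6 latent
# Level 3: 14 events (24%), of which 9 latent
```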

They break down the error & SA types in various tables which I’ve largely skipped. One extract is below:

[Table extract: breakdown of error and SA types by level – image not reproduced]

Level 1

For level 1, most of the perception-related performance variability involved an inability to monitor or observe safety information in the situation (92% of all level 1 factors).

10 of the 13 events involving an inability to monitor or observe were linked with a failure to follow work procedures. Interestingly, though, only 3 reports indicated *why* the procedures weren't followed. In one example the work team was so focused on getting the job done (which is what we pay people for) that they neither sufficiently perceived the environmental cues nor used the procedure. In another case, perceived time pressure was offered as an explanation.

[Hint: you will find this observation everywhere, and especially so in construction. E.g. an ICAM makes a bold statement like the operator "lacked situational awareness" or "failed to follow procedure", but then provides almost no context around whether the procedure was known, what the operator knew about the procedure and its steps, whether the procedure was relevant and had utility, how often the procedure is/isn't used, etc.]

Level 2

Most of the performance variability examples in the dataset were linked to level 2 SA.

The most common factor was a lacking or mismatched mental model, which led to people not taking some action (that, in hindsight, should have been taken) to prevent an incident from happening.

The data highlight the prevalence of lacking or insufficient work procedures (15 out of 18 events). Examples included procedures that didn't include critical steps or actions to take, or checks and balances for operators to be aware of.

Other factors included inexperienced personnel on field tasks, or plant room operators not taking adequate steps to address plant operation. These operators didn't know what they didn't know, and the procedures, work processes and resources apparently did little to better enable them in these tasks.

Level 3

All 14 events were linked with the mismatching of a mental model. They note that 9 of the 14 events were “associated with latent situation awareness errors that had been committed prior to the event” (p7).

As with level 2, procedures in some instances again failed to properly enable operators to carry out safe and reliable work sequences.

In 2 cases a successful experience with procedures in previous contexts “led to the belief that the same procedures would work in the current situation, and thus the inability to project how differences in the present situation could lead to unique safety issues or hazards” (p8).

Discussion

In sum, they note:

· Most errors involved levels 2 and 3, with people not taking some necessary action to prevent an unwanted event when, in hindsight, they should have

· This contradicts some prior research, which found more errors at level 1 around not adequately observing or monitoring the environment

· This finding may be due to the outage work environment, which requires large numbers of contract workers with less contextual experience

· They note that outage work is heavily proceduralised, yet this and other research highlights that "insufficient procedures contribute to many of the incidents occurring during planned outages" (p8)

· Thus, "Our findings lend support for the prevalence in which insufficient procedures contribute to human errors during outages" (p8)

· Further, insufficient procedures also contribute to poorly calibrated mental models about how to undertake safe and reliable work, or how to verify the execution of work

· They argue that it may be the case "that outage workers generally view work procedures as dependable, which could prevent them from identifying that a particular procedure is lacking, or from using human performance tools that would help them challenge procedural sufficiency" (p8)

· That is, people may be too dependent on rule following, and may either lack the requisite expertise and/or not apply enough discretion and adaptation to the context

· They highlight previous research indicating the role that the environment and context play in shaping human perception and comprehension, such as poor workplace and work design, pressures or trade-offs, etc.

· The findings also highlight that "outage work is highly collaborative both within functional groups (e.g., technicians working together on a task) and between functional groups (e.g., operations and engineering working together on a task) [and] … it was more often the case that an error was associated with a group of people than a single individual, suggesting that shared situation awareness (Salas et al., 1995) is a very relevant unit of analysis in this context" (p9)

· The above, therefore, "requires moving from a cognitive perspective of situation awareness, as was applied in the present paper, to a transactional perspective" (p9)

While SA is often cast as an individual phenomenon, e.g. a person "lost" awareness of the situation and its cues, this data and other research highlight that it's really more the system and environment that lose awareness. That is, people required certain cues at certain times, and those cues weren't delivered to them in the format they needed, when they needed them.

In short, SA is better considered in the context of workplace and system design.

Link in comments.

Authors: Solberg, E., Nystad, E., & McDonald, R. (2023). Situation awareness in outage work – A study of events occurring in U.S. nuclear power plants between 2016 and 2020. Safety Science, 158, 105965.
