Are We Safe or Complacent?

A recent article in ACS Chemical Health &amp; Safety used extensive survey data to show that safety in academic and industrial research laboratories is improving. While very interesting, the article struck a negative chord with me and raised the issue that forms the basis of this piece: How does an organization know it is safe, that its safety culture, systems, and practices are effective? Absent a recent spate of accidents, can the organization assume it is safe, or is it just complacent? I have written several articles on the overall subject (“We Are Comfortable with Our Current Safety Procedures”: How Do You Prevent Something You Don’t Recognize?, https://www.dhirubhai.net/pulse/we-comfortable-our-current-safety-procedures-how-do-you-palluzi ; “My Laboratory is Very Safe.”: The Dangers of Myopic Looks at Laboratory Safety, https://www.dhirubhai.net/pulse/my-laboratory-very-safe-dangers-myopic-looks-safety-richard-palluzi).

In this article I want to focus on the use of surveys to help determine the effectiveness of an organization’s safety practices, procedures, and culture.

Surveys give people’s opinions. As my mother always told me, opinions are like heads: everyone has one. Opinions are defined as “the beliefs or views of a large number or majority of people about a particular thing” or, more germanely, as “a view or judgment formed about something, not necessarily based on fact or knowledge.” And surveys are notorious for generating incorrect or misleading views. How the question is asked, how we want to be viewed, and peer pressure to answer a certain way can all, among other factors, influence the answer. Anonymous surveys are often not. I have seen managers recognize handwriting, recognize the way comments are worded, or even obtain the name of the respondent from IT despite the survey being touted as anonymous. All create some pressure (real or imagined) to “vote the party line.”

Years ago, I worked for an organization that annually surveyed all its members on management’s commitment to safety. Initially, the questions were poorly phrased along the lines of “Overall, my management is committed to safety,” resulting in almost uniformly good responses. Later, the question was modified to something more open, like “My management has shown that they always put safety first,” which produced a brief period of less excellent replies. Within a few years the results were almost uniformly excellent again. Why? Because every time a group expressed any concerns, the manager assured them they were wrong, restated their commitment to safety, and encouraged group members to rethink their answer next year. Everyone rapidly got the message.

In another organization, I was asked to review their management of change (MOC) system as part of an audit. Management gave it high marks for its routinely reported 100% compliance rate. The EHS group gave it high marks because it ensured every step of the review process was conducted and documented. Facilities gave it high marks because it eliminated safety issues and also reduced what they felt were “needless” requests for change. So far, so good. Users, after some prodding, complained bitterly that the system made any change a tedious, months-long process to be avoided at all costs. Several user “interpretations” of what was or was not a change had arisen that, unsurprisingly, always resulted in not having to use the system. Even a cursory audit of the actual equipment showed many modifications had been completed with no MOC review, justified as replacement in kind, as trivial, or as something that had somehow been done but for which no one would acknowledge responsibility. When I raised the apparent disconnect, I was presented with a site-wide survey that gave the system high marks, suggesting my comments were not warranted. I pointed out that the questions were slanted toward the desired answer and that personnel had mentioned that any negative assessment always resulted in increased attention to the group that produced it, with attendant even longer delays in approval. (A comment by one senior operator that everyone always gave the system high marks because they were terrified any changes would make it even worse remains a favorite tidbit.) In another organization, a system that I found badly lacking got uniformly high survey responses. Questioning the operators revealed a common thread: they loved the new system because it did not ask them to do much work to get an almost automatic approval, unlike the old system, which required a more structured review.

My point? Survey data is useful in highlighting a problem. If everyone in the survey agrees something is wrong, I suspect something is truly wrong. But whether the survey question is relevant to the cause of the problem is questionable. Whether the question is interpreted the way it was intended is questionable. Whether it is answered honestly is always in doubt. Whether the respondent is knowledgeable enough about the issue to respond is a factor difficult to quantify. (I was annually asked if a certain procedure was “effective.” I continually tried to leave the answer blank, as I never used the system. The survey forced me to say something, so it always got a yes.)

A positive survey result is, at best, of very limited utility, as it may not reflect reality. Even if the questions are well phrased, the respondents feel no pressure to give a desired answer, the personnel are knowledgeable about the systems, and everything else is in order, the organization may still fail to recognize a potential issue and yet feel very safe.

What you don’t know or, better yet, what you don’t analyze properly, may well hurt you.
