Assessing risks: common pitfalls (2/4)
My previous post explained how risk assessment works. Today: four pitfalls that, in my experience, come up often.
1. Probability: as mentioned, scoring happens on a scale. This scale can run, for example, from once per day to once per year. That works well for operational risks from activities that occur regularly. For activities that occur less often, such as drawing up annual statements (done once per calendar year), a different scale may be necessary: for example, every year, every three years, every seven years, every ten years, and every twenty years.
If you use the once-per-day-to-once-per-year scale for an activity that itself only occurs once a year, you can only select the lowest frequency. But for this activity that would mean things go wrong every single time it runs. I find this inconvenient, and working with different scales for different activity frequencies is just as inconvenient, so I have been using a different solution for years: percentages.
The table below shows four example scales, both a 4-point and a 5-point version. The percentages indicate in what share of the cases of the activity (or process) in question the risk can occur. This way it no longer matters how often the activity itself occurs; you simply apply the percentages. So design your own scale and get started!
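To make the percentage idea concrete, here is a minimal sketch of how such a frequency-independent scale could be applied in code. The threshold values are illustrative assumptions, not the exact percentages from the article's table:

```python
def probability_score(failure_pct, thresholds=(5, 25, 50)):
    """Map the percentage of cases in which a risk can occur to a score.

    thresholds: upper bounds between scale points; three thresholds give
    a 4-point scale, four give a 5-point scale. Values here are
    illustrative, not the author's actual table.
    """
    score = 1
    for t in thresholds:
        if failure_pct > t:
            score += 1
    return score

# A risk that materializes in roughly 30% of cases scores 3 on the
# illustrative 4-point scale, regardless of whether the underlying
# activity happens daily or once a year.
print(probability_score(30))
```

The point of the design is that the same function (and the same scale) works for a daily process and a yearly one; only the observed percentage changes.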
2. Impact: I have noticed that the probability factor is often mixed into the impact assessment. Instead of assessing the impact on its own, it is already downplayed (or inflated) on the basis of the previously assessed probability. It goes a little like this:
Hmm, since this is not going to happen that often, the impact will be not too bad.
I therefore always ask the following question: assuming the risk does occur, what could its impact be? It is precisely this distinction that enables people (and yourself) to separate the two factors and score them independently.
In addition: impact is not only financial. In fact, the financial impact is often the result of some other impact. Take a fine following a data breach: if you do not have your security in order (risk), the possible consequence is a data breach (impact), which in turn results in a fine. The table below shows an example of different impact categories divided over a 4-point scale.
Above all, choose the categories that are relevant to your organization and decide how the distribution across the scale levels applies to you. The financial impact, for instance, can differ greatly between large and small organizations.
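A sketch of what a multi-category impact table could look like in code. All category names, amounts, and labels below are made up for illustration; the article's own table is not reproduced here. One common convention (an assumption on my part, not stated in the article) is to take the highest category score as the overall impact:

```python
# Hypothetical impact categories on a 4-point scale (labels illustrative).
IMPACT_SCALE = {
    "financial":  ["< EUR 10k", "EUR 10k-100k", "EUR 100k-1M", "> EUR 1M"],
    "reputation": ["internal only", "local press", "national press",
                   "international press"],
    "legal":      ["none", "warning", "fine", "license at risk"],
}

def overall_impact(scores):
    """Overall impact = the highest score across the assessed categories.

    scores: dict mapping category name -> score (1..4).
    """
    return max(scores.values())

# A modest financial hit combined with national press coverage still
# yields a high overall impact.
print(overall_impact({"financial": 2, "reputation": 3, "legal": 1}))
```

Keeping the categories separate forces the conversation the article asks for: what would the consequence be in each dimension, independent of how likely it is.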
3. The scale: people often do not know how many points a useful scale has. The quick answer: four or five.
Why not more than five: on a 6- or 7-point scale you mainly end up debating whether a score should be, say, 5 or 6. The difference between adjacent scores becomes quite small; the larger the scale, the more the scoring is 'spread out', leaving thin margins.
Why not fewer than four: on a 3-point scale the opposite is true: there is little to choose from. In the Netherlands especially, people will then pick the middle option remarkably often; we do not like extreme scores. You see the same pattern in surveys.
How to choose between 4 and 5? That is often a matter of (personal) preference. The advantage of a 4-point scale is that you actually have to choose between high and low. There is no medium score, such as 3 on a five-point scale, which often feels like a safe choice. The advantage of a 5-point scale is precisely the option to be able to add some detail and to choose that middle ground. If you (and your conversation partner) can make clear choices and are able to provide good reasons for choosing the middle, then a five-point scale is a good option. If you want to prevent people from sitting safely in the middle, choose the four-point scale.
4. Do not lose yourself in the process: I see many organizations so busy completing the steps and producing beautiful overviews, such as the heatmap, that they forget the end goal. The risk score is an estimate based on experience and professional judgement; do not overthink it and do not give it more weight than it deserves. A heatmap is not mandatory at all; there are plenty of alternatives. A simple list of the risks from high to low works just as well. But keep your end goal in sight: controlling the relevant risks. The risk score is a tool to decide where your energy and time should go. If you can make that decision on hard data instead, even better.
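The heatmap alternative mentioned above, a plain list sorted from high to low, can be sketched in a few lines. The risks and scores below are invented examples; the probability × impact product is one common way to combine the two factors, used here as an assumption:

```python
# Invented example risks, each scored separately on probability and impact.
risks = [
    {"name": "data breach",           "probability": 2, "impact": 4},
    {"name": "late annual statement", "probability": 3, "impact": 2},
    {"name": "invoice error",         "probability": 4, "impact": 1},
]

# Combine the two factors (probability x impact is one common convention)
# and list the risks from high to low, highest priority first.
for r in risks:
    r["score"] = r["probability"] * r["impact"]
risks.sort(key=lambda r: r["score"], reverse=True)

for r in risks:
    print(f'{r["score"]:>2}  {r["name"]}')
```

The sorted list answers the only question that matters here: where should your energy and time go first?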
Artwork via Unsplash