The risk of Language

Imagine you are the risk type owner for Operational Risk in your company. What would you focus on? Most professionals today would give answers like creating or updating the ORM framework, IT risk (including cyber risk and AI), outsourcing risk, anti-money-laundering or know-your-client risks and, quite recently added, company culture.

I would like to add another topic to this list, one that is mostly overlooked: the risk of language.

Huh, I hear you thinking, language as an operational risk? In my opinion it is. Did you know that some of the biggest failures in history were actually caused by incorrect use of language? And that, if you do not pay attention to this subject, some future operational losses will most likely be caused by language too? Let me guide you through some examples.

Let’s start with the most obvious one: legal documents. Everybody understands that the way you formulate a legal document can determine whether, somewhere down the road, you may suffer a loss.

An example was recently published on LinkedIn (sorry, in Dutch): https://www.dhirubhai.net/pulse/ontslag-per-1-maart-duidelijk-toch-hoe-abn-amro-de-strijd-diebels. In a termination contract the term “per” was used to indicate the date when the contract would end, so something like: the contract will end as per March 1st, 2019. In court the question arose whether that meant by the end of that day or before the beginning of it. The judges (in the Netherlands) ruled in favour of the first: by the end of the day. In this case that had consequences for the severance payment that had to be paid. Just because of one inaccurate word…

Please do not make the error of thinking this is limited to contracts: with the regulations now in place, it also applies to brochures, web publications and the like. They too form "part of the contract" nowadays.

Another type of language risk is almost exactly the mirror image of the previous one. With personal liability attached to a number of functions, such as (non-)executive directors, compliance officers, etc., these functionaries have a reason not to use precise wording where and when possible, but to keep their formulations sufficiently vague. To give a couple of examples from the Barclays annual report 2017 (the same type of wording is used in almost all control statements of publicly traded companies):

[Image: wording excerpts from the Barclays annual report 2017]

So what is reasonable assurance? Are they 60% sure, or 95% sure? And what is a sudden shock? Is the trade war between the People's Republic of China and the USA covered by the word "sudden"? Can the prolonged sustaining of low interest rates by the ECB be qualified as sudden, because it went against market expectations? And then the word “could”: how certain is it that they influence credit ratings? 10%, 50%, 100%?

Of course they do this for a good reason: to prevent being held liable (as a company and as individuals) and thus to prevent a loss. So here the language risk is being insufficiently vague and broad.

Now this same vagueness can be found in the internal domain of companies. Executives prefer to formulate in a way that gives them room to manoeuvre. Since culture is partly set by example, subordinates too like to choose their wording so that it provides them some leeway. So words are used like “usually”, “more often than not”, “real possibility”, “likely”, “many”, “a few” et cetera.

One of the biggest blunders of the Cold War was caused by using words like these: the Bay of Pigs invasion. The Joint Chiefs of Staff had written down their view on the invasion plan, stating it had a “fair chance of ultimate success”. President Kennedy (and some members of his administration) interpreted this as a statement of support, whereas the Chiefs of Staff meant it as a warning (in my own wording: it was not a good chance, only a fair one) (source: War and Chance: Assessing Uncertainty in International Politics, by Jeffrey A. Friedman).

Since then several studies have been done on how people interpret this type of wording, and they show that interpretations vary widely. Take for example the words “serious possibility”. If somebody used this phrase to you, how would you interpret it? Research by Sherman Kent showed that some policy makers within the CIA interpreted it as a 20% chance and others as an 80% chance. Quite a big difference. And this is true for more words, and for terms indicating numbers (like “a couple”: is that exactly 2, or somewhere between 2 and less than 10?). See for example the graph below, taken from the original research by Sherman Kent and depicted in Business Insider: https://amp.businessinsider.com/quantitative-perceptions-of-probability-words-2017-5 , clearly showing that the interpretation can vary significantly from individual to individual. This research has since been repeated several times, always with the same type of results.

[Image: graph of individual interpretations of probability words, from Sherman Kent's research]

Therefore, like Sherman Kent did within the CIA, I would suggest standardising the interpretation of these kinds of words within your company, for example by attributing a 60 to 80% chance to the wording “serious possibility” and 20 to 30% to “unlikely”. This is useful both for internal and for external communication, for example in a management control statement.
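Such a standardised lexicon could be as simple as a lookup table. The sketch below is a minimal illustration in Python; the terms and numeric ranges are my own assumptions for the example, not an official standard.

```python
# Hypothetical company lexicon mapping probability words to agreed numeric
# ranges, in the spirit of Sherman Kent's "words of estimative probability".
# The ranges below are illustrative assumptions, not an official standard.
PROBABILITY_LEXICON = {
    "almost certain": (0.90, 0.99),
    "serious possibility": (0.60, 0.80),
    "probable": (0.60, 0.80),
    "even chance": (0.45, 0.55),
    "unlikely": (0.20, 0.30),
    "remote": (0.01, 0.10),
}

def interpret(phrase: str) -> tuple:
    """Return the agreed (low, high) probability range for a phrase,
    rejecting terms the company has not standardised."""
    key = phrase.lower().strip()
    if key not in PROBABILITY_LEXICON:
        raise ValueError(f"'{phrase}' is not in the standard lexicon; "
                         "please use an agreed term.")
    return PROBABILITY_LEXICON[key]

low, high = interpret("Serious possibility")
print(f"Interpreted as {low:.0%}-{high:.0%} chance")  # Interpreted as 60%-80% chance
```

The point of raising an error for unknown terms is deliberate: it forces authors of risk statements to pick from the agreed vocabulary instead of inventing new shades of vagueness.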

But the risk of using ill-defined wording does not end here; sadly enough, it also influences a more recent development: text mining (or, if you prefer, artificial intelligence applied to analysing texts). The purpose of text mining is, for example, to find patterns in texts and use them to propose solutions. An example in the operational risk area could be finding what type of mitigating actions would be advisable based on the description of a certain risk. Here too, the language used to describe the risk plays a role. If vague words are used in the risk description to describe the impact or likelihood of a risk (e.g. “there is a small chance that a certain risk will occur”), the interpretation can vary from individual to individual.

Please do not come up with the answer that there is a risk assessment policy in place: most first- and second-line individuals hardly know such a policy exists, and thus use their own interpretations to estimate impact and likelihood…

The consequence for AI/text mining is that it will be trained on various interpretations, leading to potential mistakes in its answers. This is also a reason why an AI model trained in one environment will not necessarily work equally well in another: the wording in these environments may differ.
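One mitigation is to normalise vague wording before risk descriptions are fed to a text-mining model, so the model is trained on one consistent interpretation. The sketch below shows the idea; the phrases, replacement tokens and ranges are assumptions made up for this illustration.

```python
import re

# Illustrative pre-processing step for text mining: replace vague probability
# wording in risk descriptions with standardised tokens carrying an agreed
# numeric range. Phrases and ranges here are assumptions for the sketch.
REWRITES = {
    r"\bsmall chance\b": "[P:10-20%]",
    r"\bserious possibility\b": "[P:60-80%]",
    r"\blikely\b": "[P:60-80%]",
    r"\bunlikely\b": "[P:20-30%]",
}

def normalise(description: str) -> str:
    """Rewrite known vague phrases to standardised probability tokens."""
    for pattern, token in REWRITES.items():
        description = re.sub(pattern, token, description, flags=re.IGNORECASE)
    return description

print(normalise("There is a small chance that this risk will occur."))
# There is a [P:10-20%] that this risk will occur.
```

A model trained on normalised text like this no longer has to guess whether one author's "small chance" means the same as another's, which is exactly the inconsistency described above.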

To summarize: how language is used in your company is a significant operational risk factor. Sometimes it needs to be vague, sometimes exact. And sadly enough, the line between the two is blurred. Let’s give it the attention it deserves!

Please note: This article is written in a personal capacity and does not represent the opinion of the companies I work(ed) for or with.

 

 

Bob Mohr

Risk Management | Compliance | Constructive contrarian | Supervisory board member | Expert Behavioural Risk & Biology | Governance

Couldn't the Schaal van Mock https://nl.m.wikipedia.org/wiki/Schaal_van_Mock be helpful here?
