Risk Data Quality Fundamentals
Testing for Risk Relevance
Earlier this week, those canny prediction-algo engineers at Google teed up an article in my feed that deeply resonated with me. The article in question was published a couple of weeks ago by McKinsey. Music to my ears was not just the general theme, that we need to #RethinkRisk, but also the clear message that, all too often, capital inefficiency is driven by ambiguous descriptions of project risk.
To quickly summarize, McKinsey’s article, “Managing project portfolios to unlock trapped capital”, explains that capital project risk management programs need to employ four principles to ensure teams do not inadvertently do more harm than good. These are:
For me, all four principles underscore the importance of risk data curation throughout the project lifecycle. Boiling everything down to fundamentals, though, let’s look at bullet #1 first, since it informs the other facets.
A problem well-described is a problem half-solved
Among the first lessons I learned in the construction industry was, “what gets measured, gets managed.” I am a Chartered Quantity Surveyor (QS) by profession. As QS students, we spent many hours learning to employ a standard method of measurement, the simple intent being that projects cannot be accurately quantified if they are not accurately described. That said, all too often, teams attempt to account for unknowns or quantify uncertainty using only abstract descriptions of risk. Whether due to a shortage of time or a lack of awareness, this common pitfall leaves risk descriptions wide open to interpretation, double-counting, or systemic underfunding across the portfolio. For risk analyses to be transparent, and robust in the face of audit, every team should be able to explain each risk input in one simple, coherent and unambiguous sentence.
The first of these four principles was also reviewed and discussed with delegates earlier this year at the virtual Project Controls Symposium. Figure 1 helps illustrate the intent of using a metalanguage (or three-part risk statement) to capture ideas devised from a risk bow-tie model. As you may note, risk bow-tie models, read from left to right, describe the cause, uncertainty and effect of a risk. When agreed upon by the team, a three-part risk statement can serve as a credible and clear CUE for project risk management, enabling both the accurate quantification of risk and a subsequent nudge for timely risk management (i.e. what Gollwitzer might describe as "implementation intentions").
Figure 1: From risk bow-tie model to three-part risk statement
It might also be noted that, by employing an appropriate CUE for risk management, we are addressing bullet #4 above, helping to reinforce the relationship between risk and organizational resilience. To use another adage: a problem well-described is a problem half-solved.
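To make the CUE structure concrete, here is a minimal sketch of a three-part risk statement as a small data model. The class, field names and sentence template are illustrative assumptions of mine, not a standard from the article or any risk framework; the point is simply that cause, uncertainty and effect are captured as separate, explicit parts that render into one unambiguous sentence.

```python
from dataclasses import dataclass


@dataclass
class RiskStatement:
    """A hypothetical three-part (CUE) risk statement: Cause, Uncertainty, Effect."""
    cause: str        # the condition that exists today (a fact)
    uncertainty: str  # the event that may or may not occur
    effect: str       # the consequence for project objectives

    def as_sentence(self) -> str:
        """Render the statement as one simple, coherent sentence."""
        return (f"Because {self.cause}, there is a risk that "
                f"{self.uncertainty}, which would lead to {self.effect}.")


# Example: a clearly described risk is easier to quantify and to audit.
risk = RiskStatement(
    cause="ground investigations are incomplete",
    uncertainty="unforeseen rock is encountered during excavation",
    effect="a schedule delay and additional excavation cost",
)
print(risk.as_sentence())
```

A statement structured this way is harder to double-count and easier to challenge in review than an abstract one-word risk such as "ground conditions".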
Figure 2 introduces some ways in which we may want to consider risk data quality or, more precisely, whether risk data is ready for analysis. With reference to ISO 8000-8:2015, we can assess the credibility or quality of risk data used for analysis across the following three categories:
A. Syntactic Quality – This is the “degree to which data conforms to a specific syntax”. Here, think in terms of the consistent application of project controls definitions, metadata or column headings of the data under analysis.
B. Semantic Quality – This is the “degree to which data corresponds to what it represents”. Here, think in terms of the relative experience team members have gathering the right data and, on a more basic level, the degree to which calculations are correctly formulated.
C. Pragmatic Quality – This is the “degree to which data is found suitable and worthwhile”. Also a function of an analyst’s data-gathering experience, this attribute more specifically addresses the degree to which a risk may be deemed relevant.
Figure 2: Data Quality & a test for Risk Relevance
Finally, with pragmatic quality in mind, Figure 2 also introduces a simple test for risk relevance. The following three attributes can be used to ensure risk relevance:
i. Authenticity – A committee comprised of project team members, or an independent board (e.g. an RRB, or risk review board) representing the wider organization, should ensure the team does not willfully avoid an inconvenient truth or taboo (i.e. addressing the bias described in bullet #3 above). Judgement may be subjective or, ideally, decision rationale will make use of appropriate reference class data (#RCF).
ii. Topicality – Here, the RRB simply verifies that a risk is on topic, asking, “does the risk’s consequence impact the venture’s objectives or goals?” When the test for topicality is satisfied, the team can be confident they have identified uncertainty that matters. Additionally, by delineating between risk that should be funded at the project level and risk that should be funded at the enterprise level, the team has addressed bullet #2, further helping unlock capital and ensure the efficient allocation of resources.
iii. Conformability – When testing for conformability, the RRB verifies that the risk statement logically conforms to the risk identification process. The identification process should, at a minimum, validate five characteristics:
By employing an appropriate CUE for risk management, we reinforce the relationship between risk and organizational resilience
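The three-part relevance test described above can be sketched as a simple checklist that an RRB might record against each risk. The field names below are illustrative assumptions of mine, not a standard schema; the essential logic is that a risk is deemed relevant only if all three tests are satisfied.

```python
def is_relevant(review: dict) -> bool:
    """A risk passes the relevance test only if authenticity, topicality
    and conformability are all satisfied (field names are hypothetical)."""
    return all((
        review.get("authentic", False),  # no inconvenient truth avoided
        review.get("on_topic", False),   # consequence impacts objectives
        review.get("conforms", False),   # statement fits the ID process
    ))


# A risk that is authentic and on topic, but whose statement does not
# conform to the identification process, still fails the overall test.
review = {"authentic": True, "on_topic": True, "conforms": False}
print(is_relevant(review))  # False
```

Treating the test as a strict conjunction mirrors the intent of the article: a single failed attribute is enough to send a risk statement back for rework rather than into the analysis.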
Over the coming years, I expect (and hope) that advanced analytics will play an increasing role in the delivery of capital asset programs. However, no matter how sophisticated our toolset becomes, and no matter how much faith we place in the data engineers who write advanced algorithms for our projects, doubt and healthy skepticism can always add value. With this in mind, and particularly when it comes to intangible or cognitive project attributes such as risk, it is imperative that we review and question the quality of even the simplest inputs under analysis.