New Year thoughts on a risk analysis trend
Happy New Year! Like many, I spent recent days thinking about trends for the years to come. In the project risk world, I’d like to share a thought about a potential turning point I’ve observed. Over the last 30 years, the use of stage-gate project processes has become ubiquitous. Stage-gate is primarily a capital project #riskmanagement process. It supports management of what we call “systemic” risk (i.e., uncertainties resulting from the project “system”). But what I have long observed is the failure of owners to use readily available, empirical stage-gate project system research (based on historical data) in their risk quantification of #contingency and reserves. Hence, early estimates (Class 5/4 or FEL 1/2 in estimating parlance) are underestimated, and suboptimal investment decisions and reduced capital effectiveness result. Owners are failing to capture a large part of the stage-gate opportunity. This is perhaps most pronounced in public #infrastructureprojects, where overruns of early announced budget values seem to be the norm. The turning point I see is recognition of this failure and movement toward finally addressing it.
There are multiple beliefs that have contributed to the failure to quantify project system risks realistically. One is the false belief that the better and longer the deterministic list (register) of risks one makes, the better the risk quantification will be (and that Monte Carlo simulation applied to such a list will somehow make it work). Then there is the misguided rejection of available, empirically based risk tools aligned with stage-gate because they are “not my data”. That rejection is troubling because, for 30 years, most owners have failed to capture and analyze their own risk, cost-growth, and schedule-slip data (and those who had good data and tools prior to the 1990s abandoned or discarded them).
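To see why a register-plus-simulation approach understates risk, consider this minimal Python sketch (all figures are illustrative assumptions, not empirical values). When many line items are treated as independent, their errors cancel and the simulated total looks deceptively tight; a single systemic driver shifts every item at once and produces a much wider outcome distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000
base_cost = 100.0  # illustrative base estimate, e.g. $100M

# Register view: 20 independent line items, each varying +/-30% (uniform).
# Independence lets the errors cancel, so the total looks deceptively tight.
items = rng.uniform(0.7, 1.3, size=(n_trials, 20)) * (base_cost / 20)
register_total = items.sum(axis=1)

# Systemic view: one project-level driver (scope definition, team, process
# maturity) shifts every item together -- modeled here as a single factor.
systemic_factor = rng.lognormal(mean=0.10, sigma=0.25, size=n_trials)
systemic_total = base_cost * systemic_factor

for name, total in [("register", register_total), ("systemic", systemic_total)]:
    p50, p90 = np.percentile(total, [50, 90])
    print(f"{name:9s} P50 = {p50:6.1f}   P90 = {p90:6.1f}")
```

The register run puts the P90 within a few percent of the base estimate; the systemic run puts it roughly 50% above it. The parameters are made up, but the structural point is not: independence assumptions, not the length of the risk list, drive the underestimation.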
The turning point I see is increased acceptance of empirically based risk quantification methods. In particular, there is a groundswell of interest in #machinelearning and #artificialintelligence (ML/AI); i.e., analytics. Many owners are suddenly interested in capturing data! Their interest is buttressed by new ML/AI-based risk analysis products from start-up firms. This interest in empiricism is unabashedly great news! Unfortunately, the ML/AI risk products are a long way from properly analyzing systemic risks. Research shows most uncertainty results from project system attributes (processes, tools, team development, decision making, and so on) that current ML/AI products, and the data readily available to them (e.g., contractor schedules), do not address, particularly at early project stages. Increased interest in data and analytics is good news, but until ML/AI products are based on or driven by project system practices over the entire project life cycle (from Class 5/FEL 1 forward), they will not replace the currently available empirical risk quantification tools. Use your data to optimize those tools.

To learn more about empirically based “parametric” risk tools that address systemic risks effectively, read AACE International Recommended Practice RP 42R-08 and visit www.validrisk.com for more information. 2022 was the first full year of ValidRisk software being commercially available and we are looking forward to increased usage in 2023! Check it out.
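For readers unfamiliar with the “parametric” approach, the sketch below shows the general structure such models take: systemic attribute ratings in, contingency out. The functional form and coefficients here are invented for illustration only; the calibrated model is published in RP 42R-08.

```python
def parametric_contingency(fel_index: float, complexity: float,
                           new_technology: float) -> float:
    """Illustrative parametric cost-growth model. The coefficients below are
    invented to show the structure (systemic attribute ratings in,
    contingency out); the calibrated model is published in RP 42R-08.

    fel_index:      scope definition rating, 0 (best) to 1 (worst)
    complexity:     project/system complexity rating, 0 to 1
    new_technology: share of unproven technology, 0 to 1
    """
    # Linear-in-ratings form typical of empirically fitted models
    return 0.05 + 0.30 * fel_index + 0.10 * complexity + 0.15 * new_technology

# A poorly defined (Class 5 / FEL 1), fairly complex project:
print(f"{parametric_contingency(0.8, 0.6, 0.2):.0%}")  # -> 38%
```

Note what the inputs are: ratings of the project system itself, not a list of discrete risk events. That is what distinguishes this approach from both risk registers and schedule-driven ML products.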
Global Director - Risk and Consulting at Hatch
Hi John, great article and best wishes for the New Year. Hopefully your efforts will also deliver a greater understanding and appreciation of systemic risk and the tools you and others have developed to quantify it. Whilst I have been using these methods for a while now, my eyes were certainly opened when I used them on a very large shut-down project to assess the capital risk profile for an FEL 1 estimate. The actual project result was very close to the mean analysis result. Similarly, on a large infrastructure megaproject, I ran an analysis based just on my observations, and the project seems to be trending fairly close to the analysis result. The ease with which this can be done, and the feedback for the project team and client on systemic risks, is a tremendous value add to improve projects. A 'chestnut' I regularly experience is the adherence to 'baked-in' contingency and accuracy guidelines, and the failure to appreciate that accuracy and contingency are actually driven by the risks and uncertainties associated with the project, and hence the result can fall outside these 'baked-in' guidelines for an estimate class or FEL phase. Very much an industry cultural issue. Cheers Greg
Driving portfolio thinking, innovation and analytics in Project Management | Utility Industry SME & Strategic Thinker | Manager, Portfolio Management at BC Hydro
Thanks for sharing, John Hollmann. Glad to see more and more companies are paying attention to empirical information. As for leveraging advanced tools like AI to quantify systemic risks, I actually think they may have gotten to a point where certain (if not significant) value can be added. To my understanding, the key benefit of the AI approach is based on training the model with sufficient data so that it can 'learn' and 'predict' what may happen. Therefore, if we could feed empirical systemic risk information (inputs/outputs) to an AI model, it could produce quantitative risk predictions. Not sure if you have read this post by Prof. Bent Flyvbjerg on a case study for "Artificial Intelligence (AI) can help decide ahead of time when a project is going off track in terms of schedule and spending": https://www.dhirubhai.net/posts/flyvbjerg_ai-machinelearning-projectcontrols-activity-7001592639133261824-TpOi?utm_source=share&utm_medium=member_desktop
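A minimal sketch of the idea in this comment, assuming systemic attribute ratings and actual cost-growth outcomes have been captured for a set of historical projects (the data below is synthetic and the feature set is hypothetical):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500  # hypothetical number of historical projects

# Features: systemic attribute ratings (scope definition, team development,
# process maturity), each 0 to 1 -- the same inputs parametric tools use.
X = rng.uniform(0, 1, size=(n, 3))
# Synthetic "actual cost growth", driven by the attributes plus noise.
y = (0.05 + 0.30 * X[:, 0] + 0.10 * X[:, 1] + 0.10 * X[:, 2]
     + rng.normal(0, 0.05, n))

model = GradientBoostingRegressor().fit(X, y)
# Predicted cost growth for a weakly defined (FEL 1-like) project:
print(model.predict([[0.8, 0.5, 0.6]]))
```

Whether such a model would outperform the calibrated parametric tools depends entirely on the breadth and quality of the captured project-system data, which is the article's point.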
Analytics | Infrastructure | Strategy | Public Policy | Cost Engineering | Risk Engineering | Project Management
Another great thought piece John Hollmann. I completely agree with your sentiments, especially regarding the sweet-and-sour nature of industry falling back in love with historical data and analytics by way of ML/AI, but not necessarily taking the important step of considering the systemic fundamentals. ML/AI is fantastic at showing us the what (the frequency and severity of risks), but without a human-led study of projects (i.e., asking the right questions of the teams delivering them) we miss the why (the fundamental root causes), and therefore these methods, while a great step forward, do not help formulate a probability of risks in a given context.
Owner, Validation Estimating LLC
For anyone who has not looked into parametric risk analysis, AACE has just released a new Recommended Practice, RP 119R-21, that makes it ridiculously easy to start using empirically valid methods. It converts the parametric model to tabular form. We don't know what else to do to stop teams from using "traditional" values for contingency that have little to no basis in reality. Really, why are made-up numbers still the norm? That is not hyperbole; my 2020 study of N. American power utilities showed they were using preset 10/15/20% contingency for Class 3/4/5 estimates, and those values were off by a factor of 2X or more. What is wrong with our profession?
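To make the tabular idea concrete, here is a sketch of how such a lookup might be structured (the percentages are hypothetical placeholders, not the values published in RP 119R-21 or found in the 2020 utility study):

```python
# (estimate class, complexity rating) -> P50 contingency, read off a grid
contingency_table = {
    ("Class 5", "low"): 0.25, ("Class 5", "high"): 0.45,
    ("Class 4", "low"): 0.18, ("Class 4", "high"): 0.32,
    ("Class 3", "low"): 0.12, ("Class 3", "high"): 0.22,
}
# Compare with a preset "traditional" 20% for a Class 5 estimate:
print(f"{contingency_table[('Class 5', 'high')]:.0%}")  # -> 45%
```

The point of the tabular form is that a team needs no simulation software at all: rate the project's systemic attributes, read the contingency off the grid.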
Independent Consultant for Project Planning and Scheduling, Schedule Risk Analyses and also Co-Founder of Turbo-Chart
Happy New Year to you also, John. I'm also concerned about the claims made by the AI/ML products about results that don't contain contextual information on why the data is what it is. Hopefully it will result in improvements in the data that we *do* capture, and then in building up datasets that contain the correct information for analysis.