Trusting Parametric Risk Analysis
Just finished a recent (2019) book on the issues raised by introducing AI into our lives: "A Human's Guide to Machine Intelligence" by Kartik Hosanagar, a Wharton professor. It's a quick read without a lot of jargon, and it gets one thinking about the challenges happening right now as we put "algorithms" in the figurative driver's seat (i.e., the decision chair) of our lives and work. I particularly enjoyed his review of research on how people view and trust (or distrust) algorithms. For those of us in the project management world, these issues are critical to understand as AI comes on board.
The book had immediate relevance to an "algorithm" trust/acceptance issue I have faced. In 2006 I began helping clients implement a parametric cost and schedule risk analysis tool in their quantitative risk analysis (QRA) practice (now offered as ValidRisk). The method was also established as an AACE Recommended Practice in 2008 (RP 42R-08). A parametric model is an algorithm. It can be considered entry-level AI, albeit one resulting from manual regression of modest databases rather than machine learning from big data. From an algorithm trust/acceptance point of view, however, Dr. Hosanagar's book applies.
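To make the idea concrete, here is a minimal sketch of what a parametric cost-growth model looks like in spirit: a regression equation that maps rated systemic risk drivers to predicted cost growth. The factor names and coefficients below are invented for illustration; they are not the published RP 42R-08 model or the ValidRisk tool.

```python
# Hypothetical parametric cost-growth model: cost growth is estimated
# from rated systemic risk drivers via a simple regression equation.
# All factor names and coefficients here are illustrative only.

def predicted_cost_growth(scope_definition, complexity, new_technology):
    """Return fractional cost growth from 0-1 risk-driver ratings
    (0 = best practice, 1 = worst case)."""
    # Illustrative regression coefficients (not from the actual RP).
    intercept = 0.02
    b_scope = 0.30       # poorer scope definition drives the most growth
    b_complexity = 0.10
    b_technology = 0.08
    return (intercept
            + b_scope * scope_definition
            + b_complexity * complexity
            + b_technology * new_technology)

# Example: weak scope definition dominates the predicted growth.
growth = predicted_cost_growth(scope_definition=0.6,
                               complexity=0.3,
                               new_technology=0.2)
print(f"Predicted cost growth: {growth:.1%}")
```

The point is transparency: unlike opaque ML, the coefficients of a manually regressed model are fixed and inspectable, which matters for the trust discussion below.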
The issue I experienced was that in the early years, the parametric model met immediate resistance from peers. The idea that an algorithm could predict cost growth better than subjective judgment (or Monte Carlo simulation applied to that judgment) seemed to trigger an almost visceral negative reaction in some. Not being a psychologist, I ascribed it to existential dread that the method might threaten their livelihood. Most risk analysts have little experience in or exposure to analytics per se (understanding a distribution is not enough).
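For contrast, here is a minimal sketch of the subjective-judgment Monte Carlo QRA approach mentioned above: experts elicit an impact range for each risk, and simulation sums the sampled impacts to set contingency at a chosen percentile. The risk ranges below are invented for illustration.

```python
import random

def simulate_contingency(base_cost, risk_ranges, n_trials=100_000, pctl=0.80):
    """Return contingency (as a fraction of base cost) at the given
    percentile, from triangular (low, most-likely, high) impact ranges
    elicited by subjective judgment."""
    totals = []
    for _ in range(n_trials):
        # random.triangular takes (low, high, mode)
        total = sum(random.triangular(lo, hi, ml) for lo, ml, hi in risk_ranges)
        totals.append(total)
    totals.sort()
    return totals[int(pctl * n_trials)] / base_cost

# Two hypothetical risks, impacts in the same units as base_cost.
risks = [(-2.0, 1.0, 8.0), (0.0, 2.0, 6.0)]
print(f"P80 contingency: {simulate_contingency(100.0, risks):.1%}")
```

The weakness the post alludes to is visible in the structure: the output can only be as good as the elicited ranges, and systemic risks that the experts do not (or will not) rate never enter the simulation at all.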
Dr. Hosanagar has improved my understanding of the psychology. He provides a good overview of the issue of trust in algorithms. For example, in chapter 7 he describes research about how we are "much more inclined to trust algorithms with estimates and predictions that are objective"; i.e., those that do not involve emotionally charged, bias-laden decisions. If an algorithm says you need 25% contingency, but opinion has always said 10% (optimism bias) and 25% contingency threatens the project's chance of approval, it's a situation fraught with emotion.
He also discusses our inability to "reconcile the idea of algorithms with the possibility that they can fail, even while accepting that humans are fallible". In other words, we lose trust in an algorithm that fails even once, yet accept that our own judgment fails repeatedly. In the parametric model case, research shows that subjective QRA methods perform very poorly when significant systemic risks are present; yet, rather than trust an algorithm, the known failure is accepted.
A piece of practical advice from Dr. Hosanagar is that users "being allowed to make even tiny tweaks to an algorithm increased the chances that a person would trust it". For parametric risk models, this points to the need to build in the capability to "calibrate" the industry model; i.e., to study one's own cost-growth data and use it to "tweak" the model, no matter how inconsequential the calibration study's findings.
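The calibration "tweak" can be sketched in a few lines: compare a company's realized cost growth against what the industry model predicted for the same historical projects, and scale future predictions by the observed ratio. All names and numbers below are illustrative, not any actual calibration procedure.

```python
# Hedged sketch of "calibrate to your own data": blend an industry
# parametric prediction with the bias observed in a company's own
# historical cost-growth records. Data here are invented.

def calibration_factor(actual_growths, model_predictions):
    """Ratio of mean actual to mean predicted growth on past projects."""
    mean_actual = sum(actual_growths) / len(actual_growths)
    mean_model = sum(model_predictions) / len(model_predictions)
    return mean_actual / mean_model

def calibrated_prediction(industry_prediction, factor):
    """Scale the industry model's output by the company-specific factor."""
    return industry_prediction * factor

history_actual = [0.22, 0.18, 0.30]   # company's realized cost growth
history_model = [0.20, 0.20, 0.25]    # what the industry model predicted
f = calibration_factor(history_actual, history_model)
print(f"Calibration factor: {f:.2f}")
print(f"Calibrated growth prediction: {calibrated_prediction(0.25, f):.1%}")
```

Even when the factor turns out close to 1.0, the exercise itself gives users ownership of the model, which is exactly the trust mechanism the book describes.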
The good news is that awareness of "analytics" has increased in the last 3-4 years, and the reception of the parametric risk modeling method has improved. As algorithms, be they parametric models or ML/AI applications, increasingly find their way into our controls, estimating, risk analysis, and other project work, I think many would benefit from understanding the psychology of trust that this book addresses.
Director of Capital Solutions @ Independent Project Analysis | Collaborative engagement using data to drive transformational system change
3y · Great insights John!
Owner, Validation Estimating LLC
3y · David Porter, per our communications in your posting on ML/AI endeavors: a hope I have for ValidRisk is that it can be a bridgehead to algorithm acceptance, albeit not the ultimate in data science.
| Associate Commissioner, Project Controls, DDC | Emerging Technologies and Business Practice Innovation in Capital Project Delivery |
3y · Hi John, as always, it is great to read your insights. I believe the trust factor may become less of a challenge if adequate rigor is maintained in the AI practice; for example, use part of the empirical data to train the AI model and the rest to validate it, followed by publication of coefficients and confidence levels. This may help gain trust. I wonder to what extent the parametric model of the early years resembled this approach. Nonetheless, perhaps the psychology of engaging with current-day advanced analytical models is really similar to the lack of trust in the early years. But if we could get so many people to put so much trust in weather forecasts today, adoption of advanced models in project management decision making may be quicker than generally perceived.
CEO, Advisor, AEC/O Technology Innovator, Multi-Platform Software Solution Provider, Digital Disrupter
3y · I will echo the comments. I've been involved in designing and implementing parametric cost modeling solutions for 30+ years, and I have found that less than 5% of the estimating community is willing (able?) to trust the model. And I'm referring to deterministic parametric modeling, where the algorithms are fixed, discoverable, and predictable. Stochastic parametric modeling using simple regressions demands a yet higher level of trust and acceptance. It's hard to imagine how AI/ML results will be accepted.
Advancing Corporate, Operational & Project Excellence
3y · Thanks for your thoughts John... Reminds me of a book written in 1954, How to Lie with Statistics. It seems the first thing people do is discredit the numbers, and trust is huge when it comes to numbers. However, haters are going to hate, and potatoes are going to potate!