When too much precision becomes the problem for UK Infra cost estimates

Happy Thursday, Merry Christmas, and a Happy New Year to all. This is my last blog post of the year, so I saved the best for last. This is ground-breaking and leading insight. Prepare to have your ruddy socks blown off!

Or not; in fact, I'd rather you didn’t! If you like/hate my blog reflections, feel free to comment, like, follow and connect.

I have seen several blogs and comments about the tension between the need for an estimate to be accurate and the paradox of an “accurate estimate”: if an estimate were truly accurate, it would be a calculation, not an estimate. At best, an “accurate” estimate can mean that the mathematics behind it is correct!

But I want to discuss the precision issue and go “rogue”, because I think there is also a precision problem, particularly within the UK infrastructure project system (from producing estimates through to assurance). The problem is that we are too precise. That doesn't sound like a problem, but it is (remember, I did say this was a ground-breaking blog).

The graphic below shows the difference between precision and accuracy, and I think the UK infrastructure sector may be suffering from highly precise (repeatable) but low-accuracy project estimates/forecasts. Kahneman, in his book “Noise”, refers to this as grouping: estimates that fall in a fairly tight cluster but land well off target.

[Figure: target diagrams contrasting precision (tightness of grouping) with accuracy (closeness to the target)]
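
To make the distinction concrete, here is a minimal Python sketch (mine, not from the original post, with invented figures) that separates the two measures: the spread of a set of estimates tells you how precise they are, while their distance from the eventual outturn tells you how accurate they were.

```python
# Minimal sketch (invented figures): a tight grouping of estimates (high
# precision) can still sit far from the eventual outturn cost (low accuracy).
import statistics

outturn_cost = 1_000                      # eventual actual cost, £m (hypothetical)
estimates = [760, 775, 770, 765, 780]     # five "independent" estimates, £m

mean_estimate = statistics.mean(estimates)
spread = statistics.stdev(estimates)      # precision: how tightly grouped
bias = mean_estimate - outturn_cost       # accuracy: distance from the target

print(f"Mean estimate: £{mean_estimate:.0f}m")
print(f"Spread (precision): ±£{spread:.0f}m ({spread / mean_estimate:.1%})")
print(f"Bias (accuracy): £{bias:+.0f}m ({bias / outturn_cost:+.1%})")
# Roughly a 1% spread but a -23% bias: Kahneman's tight grouping, way off target.
```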

Having highly precise estimates gives project stakeholders a false sense of confidence in the accuracy and robustness of the estimates/forecasts. Decision-makers may unconsciously conflate precision with accuracy.

Two key issues are at play: “data” and “procedures”.

The first issue revolves around the availability and quality of project information. Project maturity and definition are often low at the early stages of a project—and sometimes even at more advanced phases. This lack of detailed and reliable data can hinder the development of robust and accurate estimates. Even the best methodologies will struggle to produce reliable outcomes without good-quality information.

The second challenge lies in the highly prescriptive and repeatable procedures that those producing the forecasts/estimates follow. These processes are designed to ensure consistency and compliance with established standards. Senior cost managers play a crucial role in overseeing and reinforcing these procedures, ensuring that they meet key standards.

These standards are often set by UK institutions. They define what constitutes “good cost management practices” and outline the essential requirements for effective project estimation. Achieving alignment with these standards is often considered a mark of professional excellence.

What does all that mean? It means you have a homogeneous approach that is repeatable and reinforced, so you achieve high precision or, in Kahneman's terms, good grouping. That grouping is a testament to the prescriptive procedures and standards described above, not to the accuracy of the answer.

Have we created a highly precise procedural framework that we like to administer and take comfort from (apophenia? the illusion of control?), but one that doesn't quite address the key challenges in projects: data availability, behaviour and biases, risk and uncertainty, and project complexity?

So why is this a problem?

High precision can often mask low accuracy, creating the illusion of being "on target" when, in reality, the estimate is fundamentally flawed. At early project stages, the definition, data availability, and quality are typically insufficient to produce a truly informed or robust estimate. However, turning this low-accuracy information into an estimate is often highly prescribed, leading to highly precise but potentially misleading outputs.

This issue is exacerbated when estimates are reviewed or validated through a "second opinion"—a process that may yield another similar estimate, typically within a 5-10% margin. Even more concerning, this "second opinion" is sometimes elevated to the level of "assurance," creating a feedback loop that reinforces the belief that the estimate is accurate or robust simply because multiple parties (even independent ones) arrive at similar results.
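
As a rough illustration of that feedback loop, the hypothetical simulation below has two teams follow the same prescribed procedure on the same immature scope; their numbers agree with each other to within a few percent while both miss the eventual outturn by a wide margin. The scope figure, allowances, and noise band are assumptions for illustration only.

```python
# Hypothetical sketch of the "second opinion" feedback loop (all figures invented).
# Both teams apply the same prescriptive method to the same immature scope, so
# their numbers agree with each other, not with the eventual outturn.
import random

random.seed(1)

TRUE_OUTTURN = 1_000    # £m, what the project eventually costs (hypothetical)
VISIBLE_SCOPE = 700     # £m, the scope both teams can "see" at an early stage

def prescribed_estimate(visible_scope: float) -> float:
    """Same procedure, same data: only small team-to-team noise differs."""
    methodological_noise = random.uniform(0.97, 1.03)    # ±3% between teams
    standard_allowances = 1.10                           # e.g. a fixed 10% uplift
    return visible_scope * standard_allowances * methodological_noise

original = prescribed_estimate(VISIBLE_SCOPE)
second_opinion = prescribed_estimate(VISIBLE_SCOPE)

agreement_gap = abs(original - second_opinion) / original
error_vs_outturn = (original - TRUE_OUTTURN) / TRUE_OUTTURN

print(f"Original estimate: £{original:.0f}m")
print(f"Second opinion:    £{second_opinion:.0f}m (within {agreement_gap:.1%})")
print(f"Error vs outturn:  {error_vs_outturn:+.1%}")
# The two numbers "validate" each other, yet both miss the unseen scope.
```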

What Can Be Done?

The good news is that improving project definition, maturity, capability, and data quality will naturally lead to more reliable estimates. Investing in both the early stages of your project and the development of your project team’s capabilities is essential. However, relying on a second “independent” opinion can sometimes do more harm than good, as it may create a false sense of confidence that your project cost is robust and ready to proceed.

If you do seek a second opinion, opt for organisations that think and operate differently. Diversity of insight and methodology is crucial for effectively challenging the precision and accuracy of your project estimates. A truly valuable second opinion should provide an external perspective, introducing fresh ideas and innovative methodologies. By leveraging data, benchmarks, and a data-driven scientific approach to forecasting, such organisations can challenge the "status quo" and offer a different lens on traditional methods, ensuring your project is evaluated from multiple angles.
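
By way of example, one data-driven check of the kind described above (my sketch, not a prescribed method) is to place the base estimate against a reference class of historical outturn-to-estimate ratios and read off an uplift at a chosen confidence level. The ratios, project values, and percentile choice below are invented for illustration.

```python
# Illustrative reference-class check (invented data): benchmark the "precise"
# base estimate against historical outturn/estimate ratios for similar projects.
import statistics

base_estimate = 770.0   # £m, the project's current base estimate (hypothetical)
historical_ratios = [0.95, 1.05, 1.10, 1.18, 1.25, 1.30, 1.42, 1.55, 1.70, 2.10]
# outturn / estimate ratios from a hypothetical reference class of past projects

def ratio_at_percentile(ratios: list[float], p: float) -> float:
    """Simple empirical percentile of the historical ratios (0 < p < 1)."""
    ordered = sorted(ratios)
    index = min(int(p * len(ordered)), len(ordered) - 1)
    return ordered[index]

p80_ratio = ratio_at_percentile(historical_ratios, 0.80)

print(f"Base estimate:           £{base_estimate:.0f}m")
print(f"Median historical ratio:  {statistics.median(historical_ratios):.2f}")
print(f"P80 uplift ratio:         {p80_ratio:.2f}")
print(f"P80 benchmark estimate:  £{base_estimate * p80_ratio:.0f}m")
# A wide gap between the base estimate and the benchmarked range is the kind
# of challenge a genuinely different second opinion should surface.
```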

Wishing you a Merry Christmas, Happy Hanukkah, and a prosperous New Year!


John Hollmann

Owner, Validation Estimating LLC

2 months

Aleister, I believe the precision problem is really infrastructure's obsession with and reliance on tenders (which are "precise") before full funds sanction. It is not just the UK. As covered in my new book Vol 2, the process and for-profit worlds put their focus on Class 4 definition (select phase), realizing that it is the de facto sanction gate (few cancel after that), so they had better get it right. Then they sanction at Class 3 because research shows systemic risks do not go down much after that. Government projects delay sanction to Class 2 (tender), so infrastructure, having weak phase gates to start with and an obsession with precision, barely pays attention to early phases. This backfires because they announce "the number" at Class 4 in the press without having done empirically valid (realistic) risk analysis addressing the shaky scope. The result is massive overruns, of course. Yes, it's a precision problem, but it's really about taking their eye off the select gate. BTW: this refers to AACE Classes for phase gates to avoid the crazy quilt of phase names in different industries and countries.

Chris Carson FRICS, FAACE, FGPC, PSP, DRMP, CEP, CCM, PMP

Enterprise Director of Program & Project Controls, and Vice President at Arcadis

2 months

Aleister Hellier, very insightful post, thanks for sharing. I've found that this problem is enhanced when a project has a number of components, for some of which there is better scope definition and/or better benchmarking data, and for others not so much. We have to be careful that if we produce portions of the estimate that are really different estimate classes, we communicate that very clearly. My approach is to break the estimate up into components so each is communicated at the appropriate class. But when an owner insists on a single estimate and class, I tend to estimate all the components at the lowest level of scope definition available for any individual component.

All estimates should carry an accuracy that reflects the maturity of the information available at that point in time. The estimate should be expressed as a range around an anticipated value; this range is not the accuracy itself, but an anticipation of the higher and lower values used to build the estimate. The IPA gives guidance and best practice on this. I read precision as quoting a value down to the last penny. Of course, as a project develops, the estimate becomes more defined and the range narrows.

John Hollmann

Owner, Validation Estimating LLC

2 months

Aleister, good point about independent estimates. There are two possible outcomes: the original and check estimates are close, or they are not. As you said, a close check estimate (and businesses obsess over the "number") may encourage false confidence. If the check number is different, one is left with a conundrum of which is more realistic. That is why I recommend base estimate validation per the AACE Recommended Practice, combined with an empirically sound QRA, again using an AACE RP. Then, finally, do industry benchmarking. None alone is enough.

Andy Nicholls

Director VMZ Parametrics Ltd Semi-Retired

2 months

Aleister, a great post and food for thought for estimators and decision-makers. If "Noise" is not on your bookshelf, then add it to your Christmas or birthday list! Education in this area is long overdue. Happy Christmas/festive holidays; I look forward to 2025.

