"Decision Impact": 10 reasons to implement the new generation of business-oriented metrics
Photo by Edi Libedinsky on Unsplash

Most important decisions in companies are based on some form of forecasting: decisions to hire, to develop a new product line, to expand into new territory, etc. And of course, forecasting plays an important role in the day-to-day running of our supply chains.

How to measure performance?

The role of the forecaster in the organisation

Interestingly, the need to predict is so critical and requires such expertise that it has led to the creation of a dedicated profession: forecaster.

From an organisational perspective, companies have created and then specialized this expertise around dedicated functions, dedicated teams, and sometimes even entire dedicated departments. Most of the time, these structures are supervised by Supply Chain departments.

Such an organisation of the forecasting function has numerous merits, in particular bringing together the experts on this subject in teams where they can share their practices.

However, this separation of missions poses a key problem. By separating the “forecasting” function from the “decision-making” function, many companies have, in a way, created silos that lead to sub-optimal performance.

The contribution of forecasting to business performance is complex to measure

And here’s why: although the forecast plays a key role in the decision-making process, it’s not the only one. Other elements also have to be considered, often in the form of constraints and business rules.

As a result, it is often complex to measure precisely the contribution of the forecast to the final outcome, i.e. the performance of the decision taken.

For example, when deciding to purchase goods from a supplier, the demand forecast is obviously very important, but so are the allowed pack sizes, the minimum order quantity, the finite storage capacity, etc.

Everyone is aware of the high value of forecasting, but its real business impact is often difficult - if not impossible - to measure.

The challenge is to focus on what matters

Of course, all forecasters regularly evaluate the reliability of their forecasts, and many formulas exist for this purpose. These metrics focus on the intrinsic quality of the forecast produced and are generally called "Forecast Accuracy" metrics.

In doing so, they often leave out the analysis of the final forecast-based decisions and their relevance to the business.

At Vekia, we have been making this observation for quite some time. And we are certainly neither the first nor the last to identify this important limitation.

As we love taking up challenges, we naturally asked ourselves: how can we evaluate the quality of a forecast so that the decisions it leads to are the best ones? In other words, what makes a forecast good?

What makes a forecast good?

To understand what a good forecast is, it is necessary to go back to the purpose of forecasting.

In a recent IIF ECR Webinar (International Institute of Forecasters), Paul Goodwin reminded us that forecasts are not an end but “are done to support decision-makers so they can make better decisions”.

Fundamentals of forecasting

Let us briefly recall the state of the art and the fundamentals of forecasting.

The following definition of an ideal forecast is widely shared: an ideal forecast is a perfectly true prediction. For example, if 996 units sold were forecast and, at the end of the day, 996 units were effectively sold, then the forecast was perfect!

However, despite all efforts, the future is never known with such certainty. Therefore, to measure the quality of a forecast, the practice is to measure its error. For example, if 996 units sold were forecast and, at the end of the day, only 900 units were effectively sold, then the forecast made an error of 96 units.

The main mission of forecasters is to generate forecasts that minimise this error. This error measurement is made possible by a dozen different "Forecast Accuracy" metrics to which are added countless variants developed by companies for their specific needs.
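As an illustration (not taken from the article itself), here is a minimal Python sketch of two such intrinsic "Forecast Accuracy" metrics, MAE and MAPE, applied to the 996-forecast / 900-actual example above:

```python
# Illustrative sketch of two common intrinsic "Forecast Accuracy" metrics.
# The function names and the single-point example are assumptions for clarity.

def mae(forecasts, actuals):
    """Mean Absolute Error: the average size of the forecast error, in units."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

def mape(forecasts, actuals):
    """Mean Absolute Percentage Error: the error relative to actual demand."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

forecasts = [996]
actuals = [900]
print(mae(forecasts, actuals))   # 96.0 units of error
print(mape(forecasts, actuals))  # about 0.107, i.e. roughly 10.7% error
```

Note that neither metric knows anything about the decision the forecast will feed; they only compare forecast to outcome.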

The best forecast is the one that allows the best decisions to be made

Unfortunately, this approach, reinforced by decades of practice, leaves out an essential point:

The purpose of forecasting is not and has never been to provide the best forecast ever! Its purpose is to enable the best decision.

The best forecast is therefore not the perfect one, but the one that allows the best decisions to be made. The mission of forecasters should therefore not be to minimise the error between a forecast and reality, but to minimise decision errors.

Here’s an example. Let's imagine a very simple decision process, taken from everyday life: every evening, a woman consults the weather forecast to decide whether or not to take her umbrella the next day.

If we focus on the forecast error, then when the forecast does not predict rain and indeed it does not rain the next day, the forecast was perfect. But, on the other hand, if the forecast calls for 10mm of rain and it turns out to be ten times more, then the forecast was wrong with a significant error.

What is being measured here is the intrinsic forecast error.

But let’s focus now on the decision made. In the specific context of the "to umbrella or not to umbrella" decision, the above error wouldn’t have had any impact on the decision taken. In both cases, the woman would have made the right choice and taken her umbrella. In terms of its use, the forecast was therefore perfect.
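The umbrella example can be sketched in a few lines of hypothetical Python. The decision rule (take the umbrella whenever any rain is forecast) is an assumption made for illustration:

```python
# Hypothetical sketch of the umbrella example: the decision depends only on
# whether any rain is forecast, so a large intrinsic forecast error
# (10 mm forecast vs 100 mm observed) can still produce a perfect decision.

def decide_umbrella(rain_mm: float) -> bool:
    """Decision rule (assumed): take the umbrella if any rain is expected."""
    return rain_mm > 0.0

def decision_error(forecast_mm: float, actual_mm: float) -> bool:
    """True when the decision taken differs from the ideal (oracle) decision."""
    return decide_umbrella(forecast_mm) != decide_umbrella(actual_mm)

forecast_error = abs(100.0 - 10.0)             # 90 mm: a large intrinsic error
wrong_decision = decision_error(10.0, 100.0)   # False: the decision was right
print(forecast_error, wrong_decision)
```

In other words, the intrinsic error is large while the decision error is zero, which is exactly the gap the article is pointing at.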

Thus, the quality of a forecast totally depends on its use and the decisions it triggers.

Yet, as we have seen, forecasters only have metrics that measure the intrinsic accuracy of the forecast. None of them takes into account its actual use.

This does not mean that these metrics are of no interest, far from it. But we must recognise that they are not the most appropriate ones from a business perspective...

Towards a new generation of “Decision Impact” metrics

Fortunately, it is quite possible to approach the quality of a forecast differently. To do so, a new generation of metrics must be introduced. Those metrics are called "Decision Impact" (denoted "DI") and no longer focus on intrinsic error but rather on the quality of the decisions made.

Building a digital twin

The proposed metrics require the creation of a computer model (or "digital twin") that can, for any forecast input, simulate the decision process and evaluate the quality of the final decision.

It is then necessary to model the decision process (through a Decision Function denoted “DF”) and to define a measure of the decision quality (through a Decision Quality Estimator denoted “DQE”).

The quality of a decision can be expressed in many ways. However, we highly recommend expressing it as a financial cost, as this allows multiple use cases that will be discussed later on.

Of course, the perfect modelling of processes and impacts will sometimes be difficult, if not impossible, to achieve. But a simplified model often manages to effectively "approximate" much more complex realities.
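A minimal sketch of such a digital twin follows, assuming a single-period replenishment decision with a pack-size constraint. The function names, the pack size and the cost figures are all hypothetical, chosen only to make the DF and DQE concepts concrete:

```python
# Hypothetical minimal digital twin for a single-period replenishment decision.
# PACK_SIZE, HOLDING_COST and SHORTAGE_COST are illustrative assumptions.
import math

PACK_SIZE = 12        # goods can only be ordered in full packs of 12
HOLDING_COST = 1.0    # cost per unit left over at the end of the period
SHORTAGE_COST = 5.0   # cost per unit of demand that could not be served

def decision_function(forecast: float) -> int:
    """DF: turn a demand forecast into an order quantity, honouring pack size."""
    return math.ceil(forecast / PACK_SIZE) * PACK_SIZE

def decision_quality_estimator(order_qty: int, actual_demand: float) -> float:
    """DQE: express decision quality as a financial cost, as recommended above."""
    leftover = max(order_qty - actual_demand, 0)
    shortage = max(actual_demand - order_qty, 0)
    return leftover * HOLDING_COST + shortage * SHORTAGE_COST

order = decision_function(100)                # 9 packs, i.e. 108 units
cost = decision_quality_estimator(order, 96)  # 12 units left over
print(order, cost)
```

Even this crude model captures the point made above: the pack-size constraint, not just the forecast, shapes the final decision and its cost.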

Leveraging 3 types of forecasts

Three different forecasts are needed to generate the proposed new metrics and to demonstrate their value:

  • The so-called "actual" forecast, resulting from the forecasting process in place.
  • The so-called "naïve" forecast, resulting from the simplest forecasting method that would naturally be used if the forecasting function did not exist in the company.
  • The so-called "oracle" forecast, i.e. the ground truth, measured a posteriori and corresponding to the observations themselves.

These 3 types of forecasts are then consumed by the Digital Twin to simulate the related decisions and their respective quality referred to as a “Decision Impact”.

  • The cost of the decisions enabled by the actual forecast is denoted "DIa", i.e. Decision Impact with index "a" for "actual". It measures the quality of the decisions that the current forecasting process generates.
  • The cost of the decisions enabled by the naïve forecast is denoted "DIn", i.e. Decision Impact with index "n" for "naïve". It measures the quality of the decisions that the simplest forecasting process would generate.
  • The cost of the decisions enabled by the oracle forecast is denoted "DIo", i.e. Decision Impact with index "o" for "oracle". It measures the quality of the decisions that perfect knowledge of the future would generate.

DIa, DIn and DIo deliver valuable data about the forecasting context. But more than that, they’re the fundamental building blocks that could then be assembled to generate three insightful metrics.

The first metric, denoted "DIn-o", is the difference between "DIn" (cost of the "naïve" forecast-based decisions) and "DIo" (cost of the "oracle" forecast-based decisions). This metric defines the complete playing field addressable through forecast improvement.

DIn-o = DIn - DIo

The second metric, denoted "DIn-a", is the difference between "DIn" (cost of the "naïve" forecast-based decisions) and "DIa" (cost of the "actual" forecast-based decisions). It therefore measures the added value delivered by the actual forecasting process. This metric relates to the notion of FVA (Forecast Value Added, by Michael Gilliland) and enhances it by providing a dollarized view of the added value.

DIn-a = DIn - DIa

The last metric, denoted "DIa-o", is the difference between "DIa" (cost of the "actual" forecast-based decisions) and "DIo" (cost of the "oracle" forecast-based decisions). It therefore measures the maximum value that could still be delivered by improving the actual forecasting process.

DIa-o = DIa - DIo
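Putting the pieces together, here is a hypothetical end-to-end sketch: a digital twin prices each forecast-based decision, and the three metrics fall out as simple differences. The `twin_cost` function and every number below are illustrative assumptions, not values from the article:

```python
# Sketch (with hypothetical numbers) of assembling DIa, DIn, DIo and the
# derived DIn-o, DIn-a, DIa-o metrics. `twin_cost` stands in for the
# DF + DQE pipeline of the digital twin described earlier.
import math

def twin_cost(forecast: float, actual: float) -> float:
    """Digital twin: simulate the decision and return its cost (illustrative)."""
    order = math.ceil(forecast / 12) * 12      # pack-size constraint (DF)
    leftover = max(order - actual, 0)
    shortage = max(actual - order, 0)
    return leftover * 1.0 + shortage * 5.0     # holding vs shortage cost (DQE)

actual_demand = 96.0
di_a = twin_cost(100.0, actual_demand)          # DIa: cost, actual forecast
di_n = twin_cost(150.0, actual_demand)          # DIn: cost, naive forecast
di_o = twin_cost(actual_demand, actual_demand)  # DIo: cost, oracle forecast

di_n_o = di_n - di_o  # total playing field addressable by better forecasts
di_n_a = di_n - di_a  # value already delivered by the forecasting process
di_a_o = di_a - di_o  # maximum value still on the table

print(di_a, di_n, di_o)        # 12.0 60.0 0.0
print(di_n_o, di_n_a, di_a_o)  # 60.0 48.0 12.0
```

With these (made-up) numbers, the actual forecasting process has already captured 48 of the 60 units of addressable cost, and at most 12 remain to be won by further forecast improvement.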

Combined, these metrics can be rendered, analysed and interpreted through simple graphical representations such as stacked bars, stacked areas or gauges.

10 new exciting perspectives

This new family of "Decision Impact" metrics opens up completely new perspectives.

Metrics benefiting the forecasters

Forecasters are the very first to benefit from these metrics. Among other things, the new insights enable:

  1. to have non-ambiguous "North Star" metrics that at last deliver a reliable view of forecast quality. Indeed, it is important to remember that traditional Forecast Accuracy metrics regularly contradict each other;
  2. to select and correctly configure the forecasting models that best support decision-making;
  3. to generate a value map that clearly identifies value pools;
  4. to know precisely when to stop improving a forecast, once the yet-to-be-harvested added value is not worth the effort;
  5. conversely, to know which perimeters would benefit the most from an improved forecast;
  6. to prioritise the most impactful sub-perimeters when the yet-to-be-improved perimeter is too large for the available resources;
  7. to evaluate a dollarized FVA for each contributor and each step of the forecasting process.

Metrics benefiting the whole company

The benefits of these new metrics are not limited to forecasters. The new insights enable the whole company:

  1. to streamline and smooth the communication between departments about forecasts thanks to metrics that are, at last, easily understandable for each stakeholder;
  2. to evaluate and share the value delivered by the forecasting process (and by its subparts, per forecaster or per step) in a fair, non-contradictory and easy-to-understand way.

As these metrics are more than pure forecasting metrics, the same approach can be applied to measure the impact of changing input constraints instead of input forecasts. For example, this allows evaluating the cost of pack sizes or the impact of delivery frequencies. It is thus a great tool for pointing out desirable areas of improvement.

Conclusion

Each and every company is engaged in a daily struggle against inefficiency and waste. Silos have long been pointed out as such, and are perceived as what they truly are: fractures within organisations.

The forecasting function, on the other hand, should not naturally be such a silo, given its central position within the company. It is indeed a key partner for many departments and sits at the very heart of key business processes such as IBF and S&OP.

However, within the practices of forecasters, performance measurement remains historically and surprisingly uncorrelated with the uses and business impacts of the forecasts.

Bringing business back into forecasting practices

The "Decision Impact" family of metrics introduced here makes it possible to bring the "business" dimension back into the very heart of planning, and thereby to realign the entire company around the business.

More than that, these metrics open up new perspectives and allow for completely new use cases around automation, prioritisation and improvement of key business processes.

In our next articles, we will go into more detail on the operational uses of "Decision Impact" metrics.

This article first appeared at https://www.vekia.fr/previsions-business/. To learn more about this topic and more generally about Supply Chain optimization, please visit our website www.vekia.fr.
