Taming Uncertainty: How You Can Mitigate the Effect of Large Forecast Errors on Forecast Accuracy
Planners and managers in supply chain organizations are accustomed to using the Mean Absolute Percentage Error (MAPE) as their best (and sometimes only) answer to measuring forecast accuracy. It is so ubiquitous that it is hardly questioned. Yet among practitioners who participate in the CPDF® demand forecasting workshops around the world, I do not even find a consensus on the definition of forecast error: for most, the forecast error is Actual (A) minus Forecast (F); for others, just the opposite.
Among practitioners, it is a jungle out there trying to understand the role of the APEs (absolute percentage errors) in the measurement of forecast accuracy. Forecast accuracy is commonly measured and reported by just the MAPE, which is the same no matter which definition of forecast error one uses, because the absolute error |A − F| equals |F − A|.
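To see why, consider a small numerical sketch (the data below are invented for illustration, not taken from the article):

```python
import numpy as np

# Illustrative data only: actuals and forecasts for six periods.
actual = np.array([100.0, 120.0, 90.0, 110.0, 105.0, 95.0])
forecast = np.array([98.0, 125.0, 85.0, 112.0, 100.0, 90.0])

err_af = actual - forecast    # one convention: A - F
err_fa = forecast - actual    # the other convention: F - A

# The MAPE is built from absolute percentage errors, so both conventions agree.
mape_af = np.mean(np.abs(err_af) / actual) * 100
mape_fa = np.mean(np.abs(err_fa) / actual) * 100
print(mape_af, mape_fa)       # identical values
```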
Bias, the other component of accuracy, is not consistently defined either. If bias is the difference between actual and forecast, what should the sign of a reported under-forecast or over-forecast be? Who is right, and why? For example, an exhibit in my book on demand forecasting and planning shows pallets of soft drinks shipped versus the forecast errors. Is under- or over-forecasting the predominant issue in this situation?
Sources of unusual data and outliers in forecast errors should never be ignored in the accuracy measurement process. Because it is based on the arithmetic mean, the mean forecast error ME (the arithmetic mean of Actual (A) minus Forecast (F)) is pulled toward any outlier. An otherwise unbiased pattern of performance can be distorted by just a single unusual value.
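Here is a minimal illustration with invented error data: a balanced error pattern looks unbiased until a single outlier drags the ME away from zero, while a resistant summary such as the median barely moves.

```python
import numpy as np

# Illustrative forecast errors (A - F) that are balanced around zero.
errors = np.array([2.0, -3.0, 1.0, -2.0, 3.0, -1.0])
print(np.mean(errors))        # ME = 0.0: no apparent bias

# Add one outlier, say a stockout period, and the ME is dragged along.
errors_out = np.append(errors, 60.0)
print(np.mean(errors_out))    # ME ~ 8.6: performance now looks biased
print(np.median(errors_out))  # the median stays near zero (here 1.0)
```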
Are There More Reliable Measures Than the MAPE?
The M-estimation method, demonstrated and illustrated in Chapter 2 of my book Change & Chance Embraced (available on global Amazon websites), can be used to automatically reduce the effect of outliers by appropriately down-weighting values 'far away' from a typical APE. The method is based on an estimator that makes repeated use of the underlying data in an iterative procedure. In the case of the MAPE, a family of robust estimators, called M-estimators, is obtained by minimizing a specified function of the absolute percentage errors (APEs). Alternate forms of the function produce the various M-estimators. Generally, the estimates are computed by iterated weighted least squares.
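In symbols (a standard M-estimation formulation, not a quotation from the book), the typical value $T$ of the APEs is chosen to minimize

$$\sum_{i=1}^{n} \rho\!\left(\frac{\mathrm{APE}_i - T}{s}\right),$$

where $\rho$ is the chosen loss function (quadratic for the arithmetic mean, absolute value for the median, Huber or bisquare for the robust variants) and $s$ is a resistant estimate of scale, such as the median absolute deviation. Setting the derivative with respect to $T$ to zero turns the solution into a weighted average of the APEs, which is why the estimates can be computed by iterated weighted least squares.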
It is worth noting that the bisquare weighting scheme is more severe than the Huber weighting scheme: in the bisquare scheme, all data for which |eᵢ| ≤ Ks have a weight less than 1. Data with weights greater than 0.9 are not considered extreme, data with weights less than 0.5 are regarded as extreme, and data with zero weight are, of course, ignored. To counteract the impact of outliers, the bisquare estimator gives zero weight to data whose forecast errors are quite far from zero.
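For concreteness, here is a sketch of the two weighting schemes in Python, applied to scaled errors u = e/s. The tuning constants 1.345 (Huber) and 4.685 (bisquare) are the conventional defaults from the robustness literature, not values taken from this article; the text's K·s threshold corresponds to |u| ≤ k after scaling.

```python
import numpy as np

def huber_weights(u, k=1.345):
    """Huber scheme: full weight inside k; weight decays as k/|u| outside."""
    au = np.abs(np.asarray(u, dtype=float))
    return np.where(au <= k, 1.0, k / np.maximum(au, k))

def bisquare_weights(u, c=4.685):
    """Bisquare scheme: weight below 1 for any nonzero u; zero beyond c."""
    u = np.asarray(u, dtype=float)
    return np.where(np.abs(u) <= c, (1.0 - (u / c) ** 2) ** 2, 0.0)
```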
What we need, for best practice, are procedures that are resistant to outlying values and robust against non-normal characteristics in the data distribution, so that they give rise to estimates that are more reliable and credible than those based on normality assumptions.
Taking a data-driven approach with the APE data to measure precision, we can create more practical Typical APE (TAPE) measures. We recommend starting with the Median APE (MdAPE) for the first iteration, then using the Huber scheme for the next iteration, and finishing with one or two more iterations of the bisquare scheme. The Huber-Bisquare-Bisquare Typical APE (HBB TAPE) measure has worked quite well for me in practice and can be readily automated, even in a spreadsheet. It is worth testing with your own data to convince yourself whether the Mean APE should remain king of the accuracy jungle!
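As one way to automate this, here is a minimal self-contained sketch of the HBB TAPE iteration. The MAD-based scale estimate and the tuning constants are conventional assumptions on my part, not specifications from the article.

```python
import numpy as np

def hbb_tape(ape, k=1.345, c=4.685):
    """Huber-Bisquare-Bisquare Typical APE (HBB TAPE), as sketched here.

    Iteration 1: MdAPE. Iteration 2: Huber-weighted mean. Iterations 3-4:
    bisquare-weighted means. The tuning constants and the MAD-based scale
    are conventional choices, not values prescribed by the article.
    """
    ape = np.asarray(ape, dtype=float)
    t = np.median(ape)                      # start from the MdAPE

    for scheme in ("huber", "bisquare", "bisquare"):
        e = ape - t                         # deviations from current TAPE
        s = np.median(np.abs(e)) / 0.6745   # resistant scale (normalized MAD)
        if s == 0:
            break                           # degenerate case: all APEs equal
        u = e / s
        au = np.abs(u)
        if scheme == "huber":
            w = np.where(au <= k, 1.0, k / np.maximum(au, k))
        else:
            w = np.where(au <= c, (1.0 - (u / c) ** 2) ** 2, 0.0)
        t = np.sum(w * ape) / np.sum(w)     # weighted mean = next TAPE
    return t

# Example: one wild APE barely moves the HBB TAPE, unlike the Mean APE.
apes = np.array([4.0, 6.0, 5.0, 7.0, 5.0, 90.0])
print(np.mean(apes), hbb_tape(apes))        # MAPE = 19.5 vs TAPE ~ 5.4
```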
When an outlier-resistant measure is calculated along with the conventional measure, you should always report the outlier-resistant measure along with the conventional one. In addition, the analyst should check the APEs for anything that appears unusual, and then work with domain experts to find a credible rationale (stockouts, weather, tariffs, strikes, etc.).
Because of their potential value to operations and logistics planning, I explain these tools as part of a smarter forecasting practice in my book Change & Chance Embraced: Achieving Agility with Smarter Forecasting in the Supply Chain. The book is available online on Amazon, along with some 5-star reviews.
Hans Levenbach, PhD is Executive Director, CPDF Training and Certification Professional Development Programs for demand forecasters, planners and managers. He conducts public and on-site hands-on workshops on demand forecasting for multi-national supply chain companies worldwide. He is group manager of the LinkedIn groups (1) Demand Forecaster Training and Certification, Blended Learning, Predictive Visualization, and (2) New Product Forecasting and Innovation Planning, Cognitive Modeling, Predictive Visualization.
I invite you to join if you are interested in sharing conversations and your comments on forecasting topics.