Beyond Time Series Forecasting?

A number of nascent disciplines (statistics, econometrics and meteorological forecasting) had matured by the early 20th century to the point that, following the Panic of 1907, enterprising prognosticators could take a pinch of one and a sprinkling of the others, and make a good living. Or, in the case of early business forecasters Roger Babson and Irving Fisher, a fortune. Harvard Business School historian Walter Friedman does an excellent job of looking at this early period of business forecasting in his book Fortune Tellers: The Story of America's First Economic Forecasters, so I won't reproduce an extended history here. Suffice it to say that several considerations from this early period had a direct bearing on the course of business forecasting over the next one hundred years.

First, there was not a lot of good data available. Indeed, it was only immediately prior to the Second World War that governments started to recognize the need for better large-scale data gathering and classification efforts, so for the two decades leading up to that point, the pioneering work of early forecasters and economists carried an even bigger data-shaped asterisk beside its findings than such work does today.

Second, there was no consensus on the best approach. Babson was derided by early econometricians and academics like Fisher for using the "naïve" approach of extrapolating past patterns into the future. Today we call this branch of forecasting "time series" forecasting, and by a wide margin it is the most commonly practiced type of business forecasting. No doubt providing trend forecasting some contemporaneous impetus was the fact that, after Babson's very public proclamation in the weeks leading up to the Crash of 1929 that the market was due for a correction, Irving Fisher, equally publicly, mocked him and said econometric analysis proved the market was at a permanent new equilibrium. Babson was of course right, though many people fail to recognize he had been predicting a downturn for at least two and a half years.

Irving Fisher's position was based on what we now call "causal forecasting", or forecasting based on an understanding of the relationship between the thing being forecasted - business growth, customer demand, inventory - and its drivers. Multiple Nobel laureates have credited Fisher's work as being antecedent to their advances, but in the public's mind in 1930, Fisher was wrong and Babson was right.

[Image: Fisher's monetary-supply water table]

The third consideration was the lack of computing power. Babson's "analyses" were little more than multiple-period moving averages of past commodity and stock prices, and required large teams of interns and clerks to compile and prepare his 'BabsonCharts'. The work that earned Fisher the first PhD in Economics from Yale was an econometric model demonstrating the relationship between money supply and commodity prices. The model was built on a table using water - which represented money - pumped between valves and levers that represented the various components of money supply and commodity prices. Computer-aided simulation would not exist for several decades.

By the late 1950s, when digital computers with stored program execution were technically extant, though by no means common, Robert Goodell Brown presented his seminal research on inventory control to the Operations Research Society of America. It contained what is now widely accepted as the "father" of time series techniques: exponential smoothing. Gardner's excellent 1985 paper on exponential smoothing provides much more detail for anyone interested. What matters within the context of my question are two details. First - exponential smoothing was intended as a heuristic: a shortcut trading off optimality for expedience. It had first been devised by Brown during the Second World War for use on continuous variables, in order to quickly calculate the required trajectory for depth charges launched at enemy submarines. Second - he made a case for an ideal alpha (the smoothing constant) of 0.1 not because of some deep understanding of the nature of material movements, but because it was easy to move the decimal one position over and save additional manual calculation.
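To make concrete just how light the computation is, here is a minimal sketch of simple exponential smoothing with Brown's alpha of 0.1. The demand series, the function name and the values are illustrative assumptions on my part, not taken from Brown's paper or any particular library; the point is that the entire "model" is a one-line update in which the correction is simply the forecast error with its decimal shifted one place.

```python
# A minimal sketch of simple exponential smoothing with Brown's alpha = 0.1.
# The demand series and function name are illustrative assumptions, not from
# Brown's paper or any particular library.

def simple_exponential_smoothing(series, alpha=0.1):
    """Return the smoothed level after each observation."""
    level = series[0]                 # initialize with the first observation
    levels = [level]
    for x in series[1:]:
        # s_t = s_{t-1} + alpha * (x_t - s_{t-1}):
        # with alpha = 0.1 the correction is just the forecast error
        # with its decimal point moved one place to the left.
        level = level + alpha * (x - level)
        levels.append(level)
    return levels

demand = [100, 120, 90, 110, 130, 95]   # hypothetical demand history
print(simple_exponential_smoothing(demand))
```

The point is not the code itself but how little computation the heuristic demands, which is exactly the property Hughes and Morgan remark on below.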

Hughes and Morgan, in the Journal of the Royal Statistical Society, wrote:

The main attraction of [time series] forecasting is its automatic nature. It is in conditions where this advantage outweighs its limited scope that it is most used. ... Ironically, the method that is in most widespread use on computers involves calculations so trivial as to scarcely require a computer. This is exponential smoothing in its various guises.

They wrote this in 1967. I respectfully submit that little, fundamentally, has changed in our approach. The conditions now, however, are completely different from those in the burgeoning days of forecasting. We have much better data and data policies than we did (though not everyone makes use of them). We have a much deeper understanding of the prerequisites of causal forecasting - notably differential calculus and regression analysis. And, most important, we have orders of magnitude more computing capability at the fingertips of anyone with a laptop. I would also add, courtesy of the behavioral insights of psychologists like Tversky and Kahneman and economists like Thaler, that we now also understand that human decision making is not nearly as neat and tidy as neoclassical economists held, and lends itself more closely to probabilistic rather than deterministic forecasting.
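To illustrate the distinction, here is a minimal sketch of turning a deterministic forecast into a probabilistic one. Everything in it is a hypothetical assumption of mine - the demand history, the deliberately naive point forecast, and the quantile levels - and it is not a recommendation of any particular method; it simply shows that a probabilistic forecast attaches a range built from past forecast errors rather than returning a single number.

```python
# A minimal sketch of turning a deterministic point forecast into a probabilistic one
# by attaching empirical quantiles of past one-step forecast errors.
# The demand history, the naive forecasting rule, and the quantile levels are all
# illustrative assumptions, not a prescription.
import numpy as np

history = np.array([100.0, 120.0, 90.0, 110.0, 130.0, 95.0, 105.0, 115.0])  # hypothetical demand

# Deterministic piece: a naive point forecast (the last observed value).
point_forecast = history[-1]

# One-step-ahead errors the same naive rule would have made on the history.
errors = history[1:] - history[:-1]

# Probabilistic piece: an 80% interval built from the 10th and 90th error percentiles.
lo, hi = point_forecast + np.quantile(errors, [0.1, 0.9])
print(f"Point forecast: {point_forecast:.1f}, 80% interval: [{lo:.1f}, {hi:.1f}]")
```

The distinction matters for decisions: a range tells a planner how much uncertainty to protect against, which a single number cannot.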

One last thought before I pose my question, because I know the question will have some people howling. Unlike some who have criticized the finding of Fildes and Makridakis that in "real life" simple models may often outperform more complex ones, I have no issue with the idea that in practice, heuristic approaches, though theoretically suboptimal, may be preferred to more complex approaches. This is because I am a practitioner first and foremost. I care about having the best possible forecast available for use in a timeframe that makes it actionable. I am not arguing to get rid of time series approaches. I am certainly not, though there are now some who are, arguing for the abolition of forecasting altogether. However...

Have we gotten too comfortable with naïve time series approaches?

Has it become too acceptable to adopt heuristics in EVERY event, rather than only in those where the tradeoff is legitimately required?

Given that nearly every forecaster says "The forecast is always wrong", why do we cling to a deterministic approach?

Given the massive advances in computing capacity, why aren't more forecasters asking more complicated questions?

Will real-life best practice ever leverage our last hundred years of advances and replace time series, deterministic forecasts with causal, probabilistic forecasts?

These are questions, not veiled opinions. I've done a lot of work leveraging "exponential smoothing in its various guises", but I would love to hear from you about what you think is next.

Final thought - I have intentionally avoided adding ML and AI to my questions because, and perhaps this is a separate conversation, I would argue they are currently being applied primarily as ways to extend the automation and refinement of methods which ultimately remain rooted in time series techniques. Not all, of course. But in practice, most.


Very thoughtful article, Jonathon. My sense is that the folks who proclaim "The forecast is always wrong" subscribe to a Flat Earth Mindset in forecasting practice. We have been through this before as humans, but in forecasting we also need to recognize another dimension, namely that of uncertainty as a certain factor. The uncertainty "curvature" can ideally be described by a probability distribution, but it does not need to be. There are useful measurements of ranges, prediction limits and quantile estimates that can be fruitfully explored and utilized by practitioners, now that we have the needed computing power and statistical algorithms (ML/SL).

Jonathon Karelse

Operations Leader | HBR Advisory Council | Forbes Bestselling Author

3y

Jeff Baker - CPIM, CSCP, CPF and Patrick Bower, very interested in your thoughts as well.

Jonathon Karelse

Operations Leader | HBR Advisory Council | Forbes Bestselling Author

3y

Michael Gilliland and Charles Chase, I assume you've got opinions here. Likewise Nicolas Vandeput, who has been leading the conversation on data science.

