- It is common for supply chain forecasting consulting to be tool-centric.
- Forecasting applications are now heavily promoting AI and machine learning.
- People are bananas for BI/Big Data/data science in forecasting, despite a lack of evidence that these items improve forecast accuracy enough to justify their cost.
- Is using unstructured data really going to lead to the advertised improvements?
Introduction to Being Boxed in by the Forecasting Tool
In this article, we will discuss our philosophy and experience in forecasting consulting. We outline what we observe as a continuing trend where anything that is new in forecasting is presumed to be better than older techniques and discuss the reality of forecast improvement projects.
The Overemphasis on the Tool or Forecasting Application
Over the past several decades, supply chain forecasting consulting has become highly tool-centric. How supply chain forecasting became so tool- or application-centric is an interesting story. One reason is that the large consulting companies became very focused on systems implementation.
The expectation is that a company buys a particular tool and then hires a consultant who knows that particular tool; outside of tool-specific engagements, the market for forecasting consulting is relatively small.
We performed a search for “supply chain forecasting consulting” and a few other terms. Surprisingly, the resulting pages were for low-traffic websites, and the material they presented was quite generic.
This makes us question how often companies are accessing non-vendor-specific forecasting consulting.
The Treadmill of Forecasting Techniques
Ever since software vendors became the primary drivers of forecasting thought leadership (at least broadly, if not necessarily in journals or at conferences), the vendors have brought out a barrage of new techniques. When new to forecasting, it is natural to think that these new techniques are true breakthroughs. However, after a few decades of witnessing this, a pattern begins to emerge. Something else that becomes apparent if you spend enough time on forecasting projects is that the things vendors talk about don’t have anywhere near the uptake one might expect within companies that plan to implement these new techniques.
Due to the treadmill effect of the continual introduction of new approaches, there is little emphasis on, and little financial incentive for, testing whether supposedly game-changing forecasting techniques are actually more effective than the more established ones.
Is AI and Machine Learning Taking Over Forecasting?
What happens if we apply these historical lessons to AI and machine learning in forecasting? Well, it ends up looking remarkably similar to the promises made by vendors and consulting companies of the previous forecasting techniques.
There are actually vanishingly few case studies of either AI or ML improving forecast accuracy, but because of marketing, many companies that have yet to master univariate statistical forecasting now think they need AI or ML to keep one step ahead. In a very short time, AI/ML has been added at most supply chain forecasting vendors with peculiar speed, as if all the vendors caught the AI/ML “cold” at the same time.
Could it be that this angle was added simply because the topic happens to be “hot”? Can one imagine a scenario where a software vendor adds things to their website merely for commercial reasons?
No of course not. We are sorry we brought it up.
Let the robot and its AI/ML do the forecasting for you!
People are Bananas for BI/Big Data/Data Science!
We get this type of question reasonably frequently.
“Why use univariate forecasting when you can use Random Forest or neural nets?”
Well, if you read up on these specific types of ML algorithms, you find that they don’t have much to do with supply chain forecasting. First, you need extra explanatory data, which companies normally don’t have. It is difficult to do ML with a single variable.
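To make the single-variable point concrete, here is a minimal sketch (plain Python, with illustrative numbers) of simple exponential smoothing, a univariate method that needs nothing beyond the demand history itself. An ML method such as a random forest, by contrast, would require a matrix of additional explanatory features that most companies simply do not have.

```python
def simple_exponential_smoothing(history, alpha=0.3):
    """One-step-ahead forecast from a single demand series.

    Needs nothing but the history itself -- no extra features.
    """
    forecast = history[0]  # initialize with the first observation
    for actual in history[1:]:
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

# Illustrative monthly demand for one product-location combination
demand = [120, 135, 128, 140, 150, 138, 145]
print(round(simple_exponential_smoothing(demand), 1))  # → 139.1
```

The point of the sketch is not the method itself, but that its only input is the series being forecast.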
More Complex Techniques to the Rescue for Degraded Demand History?
The trend over the past few decades of product proliferation, constant product number turnover, magnified promotions (enabled by a special class of software called Trade Promotion Management), increased quarterly buying (to shun inventory so as to present an overly rosy financial position to Wall Street), and other short-term approaches to managing a business all run counter to using more complex forecasting methods.
Supply chain management chaos has consequences. This image is a visual representation of what modern business practices do to the supply chain demand history.
Statistical forecasting is based on pattern recognition, and all of the items listed above disturb the very patterns that statistical approaches rely on. Bigger computers or data science/machine learning will not be able to overcome poor-quality demand history data.
For large portions of companies’ product-location datasets, the best forecast method ends up being a constant, or level! For shame.
That is right: the more intermittent the demand history, the further the applicable methods move away from even the more advanced univariate forecasting methods, much less machine learning methods. This has been shown through numerous analyses of different clients’ demand history data.
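As a rough illustration of why this happens, the sketch below (plain Python, illustrative data) measures how intermittent a demand history is and produces the humble level forecast, the constant that often ends up winning for such series.

```python
def intermittency(history):
    """Share of periods with zero demand -- a quick intermittency check."""
    return sum(1 for x in history if x == 0) / len(history)

def level_forecast(history):
    """The humble constant forecast: the mean of the demand history."""
    return sum(history) / len(history)

# Illustrative intermittent demand history (mostly zero periods)
demand = [0, 0, 3, 0, 0, 0, 5, 0, 2, 0, 0, 4]
print(round(intermittency(demand), 2))   # → 0.67
print(round(level_forecast(demand), 2))  # → 1.17
```

With two-thirds of the periods at zero, there is simply no trend or seasonality for a more sophisticated method, statistical or ML, to latch onto.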
Big Data = Big Forecasting Accuracy Improvements?
A lot of Big Data is a solution looking for a problem….and the solution is vendors and consulting companies selling Big Data benefits. Many of these proposed benefits are in improved forecast accuracy.
Curiously, when proponents of Big Data begin pitching its benefits for forecasting, it often quickly becomes evident that they have not worked in forecasting. In fact, there is an oversupply of people, often software vendor executives or salespeople, who have never worked in forecasting themselves yet are very comfortable proposing how various items will not only improve forecast accuracy but improve it radically. The size of the claim can sometimes be traced back to the size of their sales quota or other internal organizational incentives.
More Data Feeds Forecast Improvement?
The recent proposals that forecasting improves because of “more data inputs” are strikingly similar to the earlier claims that consensus forecasting results in better forecasts. Yet, a major outcome of consensus methods was to bring in strongly biased inputs with a high management overhead that normally overwhelmed the company’s ability to triage and quality-check them. Companies generally love the idea of consensus-based forecasting but aren’t too keen on investing the time in measuring, tracking, and eventually removing the forecasts that increase forecast error from the forecasting process. What companies really want is a consensus-based forecasting process that manages itself. You get more forecast inputs, close your eyes….and poof, forecast accuracy improves….just like the pamphlet from the vendor promised.
More forecasting inputs do not equal better forecast accuracy. This is provable. A certain percentage of the population thinks that the earth is flat. Including the views of the Flat Earth Society (a real thing, by the way) along with those of geologists and astronomers will not improve our understanding of the planet we all live on.
The history of quality problems with the much smaller amounts of data added to the forecasting process through consensus-based inputs does not bode well for companies’ ability to wrangle and quality-control the far larger data sets brought in through unstructured data.
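The measurement-and-removal discipline that companies skip can be sketched in a few lines. This is an illustrative Forecast Value Add style check with made-up numbers, not a production implementation:

```python
def value_add(actuals, baseline_fcst, adjusted_fcst):
    """Forecast Value Add: baseline error minus adjusted error.

    Positive means the override/input improved accuracy; negative
    means the input should be tracked and eventually removed.
    """
    def mad(fcst):
        # mean absolute deviation against actuals
        return sum(abs(a - f) for a, f in zip(actuals, fcst)) / len(actuals)
    return mad(baseline_fcst) - mad(adjusted_fcst)

# Illustrative: a biased consensus override on top of the statistical baseline
actuals  = [100, 110, 105]
baseline = [98, 112, 104]   # statistical forecast
override = [105, 120, 110]  # consensus-adjusted forecast
print(round(value_add(actuals, baseline, override), 2))  # → -5.0
```

A negative number is the system telling you that an input is subtracting value, which is exactly the signal companies rarely bother to collect.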
Unstructured Forecasting Nirvana?
The concept of adding unstructured data is similar in another way to the previous consensus trend in forecasting. It means going down the pathway of a high energy input/maintenance approach for forecast improvement. And there is little evidence for its effectiveness. But the fact that something has no evidence should never stop a vendor or consulting company from adding that item to their website.
“Johnson, our competition has unstructured data forecasting on their website, we have to have unstructured data forecasting on our websites!”
For those of us who work in forecasting, we already have enough challenges just getting clean univariate sales history data. Is adding a bunch of messy unstructured data really that tantalizing a concept?
The proposal that a Facebook feed or other social media input to forecasting improves the forecast is often assumed to be true, but proposing a forecast improvement hypothesis is no big accomplishment. The world is literally filled with hypotheses as to what improves forecast accuracy.
We test innumerable forecast improvement hypotheses for our clients. Here are common examples:
“Forecasting at the product group improves forecast accuracy”
“Forecasting using end of quarter markers improves forecast accuracy.”
“Changing to daily forecasting will improve forecast accuracy”
“We need to do machine learning, and it has to improve forecast accuracy.”
Yet, after testing, it turns out that very few hypotheses improve accuracy over the univariate forecast at the product-location combination.
Forecasting is a lot more enjoyable the further away you are from it. If you don’t do forecasting yourself, it is very natural to think that you can come up with all manner of methods to improve the forecast. However, when you spend months testing one hypothesis after another, only to find that they do not improve the forecast, or do not improve it enough to make the proposed method worthwhile, it is not as exciting.
Point two, companies only rarely fund this type of testing, so even if testing were the natural default position, companies normally have less interest in testing outcomes than simply using new techniques and hoping for the best.
Point three, the implemented forecast accuracy improvement will always be lower than the tested or lab improvement. Once the improvement is rolled out, it moves from a controlled environment where one or two specialists are working on an analytical project, to an operations environment, where things are a lot more distracting and messy.
The Natural Predisposition Away from Forecast Testing
Forecast Testing means receiving the negative reinforcement that your “brilliant plan” was not so brilliant after all. Confirmation bias moves people away from doing this type of testing. It is why forecasting knowledge must be combined with following a scientific testing approach.
We have been doing attribute-based/characteristic forecasting for years now. This is a form of multivariate forecasting that is far less complex than machine learning methods, and it allows us to test things our clients could not test on their own. And we are very happy to run these tests. However, the accuracy improvements are never particularly large.
Our Tools and Approach to Our Forecasting Consulting
The books we have written cover forecasting in SAP DP, Demand Works Smoothie, ToolsGroup, JDA DM among others. Applications are important, but they are only one piece to the puzzle. In our forecasting consulting, a lot of what we do is not centered around any one application. And we use anything from the mathematical programming language R to Google Sheets (great for collaboration) to get into different dimensions of analysis. Along the way, we have even developed our own forecast error measurement application.
But much of that complexity is only for testing purposes, often hypothesis testing. More often than not, basic forecasting techniques end up being what we place in the systems of our clients. These are the same basic techniques that companies, too bedazzled by topics ranging from demand sensing to artificial intelligence, fail to leverage effectively.
Brightwork Forecast Explorer
Sales Forecasting Book
The Problems with Combining Forecasts
In most companies, the statistical and sales forecasts are poorly integrated, and in fact, most companies do not know how to combine them. Strange questions are often asked, such as “does the final forecast match the sales forecast?”, without appropriate consideration of the accuracy of each input.
Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented – which comes down to those that best improve forecast accuracy.
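One way to operationalize this “right to be represented” idea is a per-item competition: measure each input’s historical error and let the most accurate one win. A minimal sketch, with hypothetical input names and illustrative numbers:

```python
def pick_winner(actuals, candidate_forecasts):
    """Per-item forecast competition: the input with the lowest mean
    absolute deviation against actuals earns the right to be used.

    candidate_forecasts maps an input name (e.g. 'statistical',
    'sales', 'consensus') to its list of past forecasts.
    """
    def mad(fcsts):
        return sum(abs(a - f) for a, f in zip(actuals, fcsts)) / len(actuals)
    return min(candidate_forecasts, key=lambda name: mad(candidate_forecasts[name]))

# Illustrative past forecasts for one product-location combination
actuals = [100, 110, 105, 95]
inputs = {
    "statistical": [102, 108, 104, 97],
    "sales":       [120, 130, 100, 90],
    "consensus":   [110, 115, 103, 94],
}
print(pick_winner(actuals, inputs))  # → statistical
```

In practice the competition is run per forecasted item, so the sales forecast can win for some product-location combinations while the statistical forecast wins for others.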
Is Everyone Focused on Forecast Accuracy?
Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always on the top of the agenda for all parties involved in forecasting.
By reading this book you will:
- See the common misunderstandings that undermine being able to combine these different forecast types.
- Learn how to effectively measure the accuracy of the various inputs to the forecast.
- Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
- Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, the consensus forecast, and how to find the winning approach per forecasted item.
- Learn how CRM supports (or does not support) the sales forecasting process.
- Learn the importance of the quality of statistical forecast in improving the creation and use of the sales forecast.
- Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.
- Chapter 1: Introduction
- Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
- Chapter 3: The Common Problems with Statistical Forecasting
- Chapter 4: Introduction to Best Fit Forecasting
- Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
- Chapter 6: Sales Forecasting
- Chapter 7: Sales Forecasting and CRM
- Chapter 8: Conclusion