- The importance of having an honest and correct measurement of forecast accuracy.
- How the error is measured.
Video Introduction: How to Best Measure Forecast Error
Text Introduction (Skip if You Watched the Video)
The vast majority of companies are unhappy with their forecast accuracy. However, many of these same companies do not know their forecast error. Secondly, getting to the actual forecast error is not as easy as it sounds. One major issue is that there is so much manual intervention in the forecast that the measured error combines the error of the statistical forecast with the error of the manual interventions. At one company, we found ten different areas where the forecast could be altered manually. You will learn how obtaining the true forecast error is much more difficult than it initially appears.
Our References for This Article
If you want to see our references for this article and related Brightwork articles, see this link.
Questions to Ask on Forecast Error
There are several questions to be answered on forecasting error measurement.
- At What Level of the Product Hierarchy?
- Where is the Error Measured Geographically?
- The Bucket of the Error Measurement?
At What Level of the Product Hierarchy?
The first question is at what level in the product hierarchy the forecast error should be measured. It is essential to consider who the “customer” of the forecast error is when asking this question. For S&OP forecasts, the customer is the S&OP group, so the forecast error can be aggregated and dollarized.
For supply chain planning, the customers of the demand forecast are both supply planning and production planning. For example, in the calculation of dynamic safety stock, the forecast error affects the stocking level. The higher the forecast error, the higher the safety stock calculation. Safety stock is calculated per SKU; thus, the forecast error should be calculated at the SKU level.
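The link between forecast error and safety stock can be sketched as follows. This is a minimal illustration using the common textbook formula (z-score times the standard deviation of the forecast error, scaled by the lead time); the article does not specify which formula the client systems used, so treat the function and its parameters as assumptions.

```python
import math

def safety_stock(forecast_errors, service_level_z=1.65, lead_time_periods=1):
    """Dynamic safety stock from the variability of past forecast errors.

    forecast_errors: per-period (forecast - actual) values for one SKU.
    service_level_z: z-score for the target service level (1.65 ~ 95%).
    lead_time_periods: replenishment lead time, in forecast buckets.
    """
    n = len(forecast_errors)
    mean = sum(forecast_errors) / n
    # Population standard deviation of the forecast error for this SKU
    sigma = math.sqrt(sum((e - mean) ** 2 for e in forecast_errors) / n)
    # Higher forecast error -> higher safety stock
    return service_level_z * sigma * math.sqrt(lead_time_periods)

# One SKU's forecast errors over five periods, 4-period lead time
print(round(safety_stock([10, -5, 8, -12, 4], 1.65, 4), 1))  # -> 27.4
```

Because the inputs are per-SKU error histories, this also shows why the error must be calculated at the SKU level: an error aggregated across SKUs cannot feed this calculation.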
Where is the Error Measured Geographically?
A second question comes into play regarding geography. For most companies, supply planning occurs at multiple locations. In this case, the forecast error should be measured at the SKU-location, or product-location, combination. However, I recently ran into a client that forecasted everything to a single location (for both supply and production planning).
Until they began to separate their forecast by location, it was necessary to check the forecast's error independent of location, although this is not recommended.
The Bucket of the Error Measurement?
Another question is the bucketing of the forecast error. The more aggregated the forecast error measurement is from a time perspective, the lower the forecast error. However, again, what matters most is how the forecast is consumed, which is by the supply planning system. For this reason, the most useful forecast error is the lead-time demand error at a product-location combination.
Understanding the Context of the Forecast Error
Without understanding the context, forecast error measurement is meaningless. In the book “Supply Chain Forecasting Software,” I lay out the relevant contexts of forecast error. I categorize them into the following areas:
- Forecast Error Context by Aggregation
- Forecast Error Context by Product type
- Forecast Error Context by the System Generated Versus the Final Forecast
- Forecast Error Context of Location
- Forecast Error Context by Duration
Some people who work in forecasting consulting spend too much time focused on the exact measurement type (MAD, MAPE, etc.) and not enough time focusing their clients on the context of the forecast error, and on choosing a forecast error in a context that makes sense for the company.
Some very experienced consultants who should know better waste far too much time on this topic.
How to Account for Manual Forecast Adjustments
- All companies allow planners to make manual adjustments to the statistical forecast. How this process is managed is one of the most important factors in the quality of the final forecast.
- More often than not, this process is very poorly managed, first because many people in the field do not understand the importance of removing forecast bias.
- And secondly, because many people who do know better are not interested in fighting the political battle necessary to keep to a structured approach. This is related to how the manual forecast adjustments are maintained, measured, and controlled.
All of this brings up how to measure how different aspects of the forecasting process are performing.
This is discussed in this article.
These issues apply equally to consensus-based forecasting and to statistical forecasting. The common misimpression of consensus-based forecasting is that the forecast will be improved if everyone has a say. This is false, and quite the opposite of what the research in this area concludes. Those experienced with consensus and statistical techniques know that managing, and ultimately removing, bias is essential to improving the overall forecast.
How Not to Do It
At one client, separate fields were not kept for the statistically generated forecast and the manual overrides or adjustments. Since I did not want the manual forecast adjustments to interfere with the statistical forecast, I had them remove any product that had received manual forecast adjustments. That worked well in the short term, although it significantly reduced the number of products available for measuring the forecast error. However, I made an additional recommendation: to begin recording and archiving much more information about the manual overrides. This included the following:
- Separate fields for the final statistical forecast and the manual forecast adjustments
- Multiple fields for each manual change, so that the date and the person could be tied to the change.
This allowed the company to isolate the manual forecast adjustments to individuals and departments, and then to look for bias. Without this data, it is impossible to reduce bias, because it is not possible to know where, and from whom, the bias is coming.
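Once overrides are archived with the person attached, looking for bias is a simple grouping exercise. The record layout and names below are hypothetical (the article does not describe the client's fields); the point is that a persistent signed error per contributor reveals who is injecting bias.

```python
from collections import defaultdict

# Hypothetical archived override records:
# (who made the change, statistical forecast, adjusted forecast, actual demand)
overrides = [
    ("sales_na", 100, 130, 95),
    ("sales_na", 80, 110, 85),
    ("planner_a", 50, 45, 48),
]

def bias_by_contributor(records):
    """Mean signed error of the adjusted forecast, grouped by contributor.

    A persistently positive value indicates systematic over-forecasting;
    a value near zero indicates unbiased adjustments.
    """
    totals = defaultdict(lambda: [0.0, 0])
    for who, _statistical, adjusted, actual in records:
        totals[who][0] += adjusted - actual  # signed, so bias does not cancel
        totals[who][1] += 1
    return {who: s / n for who, (s, n) in totals.items()}

print(bias_by_contributor(overrides))  # -> {'sales_na': 30.0, 'planner_a': -3.0}
```

In this toy data, the sales overrides run 30 units high on average, while the planner's are roughly neutral, which is exactly the kind of finding the archived fields make possible.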
This is particularly problematic for clients that do a poor job of restricting access to the forecast, where multiple people can change it. A lack of appropriate control over permissions is a sure signal of a forecasting problem.
Control Over Manual Overrides
Control over the manual overrides must be tied to accountability for the forecast. Without this, forecasts become extremely poor, as no one declares ultimate ownership of the forecast, and everyone can blame everyone else for a poor forecast. I have seen several clients manage their forecast like this, and the situation devolves this way every time. Once there is institutional acceptance of poor role management, it becomes challenging to reinstall discipline, so this is very much a situation to be careful of.
Unfortunately, this is the position in which most companies seem to find themselves.
On this topic, I very much agree with Ed Pound, COO at Factory Physics, on how companies interpret forecast error.
Too many companies focus on forecasting accuracy without knowing how to use forecast error. The idea is “just get forecast accuracy better and we will reduce the forecast error so it doesn’t matter.” Since eliminating forecast error is impossible, this typically becomes a quixotic task. One always benefits from better forecasts but the effort at improving forecast accuracy has a strongly diminishing benefit with increasing effort. As an alternative, the recommended approach is to understand what the forecast error is so that, when making forecasts, a company knows the level of risk it has to manage to get good service–this provides predictive control and makes management an easier task. – Ed Pound
Manual forecast adjustments are essential, and the software selected needs to make them easy to perform. However, not that many applications do this. Therefore, one of the first steps in forecast analysis is to distinguish the system-generated forecast from the manually adjusted forecast. Both of these values must be compared to understand where the opportunities for improvement lie. How to determine how much forecast accuracy can potentially be improved is covered in the article “How Much Can Your Forecast Accuracy be Improved.”
A Better Approach to Forecast Error Measurement
All forecast methods to be put into service need to be compared or tested against other methods. The problem is that most companies' forecast error measurement capability is not up to the task.
Our observation is that all of the standard forecast error measurements work against improving forecast accuracy, as we cover in the article “How is Forecast Error Measured in All of the Dimensions in Reality?”
After observing ineffective forecast error measurements at so many companies, we developed the Brightwork Explorer, in part, as a purpose-built application that can measure any forecast. The application has a very straightforward file format into which your company's data can be entered, and the forecast error calculation is exceptionally straightforward. Any forecast can be measured against the baseline statistical forecast, and the product-location combinations can then be sorted to show which product locations lost or gained forecast accuracy versus other forecasts.
This is the fastest and most accurate way of measuring multiple forecasts that we have seen.
Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?
It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did, forecast improvement would be far more straightforward, and companies would have a far easier time performing forecast error measurement.
What the Forecast Error Calculation and System Should Be Able to Do
One would be able to, for example:
- Measure forecast error
- Compare forecast error (for all the forecasts at the company)
- Sort the product-location combinations based on which product locations lost or gained forecast accuracy versus other forecasts
- Measure any forecast against the baseline statistical forecast
- Weight the forecast error (so progress for the overall product database can be tracked)
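The requirements above can be sketched in a few lines. This is not the Brightwork Explorer's implementation, just an illustration under assumptions: MAD as the error measure, a made-up data layout, and volume-weighting by actual demand.

```python
# Hypothetical per-period history for each product-location: actuals,
# the baseline statistical forecast, and a competing (e.g. adjusted) forecast.
data = {
    ("SKU1", "DC_EAST"): {"actual": [100, 120], "baseline": [90, 130], "candidate": [95, 125]},
    ("SKU2", "DC_EAST"): {"actual": [40, 60], "baseline": [42, 58], "candidate": [55, 70]},
}

def mad(forecast, actual):
    """Mean absolute deviation for one product-location."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

def compare_forecasts(data):
    """Error change per product-location; negative means the candidate
    forecast improved on the baseline statistical forecast."""
    rows = [(loc, mad(s["candidate"], s["actual"]) - mad(s["baseline"], s["actual"]))
            for loc, s in data.items()]
    # Sort so the largest accuracy losses surface first
    return sorted(rows, key=lambda r: r[1], reverse=True)

def weighted_error(data, forecast_key):
    """Volume-weighted MAD across all product-locations, so overall
    progress for the product database can be tracked in one number."""
    num = sum(mad(s[forecast_key], s["actual"]) * sum(s["actual"]) for s in data.values())
    return num / sum(sum(s["actual"]) for s in data.values())

print(compare_forecasts(data))                    # SKU2 lost accuracy, SKU1 gained
print(round(weighted_error(data, "candidate"), 2))
```

Even this toy version covers the list: per-location measurement, comparison against the baseline, sorting by accuracy lost or gained, and a weighted total for tracking progress.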
Getting to a Better Forecast Error Measurement Capability
A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product-location combination at a time. After observing ineffective and non-comparative forecast error measurements at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.
Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the approach followed in requirements development for forecast error measurement are important for anyone who wants to improve forecast accuracy.