What This Article Covers
- Externally Managing Forecast Error
- The Common Problems in Measuring Forecast Accuracy
- Reporting Forecast Accuracy at a High Aggregation
- Why Forecast Error Must Be Measured at the Product Location Combination for Supply Chain Planning
- Forecast Error Measurement for Normal Demand History Versus Service Part or Low Volume Demand History
- Multiple Planning Bucket Error Calculation
- Check the Forecast Error of Every Forecasting Input
- Forecasting Testing — Everywhere
Forecast error measurement is hugely important. Without an accurate measurement of your forecast accuracy, it is impossible to know which changes to the forecast process are desirable and will lead to forecast accuracy improvement.
However, while most of the discussion revolves around the choice of forecast error measurement (MAPE, MAD, MSE, etc.), what is often overlooked is the level of the hierarchy at which the forecast error is measured, and the measurement of the various inputs to the final forecast. It surprises many, but in our experience companies only very rarely know their forecast error at the product location combination. And they almost never quantify the monetary value of a forecast improvement.
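To make the discussion of error measures concrete, here is a minimal sketch of two of the most common ones, computed per product location combination rather than in aggregate. All product names, locations, and demand figures are hypothetical.

```python
# Two common error measures, computed per product-location combination.

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error; skips periods where the actual is zero,
    since a percentage of zero is undefined."""
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(abs(a - f) / a for a, f in pairs) / len(pairs) * 100

def mad(actuals, forecasts):
    """Mean Absolute Deviation, expressed in units rather than percent."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical demand history and forecast per product-location pair.
history = {
    ("SKU-100", "DC-EAST"): ([120, 95, 140, 110], [100, 100, 120, 115]),
    ("SKU-100", "DC-WEST"): ([30, 0, 45, 25],     [35, 10, 40, 30]),
}

for (product, location), (actual, forecast) in history.items():
    print(product, location,
          f"MAPE={mape(actual, forecast):.1f}%",
          f"MAD={mad(actual, forecast):.1f}")
```

Note how the zero in DC-WEST's history already forces a decision about how to handle zeros in MAPE, a problem the article returns to below.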
The Common Problems in Measuring Forecast Accuracy
A primary reason for this is that most forecasting software is designed around viewing the time series graphically and either making manual adjustments or fitting various forecasting models. The design orientation is not specifically around forecast error determination. This means that measuring forecast accuracy with the standard tools is time-consuming, difficult, and itself error-prone. For these reasons and a host of others, forecast accuracy measurement in any depth tends not to occur.
While a forecast accuracy number is often known, the level of aggregation at which it was measured is usually not, which makes the number impossible to interpret. Here are some common issues we see with forecast error measurement.
- The forecast error is reported at too high of a level.
- The forecast error is only measured and reported for the statistical forecast, with literally no other forecasts measured for error.
- There is quite frequently inadequate data maintained regarding manual overrides, meaning the forecast improvement (or degradation) due to overrides can be challenging to measure.
It is critical not to confuse performing a forecast at a level of aggregation with measuring the forecast at that aggregation. Measurement must be at the lowest level, or at the level pertinent to supply planning. The supply planning process is the “customer” of the supply chain forecasting process.
Reporting Forecast Accuracy at a High Aggregation
Companies will often say that they report forecast error at an aggregation (product family, product group, etc.) because it reduces the forecast error. However, forecast error for supply chain planning must be measured for a specific product at a specific location. That is, the measurement level is not optional. This leads companies to have much lower forecast accuracy than they think they do. It led one executive at a client of ours to state:
“I don’t get it. Our unit forecast accuracy is high, why is our service level so low?”
Higher aggregation, of course, means lower error. But the error cannot simply be measured wherever it happens to be lowest!
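Why aggregation lowers the reported error is easy to show with a toy example (all numbers hypothetical): over- and under-forecasts at individual locations cancel out when summed, so the aggregate looks accurate while each location is badly wrong.

```python
# Hypothetical illustration: errors at individual product-location
# combinations cancel when measured at an aggregate level.

actuals   = {"SKU-1/DC-A": 100, "SKU-1/DC-B": 100}
forecasts = {"SKU-1/DC-A": 140, "SKU-1/DC-B": 60}   # +40 and -40 units

# Error measured at the product-location level (what supply planning sees):
per_location_error = {k: abs(actuals[k] - forecasts[k]) / actuals[k]
                      for k in actuals}
print(per_location_error)   # 40% error at each location

# Error measured at the aggregate (e.g. product family) level:
agg_actual = sum(actuals.values())
agg_forecast = sum(forecasts.values())
aggregate_error = abs(agg_actual - agg_forecast) / agg_actual
print(aggregate_error)      # 0% error: the offsetting mistakes are invisible
```

This is exactly the pattern behind the executive's complaint above: a high unit forecast accuracy at the aggregate level, and a low service level driven by the per-location errors.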
To make forecast improvements, a company must know its forecast error, and then which techniques make what percentage improvement in that error. If the company is “hazy” as to what a relevant forecast error is, or what its forecast error is, it becomes impossible to know the impact of changes to techniques.
Why Forecast Error Must Be Measured at the Product Location Combination for Supply Chain Planning
The forecast error that is pertinent to supply planning is the error at the product location combination. No aggregation higher than this has any meaning for supply planning, as the supply plan must be generated at the product location. Companies require an easy way to measure forecast error. Brightwork Forecast Explorer provides the easiest way to calculate forecast error, and it can be used to calculate error over any forecast interval (that is, a grouping of periods) and for many different forecasting inputs.
Forecast Error Measurement for Normal Demand History Versus Service Part or Low Volume Demand History
We originally designed Brightwork Forecast Explorer to use the MAPE error calculation, as we have used it with good results on many forecasting projects. However, all of the standard error calculations we tested carried high overhead, and MAPE in particular has a problem with zeros in the demand history, for which we used a safety stock override. After testing MAPE, sMAPE, MASE, MAAPE, and others, we moved away from these error measures to a monetary measure of forecast error. This calculates the monetary improvement from moving from one forecast method to a second forecast method.
- Negative Monetary Error Calculations: If the monetary calculation is negative, then the first method is better.
- Positive Monetary Error Calculations: If the monetary error calculation is positive and large, then it makes sense to focus on that product location combination as it means the most financially to the company to change.
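Brightwork does not publish its exact formula, but the sign convention described above can be sketched as follows. Here the cost of error is simply the absolute unit error times a per-unit cost; both the cost model and all figures are hypothetical simplifications.

```python
# A hedged sketch of a monetary forecast error comparison between two
# forecast methods for one product-location combination.

def monetary_error(actuals, forecasts, unit_cost):
    """Cost of forecast error in currency, not percent (simplified model)."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) * unit_cost

def improvement(actuals, method_1, method_2, unit_cost):
    """Positive and large: switching from method 1 to method 2 is worth
    prioritizing. Negative: method 1 is already the better choice."""
    return (monetary_error(actuals, method_1, unit_cost)
            - monetary_error(actuals, method_2, unit_cost))

actual   = [100, 120, 80, 95]
method_1 = [110, 100, 90, 100]   # e.g. the current forecast method
method_2 = [102, 118, 82, 96]    # e.g. a candidate replacement
print(improvement(actual, method_1, method_2, unit_cost=25.0))
```

Because the result is in currency, product location combinations can be ranked by how much money a method change would save, which is the prioritization logic the two bullets above describe.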
With Brightwork Forecast Explorer, you can know the forecast error — and the forecast error as it relates specifically to the product location combination for all inputs.
Each forecasting input's error should be measured before that input is included as part of the final forecast.
Check the Forecast Error of Every Forecasting Input
Forecasts come in from sales, marketing, and supply chain, among other sources.
However, to develop a high-quality consensus forecast, the error of each input must be evaluated.
- Most companies do not know the error of these inputs.
- Therefore they do not know how much to weight some inputs versus others.
- Companies that attempt to incorporate sales input into the forecasting process will nearly always confuse “getting input” with getting quality input, and therefore will not set up the appropriate measurement mechanism and firewall between the sales forecast and the supply chain forecast.
The only successful consensus-based sales/supply chain forecasting projects are those where sales provide input but do not control which inputs get into the supply chain forecast (this generalizes to all consensus inputs). This is because sales want inventory always available, and can increase inventory by increasing their forecast. As they are not held responsible for inventory, they do not care what the cost of sale is, as it does not impact their quota.
Successful consensus-based sales/supply chain forecasting projects are rare.
A primary reason for this is that companies find it too difficult to maintain the forecast error of each of these inputs. Therefore, low accuracy inputs end up being accepted into the consensus forecast. This is like baking a cake without worrying about where the ingredients came from or their quality.
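Once the error of each input is actually tracked, it can be used to decide how much weight each input deserves in the consensus. One simple scheme (among many; the inputs, error values, and inverse-error weighting are all hypothetical) is to weight each input by the inverse of its measured error:

```python
# Hypothetical inverse-error weighting of consensus forecast inputs:
# the lower an input's measured error, the higher its weight.

measured_mape = {"statistical": 0.20, "sales": 0.55, "marketing": 0.40}

raw = {name: 1.0 / err for name, err in measured_mape.items()}
total = sum(raw.values())
weights = {name: w / total for name, w in raw.items()}

# Each source's forecast for the same product-location and period.
inputs = {"statistical": 100.0, "sales": 160.0, "marketing": 130.0}
consensus = sum(weights[name] * inputs[name] for name in inputs)
print({k: round(v, 2) for k, v in weights.items()}, round(consensus, 1))
```

The point is not this particular formula but the prerequisite: without a measured error per input, there is no principled basis for any weighting at all, and the loudest input wins.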
Don’t waste your time, or lose your mind, trying to calculate forecast error by hand. None of the forecasting applications we have reviewed calculate forecast error the way the Brightwork Explorer does. It took us years of working with the traditional error measurement methods to recognize their lack of effectiveness and eventually arrive at our approach.
Multiple Planning Bucket Error Calculation
Particularly with lower-volume sales history, it makes sense to test larger planning buckets to see how much forecastability improves. To create a composite file (with monthly, quarterly, and even half-year or full-year planning buckets), some data wrangling is necessary. We provide data wrangling services.
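The bucket-building part of that wrangling is straightforward to sketch. The demand figures below are hypothetical, and this is only the rollup step, not the full composite-file preparation:

```python
# Roll monthly demand history up into larger planning buckets before
# measuring error, e.g. for intermittent or low-volume items.

monthly = [12, 0, 5, 30, 8, 0, 14, 3, 0, 22, 7, 9]   # one year of demand

def rebucket(series, months_per_bucket):
    """Sum monthly values into consecutive buckets of the given size."""
    return [sum(series[i:i + months_per_bucket])
            for i in range(0, len(series), months_per_bucket)]

print(rebucket(monthly, 3))    # quarterly buckets
print(rebucket(monthly, 6))    # half-year buckets
print(rebucket(monthly, 12))   # full-year bucket
```

With intermittent history like this, the monthly series is full of zeros that defeat percentage-based error measures, while the quarterly and half-year buckets are smoother and often more forecastable.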
Forecasting Testing — Everywhere
We mean that. We say it because we use Brightwork Forecast Explorer ourselves to measure forecast error from every dimension, which is why we focused on this functionality in particular.
Brightwork Forecast Explorer is perfect for measuring any forecast error.
- Statistical forecasting in different time buckets.
- Statistical forecasts from top-down techniques.
- Statistical forecasts using different history (for instance, sales orders versus consumption)
- Determining the statistical forecastability of different locations in the supply network (which may lead to placing forecasts in locations different from their current locations).
- Determining which data streams to use for generating a forecast (for example orders versus bookings or goods issues)
- Testing historical removal — that is removing history that is nonrepresentative to improve pattern recognition.
- Sales forecasts
- Marketing forecasts
We can count on one hand the number of companies we have worked with that know the forecast error of their sales or marketing forecasts!
At the end of the process, once the forecast error can be easily calculated (we always need the files formatted with the forecast at the product location combination), the entity creating the forecast learns the best way to forecast.