- Companies tend to think that weighting the forecast error is unnecessary and that simple grouping is fine.
- We cover the issues with relying on an unweighted forecast error.
Video Introduction: Forecast Error Myth #2: An Unweighted Forecast Error Measurement Makes Sense
Text Introduction (Skip if You Watched the Video)
An unweighted forecast error means that entirely disproportionate forecast errors are applied to each product location combination. When averaged, the high volume items count the same as the low volume items in the database. The idea that unweighted forecast errors can provide a picture of forecast accuracy above the product location combination level is a major myth of forecast error measurement. In this article, you will learn the importance of weighting the forecast error, and why the standard forecast error measurements do not allow for easy weighting.
Our References for This Article
If you want to see our references for this article and related Brightwork articles, see this link.
Myth #2: The Myth of Usefulness of the Unweighted Forecast Error
For proportionality and forecast error tracking, it is necessary to report aggregated forecast error measurements. However, only a minority of companies apply any weighting to the forecast error.
Without weighting of some type, a product with monthly sales of 20,000 units receives the same weight as one with monthly sales of 200 units. That is a serious problem, yet this myth is rarely, if ever, discussed at companies.
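The distortion is easy to see with a small sketch. The numbers below are hypothetical, but they mirror the 20,000-unit versus 200-unit contrast above: averaging the two items' errors without weights lets the small, erratic item dominate the picture.

```python
# Hypothetical example: two product-location combinations,
# one high volume and accurate, one low volume and erratic.
products = [
    {"name": "A", "monthly_units": 20000, "mape": 0.10},
    {"name": "B", "monthly_units": 200,   "mape": 0.60},
]

# An unweighted average treats both items equally.
unweighted = sum(p["mape"] for p in products) / len(products)

# A volume-weighted average reflects each item's share of demand.
total_units = sum(p["monthly_units"] for p in products)
weighted = sum(p["mape"] * p["monthly_units"] / total_units for p in products)

print(f"Unweighted MAPE: {unweighted:.1%}")  # 35.0%
print(f"Weighted MAPE:   {weighted:.1%}")    # 10.5%
```

The unweighted average reports 35% error for a database whose demand is overwhelmingly forecast within 10%, which illustrates how misleading an unweighted aggregate can be.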
Reporting Out Forecast Error from the Demand Planning Department
The forecast error or forecasting accuracy measurement in forecasting applications is primarily for the user or planner.
Companies will then create a custom report that publishes the forecast error or forecasting accuracy, or planners will export the error to Excel, perform the calculation in a spreadsheet, and provide it to their Director of Forecasting/Demand Planning, who then provides it to the broader company. The forecast error report is often presented with great anticipation, as if it were the “horse’s mouth” on forecast error or forecasting accuracy for the period.
Grouping forecast error or forecasting accuracy does not solve the unweighted forecast error problem; it merely rolls the inaccuracy into the aggregation.
That is, the same problems persist: the smaller volume product location combinations continue to be measured the same as the large volume product location combinations.
For many years, and continuing today, companies have been reporting out grouped forecast error without any weighting and assuming they have a good read on things.
For All of the Myths, See the Following Table
Forecasting Error Myths
| Myth Number | Forecasting Error Myth | Forecasting Error Myth Article and Link |
|---|---|---|
| 1 | One Can Rely on The Forecast Error Measurement in Forecasting Applications | Link |
| 2 | Unweighted Forecast Error Measurement Makes Sense | Link |
| 3 | Sales And Marketing Have their Forecast Error Measured | Link |
| 4 | Most Forecast Error Measurement Supports Identifying How to Improve Forecast Accuracy | Link |
| 5 | Non Comparative Forecast Error Measurement is Helpful | Link |
| 6 | Forecast Error Measurements are Straightforward to Do | Link |
The Bias of an Unweighted Forecast Error or Forecasting Accuracy
An unweighted measurement has a strong tendency to overstate the forecast error, because lower volume product locations tend to have higher forecast errors, and these are valued equally with higher volume product locations.
The Complexity with Weighted Error Calculation
If standard forecast error measurements are used (which we don’t recommend), each forecast error or forecasting accuracy must be multiplied by the following fraction:
The number of sales units for the product location combination (normally an average over multiple periods, or the number of units for the period)
Divided by
The total number of sales units (for the same periods selected for the numerator) for all product location combinations
For a detailed explanation of how weighted forecast error is calculated, see the article How to Use Weighted MAPE for Forecast Error Measurement.
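The fraction described above can be expressed as a short function. This is a minimal sketch, not an implementation from the linked article; the function name and inputs are our own illustration.

```python
def weighted_error(errors, units):
    """Weight each product-location error by its share of total unit sales.

    errors: per-product-location error values (e.g., a MAPE per item)
    units:  unit sales for the same items over the same periods
    """
    total = sum(units)
    # Each error is multiplied by (item units / total units) and summed.
    return sum(e * u / total for e, u in zip(errors, units))

# Hypothetical example: three product-location combinations.
# The 10,000-unit item's 20% error dominates the weighted result.
result = weighted_error([0.20, 0.50, 0.80], [10000, 1000, 100])
print(f"{result:.1%}")  # 23.2%
```

Note that the weights must use the same periods in the numerator and denominator, as the text above specifies; mixing periods silently skews the aggregate.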
The Issue With The Adjustment
This is a cumbersome calculation, which is yet another reason to steer clear of the standard forecast error measurements: they lack an inherent scale within their measurement.
Each of the standard forecast error measurements (MAD, MAPE, MSE, etc.) was designed to measure the error of a single product location combination. Any weighting requires additional calculation.
Forecast error or forecasting accuracy calculation is mistakenly assumed to be accurate even when no weighting is applied; the problems with that assumption are listed in this article. Moreover, all weighting must be performed external to forecasting applications, which are designed only to calculate forecast error at the product location combination level.
Why Do Companies Have So Many Problems With Forecast Error?
While there are several reasons that we have documented, our view is that this myth that the forecast error need not be weighted is one reason for this disconnect.
Our further view is that forecast error or forecasting accuracy calculation in companies does not help them drive toward improved forecast accuracy. We have demonstrated this to many companies by asking those who receive the forecast error report what they think it means and what they do with it. We found that the forecast error report is invariably a measurement that people want to see, but which is so misunderstood, and so misrepresented in its calculation versus what the recipients think it is, as to be useless.
What the Unweighted Forecast Error Report Does Not Do
- The forecast error does not show the recipients which product location combinations could be improved by applying different forecast models.
- It does not tell the recipients which forecast inputs are valuable and which reduce forecasting accuracy.
What it does do is promote a dysfunctional approach to forecast management. It also makes the recipients think they have received illumination that is at best highly incomplete.
A Better Approach
Observing the same thing at so many companies, we developed the Brightwork Explorer in part to have a purpose-built application that meets all of the requirements of a forecast error measurement that drives forecasting accuracy improvement.
Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?
It is important to understand forecast error, but the standard forecast error calculation methods do not provide this understanding. In part, they do not tell companies that forecast how to make improvements. If the standard forecast error calculations did, forecast error measurement would be far more straightforward, and companies would have a far easier time performing it.
What the Forecast Error Calculation and System Should Be Able to Do
One would be able to, for example:
- Measure forecast error
- Compare forecast error across all the forecasts at the company
- Sort the product location combinations based on which product locations lost or gained forecast accuracy relative to other forecasts
- Measure any forecast against the baseline statistical forecast
- Weight the forecast error (so progress for the overall product database can be tracked)
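The capabilities listed above can be sketched in a few lines. This is a hypothetical illustration with made-up item names and numbers, not the Brightwork Explorer's implementation: it scores two forecasts per product location combination against actuals, then sorts by where the adjusted forecast lost or gained accuracy versus the statistical baseline.

```python
# Hypothetical data: product-location -> (actual units, baseline
# statistical forecast, adjusted/consensus forecast).
items = {
    ("SKU1", "DC-East"): (1200, 1000, 1150),
    ("SKU2", "DC-East"): (300, 310, 250),
    ("SKU3", "DC-West"): (50, 80, 75),
}

def pct_error(actual, forecast):
    """Absolute percentage error for one product-location combination."""
    return abs(actual - forecast) / actual

results = []
for key, (actual, baseline, adjusted) in items.items():
    delta = pct_error(actual, baseline) - pct_error(actual, adjusted)
    results.append((key, delta))  # positive delta = adjustment helped

# Sort so the combinations where the adjustment hurt accuracy come first.
results.sort(key=lambda r: r[1])
for key, delta in results:
    print(key, f"{delta:+.1%}")
```

In this sketch, SKU2 surfaces first because the adjustment made its forecast worse than the baseline; a volume weighting step, as described earlier in the article, would then roll these per-item results into a trackable aggregate.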
Getting to a Better Forecast Error Measurement Capability
A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurements at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.
Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the requirements we developed for forecast error measurement are important for anyone who wants to improve forecast accuracy.