- The standard forecast error measurements are universally accepted, but they are not comparative.
- Comparative forecast error is rarely discussed.
Forecast error is nearly universally reported without comparison. That is, the forecast error is reported as a single number: the error of the forecast produced by the current forecasting approach.
See our references for this article and related articles at this link.
This is true even though, when changes are made to a forecast, one must know the comparative forecast error, or what the error would be if the change in method were applied.
Myth #5: The Helpfulness of Non-Comparative Forecast Error Measurement
This reinforces the lack of testing that is so common in forecasting. Forecasting is not seen as being at one level at one point in time, to be improved with the economical application of new methods, but rather as a steady state of error.
When a forecast error is produced by a forecasting application, while it may provide multiple forecast error calculations (MAD, MAPE, MSE, etc.), it is invariably for a single forecast. As we cover in the article Forecast Error Myth #1: One Can Rely on The Forecast Error Measurement in Forecasting Applications, the forecast error measurement functionality in nearly all forecasting applications is extremely limited. Forecasting applications are designed to produce a non-comparative forecast error at a product location combination.
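To make the single-forecast measurements concrete, here is a minimal sketch of the three error measures named above (MAD, MAPE, MSE). The function name and sample data are illustrative, not taken from any forecasting application.

```python
def forecast_errors(actuals, forecasts):
    """Return MAD, MAPE (as a percent), and MSE for one forecast series.

    Note that all three describe a single forecast in isolation --
    none of them compares one forecasting method against another.
    """
    errors = [a - f for a, f in zip(actuals, forecasts)]
    n = len(errors)
    mad = sum(abs(e) for e in errors) / n          # mean absolute deviation
    mape = sum(abs(e) / a for e, a in zip(errors, actuals) if a != 0) / n * 100
    mse = sum(e * e for e in errors) / n           # mean squared error
    return mad, mape, mse

# Illustrative actuals and forecasts for one product-location
mad, mape, mse = forecast_errors([100, 120, 80], [110, 100, 90])
```

Each measure summarizes the same residuals differently, which is why applications report several of them, yet all remain non-comparative.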
For All of the Myths, See the Following Table
Forecasting Error Myths
| Myth Number | Forecasting Error Myth | Forecasting Error Myth Article and Link |
|---|---|---|
| 1 | One Can Rely on The Forecast Error Measurement in Forecasting Applications | Link |
| 2 | Unweighted Forecast Error Measurement Makes Sense | Link |
| 3 | Sales And Marketing Have Their Forecast Error Measured | Link |
| 4 | Most Forecast Error Measurement Supports Identifying How to Improve Forecast Accuracy | Link |
| 5 | Non-Comparative Forecast Error Measurement is Helpful | Link |
| 6 | Forecast Error Measurements are Straightforward to Do | Link |
All other forecast error measurement is normally the product of a custom report that companies create, or of a periodic extract to Excel that is aggregated to what the executives in the company want to see. These forecast error measurements are also not comparative, as the executives are not responsible for testing different forecasting methods to improve forecast error, and of course a change in forecast method must be made at the product location combination level.
All of this leads to the following question.
What Is The Purpose of a Forecast Error Measurement?
Forecast error measurement should point to where to make changes to improve forecast accuracy. However, a non-comparative forecast error does not do this. And it is an almost universal reality that forecasting organizations spend far less time testing different forecasting methods than having planners make manual adjustments to the forecast.
In many years of forecasting consulting, I found that it was rare for either testing to be performed, or for ongoing adjustments to the major levers within the forecasting system.
If the forecast error does not serve this purpose, then it becomes an inert number.
A Better Approach
After observing ineffective and non-comparative forecast error measurements at so many companies, we developed the Brightwork Explorer, in part, to have a purpose-built application that can measure any forecast and compare one forecast against another.
The application has a very straightforward file format where your company’s data can be entered, and the forecast error calculation is exceptionally straightforward. Any forecast can be measured against the baseline statistical forecast, and then the product location combinations can be sorted to show which product locations gained or lost forecast accuracy versus other forecasts.
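The comparative approach described above can be sketched in a few lines. This is an illustrative example only, assuming a simple MAPE comparison; the data, keys, and column layout are hypothetical and do not represent the Brightwork Explorer file format.

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, skipping zero actuals."""
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return sum(abs(a - f) / a for a, f in pairs) / len(pairs) * 100

# Hypothetical data keyed by product-location:
# (actuals, baseline statistical forecast, candidate forecast)
data = {
    ("P1", "DC1"): ([100, 110], [90, 130], [98, 112]),
    ("P2", "DC1"): ([50, 60], [52, 58], [70, 40]),
}

results = []
for loc, (act, baseline, candidate) in data.items():
    # Positive improvement means the candidate beat the baseline here
    improvement = mape(act, baseline) - mape(act, candidate)
    results.append((loc, improvement))

# Sort product-locations by accuracy gained, best first
results.sort(key=lambda r: r[1], reverse=True)
```

Sorting by the improvement column is what turns an inert error number into a decision aid: it shows exactly which product-locations should adopt the new method and which should keep the baseline.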
This is the fastest and most accurate way of measuring multiple forecasts that we have seen.