How is Forecast Error Measured in All Dimensions in Reality?

Last Updated on April 28, 2021

Executive Summary

  • Forecast error is often presented as a cut-and-dried affair.
  • Our research shows that forecast error is measured very differently from how it is generally understood.

Video Introduction: How is Forecast Error Measured in All Dimensions in Reality?

Text Introduction (Skip if You Watched the Video)

A brief synopsis of forecast error measurement goes something like the following: choose the forecast error calculation, calculate the error, and report it. If one reads books on forecasting, the forecast error measurement chapter invariably focuses on the error calculation. However, the calculation selected is only one dimension of how the forecast error measurement is produced. In this article, you will learn about significant but often overlooked or underemphasized areas of forecast error measurement.

Our References for This Article

If you want to see our references for this article and related Brightwork articles, see this link.

The “Less Advertised” Areas of Forecast Error

The following areas are far less written about, but they are all important factors in effectively measuring forecast error.

Myth #1: One Can Rely on The Forecast Error Measurement in Forecasting Applications

Nearly all forecasting applications provide forecast error measurements only at the product-location combination. This error is necessary for driving a dynamic safety stock calculation and for evaluating individual items. However, it does not help with reporting. And even within the application, the design requires that the user continually "hop around" to different product-location combinations to see the error measurement. If multiple product-locations are selected, an error measurement will be calculated, but it is not usable.

Forecasting applications are designed primarily to produce forecasts, not to manage forecast errors or show errors in multiple dimensions. However, the buyers of forecasting applications do not know this and presume they will get everything they need to manage and improve forecasts once they buy a forecasting application.

Myth #2: Unweighted Forecast Error Measurement Makes Sense

For proportionality and forecast error tracking, it is necessary to report aggregated forecast error measurements. However, only a minority of companies apply any weighting to the forecast error. Without weighting of some type, a product with monthly sales of 20,000 units receives the same weight as one with monthly sales of 200 units. Grouping forecast error does not solve the problem of unweighted forecast error; it merely rolls the inaccuracy into the aggregation. That is, the same problems persist: the smaller-volume product-location combinations continue to be measured the same as the larger-volume ones.
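To make the weighting issue concrete, here is a minimal Python sketch comparing an unweighted average error with a volume-weighted one. The products, volumes, and the choice of MAPE as the error measure are all hypothetical assumptions for illustration:

```python
# Sketch: unweighted vs. volume-weighted MAPE across two hypothetical
# product-location combinations.

def mape(actuals, forecasts):
    """Mean absolute percent error for one product-location combination."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# (monthly actuals, monthly forecasts) for two hypothetical products
high_volume = ([20000, 21000, 19500], [19800, 20500, 20100])  # small % errors
low_volume = ([200, 180, 220], [300, 120, 150])               # large % errors

errors = {"high_volume": mape(*high_volume), "low_volume": mape(*low_volume)}
volumes = {"high_volume": sum(high_volume[0]), "low_volume": sum(low_volume[0])}

# Unweighted: each product counts the same, so the low-volume item
# dominates the average even though it barely matters to the business.
unweighted = sum(errors.values()) / len(errors)

# Volume-weighted: each product's error counts in proportion to its demand.
total_volume = sum(volumes.values())
weighted = sum(errors[p] * volumes[p] / total_volume for p in errors)

print(f"unweighted MAPE: {unweighted:.1%}")      # ~20%
print(f"volume-weighted MAPE: {weighted:.1%}")   # ~2.5%
```

The two figures diverge sharply: the unweighted average suggests the forecast is poor, while the weighted figure shows that almost all of the business's demand is forecast quite accurately.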

Myth #3: Sales and Marketing Have their Forecast Error Measured

Sales and marketing demand to have input into the final forecast but have no interest in having their forecast error measured. Forecasts are often viewed as a "game" by these departments. It is a game they play that allows them to meet their own objectives and has nothing to do with forecast accuracy.

  • Incentives from Marketing: For marketing, the incentive is to inflate new product introductions to make it seem like they add more value to the company than they do.
  • Incentives from Sales: For sales, it is some combination of gaming their quota and ensuring plenty of stock is available, regardless of the cost of stock maintenance.

Most companies allow sales and marketing never to have their forecast inputs measured. Therefore, these departments never receive negative reinforcement and are never required to adjust their behavior to improve the accuracy of the forecasts they provide.

Why?

Because sales and marketing want to keep feeding inaccurate forecasts into the process, and they are not concerned with the waste or inefficiency this creates. They choose forecasts not for accuracy but for their narrow individual or organizational objectives.

Myth #4: Most Forecast Error Measurement Supports Identifying How to Improve Forecast Accuracy

It is often overlooked that the main point of forecast error measurement is forecast accuracy improvement. However, in virtually all cases I have seen, forecast error is reported only as a single-point error. A second problem is that the context of the forecast error is not presented. A specific number does not mean much without context.

  1. Low-volume items will only very rarely have a low forecast error.
  2. High-volume items will only very rarely have a high forecast error (unless promotions or other factors damage the demand history).

However, context-free forecast error reporting is the norm.
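One way to supply that missing context is to report each item's error next to the volatility of its demand history and next to a naïve benchmark. The sketch below is illustrative only: the data is hypothetical, MAPE is assumed as the error measure, and the naïve benchmark (repeat the prior period's actual) is one common choice among many:

```python
import statistics

def mape(actuals, forecasts):
    """Mean absolute percent error."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def error_with_context(history, forecasts):
    """Report forecast error alongside two pieces of context:
    the error of a naive benchmark and the demand volatility."""
    n = len(forecasts)
    actuals = history[-n:]        # the periods being forecast
    naive = history[-n - 1:-1]    # naive: repeat the prior period's actual
    return {
        "mape": mape(actuals, forecasts),
        "naive_mape": mape(actuals, naive),
        # Coefficient of variation: how volatile the demand history is.
        "demand_cv": statistics.stdev(history) / statistics.mean(history),
    }

# Hypothetical volatile item: its error only looks "bad" until it is
# compared with the naive benchmark and the demand volatility.
history = [200, 180, 260, 150, 230, 190]
forecasts = [205, 200, 210]
result = error_with_context(history, forecasts)
print(result)
```

Here a roughly 20% error would look mediocre in isolation, but next to a naïve benchmark error of over 40% for the same volatile history, it reads as a strong result.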

Myth #5: Forecast Error Is Reported Comparatively

Forecast error is nearly universally reported without comparison. The forecast error is reported as a single number: the error of the forecast under the current forecasting approach. This is true even when changes are made to the forecast. One must know the comparative forecast error, that is, what the error would be if the change in method were applied.

This reinforces the lack of testing that is so common in forecasting. Forecast error is not seen as being at one level at a point in time, to be improved through the economical application of new methods, but rather as a steady state.
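A minimal sketch of comparative measurement follows, assuming MAPE as the error measure and two illustrative stand-in methods: a naïve last-value forecast as the "current" approach and a moving average as the "candidate." The point is the comparison itself, not the specific methods:

```python
# Sketch: before adopting a change to the forecasting method, compute what
# the error *would have been* had the candidate been applied to the same
# held-out history. Method names are hypothetical stand-ins.

def mape(actuals, forecasts):
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def naive_forecast(history, horizon):
    """Current method (stand-in): repeat the last observed value."""
    return [history[-1]] * horizon

def moving_average_forecast(history, horizon, window=3):
    """Candidate method (stand-in): average of the last `window` periods."""
    return [sum(history[-window:]) / window] * horizon

def compare(history, actuals, methods):
    """Error of each method scored against the same held-out actuals."""
    horizon = len(actuals)
    return {name: mape(actuals, fn(history, horizon))
            for name, fn in methods.items()}

history = [120, 135, 150, 110, 140]   # training periods
actuals = [125, 130, 145]             # held-out periods to score against

report = compare(history, actuals, {
    "current (naive)": naive_forecast,
    "candidate (3-period moving average)": moving_average_forecast,
})
for name, err in sorted(report.items(), key=lambda kv: kv[1]):
    print(f"{name}: {err:.1%}")
```

With both errors computed on the same held-out periods, the decision to switch methods becomes an empirical comparison rather than a guess.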

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did, forecast improvement would be far more straightforward, and companies would have a far easier time performing forecast error measurement.

What the Forecast Error Calculation and System Should Be Able to Do

One should, for example, be able to:

  1. Measure forecast error.
  2. Compare forecast error (for all the forecasts at the company).
  3. Sort the product-location combinations based on which of them lost or gained forecast accuracy relative to other forecasts.
  4. Measure any forecast against the baseline statistical forecast.
  5. Weight the forecast error (so progress for the overall product database can be tracked).
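The five capabilities above can be sketched together in a few lines of Python. Everything here is illustrative: the data, the product-location keys, and the choice of MAPE as the error measure are assumptions, not a description of any particular application's design:

```python
# Sketch of the five capabilities over a hypothetical product-location table.

def mape(actuals, forecasts):
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

# Per product-location: actuals, the baseline statistical forecast, and the
# final (consensus-adjusted) forecast. All numbers are made up.
data = {
    ("prod_a", "dc_1"): {"actuals": [1000, 1100],
                         "baseline": [980, 1050], "final": [1200, 1300]},
    ("prod_b", "dc_1"): {"actuals": [50, 40],
                         "baseline": [70, 30], "final": [55, 42]},
}

# 1. Measure forecast error for every product-location combination.
errors = {pl: {"baseline": mape(d["actuals"], d["baseline"]),
               "final": mape(d["actuals"], d["final"])}
          for pl, d in data.items()}

# 2. & 4. Compare each final forecast against the baseline statistical forecast.
# 3. Sort product-locations by accuracy lost or gained versus the baseline
#    (a positive delta means the adjustments made the forecast worse).
deltas = sorted(((e["final"] - e["baseline"], pl) for pl, e in errors.items()),
                reverse=True)

# 5. Volume-weight the error so overall progress can be tracked.
volumes = {pl: sum(d["actuals"]) for pl, d in data.items()}
total = sum(volumes.values())
weighted_final = sum(errors[pl]["final"] * volumes[pl] / total for pl in errors)

for delta, pl in deltas:
    print(pl, f"accuracy change vs baseline: {delta:+.1%}")
print(f"volume-weighted final-forecast MAPE: {weighted_final:.1%}")
```

In this toy example, the sort immediately surfaces that the adjustments to the high-volume product made its forecast worse, which is exactly the kind of finding that single-point, uncompared error reporting hides.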

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product-location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from how the requirements for forecast error measurement were developed are important for anyone who wants to improve forecast accuracy.