Why is it Important to Understand Forecasting Error?

Executive Summary

  • Understanding and measuring forecast error is critical to improving forecast accuracy.
  • Forecast error is far less well understood than most people realize.

Introduction

It is vital to understand forecasting error because it provides the feedback necessary to eventually improve forecast accuracy. However, forecast error measurement is tricky, and most companies tend to assume that their current measurement method must be “right.” In reality, every forecast error measurement method has its own assumptions, strengths, and weaknesses. Furthermore, forecast error is often reported at aggregation levels above the product-location combination, and departments outside the supply chain often use levels of aggregation that do not relate to supply chain management.

Our References for This Article

If you want to see our references for this article and related Brightwork articles, visit this link.

Forecast Error Calculation

| Row | January | February | March | April | Average |
| --- | --- | --- | --- | --- | --- |
| **Example #1** | | | | | |
| Forecast | 20 | 20 | 30 | 40 | |
| Actual Demand | 50 | 0 | 50 | 0 | |
| Forecast Error (100% Error at Zero Demand) | .6 | 1 | .4 | 1 | .75 |
| Mean Demand | 25 | 25 | 25 | 25 | |
| Forecast Error (Average Demand for Periods with Zero) | .2 | .2 | .2 | .6 | .3 |
| Forecast Error by Subtraction | 30 | 20 | 20 | 40 | 110 (total) |
| **Example #2** | | | | | |
| Forecast | 150 | 100 | 120 | 40 | |
| Actual Demand | 300 | 0 | 50 | 0 | |
| Forecast Error (100% Error at Zero Demand) | .5 | 1 | 1.4 | 1 | .975 |
| Mean Demand | 87.50 | 87.50 | 87.50 | 87.50 | |
| Forecast Error (Average Demand for Periods with Zero) | .71 | .14 | .37 | .54 | .44 |
| Forecast Error by Subtraction | 150 | 100 | 70 | 40 | 360 (total) |

The table above shows a few examples of how forecast error can be measured. Even this basic calculation contains assumptions, such as how to handle periods with zero demand.
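To make these assumptions concrete, below is a minimal Python sketch of the three measures. The logic is reverse-engineered from the numbers in the table, so treat it as one plausible reading rather than a standard formula: the first measure pins the error at 100% whenever actual demand is zero, the second compares each period's forecast against the mean demand, and the third simply takes absolute differences and totals them.

```python
# A sketch of the three error measures from the table above. The
# interpretations are reverse-engineered from the table's numbers,
# not taken from any standard or library.

def error_100pct_at_zero(forecast, actual):
    # Absolute percentage error, pinned to 1.0 when actual demand is zero.
    return [1.0 if a == 0 else abs(a - f) / a for f, a in zip(forecast, actual)]

def error_vs_mean_demand(forecast, actual):
    # Each period's forecast compared against mean demand across all periods.
    mean = sum(actual) / len(actual)
    return [abs(f - mean) / mean for f in forecast]

def error_by_subtraction(forecast, actual):
    # Plain absolute differences, totaled rather than averaged.
    return [abs(a - f) for f, a in zip(forecast, actual)]

# Example #1 from the table.
forecast = [20, 20, 30, 40]
actual = [50, 0, 50, 0]

e1 = error_100pct_at_zero(forecast, actual)
e2 = error_vs_mean_demand(forecast, actual)
e3 = error_by_subtraction(forecast, actual)

print(e1, sum(e1) / len(e1))  # [0.6, 1.0, 0.4, 1.0] -> average 0.75
print(e2, sum(e2) / len(e2))  # [0.2, 0.2, 0.2, 0.6] -> average 0.3
print(e3, sum(e3))            # [30, 20, 20, 40] -> total 110
```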

The Forecasting Myth: Forecast Error Measurements are Straightforward

We consider this one of the significant myths of forecasting.

For a forecast error measurement to be understood, it must be documented in full.

The All-Important Context of the Forecast Error Measurement

Not only must the forecast error method be explained, but all the related factors, ranging from the forecast planning bucket to the level of aggregation reported, must also be understood. Any level of aggregation reduces the reported forecast error number, but it does not reduce the actual forecast error. If one moves from measuring forecast error at the monthly level to the quarterly level, the forecast error number will decline, but nothing will have happened to the actual forecast error. Groups that want to make their forecast accuracy look better than it is frequently take advantage of this trick. One demand planning department I worked with had a report that compared its total forecasted unit volume against total actual unit volume. By that measure, if the company forecasted 100 million units for the year but sold 120 million units across its entire SKU catalog, its forecast error for the year was 20%.

This measurement means nothing, because the company does not sell an undifferentiated total volume; it sells individual items at individual locations.
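A small worked example with hypothetical numbers shows why this trick works: monthly over- and under-forecasts cancel out when the same figures are summed into a quarter, so the reported number falls even though nothing about the forecast has changed.

```python
# Hypothetical illustration: the same forecasts and actuals measured
# monthly versus aggregated to the quarter.

forecast = [100, 100, 100]  # three months of a quarter
actual = [130, 80, 90]      # actual demand in those months

# Monthly MAPE: average of per-month absolute percentage errors.
monthly_mape = sum(abs(a - f) / a for f, a in zip(forecast, actual)) / len(actual)

# Quarterly error: identical numbers, first summed into the quarter.
quarterly_error = abs(sum(actual) - sum(forecast)) / sum(actual)

print(f"monthly MAPE:    {monthly_mape:.1%}")     # ~19.7%
print(f"quarterly error: {quarterly_error:.1%}")  # 0.0% -- the misses cancel
```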

Getting Forecast Error Through to the Nonmathematical Individuals in a Company

I have spent considerable time interacting with individuals in sales and marketing when working on forecasting projects and have found it extremely difficult to explain forecast error to them, as they work in non-quantitative realms. Secondly, forecast error is usually not something they worry about. These groups are significant drivers of error in the supply chain forecast, yet they often cannot be told they have a high error, and their eyes tend to glaze over when reviewing forecast error reports.

All of this work is required just to get a person to a basic understanding of forecast error. It is curious how many times I have had exchanges like the following.

What Is the Forecast Error Again?

I have asked various people at my client companies what their forecast error/accuracy is, and they have often told me something like the following.

“Oh, it’s around 75%.”

I then ask them.

“Is that monthly or weekly, and is it weighted?”

And the response I have often gotten is:

“I am not sure about that, but I know it’s around 75%.”

If you don’t know the characteristics of the forecast error measurement or its overall context, a number like 75% accuracy (25% error) does not mean anything. That the person does not realize this tells you how little they know about forecast error measurement.

How About Weighting that Forecast Error?

When reporting average error across many items or SKUs, it does not make sense to do so without weighting the error; otherwise, the forecast error on an item with yearly sales of 20 units counts the same as that of the highest-volume SKU with yearly sales of 100,000 units.

One of the major problems of forecast error measurement is that forecasting systems do not offer a weighted forecast error. They can only tell users the forecast error of a specific product-location combination.
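As an illustration, here is a minimal sketch of a volume-weighted error calculation in the spirit of WMAPE; the SKU names and volumes are hypothetical.

```python
# Volume-weighted versus unweighted error across SKUs. The SKUs and
# numbers below are hypothetical.

skus = {
    # sku: (annual forecast, annual actual)
    "low_volume_item": (30, 20),            # sells about 20 units a year
    "high_volume_item": (90_000, 100_000),  # sells about 100,000 units a year
}

# Unweighted: every SKU counts the same, regardless of volume.
unweighted = sum(abs(a - f) / a for f, a in skus.values()) / len(skus)

# Weighted: total absolute error over total actual volume, so high-volume
# SKUs dominate the result, as they should.
weighted = sum(abs(a - f) for f, a in skus.values()) / sum(a for _, a in skus.values())

print(f"unweighted error: {unweighted:.1%}")  # 30.0% -- distorted by the tiny SKU
print(f"weighted error:   {weighted:.1%}")    # ~10.0% -- reflects real volume
```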

This is very typical of forecasting applications: multiple forecast error measurements are available, but when an aggregation is selected, as in this screenshot, the forecast error measurement is not functional. One must select a specific SKU or product-location combination to obtain an accurate measurement. If one wants a forecast error measurement for a segment of the database, it must be built as a custom report.

Why It is Important to Understand Forecasting Error: The Problem of Stopping Too Early on Forecast Error Explanation

A significant issue is assuming that groups that do not calculate the forecast error themselves know what the error means. We argue that a forecast error that is not comparative has little utility, as we cover in the article Forecast Error Myth #5: Non-Comparative Forecast Error Measurement is Helpful.

This is not a view shared by most people who work in forecasting, even experienced ones. Yet we came to this conclusion independently after working in forecasting for over a decade and being exposed to how forecast error measurement is performed at many companies.

Why Report on Forecast Accuracy?

Our conclusion is that the only valuable forecast error measurement is one that naturally tells the people working in forecasting where to put their effort to improve forecast accuracy.

A forecast error without context does not drive the people responsible for forecasting to improve the accuracy.

Scenario #1: No Forecast Error Measurement

The worst situation is not to measure forecast error at all.

Scenario #2: Forecast Error Measurement Without Action

A close second is measuring the forecast error but not knowing where to focus once it is determined: one continues to review forecast error but does not use the measurement to effect change that improves forecast accuracy.

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecasting error, but the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell the companies that forecast how to make improvements. If they did, forecast improvement would be far more straightforward, and companies would have a far easier time with forecast error measurement.

What the Forecast Error Calculation and System Should Be Able to Do

One would be able to, for example (see the sketch after this list):

  1. Measure forecast error
  2. Compare forecast error across all the forecasts at the company
  3. Sort the product-location combinations based on which gained or lost forecast accuracy relative to other forecasts
  4. Measure any forecast against the baseline statistical forecast
  5. Weight the forecast error, so progress across the overall product database can be tracked
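
Below is a minimal sketch of what such a comparative measurement could look like. It uses a naive last-period forecast as a stand-in for the baseline statistical forecast, weights the error by volume, and sorts the product-location combinations by how much worse the candidate forecast is than the baseline; all names and numbers are hypothetical.

```python
# Comparative, weighted, sorted forecast error across product-location
# combinations. The naive baseline and all data below are hypothetical
# stand-ins for a real system's baseline statistical forecast.

def wmape(forecasts, actuals):
    # Volume-weighted error: total absolute error over total actual demand.
    return sum(abs(a - f) for f, a in zip(forecasts, actuals)) / sum(actuals)

# (product, location): (candidate forecasts, actual demand) over four periods.
data = {
    ("prod_a", "dc_1"): ([20, 20, 30, 40], [50, 0, 50, 0]),
    ("prod_b", "dc_1"): ([150, 100, 120, 40], [300, 0, 50, 0]),
    ("prod_a", "dc_2"): ([80, 90, 100, 110], [85, 88, 105, 100]),
}

results = []
for key, (fcst, actual) in data.items():
    # Naive baseline: each period's forecast is the prior period's actual,
    # seeded with the first actual.
    baseline = [actual[0]] + actual[:-1]
    results.append((key, wmape(fcst, actual), wmape(baseline, actual)))

# Worst performers relative to the baseline come first -- these are the
# product-location combinations where forecasting effort should go.
results.sort(key=lambda r: r[1] - r[2], reverse=True)
for key, cand, base in results:
    print(key, f"candidate {cand:.0%} vs baseline {base:.0%}")
```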

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product-location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from how we developed its requirements for forecast error measurement are important for anyone who wants to improve forecast accuracy.