Forecast Error

What is the Definition of Forecast Accuracy?

Executive Summary

  • Forecast accuracy has a simple definition but many important details.
  • We cover the definition of forecast accuracy and its implications.

Video Introduction: Definition of Forecast Accuracy

Text Introduction (Skip if You Watched the Video)

Understanding and measuring forecast error is critical to improving forecast accuracy. Forecast error is far less well understood than most people who work with it assume. Comprehending forecast error is crucial because it provides the feedback needed to eventually improve forecast accuracy. Forecast error is deceptively easy to understand, and the vast majority of people who work with forecast errors can be caught off guard by details of the error measure they rely upon. The saying “the devil is in the details” certainly applies to forecast accuracy. This problem with understanding forecast error restricts the ability to improve the accuracy of the forecast.

Our References for This Article

If you want to see our references for this article and related Brightwork articles, visit this link.

What is the Definition of Forecast Accuracy?

Forecast accuracy is, at a high level, the difference between the forecast and what actually happened. However, let us elaborate on this general definition.

  1. Forecast accuracy is the degree of difference between the forecasted values and the actual values within the agreed-upon forecasting bucket (so weekly, monthly, quarterly, etc.).
  2. Forecast accuracy is never known until the event has passed. This is why all forecast accuracy measurement is historical.
  3. Future forecast accuracy can only be described in terms of accuracy probability. This accuracy probability is based upon historical accuracy.

Weighted Forecast Accuracy

How to weight forecast accuracy.

|           | Forecast | Actual | 6 Month Average Forecast | Unweighted Accuracy | Weighted Accuracy |
|-----------|----------|--------|--------------------------|---------------------|-------------------|
| Product B | 20       | 5      | 20                       | 25%                 | .045              |
| Product A | 200      | 150    | 200                      | 75%                 | 1.36              |
| Average   |          |        |                          | 50%                 | 70%               |

Here is a sample forecast error/accuracy measurement. Notice that the measurement bucket is declared, which in this case is monthly. The measurement bucket is nearly always the planning bucket.
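
To make the arithmetic behind the table explicit, here is a minimal Python sketch. It assumes accuracy is the actual divided by the forecast, and that each product's weight is its 6-month average forecast relative to the mean forecast across products; those assumptions are inferred from the table and reproduce its figures, but other weighting schemes are possible.

```python
# A minimal sketch that reproduces the unweighted and weighted accuracy figures
# from the table above. Accuracy is taken as actual / forecast, and each product's
# weight is its 6-month average forecast relative to the mean forecast across
# products (an assumption inferred from the table's numbers).

products = {
    # product: (forecast, actual, six_month_average_forecast)
    "Product B": (20, 5, 20),
    "Product A": (200, 150, 200),
}

mean_forecast = sum(avg for _, _, avg in products.values()) / len(products)

unweighted = {}
weighted = {}
for name, (forecast, actual, avg_forecast) in products.items():
    accuracy = actual / forecast              # e.g. 5 / 20 = 25%
    weight = avg_forecast / mean_forecast     # e.g. 20 / 110 = 0.18
    unweighted[name] = accuracy
    weighted[name] = accuracy * weight        # e.g. 0.25 * 0.18 = 0.045

print({k: round(v, 3) for k, v in unweighted.items()})        # {'Product B': 0.25, 'Product A': 0.75}
print({k: round(v, 3) for k, v in weighted.items()})          # {'Product B': 0.045, 'Product A': 1.364}
print(round(sum(unweighted.values()) / len(unweighted), 2))   # 0.5  -> 50% unweighted average
print(round(sum(weighted.values()) / len(weighted), 2))       # 0.7  -> 70% weighted average
```

Weighting by forecast volume keeps a small-volume item like Product B from dragging the overall accuracy down as far as the simple average does, which is why the weighted average (70%) sits well above the unweighted average (50%).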

Here is an example of an analysis (best fit) performed as a test outside of the company’s planning bucket. The planning bucket at this company was monthly. However, I wanted to test what would happen to model selection if the planning bucket was switched to quarterly. Changing the planning bucket is valuable for analysis. However, normally the error bucket is the planning bucket. And this gets into another question. 

What is the Correct Planning Bucket for Error Measurement?

The correct error measurement bucket or interval is the lead time of the product location combination. The error should be the “forecast over lead time.” This is because the forecast is used to commit to stocking a product at a particular location over the product’s lead time at that location. This topic is rarely discussed, as it is on the ethereal side, but it is true. However, it is too complicated to implement in most cases because a product database has a wide variety of lead times. Therefore, forecast error over lead time is usually not discussed, and the error measurement interval is instead set to an arbitrary unified bucket, which is normally a month.
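
As an illustration of the idea, here is a hedged Python sketch that sums the monthly forecast and actuals over each product-location's lead time before computing the error. The lead times, demand series, and the MAPE-style error metric are illustrative assumptions, not a prescription from this article.

```python
# A minimal sketch of "forecast error over lead time": for each product-location
# combination, the monthly forecast and actuals are summed over that combination's
# lead time before the error is computed. All numbers are made up for illustration.

monthly_forecast = {
    ("Product A", "DC1"): [100, 110, 90, 105, 95, 100],
    ("Product B", "DC1"): [20, 25, 15, 20, 20, 25],
}
monthly_actuals = {
    ("Product A", "DC1"): [90, 120, 85, 100, 90, 110],
    ("Product B", "DC1"): [5, 30, 10, 25, 15, 20],
}
lead_time_months = {
    ("Product A", "DC1"): 2,   # the forecast commits stock for 2 months at this location
    ("Product B", "DC1"): 3,
}

def error_over_lead_time(forecast, actual, lead_time):
    """Average absolute percentage error per lead-time-sized bucket."""
    errors = []
    for start in range(0, len(forecast) - lead_time + 1, lead_time):
        f = sum(forecast[start:start + lead_time])
        a = sum(actual[start:start + lead_time])
        if a:
            errors.append(abs(f - a) / a)
    return sum(errors) / len(errors) if errors else None

for key, lt in lead_time_months.items():
    err = error_over_lead_time(monthly_forecast[key], monthly_actuals[key], lt)
    print(key, "lead time:", lt, "error over lead time:", round(err, 3))
```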

When one searches Google for “forecast error over lead time,” it is curious how few results come back related to the question. This topic is off the radar of most supply chain forecasting departments. However, it is important, as the safety stock derived from the forecast (hypothetically, as dynamic safety stock is not very widely implemented) is supposed to cover the lead time of that product location combination.

Let us move to another area of forecast accuracy measurement.

Moving Past the Easy Part of Forecast Error Measurement

  1. Defining forecast accuracy is the easy part.
  2. The difficult part is getting into details of forecast accuracy.

Forecast accuracy has dimensions. Any forecast error or accuracy must have a complete listing of its dimensions, or else the forecast error or accuracy does not have any meaning. It cannot be compared to other forecast errors or accuracy calculations.
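
As a rough illustration, a forecast error number could be stored together with its dimensions, for example as a small record like the Python sketch below. The specific fields are assumptions about which dimensions matter, not an exhaustive or prescribed list.

```python
# A minimal sketch of recording the dimensions alongside a forecast error number,
# so the measurement can be interpreted and compared with other measurements.
# The fields chosen here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class ForecastErrorRecord:
    error_metric: str        # e.g. "MAPE", "weighted accuracy"
    error_value: float
    bucket: str              # e.g. "monthly", "quarterly", "lead time"
    lag: int                 # how many buckets before the actual the forecast was frozen
    aggregation_level: str   # e.g. "product-location", "product", "total"
    forecast_source: str     # e.g. "baseline statistical", "consensus"

record = ForecastErrorRecord(
    error_metric="MAPE",
    error_value=0.25,
    bucket="monthly",
    lag=1,
    aggregation_level="product-location",
    forecast_source="baseline statistical",
)
print(record)
```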

This is a topic we cover in the article How is Forecast Error Measured in All of the Dimensions in Reality?

Why the Complexity in Forecast Accuracy?

The reason for the complexity is that multiple dimensions of forecast accuracy must be defined before a reported forecast accuracy can be understood. When companies discuss forecast accuracy, there is a strong tendency to select a forecast error measurement and then move on to observing the measurement output. However, the dimensional analysis will, in most cases, be underexplored.

Even beyond the topics raised in the article just referenced, there are also important distinctions to be understood regarding what “is” the forecast and what “is” the actual.

Let us address the definition of both of these values used to calculate forecast error or accuracy.

Distinction #1: Defining the Forecast

Was this the forecast before lead time, or were changes made within lead time using something like demand sensing? For a forecast accuracy measurement to be useful, the forecast must not be altered after the time to respond to it has passed. Demand sensing alters the forecast within lead time, which is a type of forecast accuracy cheating. I cover this in detail in the article How to Best Understand Demand Sensing and Demand Shaping.
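
The following Python sketch illustrates the point with made-up numbers and an assumed two-month lead time: measuring against the forecast frozen before lead time gives an honest accuracy, while measuring against a forecast revised inside lead time inflates it.

```python
# A minimal sketch comparing a forecast frozen before lead time against a forecast
# revised inside lead time (e.g. by demand sensing) for the same target month.
# The numbers and the two-month lead time are illustrative assumptions.

actual = 130
lead_time = 2  # months

# Forecasts for the same target month, keyed by how many months before the target
# they were generated (the "lag").
forecasts_by_lag = {
    3: 95,
    2: 100,   # last forecast made outside the lead time: the one that should be measured
    1: 125,   # revised inside lead time; measuring this flatters the forecast
}

def accuracy(forecast, actual):
    return 1 - abs(forecast - actual) / actual

frozen = forecasts_by_lag[lead_time]   # lag equals lead time: frozen before lead time
revised = forecasts_by_lag[1]

print(round(accuracy(frozen, actual), 3))   # ~0.769  honest accuracy
print(round(accuracy(revised, actual), 3))  # ~0.962  inflated by inside-lead-time changes
```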

The other part of the forecast accuracy measurement is the actuals. However, here also, we run into issues with obtaining an accurate set of values.

Distinction #2: Defining the Actuals

For the actuals, is that the number that was sold, or the number that could have been sold if capacity had been available?

However, authentic demand is normally not what is available in companies’ databases. Instead, companies typically have a record of the constrained demand. Authentic demand is what was demanded, which is not the same as what was sold. What was sold was constrained by what the company was able to provide. If a company stocks out of an item, the unmet demand is not recorded as a sales order and is therefore lost to the demand history. When the statistical forecast is generated, it uses the constrained demand history, not the authentic demand history.
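
Here is a minimal Python sketch of the issue, with made-up numbers. The stockout flag and the simple patch of substituting an average of non-stockout periods are assumptions for illustration; they are one rough way to approximate authentic demand, not a method prescribed in this article.

```python
# A minimal sketch of why constrained demand history understates authentic demand.
# When a stockout occurs, recorded sales are capped by available supply, and the
# lost portion never appears in the order history. As a rough, assumed patch,
# stockout-period sales are replaced with an average of non-stockout periods
# before the history is used for statistical forecasting.

history = [
    # (period, units_sold, stocked_out)
    ("2023-01", 100, False),
    ("2023-02", 40,  True),   # sold out mid-month; authentic demand was higher
    ("2023-03", 95,  False),
    ("2023-04", 105, False),
]

unconstrained_periods = [units for _, units, stockout in history if not stockout]
estimate = sum(unconstrained_periods) / len(unconstrained_periods)

adjusted_history = [
    (period, estimate if stockout else units, stockout)
    for period, units, stockout in history
]
print(adjusted_history)
# 2023-02 is raised from 40 to 100, a rough proxy for the authentic demand
```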

We cover this topic in the article How to Best Understand Measuring the Unconstrained Forecast.

Conclusion

Forecast accuracy has a simple high-level definition. However, the more one dives into the specifics of forecast accuracy, the more explanation is required for what is actually being measured. All companies should have their forecast accuracy assumptions and settings documented so that those who work with forecast accuracy know what is being measured. The context of the forecast error (its dimensions, assumptions, and so on) is what gives the numbers that come out of the forecast error measurement their meaning.

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell companies that forecast how to make improvements. If the standard forecast error calculations did, forecast improvement would be far more straightforward, and companies would have a far easier time performing forecast error measurement.

What the Forecast Error Calculation and System Should Be Able to Do

One would be able to, for example (see the sketch after this list):

  1. Measure forecast error
  2. Compare forecast error across all of the forecasts at the company
  3. Sort the product location combinations based on which product locations lost or gained forecast accuracy versus other forecasts
  4. Measure any forecast against the baseline statistical forecast
  5. Weight the forecast error (so progress for the overall product database can be tracked)
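
As a rough sketch of what these requirements imply, the following Python example measures error per product-location, compares a candidate forecast against the baseline statistical forecast, sorts the combinations by gained or lost accuracy, and produces a volume-weighted accuracy. The data and the accuracy metric are illustrative assumptions, not how the Brightwork Explorer is implemented.

```python
# A minimal sketch of requirements 1-5 from the list above, using made-up data.

actuals = {
    ("Product A", "DC1"): 150,
    ("Product B", "DC1"): 5,
    ("Product C", "DC2"): 60,
}
baseline_forecast = {
    ("Product A", "DC1"): 200,
    ("Product B", "DC1"): 20,
    ("Product C", "DC2"): 50,
}
candidate_forecast = {
    ("Product A", "DC1"): 170,
    ("Product B", "DC1"): 25,
    ("Product C", "DC2"): 55,
}

def accuracy(forecast, actual):
    # Accuracy clipped at zero so very poor forecasts do not go negative.
    return max(0.0, 1 - abs(forecast - actual) / actual)

rows = []
for key, actual in actuals.items():
    base_acc = accuracy(baseline_forecast[key], actual)   # 1-2. measure and compare
    cand_acc = accuracy(candidate_forecast[key], actual)   # 4. measure against baseline
    rows.append((key, base_acc, cand_acc, cand_acc - base_acc))

# 3. Sort product-locations by which lost or gained accuracy versus the baseline.
rows.sort(key=lambda r: r[3])
for key, base_acc, cand_acc, change in rows:
    print(key, round(base_acc, 2), round(cand_acc, 2), round(change, 2))

# 5. Volume-weighted accuracy, so progress across the whole product database is tracked.
total_volume = sum(actuals.values())
weighted_candidate = sum(
    accuracy(candidate_forecast[k], a) * a / total_volume for k, a in actuals.items()
)
print(round(weighted_candidate, 3))
```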

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurements at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from how we developed the requirements for forecast error measurement are important for anyone who wants to improve forecast accuracy.