What is an Acceptable or a Good Forecast Accuracy?

Executive Summary

  • A frequently asked question is: what is a good or an acceptable forecast accuracy?
  • Is there a real answer to this question?

Introduction

What is a good forecast accuracy is a common question. Companies typically want to know whether their forecast accuracy is where it should be. However, this question opens up a second question that is rarely asked: whether the company is able to perform forecast error measurement in the first place. Our client consulting experience tells us that this ability is a significant limitation in forecasting at companies.

The Problem With the Question of What is a Good Forecast Accuracy

The following are the reasons why the idea of “what is a good forecast accuracy” or “what is an acceptable forecast accuracy” is problematic.

Issue #1: The Question of Volume

Forecast accuracy is, in large part, determined by the demand pattern of the item being forecasted. Some items are easy to forecast, and some are difficult. For example, a company with many intermittent-demand items cannot match the forecast accuracy of a company whose database contains a large percentage of high-volume items.
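
To illustrate, the following minimal sketch (with invented numbers, not client data) applies the same simple method, a flat forecast at the series mean, to a smooth, high-volume item and an intermittent item. The intermittent item's error is many times higher even though the method is identical.

```python
# A minimal sketch with invented numbers: the same simple forecasting
# method (a flat forecast at the series mean) produces very different
# errors on a smooth item than on an intermittent-demand item.

def mape(actuals, forecasts):
    # Mean absolute percentage error, computed only over periods with
    # nonzero demand, since MAPE is undefined when the actual is zero.
    pairs = [(a, f) for a, f in zip(actuals, forecasts) if a != 0]
    return 100 * sum(abs(a - f) / a for a, f in pairs) / len(pairs)

smooth = [100, 105, 98, 102, 101, 99, 103, 100]  # high-volume item
intermittent = [0, 0, 12, 0, 0, 0, 30, 0]        # sporadic demand

for name, series in [("smooth", smooth), ("intermittent", intermittent)]:
    flat_forecast = [sum(series) / len(series)] * len(series)
    print(f"{name:12s} MAPE: {mape(series, flat_forecast):6.1f}%")
```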

Issue #2: Forecasting Competence

Most companies tend to put minimal resources into forecasting. Even the most prominent companies have quite small demand planning departments. Therefore, forecasting in a way that matches or exceeds the average forecast accuracy of other companies is not much of an accomplishment.

Issue #3: Forecast Error Measurement Differences

There is an enormous amount of variation in how forecasts are generated and how forecast error is measured. A forecast can be weekly, monthly, or even quarterly in its planning bucket. The planning bucket typically determines the measurement bucket, and the same forecasting method will show very different accuracy depending on which planning bucket is used.
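
The sketch below (again with invented numbers) shows the bucket effect: the same flat forecast scored against the same demand yields a far lower MAPE in monthly buckets than in weekly buckets, because over- and under-forecasts cancel out when aggregated.

```python
# A minimal sketch with invented numbers: one flat forecast measured in
# weekly buckets and then re-bucketed into monthly buckets. The larger
# bucket shows a much lower error for the identical forecast.

def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

weekly_actuals = [80, 120, 95, 115, 130, 70, 110, 90]
weekly_forecast = [100] * 8  # a flat forecast of 100 units per week

print(f"weekly-bucket MAPE:  {mape(weekly_actuals, weekly_forecast):5.1f}%")

# Re-bucket the same actuals and forecast into two 4-week "months".
monthly_actuals = [sum(weekly_actuals[i:i + 4]) for i in (0, 4)]
monthly_forecast = [sum(weekly_forecast[i:i + 4]) for i in (0, 4)]
print(f"monthly-bucket MAPE: {mape(monthly_actuals, monthly_forecast):5.1f}%")
```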

There are several ways of measuring forecast error, the common ones being MAD, MAPE, and RMSE, and the less common ones being sMAPE and MASE, among many others. None of these forecast error measurements can be compared to one another.
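
The following sketch makes the incomparability concrete: MAD, MAPE, and RMSE computed on the same invented actuals and forecast return three different numbers, and two of them are not even in the same units as the third.

```python
# A minimal sketch with invented numbers: three common error measures
# applied to the same actuals and forecast. MAD and RMSE are in demand
# units while MAPE is a percentage, so the numbers cannot be compared.

import math

actuals = [100, 120, 80, 140, 60]
forecast = [110, 100, 90, 120, 80]

errors = [a - f for a, f in zip(actuals, forecast)]

mad = sum(abs(e) for e in errors) / len(errors)
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actuals)) / len(errors)
rmse = math.sqrt(sum(e * e for e in errors) / len(errors))

print(f"MAD:  {mad:5.1f} (units)")
print(f"MAPE: {mape:5.1f} (percent)")
print(f"RMSE: {rmse:5.1f} (units; penalizes large misses more heavily)")
```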

For a forecast error to be comparable, every single dimension of the forecast error must be the same: the planning bucket, the forecast error measurement, the forecast aggregation, and all the other factors. The idea that a forecast error has any meaning without a complete understanding of the dimensions is a significant myth of forecast error measurement.

The Comparative Error of Different Approaches

There is a way to determine how much forecast accuracy can be improved from its initial level, but it does not involve setting an “acceptable forecast level.” Instead, it means building up the company’s ability to measure comparative forecast error: if approach A is used instead of approach B, what is the difference in forecast error? To understand comparative forecast error, see the article Forecast Error Myth #5: Non Comparative Forecast Error Measurement is Helpful.
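
As a sketch of what comparative measurement looks like, the following compares two illustrative approaches, a naive forecast and a 3-period moving average, over the same periods, the same bucket, and the same error measure, so the only thing that differs is the approach itself. The data and approaches are invented for illustration.

```python
# A minimal sketch of comparative forecast error measurement: two
# approaches scored under identical measurement dimensions.

actuals = [100, 110, 105, 120, 115, 125, 130, 128]
window = 3
test_actuals = actuals[window:]  # score both approaches on these periods

# Approach A: naive forecast, last period's actual carried forward.
forecast_a = actuals[window - 1:-1]

# Approach B: 3-period moving average of the prior periods.
forecast_b = [sum(actuals[i - window:i]) / window
              for i in range(window, len(actuals))]

def mape(acts, fcs):
    return 100 * sum(abs(a - f) / a for a, f in zip(acts, fcs)) / len(acts)

print(f"Approach A (naive):          MAPE {mape(test_actuals, forecast_a):5.1f}%")
print(f"Approach B (moving average): MAPE {mape(test_actuals, forecast_b):5.1f}%")
```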

A More Straightforward Approach

After observing ineffective and non-comparative forecast error measurement at so many companies, we developed the Brightwork Explorer, in part to have a purpose-built application that can measure any forecast and compare one forecast against another.

The application has a simple file format into which your company’s data can be entered, and the forecast error calculation is exceptionally straightforward. Any forecast can be measured against the baseline statistical forecast, and the product-location combinations can then be sorted to show which ones lost or gained forecast accuracy relative to other forecasts.

This is the fastest and most accurate way of measuring multiple forecasts that we have seen.
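
The sketch below illustrates the sorting logic just described. The records and field layout are invented for illustration and are not the Brightwork Explorer's actual file format or code.

```python
# A minimal sketch: for each product-location combination, compare a
# candidate forecast's error against the baseline statistical forecast's
# error, then sort by the change. All records are invented.

records = [
    # (product, location, actual, baseline_forecast, candidate_forecast)
    ("SKU-1", "DC-East", 100, 90, 98),
    ("SKU-1", "DC-West", 50, 65, 40),
    ("SKU-2", "DC-East", 200, 210, 260),
    ("SKU-2", "DC-West", 80, 70, 82),
]

def pct_error(actual, forecast):
    return 100 * abs(actual - forecast) / actual

results = []
for product, location, actual, baseline, candidate in records:
    change = pct_error(actual, candidate) - pct_error(actual, baseline)
    results.append((change, product, location))

# Negative change = the candidate forecast gained accuracy vs. the baseline.
for change, product, location in sorted(results):
    verdict = "gained" if change < 0 else "lost"
    print(f"{product} @ {location}: {verdict} {abs(change):.1f} points of accuracy")
```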

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In part, they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did this, improving the forecast would be far more straightforward, and companies would have a far easier time performing forecast error measurement.

What the Forecast Error Calculation and System Should Be Able to Do

With such a system, one would be able to, for example:

  1. Measure forecast error.
  2. Compare forecast error (for all the forecasts at the company).
  3. Sort the product-location combinations based on which product locations lost or gained forecast accuracy from other forecasts.
  4. Measure any forecast against the baseline statistical forecast.
  5. Weight the forecast error (so progress for the overall product database can be tracked), as sketched in the example after this list.
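
The following sketch illustrates item 5, a volume-weighted error, with invented volumes and errors: weighting each item's error by its demand volume rolls the whole product database up into one number whose progress can be tracked.

```python
# A minimal sketch of volume-weighted forecast error. Items, volumes,
# and MAPE values are invented for illustration.

items = [
    # (item, total_volume, mape_percent)
    ("A", 10_000, 12.0),  # high-volume item with a small error
    ("B", 500, 60.0),     # low-volume item with a large error
    ("C", 2_000, 25.0),
]

total_volume = sum(volume for _, volume, _ in items)
weighted_mape = sum(volume * err for _, volume, err in items) / total_volume
simple_mape = sum(err for _, _, err in items) / len(items)

print(f"Unweighted average MAPE: {simple_mape:5.1f}%")   # treats A and B equally
print(f"Volume-weighted MAPE:    {weighted_mape:5.1f}%")  # reflects volumes
```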

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product-location combination at a time. This is why we built the Brightwork Explorer as a purpose-built forecast error application to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from how we developed the requirements for forecast error measurement are important for anyone who wants to improve forecast accuracy.