Forecast Error

What is a Good Forecast Accuracy Percentage?

Executive Summary

  • A frequently asked question is: what is a good forecast accuracy percentage?
  • We answer whether there is a real answer to this question.

Introduction

Companies typically want to know whether their forecast accuracy percentage is good. We will cover this question and bring up important related topics.

See our references for this article and related articles at this link.

The Problem With the Idea of What is a Good Forecast Accuracy Percentage

Ideally, people who ask this question would like to receive an acceptable accuracy percentage value and then compare it against their current accuracy percentage.

The following are the reasons that the idea of an acceptable forecast accuracy percentage is problematic.

Issue #1: The Question of Volume

Forecast accuracy is, in large part, determined by the demand pattern of the item being forecasted. Some items are easy to forecast, and some are difficult. For example, it is virtually impossible for a company with many intermittent-demand items to match the forecast accuracy of a company with a large percentage of high-volume items in its database.
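
The following is a minimal sketch, using made-up numbers, of why this is so. The same simple moving-average forecast is applied to a smooth high-volume series and to an intermittent series, and the error is scaled by mean demand so the two items can be compared.

```python
# Hypothetical data: a smooth high-volume item versus an intermittent item.
# The same naive moving-average forecast performs very differently on each.
import statistics

def moving_average_forecast(history, window=3):
    """Forecast each period as the mean of the prior `window` periods."""
    return [statistics.mean(history[i - window:i]) for i in range(window, len(history))]

def scaled_mean_abs_error(actuals, forecasts):
    """Mean absolute error divided by mean demand, so items are comparable."""
    errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    return statistics.mean(errors) / statistics.mean(actuals)

high_volume  = [100, 104, 98, 101, 103, 99, 102, 100, 97, 105]  # smooth demand
intermittent = [0, 0, 7, 0, 0, 0, 12, 0, 3, 0]                  # sporadic demand

for name, series in [("high volume", high_volume), ("intermittent", intermittent)]:
    forecasts = moving_average_forecast(series)
    actuals = series[3:]  # periods for which a forecast exists
    print(name, round(scaled_mean_abs_error(actuals, forecasts), 2))
```

On data like this, the scaled error of the intermittent item comes out many times larger than that of the high-volume item, even though the forecasting method is identical.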

Issue #2: Forecasting Competence

Most companies tend to put minimal resources into forecasting. Even the largest companies have quite small demand planning departments. Therefore, forecasting in a way that matches or exceeds the average forecast accuracy of other companies is not much of an accomplishment.

Issue #3: Forecast Accuracy Measurement Differences

There is enormous variation, both in how the forecast is generated and in how the forecast error is measured. A forecast can have a weekly, monthly, or even quarterly planning bucket, and the planning bucket typically determines the measurement bucket. The same forecasting method will show very different accuracy depending on which planning bucket is used.
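
A minimal sketch of the bucket effect, again with made-up numbers: the same naive "last period" forecast is scored at a weekly bucket and then again after aggregating the same data into 4-week buckets.

```python
# Hypothetical weekly demand. Week-to-week noise inflates the weekly error;
# aggregating the same data into 4-week buckets cancels much of that noise.

def mape(actuals, forecasts):
    """Mean absolute percentage error, expressed as a percentage."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def naive_forecast(series):
    """Forecast each period as the previous period's actual."""
    return series[:-1], series[1:]  # (forecasts, actuals)

weekly = [90, 130, 70, 110, 95, 125, 80, 100, 105, 120, 85, 110]

f_w, a_w = naive_forecast(weekly)
print("weekly MAPE: ", round(mape(a_w, f_w), 1))

monthly = [sum(weekly[i:i + 4]) for i in range(0, len(weekly), 4)]
f_m, a_m = naive_forecast(monthly)
print("monthly MAPE:", round(mape(a_m, f_m), 1))
```

The weekly MAPE here lands above 30%, while the monthly MAPE is a few percent, with no change at all to the underlying forecast method.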

There are several ways of measuring forecast error. The common ones are MAD, MAPE, and RMSE; less common ones include sMAPE and MASE, and there are many others. Values from one error measurement cannot be compared with values from another.
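
To make this concrete, the following sketch computes each of the measures named above on a single made-up series. Each metric lives on its own scale, so the resulting numbers cannot meaningfully be compared with one another.

```python
# One forecast, one actuals series, five different error numbers.
import math

actuals   = [100, 120, 90, 110, 105, 95]
forecasts = [105, 110, 100, 100, 110, 100]

errors = [a - f for a, f in zip(actuals, forecasts)]
n = len(errors)

mad  = sum(abs(e) for e in errors) / n                                # units of demand
mape = 100 * sum(abs(e) / a for e, a in zip(errors, actuals)) / n     # percent
rmse = math.sqrt(sum(e * e for e in errors) / n)                      # units; penalizes large misses
smape = 100 * sum(2 * abs(e) / (abs(a) + abs(f))
                  for e, a, f in zip(errors, actuals, forecasts)) / n # percent, symmetric
# MASE scales the error by the in-sample error of a naive one-step forecast.
naive_mad = sum(abs(actuals[i] - actuals[i - 1]) for i in range(1, n)) / (n - 1)
mase = mad / naive_mad                                                # unitless ratio

print(f"MAD={mad:.1f}  MAPE={mape:.1f}%  RMSE={rmse:.1f}  "
      f"sMAPE={smape:.1f}%  MASE={mase:.2f}")
```

The same forecast scores roughly 7.5 in MAD, 7.3% in MAPE, 7.9 in RMSE, and 0.44 in MASE: one forecast, four unrelated-looking numbers.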

For a forecast error to be comparable, every dimension of the measurement must be the same: the planning bucket, the error metric, the forecast aggregation, and all the other factors. The idea that a forecast error number has any meaning without a complete understanding of all of these dimensions is a significant myth of forecast error measurement.

For these reasons and more, it is not feasible to name a specific forecast accuracy percentage that qualifies as good.

The Comparative Accuracy of Different Approaches

There is a way to determine how much a forecast’s accuracy can be improved from its initial level, but it is not by setting an “acceptable forecast level.” Instead, it means building up the company’s ability to measure comparative forecast error. That is, if approach A is used instead of approach B, what is the change in forecast error? To understand more about comparative forecast error, see the article Forecast Error Myth #5: Non Comparative Forecast Error Measurement is Helpful.
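
A minimal sketch of comparative error measurement, with hypothetical data and item names: two forecasting approaches are scored on the same items, with the same metric and the same bucket, and the comparison, not the absolute number, is what carries the meaning.

```python
# Score approach A and approach B on identical items with an identical metric,
# then report which approach wins per item.

def mad(actuals, forecasts):
    """Mean absolute deviation between actuals and forecasts."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

items = {
    "item_1": {"actual": [100, 120, 90], "approach_a": [110, 115, 95], "approach_b": [90, 130, 80]},
    "item_2": {"actual": [40, 55, 60],   "approach_a": [50, 50, 50],   "approach_b": [42, 53, 58]},
}

for name, data in items.items():
    error_a = mad(data["actual"], data["approach_a"])
    error_b = mad(data["actual"], data["approach_b"])
    better = "A" if error_a < error_b else "B"
    print(f"{name}: A MAD={error_a:.1f}, B MAD={error_b:.1f} -> approach {better} is better")
```

Because both approaches are measured under identical conditions, the difference between the two error values is meaningful, which a standalone accuracy percentage never is.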

A More Straightforward Approach

After observing ineffective and non-comparative forecast error measurements at so many companies, we developed the Brightwork Explorer, in part, as a purpose-built application that can measure any forecast and compare one forecast against another.

The application has a very straightforward file format into which your company’s data can be entered, and the forecast error calculation is exceptionally straightforward. Any forecast can be measured against the baseline statistical forecast, and the product-location combinations can then be sorted to show which product locations lost or gained forecast accuracy relative to other forecasts.
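
This is not the Brightwork Explorer itself, but the underlying comparison logic can be sketched in a few lines, with hypothetical data and identifiers: score a candidate forecast against the baseline statistical forecast for each product-location combination, then sort by the change in error.

```python
# For each product-location, compare a candidate forecast to the baseline
# forecast using the same metric, then sort by the change in error.

def mad(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Each row: product-location key, actuals, baseline forecast, candidate forecast.
rows = [
    ("SKU1@DC1", [100, 110, 95], [105, 105, 105], [101, 108, 96]),
    ("SKU2@DC1", [30, 45, 40],   [38, 38, 38],    [50, 20, 60]),
    ("SKU1@DC2", [70, 65, 80],   [72, 72, 72],    [70, 66, 78]),
]

results = []
for key, actual, baseline, candidate in rows:
    delta = mad(actual, candidate) - mad(actual, baseline)  # negative = candidate better
    results.append((key, delta))

# Product locations that gained the most accuracy come first.
for key, delta in sorted(results, key=lambda r: r[1]):
    print(f"{key}: change in MAD vs baseline = {delta:+.1f}")
```

Sorting on the change in error immediately surfaces which product locations benefit from the candidate forecast and which are made worse, which is the decision a demand planner actually needs to make.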

This is the fastest and most accurate way of measuring multiple forecasts that we have seen.