What is a Good Forecast Accuracy Percentage?

Executive Summary

  • A frequently asked question is: what is a good forecast accuracy percentage?
  • We answer whether there is a real answer to this question.

Introduction

Executives typically want to know whether their company's forecast accuracy percentage is good. The standard expectation is that their forecast accuracy can be benchmarked to determine whether it is good or bad. There is also frequently an expectation that readily available benchmarks for forecast accuracy exist. You will learn the problems with this way of thinking about forecast accuracy and error, and a more functional way to think about forecast error comparison.

Our References for This Article

If you want to see our references for this article and related Brightwork articles, visit this link.

The Problem With the Idea of What is a Good Forecast Accuracy Percentage

In the ideal scenario, people who ask us this question would like to receive an acceptable accuracy percentage value and then know what to do next. However, in following this path, they frequently miss out on the functional comparisons of forecast error that are available to them within their own demand history and forecast history. Before we get into that topic, let us first address some reasons why the idea of an acceptable forecast accuracy percentage is problematic.

Issue #1: The Question of Volume

Forecast accuracy is, in large part, determined by the demand pattern of the item being forecasted. Some items are easy to forecast, and some are difficult.

For example, a company with many intermittent-demand items cannot match the forecast accuracy of a company whose database contains a large percentage of high-volume items. And companies frequently have a high percentage of their product database in low or intermittent demand. These companies could quickly reduce their forecast error by removing the low-demand items from the product database. Instead, they tend to add to the number of difficult-to-forecast products through marketing efforts.

Issue #2: Forecasting Competence

Most companies put minimal resources into forecasting, particularly considering how important forecasting is to the company's performance. Even the largest companies have relatively small demand planning departments. It is common for those in leadership positions that touch forecasting not to care about the topic. Several influential groups in companies are more often than not hostile to forecast error measurement.

Therefore, even if forecasting benchmarks were available (which they are not), forecasting in a way that matches or exceeds the average forecast accuracy of other companies is not much of an accomplishment.

Issue #3: Forecast Accuracy Measurement Differences

There is enormous variation in how forecasts are generated and how forecast error is measured. There are several ways of measuring forecast error: the common ones are MAD, MAPE, and RMSE, and the less common ones include sMAPE and MASE, among many others. None of these forecast error measurements can be compared to one another, as the sketch below illustrates.
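To make this concrete, here is a minimal sketch (assuming NumPy, with invented demand numbers) that computes MAD, MAPE, and RMSE for the same forecast. Each measurement lands on a different value and a different scale, so a "good" number under one measure means nothing under another.

```python
# Minimal sketch: the same forecast scored with three common error
# measures yields values on different scales. Data is invented.
import numpy as np

actuals = np.array([100.0, 120.0, 80.0, 150.0, 110.0])
forecast = np.array([110.0, 100.0, 90.0, 130.0, 120.0])
errors = actuals - forecast

mad = np.mean(np.abs(errors))                   # mean absolute deviation, in units
mape = np.mean(np.abs(errors) / actuals) * 100  # mean absolute % error
rmse = np.sqrt(np.mean(errors ** 2))            # root mean squared error, in units

print(f"MAD:  {mad:.1f} units")   # 14.0
print(f"MAPE: {mape:.1f} %")      # 12.3
print(f"RMSE: {rmse:.1f} units")  # 14.8
```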

The Measurement Bucket

A forecast can be weekly, monthly, or even quarterly in its planning bucket. The planning bucket typically determines the measurement bucket. The same forecasting method will produce very different accuracy depending on which planning bucket is used, as the example below illustrates.
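The following illustration (again assuming NumPy, with invented numbers) shows the bucket effect: the identical weekly forecast looks far worse when measured in weekly buckets than when the same four weeks are aggregated into one monthly bucket, because over- and under-forecasts partially cancel at the higher level.

```python
# Sketch of the measurement-bucket effect: weekly misses largely cancel
# when the same forecast is measured in a monthly bucket. Data is invented.
import numpy as np

weekly_actuals = np.array([25.0, 35.0, 20.0, 42.0])   # four weeks of one month
weekly_forecast = np.array([30.0, 30.0, 30.0, 30.0])  # flat weekly forecast

weekly_mape = np.mean(np.abs(weekly_actuals - weekly_forecast) / weekly_actuals) * 100
monthly_mape = (abs(weekly_actuals.sum() - weekly_forecast.sum())
                / weekly_actuals.sum() * 100)

print(f"Weekly MAPE:  {weekly_mape:.1f} %")   # ~28.2
print(f"Monthly MAPE: {monthly_mape:.1f} %")  # ~1.6
```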

The Reality of Comparison

For a benchmark to be performed, each company would need to measure the forecast error the same way (which they wouldn't), report their results honestly (which they don't), and consistently measure and report the error over the years (which they will never do). Companies that want this information want to access it, but they don't want to contribute to its development or maintenance.

For a forecast error to be comparable, every single dimension of the forecast error must be the same: the planning bucket, the forecast error measurement, the forecast aggregation, and all the other factors. The idea that a forecast error has any meaning without a complete understanding of the dimensions is a significant myth of forecast error measurement.

For these reasons and more, providing a specific forecast accuracy percentage is not feasible.

The Comparative Accuracy of Different Approaches

There is a way to determine how much forecast accuracy can be improved from its initial level, but it is not by setting an "acceptable forecast level." Instead, it means building up the company's ability to measure comparative forecast error: what is the forecast error if approach A is used instead of approach B? The first comparison that should generally be performed is between the current forecast (either the statistical or the final forecast) and a fundamental forecast like a simple moving average. This baseline is called the "naive forecast," and the comparison yields an estimate of the "forecast value add." You can read more about it in the article How to Understand the Naive Forecast Best? A sketch of the calculation follows.
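Below is a minimal sketch of a forecast value add calculation, assuming NumPy; the moving-average window and all demand figures are invented for illustration. The company forecast's error is compared against the error of a trailing moving average, and the difference is the forecast value add.

```python
# Sketch of forecast value add (FVA): compare the company forecast's
# error against a naive baseline (a trailing 4-period moving average).
import numpy as np

def mape(actuals, forecast):
    """Mean absolute percentage error, in percent."""
    return np.mean(np.abs(actuals - forecast) / actuals) * 100

history = np.array([90.0, 110.0, 95.0, 105.0, 100.0, 120.0, 98.0, 112.0])
actuals = history[4:]                                     # four most recent periods
company_forecast = np.array([100.0, 115.0, 95.0, 118.0])  # the forecast being judged

# Naive baseline: each period forecast with the mean of the prior 4 periods.
naive_forecast = np.array([history[i - 4:i].mean() for i in range(4, len(history))])

company_error = mape(actuals, company_forecast)  # ~3.1 %
naive_error = mape(actuals, naive_forecast)      # ~6.8 %

# Positive FVA means the company forecast beats the naive baseline.
fva = naive_error - company_error
print(f"Forecast value add: {fva:.1f} percentage points")  # ~3.7
```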

Calculating comparative forecast error is one of the most critical capabilities required to improve forecast accuracy. However, it is a capability that few companies possess. To understand comparative forecast error, see the article Forecast Error Myth #5: Non-Comparative Forecast Error Measurement is Helpful.

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did, forecast improvement would be far more straightforward, and companies would have a far easier time performing forecast error measurement.

What the Forecast Error Calculation and System Should Be Able to Do

One would be able, for example, to do the following (a hedged sketch follows the list):

  1. Measure forecast error.
  2. Compare forecast error across all the forecasts at the company.
  3. Sort the product-location combinations based on which product locations lost or gained forecast accuracy relative to other forecasts.
  4. Measure any forecast against the baseline statistical forecast.
  5. Weight the forecast error so that progress for the overall product database can be tracked.
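As a hypothetical sketch only (assuming pandas; the column names, the two forecasts, and the data are invented and do not represent the Brightwork Explorer's actual design), the following shows what comparative, weighted, sortable forecast error measurement from the list above could look like:

```python
# Hypothetical sketch: comparative, volume-weighted error across
# product-location combinations, sorted by accuracy gained or lost.
import pandas as pd

df = pd.DataFrame({
    "product_location": ["A-NYC", "A-CHI", "B-NYC", "B-CHI"],
    "actuals":    [1000.0, 50.0, 400.0, 20.0],
    "forecast_1": [900.0,  70.0, 380.0, 30.0],  # e.g. baseline statistical forecast
    "forecast_2": [980.0,  60.0, 350.0, 22.0],  # e.g. final (adjusted) forecast
})

# Absolute percentage error per product-location for each forecast.
for f in ["forecast_1", "forecast_2"]:
    df[f + "_ape"] = (df[f] - df["actuals"]).abs() / df["actuals"]

# Weight each combination by its demand volume so that progress for the
# overall product database can be tracked in a single figure.
weights = df["actuals"] / df["actuals"].sum()
for f in ["forecast_1", "forecast_2"]:
    print(f, "weighted MAPE:", round((df[f + "_ape"] * weights).sum() * 100, 1), "%")

# Sort product-locations by which lost or gained accuracy between forecasts.
df["error_change"] = df["forecast_2_ape"] - df["forecast_1_ape"]
print(df.sort_values("error_change")[["product_location", "error_change"]])
```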

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the requirements development approach we followed for forecast error measurement are important for anyone who wants to improve forecast accuracy.