- Forecast accuracy is measured by a number of widely accepted methods.
- After reviewing them, we will question whether these methods are effective.
“How is forecast accuracy measured?” is one of the most common questions in the field of forecasting.
Forecast accuracy is calculated by applying one of the standard forecast error calculation methods. These calculations are widely known, but not as well understood as generally thought. This article is written from the vantage point of having worked on forecasting projects for over a decade and a half, and it describes the reality of forecast accuracy measurement.
How is Forecast Accuracy Measured?
The most commonly used forecast accuracy measurements are listed below.
We have not conducted a poll to determine which forecast accuracy measurements are most common, but having worked in the field for some time, this is our rough estimate of the relative frequency of use, from most to least used.
- MAPE: Mean Absolute Percentage Error
- MAD: Mean Absolute Deviation
- MAE: Mean Absolute Error
- RMSE: Root Mean Square Error
- MASE: Mean Absolute Scaled Error
- sMAPE: Symmetrical MAPE
Something to notice is that usage drops off steeply as one goes down this list. A significant issue is forecast accuracy measurements that are not proportional. Measurements that are not proportional are unintuitive and, hence, difficult to understand.
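To make these measures concrete, here is a minimal, self-contained Python sketch of each. The series values are hypothetical, and `forecast_errors` is our illustrative name, not a standard library function:

```python
from math import sqrt

def forecast_errors(actual, forecast, naive_lag=1):
    """Common forecast error measures for a single product location series.
    Note: MAPE is undefined when any actual value is zero."""
    n = len(actual)
    abs_err = [abs(a - f) for a, f in zip(actual, forecast)]
    mae = sum(abs_err) / n  # MAE; equal to MAD when deviations are taken from actuals
    mape = 100 * sum(e / abs(a) for e, a in zip(abs_err, actual)) / n
    rmse = sqrt(sum(e ** 2 for e in abs_err) / n)
    smape = 100 * sum(2 * e / (abs(a) + abs(f))
                      for e, a, f in zip(abs_err, actual, forecast)) / n
    # MASE scales MAE by the MAE of a naive (lag-1) forecast of the actuals,
    # which is what makes it comparable across series of different scales
    naive_errs = [abs(actual[i] - actual[i - naive_lag]) for i in range(naive_lag, n)]
    mase = mae / (sum(naive_errs) / len(naive_errs))
    return {"MAE": mae, "MAPE": mape, "RMSE": rmse, "sMAPE": smape, "MASE": mase}

print(forecast_errors([100, 120, 90, 110], [110, 100, 95, 100])["MAE"])  # → 11.25
```

Note that MAPE and sMAPE are proportional (percentage) measures, while MAE/MAD and RMSE are expressed in the units of the series, which is part of why the unit-based measures are harder to compare across items.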
When forecast accuracy is discussed, in most cases the discussion does not move beyond the accuracy of an individual item, generally at a location, what we call the product location combination.
In many cases, forecasts are created at a much more aggregated level, as with sales forecasting. However, when the forecast is created at the product location level, a significant element of forecast accuracy measurement is how the accuracy is reported outside of the individuals actually performing the forecasting. This gets into the topics of both forecast accuracy reporting and aggregation.
The topic of forecast accuracy tends to focus overwhelmingly on the forecast error measurement when, in fact, this is only one dimension of forecast accuracy measurement, as we cover in the article How is Forecast Error Measured in All of the Dimensions in Reality?
Forecast Accuracy Weighting
One of the major issues with forecast accuracy measurement is that the forecast accuracy is not weighted, which means the aggregated forecast accuracy is inaccurate. Most people presented with aggregated forecast accuracy are unaware of this inaccuracy caused by unweighted forecast errors.
We have identified this as a major myth in the article Forecast Error Myth #2: An Unweighted Forecast Error Measurement Makes Sense.
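As an illustration of the distortion, here is a minimal sketch with made-up per-item MAPEs and sales volumes. An unweighted average lets one low-volume, wildly inaccurate item dominate the reported number:

```python
# Hypothetical per-product-location MAPEs and sales volumes
items = [
    {"sku": "A", "mape": 5.0,  "volume": 10000},  # high-volume, accurate
    {"sku": "B", "mape": 80.0, "volume": 50},     # low-volume, very inaccurate
]

# Unweighted: each item counts equally, regardless of volume
unweighted = sum(i["mape"] for i in items) / len(items)          # 42.5

# Volume-weighted: high-volume items dominate, as they should
total_volume = sum(i["volume"] for i in items)
weighted = sum(i["mape"] * i["volume"] for i in items) / total_volume  # ≈ 5.37

print(unweighted, weighted)
```

The unweighted figure of 42.5% suggests a severe accuracy problem, while the volume-weighted figure of roughly 5.4% reflects what the business actually experiences.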
Other types of aggregation also can easily lead to inaccuracy.
Forecast Accuracy by Grouping
Companies often think that forecasts do not have to be weighted if they are grouped, which is not true. Grouping only rolls the inaccuracy up to a more aggregated level.
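A small sketch with hypothetical numbers shows how grouping merely relocates the problem: each group's unweighted average still carries the distortion from its low-volume items, so the group-level figures remain misleading.

```python
# Hypothetical (MAPE, volume) pairs per group of product locations
groups = {
    "Region-East": [(5.0, 10000), (90.0, 20)],
    "Region-West": [(8.0, 8000), (70.0, 10)],
}

summary = {}
for name, items in groups.items():
    unweighted = sum(m for m, _ in items) / len(items)
    total_volume = sum(v for _, v in items)
    weighted = sum(m * v for m, v in items) / total_volume
    summary[name] = (round(unweighted, 2), round(weighted, 2))

# Unweighted group averages still overstate the error by roughly 5x to 9x
print(summary)
```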
Reporting Out Forecast Error from the Demand Planning Department
The forecast error measurement in forecasting applications is primarily for the user or planner. As we cover in the article Forecast Error Myth #1: One Can Rely on The Forecast Error Measurement in Forecasting Applications, this forecast error is only calculated for the product location combination.
Companies will then create a custom report that pushes out the forecast error, or planners will export the error to Excel, perform the calculation in a spreadsheet, and provide it to their Director of Forecasting/Demand Planning, who then provides it to the broader company. The forecast error report is often presented with great anticipation, as if it were the “horse’s mouth” on forecast error for the period.
It isn’t. It does not direct the company where to make forecast accuracy improvements.
A Better Approach
Observing ineffective forecast error measurements at so many companies, we developed the Brightwork Explorer in part to have a purpose-built application that can measure any forecast. The application has a very straightforward file format into which your company’s data can be entered, and the forecast error calculation is simple to run. Any forecast can be measured against the baseline statistical forecast, and the product location combinations can then be sorted to show which product locations lost or gained forecast accuracy from other forecasts.
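The comparison idea can be sketched as follows. This is not the Brightwork Explorer implementation, just an illustration of the approach with hypothetical data: score each product location against a baseline forecast, then sort by the change in error.

```python
# Compare a candidate forecast against a baseline forecast per product
# location, then sort by accuracy change. All data are hypothetical.

def mae(actual, forecast):
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

history = {
    # product_location: (actuals, baseline forecast, candidate forecast)
    "SKU1@DC1": ([100, 110, 95], [102, 100, 100], [90, 130, 80]),
    "SKU2@DC1": ([40, 42, 45], [50, 50, 50], [41, 43, 44]),
}

changes = []
for loc, (actual, baseline, candidate) in history.items():
    delta = mae(actual, baseline) - mae(actual, candidate)  # > 0 means improvement
    changes.append((loc, delta))

# Largest accuracy gains first; losses (negative deltas) sort to the bottom
changes.sort(key=lambda x: x[1], reverse=True)
for loc, delta in changes:
    print(loc, round(delta, 2))
```

Sorting this way immediately directs attention to the product locations where a forecast change helped or hurt, which is the piece missing from the typical company-wide error report.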
This is the fastest and most accurate way of measuring multiple forecasts that we have seen.