- Forecast accuracy is measured by a number of widely accepted methods.
- After going over them, we will question whether these methods are effective.
Video Introduction: How is Forecast Accuracy Measured?
Text Introduction (Skip if You Watched the Video)
How forecast accuracy is measured is one of the most common questions in the field of forecasting. Forecast accuracy is normally calculated by applying one of the standard forecast error methods. The mechanics of these calculations are widely known, but they are not as well understood as generally assumed. Forecast error is deceptively easy to grasp, and even people who work with forecast errors every day can be caught off guard by the behavior of the measurement they rely upon. The saying "the devil is in the details" certainly applies to forecast accuracy. This gap in understanding forecast error restricts a company's ability to improve the accuracy of its forecasts.
Our References for This Article
If you want to see our references for this article and related Brightwork articles, see this link.
How is Forecast Accuracy Measured?
The most commonly used forecast accuracy measurements are listed below.
We have not conducted a poll to determine how often each forecast accuracy measurement is used, but having worked in the field for a while, this is our rough estimate of the relative frequency of use, from most to least used.
To see an explanation of each, click the link.
- MAPE: Mean Absolute Percentage Error
- MAD: Mean Absolute Deviation
- MAE: Mean Absolute Error
- RMSE: Root Mean Square Error
- MASE: Mean Absolute Scaled Error
- sMAPE: Symmetrical MAPE
Something to notice is that usage declines as one moves down the list. A significant issue is that some forecast accuracy measurements are not proportional. Forecast accuracy measurements that are not proportional are unintuitive and, hence, difficult to understand.
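As a concrete reference point, here is a minimal sketch of how these measures are typically computed for a single product location series. The demand and forecast numbers are made up for illustration, and this version assumes no zero actuals (MAPE is undefined at zero demand) and uses the common lag-1 naive scaling for MASE:

```python
import math

def forecast_errors(actual, forecast):
    """Common forecast error measures for one product-location series.

    Assumes no zero values in `actual` (MAPE divides by the actual).
    """
    n = len(actual)
    abs_err = [abs(a - f) for a, f in zip(actual, forecast)]
    mae = sum(abs_err) / n  # MAE and MAD name the same quantity: mean absolute error/deviation
    mape = sum(e / abs(a) for e, a in zip(abs_err, actual)) / n
    rmse = math.sqrt(sum((a - f) ** 2 for a, f in zip(actual, forecast)) / n)
    # sMAPE: symmetric variant that divides by the average magnitude of actual and forecast
    smape = sum(2 * e / (abs(a) + abs(f))
                for e, a, f in zip(abs_err, actual, forecast)) / n
    # MASE: MAE scaled by the error of a naive lag-1 forecast of the actuals
    naive = sum(abs(actual[t] - actual[t - 1]) for t in range(1, n)) / (n - 1)
    mase = mae / naive
    return {"MAPE": mape, "MAD": mae, "MAE": mae,
            "RMSE": rmse, "sMAPE": smape, "MASE": mase}

print(forecast_errors([100, 120, 90, 110], [110, 100, 95, 105]))
```

Note that MAPE and sMAPE are proportional (expressed relative to demand), while MAD/MAE and RMSE are in units of demand, which is one reason they become harder to interpret and compare across items.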
When forecast accuracy is discussed, in most cases the topic does not move beyond the accuracy of an individual item, generally at a specific location, what we call the product location combination.
In many cases, forecasts are created at a much more aggregated level, as in sales forecasting. However, when forecast accuracy is measured at the product location level, a significant element of the measurement is how the accuracy is reported outside of the individuals actually performing the forecasting. This gets into the topics of both forecast accuracy reporting and aggregation.
The topic of forecast accuracy tends to focus overwhelmingly on forecast error measurement. In fact, this is only one dimension of forecast accuracy measurement, as we cover in the article How is Forecast Error Measured in All of the Dimensions in Reality?
Forecast Accuracy Weighting
One of the major issues with forecast accuracy measurement is that the forecast accuracy is usually not weighted, which means the aggregated forecast accuracy is inaccurate. Most people presented with an aggregated forecast accuracy number are unaware of the inaccuracy introduced by unweighted forecast errors.
We have identified this as a major myth in the article Forecast Error Myth #2: An Unweighted Forecast Error Measurement Makes Sense.
Other types of aggregation also can easily lead to inaccuracy.
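A small numeric sketch shows why unweighted aggregation misleads. The two product location combinations and their volumes and MAPEs below are hypothetical:

```python
# Two hypothetical product-location combinations: a high-volume item that is
# forecast poorly and a low-volume item that is forecast well.
items = [
    {"name": "A (high volume)", "volume": 10000, "mape": 0.40},
    {"name": "B (low volume)",  "volume": 100,   "mape": 0.05},
]

# Unweighted: each item counts equally, regardless of how much it sells.
unweighted = sum(i["mape"] for i in items) / len(items)

# Volume-weighted: each item's error counts in proportion to its demand.
total_volume = sum(i["volume"] for i in items)
weighted = sum(i["mape"] * i["volume"] for i in items) / total_volume

print(f"Unweighted MAPE: {unweighted:.1%}")  # 22.5%
print(f"Weighted MAPE:   {weighted:.1%}")    # 39.7%
```

The unweighted average makes the forecast look far better than it is, because nearly all of the actual demand sits in the poorly forecast item.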
Forecast Accuracy by Grouping
Companies often think that forecasts do not have to be weighted if they are grouped, which is not true. Grouping only rolls the inaccuracy up to a more aggregated level.
Reporting Out Forecast Error from the Demand Planning Department
The forecast error measurement in forecasting applications is primarily for the user or planner. As we cover in the article Forecast Error Myth #1: One Can Rely on The Forecast Error Measurement in Forecasting Applications, this forecast error is only calculated for the product location combination.
Companies will then create a custom report that publishes the forecast error, or planners will export the error to Excel, perform the calculation in a spreadsheet, and provide it to their Director of Forecasting/Demand Planning, who then provides it to the broader company. The forecast error report is often presented with great anticipation, as if it were the "horse's mouth" on forecast error for the period.
It isn't. It does not direct the company where to make forecast accuracy improvements.
A Better Approach
Observing ineffective forecast error measurements at so many companies, we developed the Brightwork Explorer, in part, as a purpose-built application that can measure any forecast. The application has a very straightforward file format into which your company's data can be entered, and the forecast error calculation is exceptionally straightforward. Any forecast can be measured against the baseline statistical forecast, and the product location combinations can then be sorted to show which product locations lost or gained forecast accuracy versus other forecasts.
This is the fastest and most accurate way of measuring multiple forecasts that we have seen.
Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?
It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In part, they do not tell companies that forecast where to make improvements. If the standard forecast measurement calculations did, forecast improvement would be far more straightforward, and companies would have a far easier time performing forecast error measurement.
What the Forecast Error Calculation and System Should Be Able to Do
One would be able to, for example:
- Measure forecast error
- Compare forecast error across all the forecasts at the company
- Sort the product location combinations based on which product locations lost or gained forecast accuracy versus other forecasts
- Measure any forecast against the baseline statistical forecast
- Weight the forecast error (so progress for the overall product database can be tracked)
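The capabilities above can be sketched in a few lines. This is an illustrative assumption of how such a comparison might work, not the Brightwork Explorer's implementation; the SKUs, locations, volumes, and the notion of a "final" forecast versus the statistical baseline are all hypothetical:

```python
# Hypothetical MAPE results for three product-location combinations, comparing
# the baseline statistical forecast with the company's final forecast.
records = [
    # (product, location, volume, baseline statistical MAPE, final forecast MAPE)
    ("SKU-1", "DC-East", 5000, 0.20, 0.35),
    ("SKU-2", "DC-East", 1200, 0.30, 0.22),
    ("SKU-3", "DC-West",  800, 0.15, 0.15),
]

# Measure each forecast against the baseline statistical forecast.
# A positive delta means the final forecast beat the baseline.
scored = [(prod, loc, vol, base - final)
          for prod, loc, vol, base, final in records]

# Sort so the product locations that lost the most accuracy surface first,
# which is where forecast improvement effort should be directed.
scored.sort(key=lambda row: row[3])
for prod, loc, vol, delta in scored:
    print(f"{prod} @ {loc}: accuracy change {delta:+.0%}")

# Weight each error by volume so overall progress can be tracked.
total_vol = sum(vol for _, _, vol, _, _ in records)
weighted_base = sum(vol * base for _, _, vol, base, _ in records) / total_vol
weighted_final = sum(vol * final for _, _, vol, _, final in records) / total_vol
print(f"Weighted baseline MAPE: {weighted_base:.1%}")
print(f"Weighted final MAPE:    {weighted_final:.1%}")
```

In this made-up data, the high-volume SKU-1 lost accuracy versus the baseline, so the volume-weighted error worsened overall even though one of the three items improved.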
Getting to a Better Forecast Error Measurement Capability
A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurements at so many companies, we developed, in part, a purpose-built forecast error application called the Brightwork Explorer to meet these requirements.
Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the approach followed in developing requirements for forecast error measurement are important for anyone who wants to improve forecast accuracy.