Forecast Error Myth #5: Non-Comparative Forecast Error Measurement is Helpful

Executive Summary

  • The standard forecast error measurements are universally accepted, but they are not comparative.
  • Comparative forecast error is rarely discussed.

Introduction

Forecast error is nearly universally reported without comparison. The forecast error is reported as a single number: the error of whatever the current forecasting approach produces. In companies, people will often propose another method that will improve forecast accuracy, but they normally do not test this hypothesis. You will learn why one must know the comparative forecast error, that is, how the error would change if a different method were applied, in order to make improvements in forecast accuracy.

Our References for This Article

If you want to see our references for this article and related Brightwork articles, see this link.

Myth #5: The Helpfulness of Non-Comparative Forecast Error Measurement

Non-comparative measurement reinforces the lack of testing that is so common in forecasting. Forecast error is not seen as the output of one method at one point in time, something to be improved with the economical application of new methods, but rather as a steady state.

When a forecasting application produces a forecast error, it may provide multiple forecast error calculations (MAD, MAPE, MSE, etc.). It is invariably for a single forecast. As we cover in the article Forecast Error Myth #1: One Can Rely on The Forecast Error Measurement in Forecasting Applications, the forecast error measurement functionality in nearly all forecasting applications is extremely limited. Forecasting applications are designed to produce a non-comparative forecast error at a product-location combination.
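
To make this concrete, here is a minimal sketch, with purely illustrative numbers, of the kind of non-comparative output described above: several error statistics, all describing one forecast at one hypothetical product-location combination.

```python
# A minimal sketch of typical non-comparative error output: several
# error statistics, but all for a single forecast at a single
# product-location. The values are illustrative, not from any system.

actuals  = [100, 120,  90, 110, 105, 130]
forecast = [ 95, 125, 100, 100, 110, 120]

errors = [f - a for f, a in zip(forecast, actuals)]

mad  = sum(abs(e) for e in errors) / len(errors)   # Mean Absolute Deviation
mse  = sum(e * e for e in errors) / len(errors)    # Mean Squared Error
mape = (sum(abs(e) / a for e, a in zip(errors, actuals))
        / len(errors) * 100)                       # Mean Absolute Percent Error

print(f"MAD: {mad:.2f}, MSE: {mse:.2f}, MAPE: {mape:.1f}%")
# The output describes one forecast in isolation; nothing here says
# whether a different forecasting method would have done better.
```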

For All of the Myths, See the Following Table

Forecasting Error Myths

Select the link below to be taken to the appropriate forecast error myth.
Myth Number | Forecasting Error Myth | Article Link
1 | One Can Rely on The Forecast Error Measurement in Forecasting Applications | Link
2 | Unweighted Forecast Error Measurement Makes Sense | Link
3 | Sales And Marketing Have Their Forecast Error Measured | Link
4 | Most Forecast Error Measurement Supports Identifying How to Improve Forecast Accuracy | Link
5 | Non-Comparative Forecast Error Measurement is Helpful | Link
6 | Forecast Error Measurements are Straightforward to Do | Link

All other forecast error measurement is normally the product of a custom report that companies create, or of a periodic extract to Excel that is aggregated to whatever the company's executives want to see. These forecast error measurements are also not comparative, as the executives are not responsible for testing different forecasting methods to improve forecast error. Of course, a change in forecast method must be applied at the product-location combination level.

All of this leads to the following question.

What Is The Purpose of a Forecast Error Measurement?

Forecast error measurement should point to where to make changes to improve forecast accuracy. However, a non-comparative forecast error does not do this. And it is an almost universal reality that forecasting departments spend far less time trying different forecasting methods than having planners make manual adjustments to the forecast.
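
As an illustration, the following sketch (with hypothetical product-location names and data) shows what a comparative measurement adds: the same error metric computed for the current method and a candidate method at each product-location combination, so the difference points to where a method change would actually help.

```python
# A sketch of comparative error measurement. The same error definition
# (MAPE here) is applied to the current method's forecast and a
# candidate method's forecast at each product-location, and the change
# in error points to where switching methods improves accuracy.
# All names and numbers are hypothetical.

def mape(actuals, forecast):
    return sum(abs(f - a) / a for f, a in zip(forecast, actuals)) / len(actuals) * 100

history = {
    # product-location: (actuals, current-method forecast, candidate forecast)
    "SKU1@DC1": ([100, 120, 90], [110, 100, 95], [102, 118, 92]),
    "SKU2@DC1": ([50, 55, 60],   [52, 54, 61],   [45, 70, 50]),
}

for combo, (act, current, candidate) in history.items():
    delta = mape(act, current) - mape(act, candidate)
    verdict = "candidate improves accuracy" if delta > 0 else "keep current method"
    print(f"{combo}: MAPE change {delta:+.1f} points -> {verdict}")
```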

In many years of forecasting consulting, I found it was rare for either testing to be performed or for ongoing adjustments to be made to the major levers within the forecasting system.

If the forecast error does not serve this purpose, then it just becomes an inert number.

Trying to Effectively Measure Comparative Forecasts

Years ago, I used to measure forecast accuracy by importing forecasts from a system in which it was difficult to measure accuracy into one in which it was easier.

This is explained in the following quotation, which is from an article I wrote years ago that I have since deleted, as I have moved away from this approach.

In order to compare the forecast accuracy from two systems against one another, it is essential to compare forecast accuracy on an identical basis. Different systems can measure accuracy differently, so this can be a challenging task. However, what I found convenient is that the application Smoothie by Demand Works allows the importation of other forecasts and the direct comparison between an external forecast, such as one from SAP DP, and the Smoothie-generated forecast. In fact, Smoothie enables the import of up to three forecasts, and this allows the consistent calculation of errors across all forecasts.
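
The sketch below illustrates the principle in this quotation under an assumed data layout (the system names and values are purely illustrative): forecasts from different systems are placed against the same actuals, and one error definition is applied to all of them so the comparison is consistent.

```python
# A sketch of consistent cross-system error measurement: every imported
# forecast is scored against the same actuals with the same error
# definition. System names and figures are illustrative only.

actuals = {"2024-01": 100, "2024-02": 120, "2024-03": 90}

imported_forecasts = {
    "SAP DP":   {"2024-01": 95,  "2024-02": 130, "2024-03": 100},
    "Smoothie": {"2024-01": 102, "2024-02": 118, "2024-03": 92},
}

def mape(act, fc):
    periods = sorted(act)  # identical periods for every forecast
    return sum(abs(fc[p] - act[p]) / act[p] for p in periods) / len(periods) * 100

for system, fc in imported_forecasts.items():
    print(f"{system}: MAPE {mape(actuals, fc):.1f}%")
```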

The Use of Reference Measures

This is accomplished by importing forecasts into Smoothie, which works a little like copying a tab in Excel from one workbook to another. First, you create a model, and then import the forecast into that model via a spreadsheet or an ODBC-compliant database.

This part is essential: both models must have the same setup, such as the number of periods of history and the number of periods to forecast. If this is not set up identically, the import will fail. Next, go to the model into which you want to import the data and select Import from the File menu, and then Model Forecast.

Next, select the Measure drop-down, and then select Reference 1, Reference 2, or Reference 3. Smoothie allows the importation of anywhere from one to three forecasts. Next, select the model you want to import from, and then select Process.
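
As a rough illustration of why the matching setup matters, here is a hypothetical pre-import check. The Model structure below is an assumption made for illustration; it is not Smoothie's actual API.

```python
# A hypothetical pre-import sanity check: the import fails unless both
# models share the same setup, so verify the history and forecast
# horizons match before importing. This is illustrative only and does
# not represent Smoothie's real data structures.

from dataclasses import dataclass

@dataclass
class Model:
    history_periods: int
    forecast_periods: int

source = Model(history_periods=36, forecast_periods=12)
target = Model(history_periods=36, forecast_periods=12)

if (source.history_periods, source.forecast_periods) != (
        target.history_periods, target.forecast_periods):
    raise ValueError("Model setups differ; the forecast import would fail.")
print("Setups match; safe to import the model forecast.")
```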

How to Compare Forecast Error from Two Different Systems through Forecast Importation

Once imported, you can now directly compare the forecasts against one another. It’s hard to overestimate how important and time-saving this is.

I have stopped using this approach, and now import both forecasts into a separate specialized forecast error measurement application.

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In part, this is because they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did, forecast error measurement would be far more straightforward, and companies would have a far easier time performing it.

What the Forecast Error Calculation and System Should Be Able to Do

One would, for example, be able to do the following (a sketch of several of these capabilities appears after the list):

  1. Measure forecast error.
  2. Compare forecast error for all of the forecasts at the company.
  3. Sort the product-location combinations based on which gained or lost forecast accuracy versus other forecasts.
  4. Measure any forecast against the baseline statistical forecast.
  5. Weigh the forecast error so that progress for the overall product database can be tracked.
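
Here is a minimal sketch of capabilities 2, 3, and 5 from the list above, using hypothetical product-location data: comparative error per combination, sorted by accuracy change, plus a volume-weighted total so overall progress can be tracked.

```python
# A sketch of comparing, sorting, and weighting forecast error across
# product-location combinations. All names and figures are hypothetical.

rows = [
    # (product-location, sales volume, current MAPE %, candidate MAPE %)
    ("SKU1@DC1", 1000, 22.0, 15.0),
    ("SKU2@DC1",  200, 18.0, 25.0),
    ("SKU3@DC2", 3000, 30.0, 28.0),
]

# Capability 3: sort by which combinations gained or lost accuracy.
by_improvement = sorted(rows, key=lambda r: r[2] - r[3], reverse=True)
for combo, vol, cur, cand in by_improvement:
    print(f"{combo}: {cur - cand:+.1f} points MAPE improvement")

# Capability 5: weight by volume so large product-locations count more.
total_vol = sum(vol for _, vol, _, _ in rows)
weighted_current   = sum(vol * cur  for _, vol, cur, _  in rows) / total_vol
weighted_candidate = sum(vol * cand for _, vol, _, cand in rows) / total_vol
print(f"Weighted MAPE: current {weighted_current:.1f}% -> candidate {weighted_candidate:.1f}%")
```

Weighting by volume keeps a small, erratic product-location from dominating the overall error picture, which is why an unweighted average alone can mislead.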

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product-location combination at a time. After observing ineffective and non-comparative forecast error measurements at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from how we developed the requirements for forecast error measurement are important for anyone who wants to improve forecast accuracy.