How MASE is Calculated for Forecast Error Measurement

Last Updated on April 28, 2021

Executive Summary

  • MASE is one of the alternatives proposed to address the limitations of the standard forecast error measurements.
  • MASE is an interesting approach to measuring forecast accuracy.

Video Introduction: How MASE is Calculated for Forecast Error Measurement

Text Introduction (Skip if You Watched the Video)

MASE, or Mean Absolute Scaled Error, is a forecast error measurement calculation that is not frequently used in companies. MASE has the disadvantage of not being proportional, which makes the error difficult to explain and undermines faith that the error measurement is reliable. In fact, when you use the term MASE in forecasting departments, it will often be the first time most of the audience has heard of the method. You will learn about MASE’s calculation and the common problems found in MASE and other forecast error measurement calculations.

See our references for this article and related articles at this link.

How MASE is Calculated

How MASE is calculated is as follows.

  1. Take the absolute value of the error (subtract the forecast from the actuals).
  2. Average the absolute errors across the product location combinations to obtain the MAE.
  3. Divide each absolute error by the MAE.

The formula is:

= ABS(Error)/MAE
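The steps above can be sketched in a few lines of Python. This follows the article's simplified definition, which scales each absolute error by the MAE of the whole data set (the textbook MASE scales by the MAE of a naive forecast instead); the data and function name are illustrative, not from the article.

```python
# Minimal sketch of the scaled-error steps described above.
# The inputs here are illustrative values, not real demand data.

def scaled_errors(actuals, forecasts):
    """Return each item's absolute error divided by the MAE of the set."""
    # Step 1: absolute value of (actuals - forecast)
    abs_errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    # Step 2: average the absolute errors to get the MAE
    mae = sum(abs_errors) / len(abs_errors)
    # Step 3: divide each error by the MAE
    return [e / mae for e in abs_errors]

actuals = [100, 80, 120, 40]
forecasts = [90, 85, 100, 70]
print(scaled_errors(actuals, forecasts))
```

A scaled error above 1 means that the product location's error is worse than the average error of the set, which is what makes the measurement "relative."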

Advantages of the MASE Calculation

  1. The error is proportional; that is, there is no squaring of errors, as there is with RMSE.
  2. The error is easy to calculate.
  3. The error is relative to other errors as each product location error is divided by the MAE or average error.

Disadvantages of the MASE Calculation

It is a bit nonintuitive.

One might ask why its most important feature, dividing the absolute error by the MAE (from which the term “Scaled” is derived), is necessary. The answer is that it grounds the forecast error relative to the rest of the data set. However, most people in forecasting departments are not familiar with measuring a line item’s forecast error in conjunction with the rest of the data set.

The Interesting Feature of MASE

One might have “categories” of error.

A natural categorization would be high volume items versus low volume items.

This would prevent the low volume items from being measured in a way that sets too high of a standard, as low volume items typically have a limited ability to attain better forecast accuracy.
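This categorization idea can be sketched by scaling each group against its own MAE, so low volume items are only compared with other low volume items. The threshold, data, and function name here are illustrative assumptions, not part of the article.

```python
# Hypothetical sketch of error "categories": high- and low-volume items
# are each scaled against their own group's MAE.
# The volume threshold and the data are illustrative assumptions.

def scaled_errors_by_category(items, volume_threshold=100):
    """items: list of (volume, actual, forecast) tuples."""
    groups = {"high": [], "low": []}
    for volume, actual, forecast in items:
        key = "high" if volume >= volume_threshold else "low"
        groups[key].append(abs(actual - forecast))
    result = {}
    for key, errors in groups.items():
        if errors:
            mae = sum(errors) / len(errors)  # each group gets its own MAE
            result[key] = [e / mae for e in errors]
    return result

items = [(500, 480, 450), (300, 310, 330), (20, 15, 25), (10, 4, 12)]
print(scaled_errors_by_category(items))
```

Because each group is scaled by its own average error, a low volume item is judged against a standard its category can realistically attain.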

The Same Problem With All the Standard Forecast Error Measurements

We have never used MASE on an actual project for reporting forecast error, although we have tested it for several clients that wanted the forecasting error method evaluated. Even so, MASE shares the same problems as all of the standard forecast error measurements.

These include the following:

  1. They are difficult to weight, and any forecast error aggregated beyond the product location combination must be weighted to make any sense. This may seem nit-picky, as weights can, in principle, be applied, but it is an extra step, and one that most companies fail to complete. I have had many discussions with companies about the importance of weighting forecast errors, and it is challenging to explain the need for weighting to those outside of forecasting (who are often the customers of the forecast).
  2. They make comparisons between different forecasting methods overly complicated.
  3. They lack the context of the volume of the demand history or the price of the product being forecasted, meaning that the forecast errors must be provided with context through the use of another formula.
  4. They are difficult to explain, making them less intuitive than a different approach.

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did, forecast improvement would be far more straightforward, and companies would have a far easier time measuring forecast error.

What the Forecast Error Calculation and System Should Be Able to Do

One would be able to, for example:

  1. Measure forecast error.
  2. Compare forecast error across all the forecasts at the company.
  3. Sort the product location combinations based on which product locations lost or gained forecast accuracy relative to other forecasts.
  4. Measure any forecast against the baseline statistical forecast.
  5. Weight the forecast error (so progress for the overall product database can be tracked).
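The weighting requirement in the last item can be sketched as a volume-weighted aggregate error, so high volume items count for more in the overall number. The weighting scheme, data, and function name are illustrative assumptions; weights could equally be based on price or revenue.

```python
# Hedged sketch of a volume-weighted aggregate error.
# Weighting by volume is one plausible choice; price or revenue
# weights would work the same way. All data here is illustrative.

def weighted_error(actuals, forecasts, volumes):
    """Aggregate absolute error, weighting each item by its volume."""
    abs_errors = [abs(a - f) for a, f in zip(actuals, forecasts)]
    total_volume = sum(volumes)
    return sum(e * v for e, v in zip(abs_errors, volumes)) / total_volume

actuals = [100, 10]
forecasts = [95, 20]
volumes = [100, 10]
# The high-volume item's smaller error dominates the weighted aggregate.
print(weighted_error(actuals, forecasts, volumes))
```

An unweighted average of the two absolute errors (5 and 10) would be 7.5, but the weighted version is pulled toward the high volume item's error, which is what makes an aggregate number meaningful for tracking progress.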

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the requirements development approach for forecast error measurement are important for anyone who wants to improve forecast accuracy.