How MAPE is Calculated for Forecast Error Measurement

Executive Summary

  • MAPE is a universally accepted forecast error measurement.
  • MAPE is generally low in effectiveness in providing feedback to improve the forecast.

Introduction

MAPE, or Mean Absolute Percentage Error, is a forecast error calculation method that removes negative values from the equation. It is easy to understand, easy to calculate, and proportional. MAPE is a universally accepted forecast error measurement; even so, it is generally low in effectiveness in providing feedback to improve the forecast. The MAPE calculation must be weighted to view the actual forecast error in the context of the overall forecast database. In this article, you will learn how MAPE is calculated, the different ways of calculating it, and the broader implications of using MAPE for forecast improvement.

How MAPE is Calculated

How MAPE is calculated is one of the most common questions we get.

MAPE is calculated as follows.

  1. Take the absolute value of the forecast minus the actual for each period that is being measured.
  2. Divide this result by the actual for that period.
  3. Average these per-period values over all of the measured periods.

The per-period formula is:

= ABS(F – A)/A

MAPE is the average of this value across the measured periods, usually multiplied by 100 to express it as a percentage.
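As a minimal sketch, this is how the calculation looks in Python (the function name, sample data, and the decision to return a percentage are our own illustration):

def mape(forecasts, actuals):
    # Per-period absolute percentage error: ABS(F - A) / A
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    # The mean of those errors, expressed as a percentage
    return 100 * sum(errors) / len(errors)

print(mape([110, 95, 120], [100, 100, 100]))  # (10% + 5% + 20%) / 3 = 11.67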

The Broader Context of How MAPE is Calculated

But the narrow question broadens out when one looks at the different dimensions of forecast error.

In the article How SAP DP Miscalculates the MAPE Error With Zero Demand, we cover the best-known issue and fatal flaw of MAPE: its inability to deal with periods of zero demand. In the article How to Use Weighing MAPE for Forecast Error Measurement, we cover how MAPE can be weighted, but also how difficult this is to do. That difficulty leads (in part) to MAPE rarely being weighted, which in turn leads to enormous inaccuracy when MAPE is reported at any level of aggregation.
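To make the zero-demand flaw concrete, here is a small illustration (our own example, not SAP DP's actual behavior): a single period of zero actual demand makes the division undefined.

def mape(forecasts, actuals):
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals)]
    return 100 * sum(errors) / len(errors)

try:
    mape([10, 12], [20, 0])  # the second period has zero actual demand
except ZeroDivisionError:
    print("MAPE is undefined whenever any period's actual demand is zero")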

The Same Problem With All the Standard Forecast Error Measurements

Before we developed our approach to forecast error calculation, MAPE was our favorite method.

Yet, after several years, the problems with MAPE continued to surface. And they are problems that all standard forecast error measurements have in common.

These include the following:

  1. They are difficult to weight. Any forecast error aggregated beyond the product location combination must be weighted to make any sense (see the weighted-MAPE sketch after this list).
  2. They make comparisons between different forecasting methods overly complicated.
  3. They lack the context of the volume of the demand history or the price of the forecasted product, meaning that the forecast errors must be given context through another formula.
  4. They are difficult to explain, which makes them less intuitive than a different approach.
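As a sketch of what weighting involves, here is one common scheme, volume weighting, in Python (the function and sample data are our own illustration; other weighting schemes, such as by price or margin, are equally possible):

def weighted_mape(forecasts, actuals):
    # Each period's absolute percentage error is weighted by its actual
    # volume. The algebra collapses to sum(|F - A|) / sum(A), so
    # high-volume items dominate the aggregate error.
    numerator = sum(abs(f - a) for f, a in zip(forecasts, actuals))
    return 100 * numerator / sum(actuals)

# A 10% error on 100 units outweighs a 20% error on 5 units.
print(weighted_mape([110, 6], [100, 5]))  # roughly 10.5%, vs. 15% unweighted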

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did, forecast error measurement would be far more straightforward, and companies would have a far easier time performing it.

What the Forecast Error Calculation and System Should Be Able to Do

With a better forecast error calculation and system, one would, for example, be able to:

  1. Measure forecast error.
  2. Compare forecast error across all of the forecasts at the company.
  3. Sort the product location combinations based on which lost or gained forecast accuracy relative to other forecasts (as sketched after this list).
  4. Measure any forecast against the baseline statistical forecast.
  5. Weight the forecast error, so that progress for the overall product database can be tracked.
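A sketch of what item 3 might look like in practice (the data structure, field names, and numbers are our own illustrative assumptions, not the design of any particular application):

combos = {
    # (product, location): (previous MAPE, current MAPE), in percent
    ("A100", "DC-East"): (22.0, 15.0),
    ("A100", "DC-West"): (18.0, 25.0),
    ("B200", "DC-East"): (30.0, 29.0),
}

# Sort so the combinations that gained the most accuracy come first.
ranked = sorted(combos.items(), key=lambda kv: kv[1][0] - kv[1][1], reverse=True)

for (product, location), (prev, curr) in ranked:
    print(f"{product} @ {location}: {prev:.1f}% -> {curr:.1f}% ({prev - curr:+.1f} points)")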

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated. Moreover, the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparable forecast error measurements at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from how we developed the requirements for forecast error measurement are important for anyone who wants to improve forecast accuracy.