The Problem with Using a Monthly Forecast Percent Error

Executive Summary

  • The percent forecast error is commonly calculated in a problematic way.
  • We cover the proper forecast error measurement in the time dimension.

Introduction

Not much thought is given to this topic, yet companies' forecast errors are normally measured on a monthly basis. However, forecast error should be measured over the lead time of each product location combination. This is because the forecast exists to allow supply planning to cover demand over the lead time. In this article, you will learn the logic of forecast coverage and the difficulty of moving to forecast error over lead time.

Our References for This Article

If you want to see our references for this article and related Brightwork articles, see this link.

How the Percent Error is Calculated and the Mismatch with Lead Times

If we review how supply chain companies commonly calculate the forecast error, we find the following.

  • Typically, forecast error is calculated on a month-by-month basis: the difference between the forecast and the actual demand is divided by the actual demand for a product location combination (or for whatever level of aggregation is being measured). In a dynamic safety stock calculation, however, the error should be calculated over the lead time. If the lead time for the product is two months, the month-to-month MAPE is 50 percent, and the two-month MAPE is 25 percent, then using the 50 percent error makes the calculated safety stock too high.
  • If, on the other hand, the lead time is two weeks and the 50 percent monthly MAPE is used, the safety stock will be too small, because the error measured over a two-week horizon is higher than the error measured over a month. The sketch below works through the two-month case.
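To make the numbers above concrete, here is a minimal sketch in Python of how the measurement interval changes the error figure that feeds a dynamic safety stock calculation. The service-level factor, the mean lead-time demand, and the deliberately simplified safety stock formula are illustrative assumptions, not any particular application's method.

```python
# Illustrative numbers from the example above: the same product measured
# at two different intervals produces two different error figures.
monthly_mape = 0.50     # month-over-month MAPE
two_month_mape = 0.25   # MAPE measured over the two-month lead time

z = 1.65                # assumed service-level factor (roughly 95 percent)
mean_lt_demand = 1000   # assumed mean demand over the two-month lead time

# A deliberately simplified dynamic safety stock: z * error rate * lead-time demand.
ss_from_monthly_error = z * monthly_mape * mean_lt_demand      # 825.0
ss_from_lead_time_error = z * two_month_mape * mean_lt_demand  # 412.5

# Using the monthly error inflates the safety stock to twice what the
# over-lead-time error implies.
print(ss_from_monthly_error, ss_from_lead_time_error)
```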

The Proper Forecast Error Measurement in the Time Dimension

The only proper forecast error measurement is over the lead time. This can be seen with an example. If you have a one-week lead time, you can reorder every week, so a forecast miss can be corrected the following week. On the other hand, if the lead time is three months, you cannot adjust the forecast during the three months after the order is placed.

Therefore, under the standard monthly forecast error measurement interval, the forecast error will be underestimated for the product with the weekly lead time and overestimated for the product with the three-month lead time, consistent with the two-month example above, where the error measured over the longer horizon was the lower one.
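As a rough sketch of what measuring error over the lead time can look like: demand and forecast are summed over each lead-time-length window before the percent error is taken, so the error reflects the horizon an order actually has to cover. The lead_time_mape helper and the toy series are my own, for illustration only.

```python
def lead_time_mape(forecast, actual, lead_time_periods):
    """MAPE over lead-time-length buckets: sum forecast and actual over
    each window, then take the percent error of the sums."""
    errors = []
    for start in range(0, len(actual) - lead_time_periods + 1, lead_time_periods):
        f = sum(forecast[start:start + lead_time_periods])
        a = sum(actual[start:start + lead_time_periods])
        if a != 0:
            errors.append(abs(f - a) / a)
    return sum(errors) / len(errors)

# Toy monthly series for one product location with a two-month lead time.
forecast = [100, 100, 100, 100, 100, 100]
actual   = [ 60, 150,  80, 120, 140,  70]

# Over-forecasts and under-forecasts cancel within each longer window,
# so the over-lead-time error comes out lower than the per-month error.
print(lead_time_mape(forecast, actual, 1))  # per-month error (about 0.36)
print(lead_time_mape(forecast, actual, 2))  # over the two-month lead time (about 0.03)
```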

The Problems with Using a Month as the Forecast Measurement Interval

A month is used in many cases to measure forecast error. I do it myself on projects, because when one measures an overall database of products, it is too much work to adjust the forecast error measurement to the lead time of each product, as each product location combination has a different lead time.

I have never once seen this topic raised on projects, but it is undeniably true. Therefore, due to the complexity of measuring forecast error in this way, the standard and inaccurate interval of a month continues to be used for forecast error.
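To show what "adjusting the forecast error per lead time per product" would involve, here is a hedged sketch that applies the over-lead-time measurement to a small, hypothetical product database in which each product location combination carries its own lead time. The data layout and names are assumptions for illustration.

```python
def lead_time_mape(forecast, actual, lt):
    """Percent error of sums over lead-time-length windows (as in the earlier sketch)."""
    errs = [abs(sum(forecast[i:i + lt]) - sum(actual[i:i + lt])) / sum(actual[i:i + lt])
            for i in range(0, len(actual) - lt + 1, lt) if sum(actual[i:i + lt]) != 0]
    return sum(errs) / len(errs)

# Hypothetical layout: one record per product location combination, each
# with its own lead time (in months) and aligned forecast/actual history.
products = {
    "A@DC1": {"lead_time": 2, "forecast": [100] * 6, "actual": [60, 150, 80, 120, 140, 70]},
    "B@DC1": {"lead_time": 3, "forecast": [50] * 6,  "actual": [40, 55, 70, 45, 50, 60]},
}

# Each product's error is measured over its own lead time, which is the
# extra bookkeeping that the one-size-fits-all monthly convention avoids.
for name, p in products.items():
    print(name, round(lead_time_mape(p["forecast"], p["actual"], p["lead_time"]), 3))
```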

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the problem is that the standard forecast error calculation methods do not provide this understanding. In part, this is because they do not tell the companies that forecast how to make improvements. If the standard forecast measurement calculations did, forecast improvement would be far more straightforward, and companies would have a far easier time performing forecast error measurement.

What the Forecast Error Calculation and System Should Be Able to Do

One would be able to, for example:

  1. Measure forecast error
  2. Compare forecast error across all of the forecasts at the company
  3. Sort the product location combinations by which product locations lost or gained forecast accuracy relative to other forecasts
  4. Measure any forecast against the baseline statistical forecast
  5. Weight the forecast error so that progress for the overall product database can be tracked (items 4 and 5 are sketched in code after this list)
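Here is a sketch of items 4 and 5 from the list above: measuring each forecast against a baseline statistical forecast, and producing a single volume-weighted error so progress across the whole product database can be tracked. The weighting scheme (share of total volume) and all names are illustrative assumptions, not a description of any specific application.

```python
def mape(forecast, actual):
    """Plain MAPE over aligned series (illustrative)."""
    pairs = [(f, a) for f, a in zip(forecast, actual) if a != 0]
    return sum(abs(f - a) / a for f, a in pairs) / len(pairs)

# Hypothetical inputs per product location: actuals, the forecast being
# judged, the baseline statistical forecast, and total historical volume.
data = {
    "A@DC1": {"actual": [60, 150, 80], "forecast": [100, 100, 100],
              "baseline": [90, 110, 95], "volume": 290},
    "B@DC1": {"actual": [40, 55, 70], "forecast": [50, 50, 50],
              "baseline": [45, 60, 65], "volume": 165},
}

total_volume = sum(p["volume"] for p in data.values())
weighted_mape = 0.0
for name, p in data.items():
    err = mape(p["forecast"], p["actual"])
    base = mape(p["baseline"], p["actual"])
    # Item 4: any forecast can be judged against the baseline statistical forecast.
    print(name, "forecast MAPE:", round(err, 3), "baseline MAPE:", round(base, 3))
    # Item 5: weight each product's error by its share of total volume.
    weighted_mape += (p["volume"] / total_volume) * err

# One number the overall product database's progress can be tracked against.
print("volume-weighted MAPE:", round(weighted_mape, 3))
```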

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that they are unnecessarily complicated. In addition, the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from how the requirements for forecast error measurement were developed are important for anyone who wants to improve forecast accuracy.