Is It Possible to Know the Improvement from AI Without Knowing the Forecast Error?

Executive Summary

  • It is often stated that AI will significantly improve forecast accuracy.
  • The proponents making these claims seem to assume that measuring AI’s net improvement to the forecast will be a simple matter.

Video Introduction: Is It Possible to Know the Improvement from AI Without Knowing the Forecast Error?

Text Introduction (Skip if You Watched the Video)

At Brightwork Research & Analysis, we have usually popped the balloon of hype around AI/ML and its projected opportunities to improve forecast accuracy. In this article, we ask a different question: how will companies know if their expensive AI/ML project is improving forecast accuracy? This topic is nearly never discussed when AI/ML is brought up. You will learn why, without an effective method of measuring forecast error, it will be difficult for companies to know if their AI/ML investments are providing the promised benefit.

Our References for This Article

If you want to see our references for this article and related Brightwork articles, see this link.

Issues That Hold Back Forecast Error Measurement

Issues that restrict the effective measurement of forecast error:

  1. Limitations in Forecasting Applications: Most forecasting applications only measure forecast error at the SKU level, and do not allow for measurement across the total product location database or for weighted forecast errors.
  2. Error Metrics: One of the most intuitive forecast error measurements, MAPE, is undefined when there are zeros in the demand history. And zeros are increasingly prevalent in sales histories.
  3. Zero-Tolerant Error Metrics Are Complex: Error metrics that tolerate zeros in the demand history (such as sMAPE and MASE) are not intuitive, are complex to calculate, and are often not available within forecasting applications (see the sketch after this list).
  4. Exempt Groups from Forecast Error: Some groups in organizations submit inputs to the final forecast but are not held accountable for forecast error.
  5. Poor Education on Forecast Error: A basic understanding of forecast error is often lacking within companies. For example, the idea that the forecast error changes completely depending upon the forecast bucket and the level in the hierarchy must often be repeatedly explained.
  6. Constant Error Reporting Discrepancies: Sales, marketing, and other groups report forecast error at higher levels of aggregation than supply chain management. These higher levels of aggregation result in lower forecast errors, giving a false impression of the actual forecast error. For supply chain management, the forecast error must be measured at the product location combination (or SKU). Some supply chain departments also report aggregated forecast error, again to make the error appear better than it is.
  7. A Lack of Error Measurement Automation: Because forecast error cannot be calculated with much nuance or customizability within forecasting applications, some automated method of measuring forecast error outside of those applications is necessary. The lack of this ability is often used as an excuse to report forecast error at higher levels of aggregation (see points 5 and 6 above for the problems with this).
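
To make the zero-demand problem concrete, here is a minimal sketch in plain Python (our own illustration; the demand numbers are invented) of why MAPE breaks on an intermittent demand history and how the zero-tolerant alternatives behave:

```python
def mape(actuals, forecasts):
    """Mean Absolute Percentage Error: undefined when any actual is zero."""
    return 100 * sum(abs(a - f) / abs(a)
                     for a, f in zip(actuals, forecasts)) / len(actuals)

def smape(actuals, forecasts):
    """Symmetric MAPE: the |a| + |f| denominator survives zero actuals
    (it only fails if the actual and the forecast are both zero)."""
    return 100 * sum(2 * abs(a - f) / (abs(a) + abs(f))
                     for a, f in zip(actuals, forecasts)) / len(actuals)

def mase(actuals, forecasts, history):
    """Mean Absolute Scaled Error (one common formulation): forecast MAE
    scaled by the MAE of a naive one-step forecast over the history."""
    naive_mae = sum(abs(history[t] - history[t - 1])
                    for t in range(1, len(history))) / (len(history) - 1)
    mae = sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)
    return mae / naive_mae

history = [12, 0, 9, 0, 0, 7, 11, 0]     # intermittent demand: zeros abound
actuals = [0, 8, 0, 10]                  # holdout period
forecasts = [3, 7, 2, 9]

print(f"sMAPE: {smape(actuals, forecasts):.1f}%")   # works despite zero actuals
print(f"MASE:  {mase(actuals, forecasts, history):.2f}")
# print(mape(actuals, forecasts))        # ZeroDivisionError: zeros in actuals
```

Note that sMAPE and MASE “work” here only in the mechanical sense; as the list above states, the resulting numbers are far less intuitive to explain to a planner than a simple percentage.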

The Problem with Starting AI/ML Projects Without the Forecast Error Worked Out Beforehand

AI, or any other method used to improve the forecast, requires a consistent and agreed-upon method of measuring forecast error. Yet most AI projects are begun before this is in place. When executives hear about AI, they often get excited and become more willing to open their wallets. IBM’s AI consulting division recently reported 20,000 ongoing AI projects (some of these are outside of forecasting, but many are in forecasting). And how many forecast error improvement projects does IBM have underway?

We would estimate very few.

Like IBM, Deloitte wants to sell you an AI project. How about a forecast error measurement project? Not so much. AI is hot. Forecast error measurement is decidedly “not.” 

Delving into forecast error improvement does not excite anyone. It can lead to eye-rolling, and to a concern among executives that they will be held responsible for problematic forecasting inputs if the measurement is too effective. There is also a general disdain for the mathematics of forecast error measurement, combined with the fact that most forecast error measurements are not much help in pointing companies to where they should focus their improvement efforts.

How can any AI forecast improvement project be approved without a reliable and proven forecast error measurement already in place?

Forecast Error Questions for Which the Company Should Already Have Answers

A high percentage of companies that have kicked off AI forecast improvement projects most likely cannot answer these and other questions around forecast error measurement:

  • Is the company going to report based on a weighted forecast error? (A sketch of a weighted error report appears after this list.)
  • How will the forecast error be used to drive forecast improvement?
  • Will low-volume SKUs with a high forecast error be targeted for more improvement effort than high-volume SKUs?
  • What is the mechanism for defining some product location combinations as “unforecastable”?
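
As one concrete illustration of the first and third questions, here is a hypothetical sketch of a volume-weighted error report; the product-location names, volumes, and error values are all invented:

```python
skus = [
    # (product-location combination, period volume, absolute percentage error)
    ("A-DC1", 10_000, 0.12),
    ("B-DC1",    500, 0.45),
    ("C-DC2",     10, 1.80),
]

# Weighting by volume keeps a terrible error on a 10-unit SKU from
# swamping the error on a 10,000-unit SKU.
total_volume = sum(v for _, v, _ in skus)
weighted = sum(v * e for _, v, e in skus) / total_volume
unweighted = sum(e for *_, e in skus) / len(skus)
print(f"volume-weighted error: {weighted:.1%}")    # ~13.7%
print(f"unweighted error:      {unweighted:.1%}")  # ~79.0%: a very different story

# Prioritize improvement effort by volume-at-error, not raw error alone.
for sku, v, e in sorted(skus, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{sku}: impact = {v * e:,.0f} error-weighted units")
```

On these invented numbers, the high-volume SKU with the lowest percentage error still deserves the most improvement effort, which is exactly the kind of answer an unweighted error report cannot give.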

Without answers to these and other questions, what is the point of kicking off the AI project?

Conclusion

Without a defined measurement schema, AI forecast improvement projects are pointless. Most of them are fruitless in any case, filled with exaggerated promises by both software vendors and consulting firms eager to ride the AI bubble to revenue enhancement.

Without a firm forecast error measurement schema, these projects are doubly undermined out of the gate: guaranteed to waste money and to distract from forecast improvement.

Let us recall: forecast error measurement should be the easier task. If a company cannot develop a forecast error measurement schema, what hope can it have of mastering the far more complex and speculative AI forecast improvement project?

Why Do the Standard Forecast Error Calculations Make Forecast Improvement So Complicated and Difficult?

It is important to understand forecast error, but the standard forecast error calculation methods do not provide this understanding. In particular, they do not tell the companies that forecast how to make improvements. If the standard forecast error calculations did, forecast error measurement would be far more straightforward, and companies would have a far easier time performing it.

What the Forecast Error Calculation and System Should Be Able to Do

One would be able to, for example:

  1. Measure forecast error.
  2. Compare forecast error across all the forecasts at the company.
  3. Sort the product location combinations based on which product locations lost or gained forecast accuracy relative to other forecasts.
  4. Measure any forecast against the baseline statistical forecast (see the sketch after this list).
  5. Weight the forecast error, so that progress for the overall product database can be tracked.
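
Requirements 2 through 4 amount to scoring every forecast source against a baseline per product location combination and ranking the results. Here is a hypothetical sketch of that comparison; the combinations, the series, and the choice of mean absolute deviation (MAD) as the metric are our own illustrative assumptions:

```python
def mad(actuals, forecasts):
    """Mean absolute deviation between actuals and one forecast series."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Invented actuals plus two competing forecasts per product location combination.
data = {
    "A-DC1": {"actuals": [100, 90, 110], "baseline": [95, 96, 97],
              "consensus": [120, 115, 130]},
    "B-DC2": {"actuals": [40, 55, 50], "baseline": [60, 60, 60],
              "consensus": [45, 52, 51]},
}

rows = []
for combo, d in data.items():
    base = mad(d["actuals"], d["baseline"])
    cons = mad(d["actuals"], d["consensus"])
    rows.append((combo, base, cons, base - cons))  # positive = consensus beat baseline

# Sort so the combinations where the consensus forecast lost the most
# accuracy against the baseline statistical forecast surface first.
for combo, base, cons, gain in sorted(rows, key=lambda r: r[3]):
    verdict = "gained" if gain > 0 else "lost"
    print(f"{combo}: baseline MAD {base:.1f}, consensus MAD {cons:.1f} ({verdict})")
```

The same comparison extends naturally to an AI/ML forecast: unless the new forecast beats the baseline on the combinations that matter, the project has not paid for itself.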

Getting to a Better Forecast Error Measurement Capability

A primary reason these things cannot be accomplished with the standard forecast error measurements is that the measurements are unnecessarily complicated, and the forecasting applications that companies buy are focused on generating forecasts, not on measuring forecast error beyond one product location combination at a time. After observing ineffective and non-comparative forecast error measurement at so many companies, we developed a purpose-built forecast error application, the Brightwork Explorer, in part to meet these requirements.

Few companies will ever use our Brightwork Explorer or have us use it for them. However, the lessons from the approach we followed in developing requirements for forecast error measurement are important for anyone who wants to improve forecast accuracy.