Knowing the Improvement from AI Without Knowing the Forecast Error?

Executive Summary

  • It is often stated that AI will greatly improve forecast accuracy.
  • Proponents of this view seem to assume that measuring the net improvement from AI on forecast accuracy will be a simple matter.

Introduction

At Brightwork, we have routinely punctured the hype around AI/ML and its projected opportunities to improve forecast accuracy. In this article, we ask a different question: how will companies know whether their expensive AI/ML project is actually improving forecast accuracy?

How Most Companies Fail to Effectively Measure Forecast Error

The degree to which forecast error measurement is performed in practice is generally greatly overestimated. One can find an enormous number of articles on the best way to measure forecast accuracy, but most of them focus on the measurement math. How forecast error is actually measured in companies receives far less coverage.

There are several issues that hold back companies from effectively measuring forecast error.

Issues That Hold Back Forecast Error Measurement

 
  1. Limitations in Forecasting Applications: Most forecasting applications only measure forecast error at the SKU level, and do not allow for measurement across the total product location database or for weighted forecast errors.
  2. Error Metrics: One of the most intuitive forecast error measurements, MAPE, is undermined when there are zeros in the demand history. And zeros are increasingly prevalent in sales histories.
  3. Zero-Tolerant Error Metrics Are Complex: Error metrics that can tolerate zeros in the demand history (such as sMAPE and MASE) are not intuitive, are complex to calculate, and are often not available within forecasting applications. (Points 2 and 3 are illustrated in the sketch after this list.)
  4. Groups Exempt from Forecast Error: Some groups in organizations submit inputs to the final forecast but are not held accountable for forecast error.
  5. Poor Education on Forecast Error: A basic understanding of forecast error is often lacking within companies. For example, the fact that forecast error changes completely depending upon the forecast bucket and the level in the hierarchy must often be repeatedly explained.
  6. Constant Error Reporting Discrepancies: Sales, marketing, and other groups report forecast error at higher levels of aggregation than supply chain management does. These higher levels of aggregation result in lower forecast errors, giving a false impression of the actual forecast error. For supply chain management, forecast error must be measured at the product location combination (or SKU). Some supply chain departments also report aggregated forecast error, again to make the error appear better than it is.
  7. A Lack of Error Measurement Automation: Because forecast error cannot be calculated with much nuance or customizability within forecasting applications, some automated method of measuring forecast error outside of those applications is necessary. The lack of this ability is often used as an excuse to report forecast error at higher levels of aggregation (see points 5 and 6 above for the problems with this).
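To make points 2 and 3 concrete, here is a minimal sketch in Python, using invented demand numbers, of how MAPE breaks down on a demand history containing zeros and how a zero-tolerant metric such as MASE remains defined. It illustrates the metrics' textbook definitions only; it is not code from any forecasting application.

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error: divides by actuals,
    so it blows up whenever an actual demand value is zero."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100

def mase(actual, forecast, history):
    """Mean Absolute Scaled Error: scales the forecast's error by the
    in-sample error of a one-step naive forecast, so zeros are tolerated."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    naive_mae = np.mean(np.abs(np.diff(np.asarray(history, float))))
    return np.mean(np.abs(actual - forecast)) / naive_mae

history  = [12, 0, 9, 0, 14, 7]   # intermittent demand with zeros
actual   = [0, 10, 0, 8]
forecast = [5, 9, 4, 8]

print(mape(actual, forecast))           # inf: MAPE is unusable here
print(mase(actual, forecast, history))  # ~0.25: finite and comparable
```

MASE reads as "error relative to a naive forecast," so a value below 1 beats the naive method. That is useful but, as point 3 notes, far less intuitive than a percentage.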

The Problem with Starting AI/ML Projects Without Forecast Error Measurement Worked Out Beforehand

AI, or any other method used to reduce forecast error, requires a consistent and agreed-upon method of measuring that error. Yet most AI projects are begun before this is in place. When executives hear about AI, they often get excited and become more willing to open their wallets. IBM’s AI consulting division recently reported 20,000 ongoing AI projects (some of these are outside of forecasting, but many are in forecasting). And how many forecast error improvement projects does IBM have underway?

We would estimate very few.

Like IBM, Deloitte clearly wants to sell you an AI project. How about a forecast error measurement project? Not so much. AI is hot. Forecast error measurement is decidedly “not.” 

Delving into forecast error measurement does not excite anyone. It can lead to eye-rolling, to concern among executives that they will be held responsible for problematic forecasting inputs if the measurement is too effective, and to a general disdain for the mathematics involved. It certainly is not very cinematic.

How can any AI forecast improvement project be approved without a solid and proven forecast error measurement process already in place?

Forecast Error Questions for Which the Company Should Already Have Answers

A high percentage of companies that have kicked off AI forecast improvement projects most likely do not have answers to these and other questions around forecast error measurement.

  • Is the company going to report on the basis of a weighted forecast error? (See the sketch after this list.)
  • How will the forecast error be used to drive forecast improvement?
  • Will low volume SKUs with a high forecast error be targeted for more improvement effort than high volume SKUs?
  • What is the mechanism for defining some of the product location database as “unforecastable?”
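As a minimal sketch of what is at stake in the weighted-error question, and in points 1 and 6 of the table above, the following Python fragment (with invented numbers) contrasts an unweighted average of SKU-level errors, a volume-weighted error, and the error measured after aggregating across SKUs:

```python
import numpy as np

# Hypothetical one-period demand and forecast for three SKUs.
actuals   = np.array([100.0, 20.0, 5.0])
forecasts = np.array([ 80.0, 30.0, 10.0])

# Per-SKU absolute percentage error.
ape = np.abs(actuals - forecasts) / actuals        # [0.20, 0.50, 1.00]

# Unweighted average: every SKU counts equally.
print(ape.mean())                                  # ~0.57

# Volume-weighted average: high-volume SKUs dominate,
# which better reflects their supply chain impact.
weights = actuals / actuals.sum()
print((ape * weights).sum())                       # 0.28

# Error computed after aggregating across SKUs: over- and
# under-forecasts cancel, so the error looks far better.
print(abs(actuals.sum() - forecasts.sum()) / actuals.sum())  # 0.04
```

The aggregate figure looks dramatically better only because over- and under-forecasts cancel out, which is exactly the false impression described in point 6 of the table.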

Without answers to these and other questions, what is the point of kicking off the AI project?

Conclusion

Without defining the measurement schema, AI forecast improvement projects are pointless. In fact, most of them are fruitless in any case and filled with exaggerated promises by both software vendors and consulting firms eager to ride the AI bubble to revenue enhancement.

Without a firm forecast error measurement schema, these projects are doubly undermined out of the gate: guaranteed to waste money and to distract from actual forecast improvement.

Let us recall that forecast error measurement should be the easier task. If a company cannot develop a forecast error measurement schema, what hope can it have of mastering the far more complex and speculative AI forecast improvement project?

Research Contact

  • Interested in Accessing Our Forecasting Research?

The software space is controlled by vendors, consulting firms, and IT analysts who often provide self-serving and incorrect advice at top rates.

    • We have a better track record of being correct than any of the well-known brands.
    • If this type of accuracy interests you, contact us and we will be in touch.

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products' forecasts to improve? What if the forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach to forecast error management. The Brightwork Explorer calculates no MAPE; instead, it calculates a monetized forecast error improvement from one forecast to another. We calculate that value for every product location combination, and the two forecasts can be any that you feed the system:

  • The first forecast may be the constant or the naive forecast.
  • Or the first forecast can be the statistical forecast and the second the statistical + judgment forecast.

It’s up to you.
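As an illustration of the general idea only (the field names and numbers below are hypothetical, not the Explorer's internals), monetizing the error difference between two forecasts per product location might look like this sketch:

```python
# Hypothetical per-product-location data: actual demand, two competing
# forecasts, and a unit cost used to monetize the error difference.
rows = [
    # (product_location, actual, forecast_a, forecast_b, unit_cost)
    ("P1@DC1", 100, 130, 110,  4.00),
    ("P2@DC1",  40,  35,  25,  1.50),
    ("P3@DC2",  10,  18,  12, 25.00),
]

total = 0.0
for loc, actual, f_a, f_b, cost in rows:
    # Monetize each forecast's absolute error; a positive difference
    # means forecast B is worth money relative to forecast A.
    improvement = (abs(actual - f_a) - abs(actual - f_b)) * cost
    total += improvement
    print(f"{loc}: {improvement:+.2f}")

print(f"Total monetized improvement: {total:+.2f}")
```

The design point is that a currency figure per product location makes it obvious where forecast improvement effort pays off, in a way that a unitless MAPE cannot.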

The Brightwork Forecast Explorer is free to use in the beginning.

References

“Data Challenges Are Halting AI Projects, IBM Executive Says,” The Wall Street Journal. https://www.wsj.com/articles/data-challenges-are-halting-ai-projects-ibm-executive-says-11559035800

Sales Forecasting Book

Sales and Statistical Forecasting Combined: Mixing Approaches for Improved Forecast Accuracy

The Problems with Combining Forecasts

In most companies, the statistical and sales forecasts are poorly integrated, and in fact, most companies do not know how to combine them. Strange questions are often asked, such as “does the final forecast match the sales forecast?”, without appropriate consideration of the accuracy of each input.

Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented – which comes down to the inputs that best improve forecast accuracy.

Is Everyone Focused on Forecast Accuracy?

Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always at the top of the agenda for all parties involved in forecasting.

By reading this book you will:

  • See the common misunderstandings that undermine being able to combine these different forecast types.
  • Learn how to effectively measure the accuracy of the various inputs to the forecast.
  • Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
  • Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, and the consensus forecast, and how to find the winning approach per forecasted item (a minimal sketch follows this list).
  • Learn how CRM supports (or does not support) the sales forecasting process.
  • Learn the importance of the quality of statistical forecast in improving the creation and use of the sales forecast.
  • Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.
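As a taste of the competition approach mentioned above, here is a minimal sketch (with hypothetical items, forecasts, and an arbitrary choice of MAE as the metric; the book's treatment is more complete) that scores each candidate forecast per item and keeps the winner:

```python
import numpy as np

# Invented actuals and candidate forecasts for two items.
actuals = {"item_1": [10, 12, 9], "item_2": [0, 5, 3]}
candidates = {
    "best_fit_stat": {"item_1": [11, 11, 10], "item_2": [2, 2, 2]},
    "sales":         {"item_1": [15, 15, 15], "item_2": [0, 5, 4]},
    "consensus":     {"item_1": [12, 13, 11], "item_2": [1, 4, 3]},
}

for item, act in actuals.items():
    act = np.asarray(act, float)
    # Score every candidate on this item with mean absolute error.
    scores = {name: float(np.mean(np.abs(act - np.asarray(fc[item], float))))
              for name, fc in candidates.items()}
    winner = min(scores, key=scores.get)
    print(item, scores, "-> winner:", winner)
```

Note that the winner differs by item, which is the point of running the competition per forecasted item rather than picking one approach globally.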

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
  • Chapter 3: The Common Problems with Statistical Forecasting
  • Chapter 4: Introduction to Best Fit Forecasting
  • Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
  • Chapter 6: Sales Forecasting
  • Chapter 7: Sales Forecasting and CRM
  • Chapter 8: Conclusion