How to Best Understand Forecast Bias

Executive Summary

  • Forecast bias is distinct from forecast error and is one of the most important keys to improving forecast accuracy.
  • Reducing bias means reducing the forecast input from biased sources.
  • A case study of how bias is accounted for at the UK Department of Transportation.

Introduction: The Truth Around Bias

Forecast bias is well known in the research literature but far less frequently admitted to within companies. You will learn how bias undermines forecast accuracy and why companies struggle to confront forecast bias.

Bias as the Uncomfortable Forecasting Area

Bias is an uncomfortable area of discussion for many because it describes how people who produce forecasts can be irrational and subject to subconscious biases. It also relates to how people consciously bias their forecasts in response to incentives. This discomfort is evident in many forecasting books that limit the discussion of bias to its purely technical measurement. No one likes to be accused of having a bias, which leads to bias being underemphasized. However uncomfortable it may be, it is one of the most important areas on which to focus to improve forecast accuracy.

What is Bias?

Forecast bias is a tendency for a forecast to be consistently higher or lower than the actual value. Forecast bias is distinct from forecast error in that a forecast can have any level of error but still be completely unbiased. For instance, even if a forecast is fifteen percent higher than the actual values half the time and fifteen percent lower than the actual values the other half of the time, it has no bias. But a forecast that is on average fifteen percent lower than the actual value has both a fifteen percent error and a fifteen percent bias. Bias can exist in statistical forecasting or judgment methods. However, it is much more common with judgment methods and is, in fact, one of the major disadvantages of judgment methods.

After bias has been quantified, the next question is the origin of the bias. With statistical methods, bias means that the forecasting model must either be adjusted or switched out for a different model. Grouping similar types of products, and testing for aggregate bias, can be a beneficial exercise for attempting to select more appropriate forecasting models.
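
As a sketch of what such a grouped bias test might look like (the product groups, demand history, and five percent tolerance below are illustrative assumptions, not data from the text):

```python
from collections import defaultdict

# Hypothetical demand history: (product_group, actual, forecast)
history = [
    ("beverages", 100, 112), ("beverages", 200, 215),
    ("snacks",    150, 149), ("snacks",     90,  92),
]

def aggregate_bias(history):
    """Aggregate bias per group: (total forecast - total actual) / total actual."""
    sums = defaultdict(lambda: [0.0, 0.0])  # group -> [sum of actuals, sum of forecasts]
    for group, actual, forecast in history:
        sums[group][0] += actual
        sums[group][1] += forecast
    return {group: (f - a) / a for group, (a, f) in sums.items()}

biases = aggregate_bias(history)
# Groups whose aggregate bias exceeds an (assumed) 5% tolerance become
# candidates for adjusting or switching the forecasting model.
flagged = [group for group, bias in biases.items() if abs(bias) > 0.05]
print(biases)   # beverages ~ +9%, snacks ~ +0.4%
print(flagged)  # ['beverages']
```

Aggregating before testing matters: offsetting item-level errors wash out, so a persistent group-level tilt stands out clearly.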

For judgment methods, bias can be conscious, in which case it is often driven by the institutional incentives provided to the forecaster. Bias can also be subconscious. A good example of subconscious bias is the optimism bias, which is a natural human characteristic. Forecasting bias can be like any other forecasting error, based upon a statistical model or judgment method that is not sufficiently predictive, or it can be quite different when it is premeditated in response to incentives. Bias is easy to demonstrate but difficult to eliminate, as exemplified by the financial services industry.

Forecast Bias List

  • Forecast bias is a tendency for a forecast to be consistently higher or lower than the actual value.
  • Forecast bias is distinct from forecast error. A forecast bias can be high while the forecast error is reasonable given the circumstances of what is forecasted. Alternatively, the bias can be low while the error is high. For instance, in service part forecasting, the bias is often low (in data sets tested by Brightwork Research & Analysis), but the error is normally high.
  • For instance, a forecast that is 15% higher than the actual half the time and 15% lower than the actual the other half has no bias at all. A forecast that is on average 15% lower than the actual value has both a 15% error and a 15% bias.
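
The distinction can be made concrete with a small sketch, using mean percentage error (MPE) as the bias measure and mean absolute percentage error (MAPE) as the error measure (one common convention; the numbers below are illustrative):

```python
def mpe(actuals, forecasts):
    """Mean percentage error: signed, so offsetting errors cancel (a bias measure)."""
    return sum((f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def mape(actuals, forecasts):
    """Mean absolute percentage error: unsigned, so errors never cancel."""
    return sum(abs(f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals  = [100, 100, 100, 100]
unbiased = [115, 85, 115, 85]   # 15% high half the time, 15% low the other half
biased   = [85, 85, 85, 85]     # always 15% low

print(mpe(actuals, unbiased))   # 0.0 -> no bias
print(mape(actuals, unbiased))  # ~0.15 -> still a 15% error
print(mpe(actuals, biased))     # ~-0.15 -> a 15% negative bias
print(mape(actuals, biased))    # ~0.15 -> and a 15% error
```

The signed measure exposes the consistent tilt that the unsigned measure hides: both series have identical MAPE, but only the second has bias.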

Bias Accounted for at the UK Department of Transportation

In addition to financial incentives that lead to bias, there is a proven observation about human nature: we overestimate our ability to forecast future events. We also have a positive bias—we project that events that we find desirable will be more prevalent in the future than they were in the past. This is one of the many well-documented human cognitive biases. Cognitive biases are part of our biological makeup and are influenced by evolution and natural selection. This human bias combines with institutional incentives to give good news and to provide positively-biased forecasts.

The UK Department of Transportation is keenly aware of bias and has developed cost uplifts that their project planners must use depending upon the type of project that is being estimated. An uplift is an increase over the initial estimate. Different project types receive different cost uplift percentages based upon the historical underestimation of each category of project.
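
Mechanically, applying an uplift is simple; the sketch below is illustrative only, with assumed uplift percentages rather than the department's official figures:

```python
# Assumed, illustrative uplift percentages by project category (not official values).
UPLIFTS = {"rail": 0.40, "building": 0.28, "it": 0.66}

def uplifted_estimate(initial_estimate, project_type):
    """Planner's initial estimate, increased by the category's historical uplift."""
    return initial_estimate * (1 + UPLIFTS[project_type])

print(uplifted_estimate(1_000_000, "rail"))  # ~1,400,000
```

The point of the mechanism is that the correction is applied by category, from historical underestimation, rather than negotiated project by project with the (optimistic) estimator.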

Measuring the Uplift

For instance, rail projects receive on average a forty percent uplift, building projects between four and fifty-one percent, and IT projects between ten and two hundred percent—the highest uplift and also the widest range of uplifts. A quotation from the official UK Department of Transportation document on this topic is telling:

“Our analysis indicates that political-institutional factors in the past have created a climate where only a few actors have had a direct interest in avoiding optimism bias.”

However, once individuals know that their forecast will be revised, they will tend to adjust it accordingly. Therefore, it is essential that adjustments to a forecast be performed without the forecaster’s knowledge. The UK Department of Transportation has taken active steps to identify both the source and the magnitude of bias within its organization. It has documented its project estimation bias for others to read and learn from. However, most companies refuse to address the existence of bias, much less actively remove it. Several reasons why most companies tend to shy away from addressing bias are listed in the graphics below:

How Large Can Bias Be in Supply Chain Planning?

Some research studies point out the issue with forecast bias in supply chain planning. According to research by Shuster, Unahobhokha, and Allen, forecast bias averaged roughly thirty-five percent in the consumer goods industry. They point to research by Kakouros, Kuettner, and Cargille (2002) in their case study of the impact of forecast bias on a product line produced by HP. They state: “Eliminating bias from forecasts resulted in a twenty to thirty percent reduction in inventory.”

Bias Identification Within the Application

All of this information is publicly available and can also be tracked inside companies by developing analytics from past forecasts. Companies often do not track the forecast bias from their different areas (and, therefore, cannot compare the variance), and they also do next to nothing to reduce this bias. Forecast bias can always be determined, regardless of the forecasting application used, by creating a report. However, it is preferable if the bias is calculated and easily obtainable from within the forecasting application. Bias tracking should be simple to do and quickly observable within the application without performing an export. Of the many demand planning vendors I have evaluated over the years, one vendor stands out in its focus on actively tracking bias: Right90. The application’s simple bias indicator, shown below, displays a forty percent positive bias, based on a historical analysis of the forecast.

Being able to track a person or forecasting group is not limited to bias; it is also useful for accuracy. For instance, the screenshot on the following page is from Consensus Point and shows the forecasters and groups with the highest “net worth.” This net worth is earned over time by providing accurate forecasting input.

Reducing Forecasting Inputs From Biased Forecasters

This provides a quantitative and less political way of lowering input from lower-quality sources. It also discourages participation from weak forecasters, as they can see that their input has less of an impact on the forecast. These types of performance dashboards exist in a few vendors’ applications, but forecasting accuracy could be significantly improved if they were universal.

In all forms of forecasting, an easy way to compare the performance of forecasters is a necessity. Forecast inputs must be tracked and reviewed, and adjustments must eventually be made, because there are wide quality differences between forecasters. These types of dashboards should be considered a best practice in forecasting software design.

The consensus-based vendors, Inkling Markets, Consensus Point and Right90, have the most significant focus on bias removal that I have seen. Why the statistical vendors lag in this area is an interesting question, and in my view can be rationally explained by the fact that judgment methods are known to have more bias than statistical methods.

Using Bias Removal as a Forecast Improvement Strategy

It’s very difficult to find a company that is satisfied with its forecast. I have yet to consult with a company whose forecast accuracy is anywhere close to the level it could be. Everything from the use of promotions, to the incentives companies set up internally, to poorly selected or configured forecasting applications stands in the way of accurate forecasts. I often arrive at companies and have to deliver the bad news about how their forecast systems are mismanaged, and I am sometimes asked by a director, worn out by funding continuous improvement initiatives for forecasting, “but why have our results not improved?” My answer is often that they are simply violating the rules established in scholarly sources for forecast management, and therefore they have poor outcomes. It is also rare to find a company that has a well-thought-out plan for improving its forecast accuracy.

Focusing on the Wrong Areas for Bias Removal

When I listen to the plans from executives to improve their forecast, they almost always focus on the wrong areas and miss out on some of the most straightforward ways to obtain forecast improvement. New software is usually seen as the magic bullet but can only be part of the solution.

One of the simplest (although not the easiest) ways of improving the forecast—removing the bias—is right under almost every company’s nose, but they often have little interest in exploring this option.

Addressing Forecast Bias

We measure bias on all of our forecasting projects. Measuring bias can bring significant benefits because it allows the company to adjust the forecast bias and to improve forecast accuracy.

  • The largest bias by far tends to come from judgment methods, and within this category is sales forecasting.
  • A primary reason for this is that sales wants to ensure product availability, and sales is not measured on inventory turns or inventory investment.

Some companies are unwilling to address their sales forecast bias for political reasons. But a major reason for their trepidation is that they have never actually measured their forecast bias across all of the forecast inputs. Without the actual data, they are less willing to confront the entities within their company that are damaging forecast accuracy.

Conclusion

Forecast bias is quite well documented inside and outside of supply chain forecasting. Bias arises both from external factors, such as incentives provided by institutions, and from basic human nature. How much institutional demands influence forecast bias is an interesting field of study, made even more interesting and perplexing in that so little is done to minimize the incentives for bias. Properly timed biased forecasts are part of the business model for many investment banks, which release positive forecasts on investments that they already own. The so-called “pump and dump” is an ancient money-making technique. Investment banks promote positive biases in their analysts, just as supply chain sales departments promote negative biases by continuing to use a salesperson’s forecast as their quota. These institutional incentives have changed little in many decades, despite never-ending talk of changing them. Even though it is well known that such incentives lower forecast quality, they persist, in conflict with all of the research in the area of bias.

The easiest way to remove bias is to remove the institutional incentives for it. However, as few companies are willing to do so, it falls to the actual forecast process, where adjustments are made in the demand planning software after biased forecasts have been entered. Software designed around the mitigation of forecast bias can help highlight bias and can provide mechanisms to adjust it within the application. Within the application, there should be the ability to identify bias and the ability to adjust it quickly and easily.

How Common is This?

Companies by and large do not ask for this, so software vendors do not develop bias identification in their software (and do not build bias identification as a main component of the user interface). Bias identification is important enough that it should have its own dashboard, or view, within every demand planning application, not only for general ease of use but because adjusting for bias is about more than identification and adjustment; it is also about making the case.

The case for bias can best be made in a presentation format, to demonstrate to others that the bias exists and that action should be taken to minimize its effect on the final forecast. When bias is demonstrated in this way, it is more difficult to dispute.

If conversations about bias are kept at a high level and are not demonstrated with a visual aid that shows the bias clearly, the groups that produced the biased forecast will offer all types of excuses as to why there was, in fact, no bias. The application’s bias dashboard should support that presentation by showing bias across many products and from different vantage points in real time. Bias can be identified by many criteria, including:

  1. Bias by individuals
  2. Bias by an overall department
  3. Bias by product and geography, etc.
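
A sketch of that kind of slicing (the field names and records below are assumptions for illustration, not a vendor's actual data model):

```python
from collections import defaultdict

# Hypothetical forecast history: (forecaster, department, product, actual, forecast)
records = [
    ("alice", "sales", "A", 100, 130),
    ("alice", "sales", "B", 200, 230),
    ("bob",   "ops",   "A", 150, 145),
]

def bias_by(records, dimension):
    """Aggregate bias sliced along one dimension: forecaster, department, or product."""
    index = {"forecaster": 0, "department": 1, "product": 2}[dimension]
    sums = defaultdict(lambda: [0.0, 0.0])  # key -> [sum of actuals, sum of forecasts]
    for record in records:
        key = record[index]
        sums[key][0] += record[3]  # actual
        sums[key][1] += record[4]  # forecast
    return {key: (f - a) / a for key, (a, f) in sums.items()}

print(bias_by(records, "forecaster"))  # alice ~ +20%, bob slightly negative
print(bias_by(records, "department"))
print(bias_by(records, "product"))
```

Being able to pivot the same history by individual, department, and product is what lets the presenter answer the "there was a good reason" objection with the specific slice where the bias lives.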

Bias information must be detailed, because those with a biased forecast will most often push back by saying there was a good reason for the forecast at the time, or that the person tabulating the bias does not have sufficient detail to claim bias. Therefore, it is not sufficient for the dashboard to allow an expert to determine the existence of bias; the application should present the evidence well enough to make the case to people who are not forecasting experts. The software should provide decision support. But ultimately, there must be the institutional will to eliminate bias, and that cannot be obtained from a software application.

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products’ forecasts to improve? What if the forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach to forecast error management. The Brightwork Explorer does not calculate MAPE; instead, it calculates a monetized forecast error improvement from one forecast to another. We calculate that value for every product-location combination, and the inputs can be any two forecasts you feed the system:

  • The first forecast may be a constant or a naive forecast.
  • The first forecast can be a statistical forecast and the second the statistical + judgment forecast.

It’s up to you.
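
As a sketch of what a monetized comparison between two forecasts could look like (the costing rule, names, and numbers are assumptions for illustration; the actual Brightwork calculation may differ):

```python
def monetized_error(actuals, forecasts, unit_cost):
    """Cost the absolute unit error. A real tool might weight over- and
    under-forecasting differently (e.g., holding cost vs. lost sales)."""
    return sum(abs(f - a) for a, f in zip(actuals, forecasts)) * unit_cost

def monetized_improvement(actuals, baseline, candidate, unit_cost):
    """Money saved per product-location by moving from baseline to candidate."""
    return (monetized_error(actuals, baseline, unit_cost)
            - monetized_error(actuals, candidate, unit_cost))

actuals     = [100, 120, 90]   # demand history for one product-location
naive       = [100, 100, 100]  # baseline: a naive forecast
statistical = [105, 115, 92]   # candidate: a statistical forecast
print(monetized_improvement(actuals, naive, statistical, unit_cost=8.0))  # 144.0
```

Expressing the comparison in money rather than MAPE makes the inexpensive-item case above explicit: a large percentage error on a cheap item can still be worth less attention than a small error on an expensive one.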

The Brightwork Forecast Explorer is free to use in the beginning.

References

Forecast bias is covered in detail in the following book.

Forecasting Software Book

Supply Chain Forecasting Software

Providing A Better Understanding of Forecasting Software

This book explains the critical aspects of supply chain forecasting. It is designed to help readers get more out of their current forecasting system, and it explains some of the best functionality in forecasting, which may not be resident in the reader’s current system, as well as how that functionality can be accessed at low cost.

The book breaks down what is often taught as a complex subject into simple terms and provides information that can be immediately put to use by practitioners. It is one of the few books to showcase a variety of supply chain forecasting vendors.

Getting the Leading Edge

The book also provides the reader with a look into the forefront of forecasting. Several concepts that are covered, while currently available in forecasting software, have yet to be widely implemented or even written about. The book moves smoothly from ideas to screenshots and descriptions of how the features are configured and used. This gives the reader some of the most intriguing areas of functionality within a variety of applications.

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Forecasting Fits Within the Supply Chain Planning Footprint
  • Chapter 3: Statistical Forecasting Explained
  • Chapter 4: Why Attributes-based Forecasting is the Future of Statistical Forecasting
  • Chapter 5: The Statistical Forecasting Data Layer
  • Chapter 6: Removing Demand History and Outliers
  • Chapter 7: Consensus-based Forecasting Explained
  • Chapter 8: Collaborative Forecasting Explained
  • Chapter 9: Bias Removal
  • Chapter 10: Effective Forecast Error Management
  • Chapter 11: Lifecycle Planning
  • Chapter 12: Forecastable Versus Unforecastable Products
  • Chapter 13: Why Companies Select the Wrong Forecasting Software
  • Chapter 14: Conclusion
  • Appendix A:
  • Appendix B: Forecast Locking
  • Appendix C: The Lewandowski Algorithm.