Last Updated on March 24, 2021 by HostingandOther
- Forecast bias is distinct from forecast error and is one of the most important keys to improving forecast accuracy.
- Reducing bias means reducing the forecast input from biased sources.
- A case study of how bias was accounted for at the UK Department of Transportation.
*This article has been significantly updated as of Feb 2021.
Video Introduction: How to Understand Forecast Bias
Text Introduction (Skip if You Watched the Video)
Forecast bias is distinct from forecast error and is one of the most important keys to improving forecast accuracy. It is a tendency for a forecast to be consistently higher or lower than the actual value. Forecast bias is well known in the research, however far less frequently admitted to within companies. You will learn how bias undermines forecast accuracy and the problems companies have from confronting forecast bias. We will also cover why companies, more often than not, refuse to address forecast bias, even though it is relatively easy to measure.
Our References for This Article
If you want to see our references for this article and other Brightwork related articles, see this link.
Bias as the Uncomfortable Forecasting Area
Bias is an uncomfortable area of discussion because it describes how people who produce forecasts can be irrational and have subconscious biases. This relates to how people consciously bias their forecast in response to incentives. This discomfort is evident in many forecasting books that limit the discussion of bias to its purely technical measurement. No one likes to be accused of having a bias, which leads to bias being underemphasized. However, uncomfortable as it may be, it is one of the most critical areas to focus on to improve forecast accuracy.
What is Bias?
Forecast bias is a tendency for a forecast to be consistently higher or lower than the actual value. Forecast bias is distinct from forecast error in that a forecast can have any level of error but still be completely unbiased. For instance, even if a forecast is fifteen percent higher than the actual values half the time and fifteen percent lower than the actual values the other half of the time, it has no bias. But a forecast that is, on average, fifteen percent lower than the actual value has both a fifteen percent error and a fifteen percent bias. Bias can exist in statistical forecasting or judgment methods. However, it is much more prevalent with judgment methods and is, in fact, one of the major disadvantages of judgment methods.
After bias has been quantified, the next question is the origin of the bias. With statistical methods, bias means that the forecasting model must either be adjusted or switched out for a different model. Grouping similar types of products, and testing for aggregate bias, can be a beneficial exercise for attempting to select more appropriate forecasting models.
For judgment methods, bias can be conscious, in which case it is often driven by the institutional incentives provided to the forecaster. Bias can also be subconscious. An excellent example of unconscious bias is the optimism bias, which is a natural human characteristic. Forecasting bias can be like any other forecasting error, based upon a statistical model or judgment method that is not sufficiently predictive, or it can be quite different when it is premeditated in response to incentives. Bias is easy to demonstrate but difficult to eliminate, as exemplified by the financial services industry.
Forecast Bias List
- Forecast bias is a tendency for a forecast to be consistently higher or lower than the actual value.
- Forecast bias is distinct from forecast error. A forecast bias can be high, but with a reasonable forecast error given the forecasted circumstances. Alternatively, a forecast bias can be low, but with a high error. For instance, in in-service part forecasting, the bias is often (that is, in data sets tested by Brightwork Research & Analysis) low, but the error usually is high.
- For instance, a forecast which is ½ the time 15% higher than the actual, and ½ of the time 15% lower than the actual has no bias. A forecast which is, on average, 15% lower than the actual value has both a 15% error and a 15% bias.
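The distinction between error and bias can be shown with a short calculation. In this sketch, mean percentage error (MPE) is used as the signed, bias-detecting metric and mean absolute percentage error (MAPE) as the unsigned error metric; the data values are invented to mirror the 15% example above.

```python
def mpe(actuals, forecasts):
    """Mean percentage error: signed, so offsetting errors cancel -> measures bias."""
    return sum((f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def mape(actuals, forecasts):
    """Mean absolute percentage error: unsigned -> measures error regardless of direction."""
    return sum(abs(f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals  = [100, 100, 100, 100]
unbiased = [115, 85, 115, 85]   # 15% high half the time, 15% low the other half
biased   = [85, 85, 85, 85]     # always 15% low

print(mpe(actuals, unbiased))   # 0.0 -> no bias
print(mape(actuals, unbiased))  # about 0.15 -> but a 15% error
print(mpe(actuals, biased))     # about -0.15 -> a 15% negative bias
print(mape(actuals, biased))    # about 0.15 -> and a 15% error
```

The unbiased forecast has the same error as the biased one, but its signed errors cancel out, which is exactly what the bias metric detects.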
Bias Accounted for at the UK Department of Transportation
In addition to financial incentives that lead to bias, there is a proven observation about human nature: we overestimate our ability to forecast future events. We also have a positive bias—we project that we find desirable events will be more prevalent in the future than they were in the past. This is one of the many well-documented human cognitive biases. Cognitive biases are part of our biological makeup and are influenced by evolution and natural selection. This human bias combines with institutional incentives to give good news and to provide positively-biased forecasts.
The UK Department of Transportation is keenly aware of bias. It has developed cost uplifts that their project planners must use depending upon the type of project estimated. Uplift is an increase over the initial estimate. Different project types receive different cost uplift percentages based upon the historical underestimation of each category of project.
Measuring the Uplift
For instance, on average, rail projects receive a forty percent uplift, building projects between four and fifty-one percent, and IT projects between ten and two hundred percent—the highest uplift and the broadest range of uplifts. A quotation from the official UK Department of Transportation document on this topic is telling:
“Our analysis indicates that political-institutional factors in the past have created a climate where only a few actors have had a direct interest in avoiding optimism bias.”
However, once an individual knows that their forecast will be revised, they will adjust their forecast accordingly. Therefore, adjustments to a forecast must be performed without the forecaster’s knowledge. The UK Department of Transportation has taken active steps to identify both the source and magnitude of bias within their organization. They have documented their project estimation bias for others to read and to learn from. However, most companies refuse to address the existence of bias, much less actively remove bias.
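As a rough sketch of how such uplifts could be applied, the function below adjusts an initial estimate by a category uplift. The uplift table uses the percentages quoted above; where a range was quoted, the midpoint is chosen arbitrarily for illustration, and this is not the UK Department of Transportation's actual procedure.

```python
# Hypothetical uplift table based on the percentages quoted above.
# Where a range was quoted (building: 4%-51%, IT: 10%-200%), the
# midpoints used here are arbitrary illustrations.
UPLIFTS = {
    "rail": 0.40,       # ~40% average uplift
    "building": 0.275,  # midpoint of the 4%-51% range
    "it": 1.05,         # midpoint of the 10%-200% range
}

def uplifted_estimate(initial_estimate, project_type):
    """Apply a bias-correcting uplift to an initial project cost estimate."""
    return initial_estimate * (1 + UPLIFTS[project_type])

print(uplifted_estimate(1_000_000, "rail"))  # 1400000.0
```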
How Large Can Bias Be in Supply Chain Planning?
Some research studies point out the issue with forecast bias in supply chain planning. According to Schuster, Unahabhokha, and Allen, forecast bias averaged roughly thirty-five percent in the consumer goods industry. They point to research by Kakouros, Kuettner, and Cargille (2002) in their case study of forecast bias's impact on a product line produced by HP. They state:
“Eliminating bias from forecasts resulted in a twenty to thirty percent reduction in inventory.”
Bias Identification Within the Application
All of this information is publicly available and can also be tracked inside companies by developing analytics from past forecasts. Companies often do not track the forecast bias from their different areas (and, therefore, cannot compare the variance), and they also do next to nothing to reduce this bias. Part of this is because companies are too lazy to measure their forecast bias.
At this point let us take a quick timeout to consider how to measure forecast bias in standard forecasting applications.
Interlude: How is Bias Tracked in Applications?
Forecast bias is generally not tracked in most forecasting applications in terms of outputting a specific metric. However, one can easily compare the historical demand to the historical forecast line to see whether the historical forecast sits above or below the historical demand.
The problem with doing this is that normally only the final forecast ends up being tracked in the forecasting application (the other forecasts often live in other systems). Yet each forecast has to be measured for bias, not just the final forecast, which is an amalgamation of multiple forecasts.
Observe in this screenshot how the previous forecast is lower than the historical demand in many periods.
This is a negatively biased forecast.
The bias does not hold for the entire historical time frame; earlier and later, the forecast is much closer to the historical demand. However, this is the final forecast. To determine which forecast is responsible for this bias, the final forecast must be decomposed, or the original forecasts that drove it measured individually.
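One way to attribute bias to the inputs that fed the final forecast is to measure each input series against the actuals separately. A minimal sketch, with invented source names and numbers:

```python
def bias_pct(actuals, forecasts):
    """Signed mean percentage deviation of forecast from actual (bias)."""
    return sum((f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [120, 110, 130, 125]

# Hypothetical input forecasts that were blended into the final forecast.
inputs = {
    "statistical": [118, 112, 128, 126],  # close to actuals, errors cancel
    "sales":       [100,  95, 110, 105],  # consistently low
    "marketing":   [140, 135, 150, 148],  # consistently high
}

# Measure each source separately to see where the bias originates.
for source, forecast in inputs.items():
    print(f"{source}: {bias_pct(actuals, forecast):+.1%}")
```

Run against the actuals, the sales input shows a strong negative bias and the marketing input a strong positive one, even though the blended final forecast might look only mildly biased.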
The Political Implications of Pointing Out Forecast Bias
Picking up where we left off before that interlude: confronting forecast bias means risking yourself politically, because many people in the organization want to continue to work their financial bias into the forecast. And they do not like being told they can't. In fact, they will bristle at the idea that they have any financial bias and will typically point fingers back at the person who points it out. And if you prove with all the numbers that their forecast was biased, they will often still deny it by coming up with an excuse for why "something changed" and that this was why their forecast was off. This extends beyond forecasting, as people generally think they are far more objective than they are. It is difficult for even salespeople to admit that they may have some bias in presenting their products versus a competitor's products. Typically, a person who is 100% biased will make a statement like the following:
Ok, I admit I might be a little bit biased.
A big part of being biased is trying to diminish the concept of bias altogether. I have witnessed numerous occasions where a person tried to disarm me by stating that their negatively biased forecast was "no big deal." One powerful way of doing this is to make fun of the very concept of bias and say:
Look, everyone has a bias.
In this case, a person with a financial bias will try to conflate a preference with a financial bias. That is, if a person likes a certain type of movie, they can be said to be "biased," when this is better described as a preference.
This is a deliberate act of deception, and this muddies the water as the most powerful biases that impact forecasting are financial biases (a sales quota, a desire to make marketing look good by proposing a new product will be wildly successful), not personal preferences. I am not proposing that one can’t have preferences. I am explaining that removing biases from forecasts improves their accuracy. We can both remove forecast bias from forecasts, and continue to have movie preferences, and root for our favorite sports team. These two things don’t have much to do with each other.
Those who are finally cornered on a financial bias will often say something like:
That guy was rude (the person pointing out the financial bias), what an as*****!
Politeness often seems to end up being not pointing out financial bias and allowing the financially biased individual to continue to misinform others that they are as objective or nearly as objective as anyone else.
Keeping the Pretense of Objectivity Alive
Part of submitting biased forecasts is pretending that they are not biased. Companies are not environments where "truths" are brought forward and the person with the truth on their side wins. People are considering their careers and try to bring up issues only when they think they can win those debates. Sales and marketing, where most of the forecasting bias resides, are powerful entities, and they will push back politically when challenged. These are also two departments where employees are specifically selected for their willingness and effectiveness in departing from reality. Each wants to submit biased forecasts and then let the implications be someone else's problem. This is covered in more detail in the article Managing the Politics of Forecast Bias.
It is amusing to read other articles on this subject and see so many of them focus on how to measure forecast bias. There is no complex formula required to measure forecast bias, and measurement is the least of the problems in addressing it. Every one of these articles that I reviewed entirely left out the topics addressed in the article you are reading, topics of far greater consequence than the specific calculation of bias, which is child's play. One only needs the positive or negative differential per period of the forecast versus the actuals, and then a metric of the scale and frequency of that differential.
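To show how simple the calculation is, here is one common metric of that kind: the tracking signal, the cumulative signed error divided by the mean absolute deviation. The data values are invented for illustration.

```python
def tracking_signal(actuals, forecasts):
    """Cumulative signed error divided by mean absolute deviation (MAD).
    Values far from zero (a common rule of thumb is beyond +/-4) indicate
    persistent bias rather than random error."""
    errors = [a - f for a, f in zip(actuals, forecasts)]
    cumulative_error = sum(errors)
    mad = sum(abs(e) for e in errors) / len(errors)
    return cumulative_error / mad

actuals      = [100, 105, 98, 110, 102, 107]
low_forecast = [90,  95,  90, 100, 92,  97]   # consistently under-forecasts

print(tracking_signal(actuals, low_forecast))  # about 6: strong positive signal
```

Because every period's error has the same sign, the cumulative error grows rather than cancelling, which pushes the signal well past the usual control limits.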
Forecast bias can always be determined regardless of the forecasting application used by creating a report. However, it is preferable if the bias is calculated and easily obtainable from within the forecasting application. Bias tracking should be simple to do and quickly observed within the application without performing an export.
But it isn’t.
Of the many demand planning vendors I have evaluated over the years, only one stands out in its focus on actively tracking bias: Right90. The application's simple bias indicator, shown below, shows a forty percent positive bias, based on a historical analysis of the forecast.
Being able to track a person or forecasting group is not limited to bias; it is also useful for accuracy. For instance, the following screenshot is from Consensus Point and shows the forecasters and groups with the highest "net worth." This net worth is earned over time by providing accurate forecasting input.
However, most companies use forecasting applications that do not have a numerical statistic for bias.
Reducing Forecasting Inputs From Biased Forecasters
This provides a quantitative and less political way of lowering input from lower-quality sources. It also promotes less participation from weak forecasters, as they can see that their input has less impact on the forecast. These performance dashboards exist in a few vendors, but forecasting accuracy could be significantly improved if they were universal.
In all forms of forecasting, an easy way to compare the performance of forecasters is a necessity. Forecast inputs must be tracked and reviewed, and adjustments must eventually be made because there are vast quality differences between forecasters. These types of dashboards should be considered a best practice in forecasting software design.
The consensus-based vendors, Inkling Markets, Consensus Point, and Right90, have the most significant focus on bias removal that I have seen. Why the statistical vendors lag in this area is an interesting question. In my view, it can be rationally explained by the fact that judgment methods are known to have more bias than statistical methods.
Using Bias Removal as a Forecast Improvement Strategy
It’s tough to find a company that is satisfied with its forecast. I have yet to consult with a company with a forecast accuracy anywhere close to the level that it really could be.
Everything from the use of promotions, to the incentives companies set up internally, to poorly selected or configured forecasting applications stands in the way of accurate forecasts. I often arrive at companies and deliver the bad news about how their forecast systems are mismanaged. I am sometimes asked by a director, worn out by funding continuous improvement initiatives for forecasting:
“But why have our results not improved?”
My answer is often that they are merely violating the rules established in scholarly sources for forecast management, and therefore they have poor outcomes. However, it is also rare to find a company that has a well-thought-out plan for improving its forecast accuracy.
Focusing on the Wrong Areas for Bias Removal
When I listen to executives’ plans to improve their forecast, they almost always focus on the wrong areas and miss out on some of the most straightforward ways to obtain forecast improvement. The new software is usually seen as a magic bullet but can only be part of the solution.
One of the simplest (although not the easiest) ways of improving the forecast—removing the bias—is right under almost every company’s nose. Still, they often have little interest in exploring this option.
Addressing Forecast Bias
We measure bias on all of our forecasting projects. Measuring bias can bring significant benefits because it allows the company to adjust the forecast bias and improve forecast accuracy.
- The most significant bias by far tends to come from judgment methods. Within this category, sales forecasting tends to have the highest bias, as we cover in the article A Frank Analysis of Deliberate Sales Forecast Bias.
- A primary reason for this is that sales wants to ensure product availability, and sales is not measured on inventory turns or inventory investment.
Some companies are unwilling to address their sales forecast bias for political reasons. But a significant reason for their trepidation is they have never actually measured their forecast bias from all the forecast inputs. Therefore without the actual data, they are less willing to confront entities within their company, damaging forecast accuracy.
The Current State of Forecasting Bias
It’s challenging to find a company that is satisfied with its forecast. I have yet to consult with a company that is forecasting anywhere close to the level that it could. Everything from the business design to poorly selected or configured forecasting applications stands in the way of this objective. However, it is just as rare to find a company with any realistic plan for improving its forecast.
General ideas, such as using more sophisticated forecasting methods or changing the forecast error measurement interval, are typically dead ends. One of the easiest ways to improve the forecast is right under almost every company’s nose, but they often have little interest in exploring this option. This method is to remove the bias from their forecast.
The Biased Forecast
As pointed out in a paper on MPS by Schuster, Unahabhokha, and Allen:
Although forecast bias is rarely incorporated into inventory calculations, an example from industry does make mention of the importance of dealing with this issue. Kakouros, Kuettner and Cargille provide a case study of the impact of forecast bias on a product line produced by HP. They state that eliminating bias from forecasts resulted in a 20 to 30 percent reduction in inventory while still maintaining high levels of product availability. Similar results can be extended to the consumer goods industry where forecast bias is prevalent.
While several research studies point out the issue with forecast bias, companies do next to nothing to reduce it, even though there is a substantial emphasis on consensus-based forecasting concepts. Forecasting bias is endemic throughout industry, yet few companies actively address the topic.
What Type of Bias?
It’s important to differentiate a simple consensus-based forecast from a consensus-based forecast with the bias removed. However, removing the bias from a forecast would require a backbone. That is, we would have to declare the forecast quality that comes from different groups explicitly. Few companies would like to do this.
The first step in managing this is retaining the metadata of forecast changes. This includes who made the change, when they made it, and so on.
After this information is recorded, bias can be removed.
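A minimal sketch of what retaining that metadata and then computing bias per contributor could look like. The record structure and names here are assumptions for illustration, not any particular application's schema:

```python
from collections import defaultdict

# Hypothetical change log: each forecast change is recorded with who made
# it, the period it applies to, the value after the change, and (later)
# the eventual actual for that period.
change_log = [
    ("alice", "2021-01", 120, 100),
    ("alice", "2021-02", 115, 98),
    ("bob",   "2021-01", 95,  100),
    ("bob",   "2021-02", 101, 98),
]

def bias_by_user(log):
    """Average signed percentage deviation of each contributor's changes."""
    deviations = defaultdict(list)
    for user, _period, forecast, actual in log:
        deviations[user].append((forecast - actual) / actual)
    return {user: sum(d) / len(d) for user, d in deviations.items()}

for user, bias in bias_by_user(change_log).items():
    print(f"{user}: {bias:+.1%}")
```

With the metadata retained, the consistently high contributor stands out immediately, and their input can be adjusted before it reaches the final forecast.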
Forecast bias is quite well documented inside and outside of supply chain forecasting. Bias arises both from external factors, such as the incentives provided by institutions, and from human nature itself. How much institutional demands for bias influence forecast bias is an interesting field of study. It is a subject made even more interesting and perplexing in that so little is done to minimize incentives for bias. Properly timed biased forecasts are part of the business model for many investment banks that release positive forecasts on their own investments. The so-called “pump and dump” is an ancient money-making technique. Investment banks promote positive biases for their analysts, just as supply chain sales departments promote negative biases by continuing to use a salesperson’s forecast as their quota. These institutional incentives have changed little in many decades, even though there is never-ending talk of replacing them, and even though it is well known that they lower forecast quality. They persist despite conflicting with all of the research in the area of bias.
How to Remove Bias?
The easiest way to remove bias is to remove the institutional incentives for bias.
Yet, few companies actually are interested in confronting the incentives they create for forecast bias. As we cover in the article How to Keep Forecast Bias Secret, many entities (companies, government bodies, universities) want to continue their forecast bias. For some, having a forecast bias is an essential part of their business model.
For those interested in removing forecast bias, software designed to mitigate forecast bias can help highlight bias and provide mechanisms to adjust it within the application. Within the application, there should be the ability to identify bias and adjust bias quickly and easily.
How Common are Requests for Bias Removal from Forecasts by Companies?
Companies, by and large, do not ask for or discuss bias removal. They want forecast accuracy improvement but are generally blind to the topic of bias. Within any company or any entity, large numbers of people contribute information to various planning processes that have an easily measurable bias, and they do not appreciate having it pointed out.
Outside of judgment forecasting software, software vendors do not build bias identification into their applications, much less make it a central component of the user interface. Bias identification is important enough to deserve its own dashboard, or view, within all demand planning applications; not only for general ease of use, but because adjusting for bias is about more than identification and adjustment. It is also about making the case.
Many people benefit from providing forecast bias. And if there is no cost to them, they will continue to provide a forecast with bias. For example, marketing is going to overstate their new product forecast because it makes them look like they are adding more value than they are to the company.
The Importance of Exposing Forecast Bias
- The case for bias can best be made in a presentation format to demonstrate to others that the bias exists, and the action should be taken to minimize its effect on the final forecast.
- When a bias is demonstrated in this way, it’s more difficult to dispute. However, the challenges in attempting to remove bias should not be underestimated, even after the bias is pointed out.
If conversations about bias are kept at a high level and not demonstrated with a visual aid that shows the bias clearly, the groups that produced the biased forecast will offer all types of excuses as to why there was, in fact, no bias. The application’s bias dashboard should support that presentation by showing bias across many products and from different vantage points in real time. Bias can be identified by many criteria, including:
- Bias by individuals
- Bias by an overall department
- Bias by product and geography, etc.
Bias information must be detailed because those with a biased forecast will most often push back by saying there was a good reason for the forecast at the time. However, the reasons provided don’t change a bad or biased forecast. Anyone can come up with an excuse as to why something they predicted did not occur.
Comparative Forecast Error Measurement
To determine the bias of a forecast, one must have the ability to measure comparative forecast accuracy efficiently. As we cover in the article Forecast Error Myth #5: Non-Comparative Forecast Error Measurement is Helpful, there is a strong myth that one does not need to perform comparative forecast error. And related to this myth is a second myth that forecast error is effectively measured in forecasting applications, as we cover in the article Forecast Error Myth #4: Most Forecast Error Measurement Supports Identifying How to Improve Forecast Accuracy.
As companies tend to lack an automated way of performing comparative forecast error measurement, there is often little understanding of how much other forecast methods (or manual overrides, marketing adjustments, etc.) improve or degrade the forecast error.
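A comparative measurement can be sketched as follows: compute the same error metric for each forecast stage in sequence and report whether each stage improved on the one before it. The stage names and numbers are invented for illustration:

```python
def mape(actuals, forecasts):
    """Mean absolute percentage error, used as the common yardstick."""
    return sum(abs(f - a) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [200, 210, 190, 205]

# Hypothetical successive forecast stages: a statistical baseline,
# followed by a manual sales override of that baseline.
stages = [
    ("statistical baseline", [195, 205, 195, 200]),
    ("sales override",       [230, 240, 220, 235]),
]

# Compare each stage's error to the previous stage's error.
previous_error = None
for name, forecast in stages:
    error = mape(actuals, forecast)
    verdict = ""
    if previous_error is not None:
        verdict = " (improved)" if error < previous_error else " (degraded)"
    print(f"{name}: MAPE {error:.1%}{verdict}")
    previous_error = error
```

In this invented example the override roughly sextuples the error, which is exactly the kind of degradation that goes unnoticed when only the final forecast's error is measured.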