How to Understand Consensus Forecasting Methods vs Statistical Forecasting Methods


Executive Summary

  • Statistical forecasting is normally combined with judgment methods produced by sales and marketing to arrive at a consensus forecast.
  • We compare these different forecasting approaches.

Introduction to Consensus Forecasting

There are three categories of obtaining forecast input from different people that apply to supply chain forecasting:

  1. Consensus forecasting
  2. Manual overrides to statistical forecasting systems
  3. Collaborative forecasting

The Conventional Wisdom

Presently, there is a general concept at play that getting more people involved in adjusting the forecast will improve forecast accuracy. This is partly a holdover from consensus forecasting, which requires many people to produce a forecast. However, the lesson from both practical observation and the research is that restricting access to adjusting the forecast is one of the most critical components of improving forecast accuracy.

This is discussed in my book Supply Chain Forecasting Software, and I have included a quote below from the book:

“Many stories on CBF seem to center around just getting more people to participate, as if un-moderated participation has been proven to generate high quality forecasts. The real story about CBF is considerably more complicated. In fact, CBF is very much a process of receiving input and then performing analytical filtering to remove or reduce the impact of individuals or groups with poor forecasting accuracy. This part of CBF is under-emphasized, probably because it’s not as “feel good” of a story, and the next logical question is “who is going to get their input reduced?” It also brings up the question of how this topic is raised during the implementation of CBF projects.”

Important Rules Around Consensus Forecasting – CBF

And what applies to consensus forecasting is also true of the other two methods that I have listed.

Not all manual forecast adjustments are created equal. Some groups, as well as some specific individuals, tend to have better forecast accuracy than others. Secondly, the most effective manual forecast adjustments are those where the forecast is brought down by a significant amount. A research study on this topic is explained by Michael Gilliland:

“Robert Fildes and Paul Goodwin from the UK reported on a study of 60,000 forecasts at four supply chain companies and published the results in the Fall 2007 issue of Foresight: The International Journal of Applied Forecasting. They found that about 75 percent of the time, the statistical forecasts were manually adjusted – meaning that 45,000 forecasts were changed by hand! Perhaps the most interesting finding was that small adjustments had essentially no impact on forecast accuracy. The small adjustments were simply a waste of time.  However, big adjustments, particularly downward adjustments, tended to be beneficial.”
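The Fildes and Goodwin finding can be made concrete with a small sketch: bucket each manual adjustment by its size and direction, then compare the error of the statistical forecast against the error of the adjusted forecast within each bucket. This is an illustrative Python sketch, not the study's actual methodology; the 10 percent threshold and the sample numbers are assumptions.

```python
def ape(forecast, actual):
    """Absolute percentage error for a single observation."""
    return abs(forecast - actual) / abs(actual)

def classify_adjustment(stat, adjusted, threshold=0.10):
    """Label an adjustment as small, or as a large up/down change."""
    change = (adjusted - stat) / stat
    if abs(change) < threshold:
        return "small"
    return "large_down" if change < 0 else "large_up"

def adjustment_impact(records, threshold=0.10):
    """records: list of (statistical_forecast, adjusted_forecast, actual).
    Returns, per bucket, the average APE improvement from adjusting
    (positive means the manual adjustment helped)."""
    buckets = {}
    for stat, adjusted, actual in records:
        label = classify_adjustment(stat, adjusted, threshold)
        improvement = ape(stat, actual) - ape(adjusted, actual)
        buckets.setdefault(label, []).append(improvement)
    return {k: sum(v) / len(v) for k, v in buckets.items()}

# A large downward adjustment that helps, and a small one that does not:
report = adjustment_impact([
    (120, 90, 95),    # large_down: adjusted lands much closer to actual
    (100, 102, 100),  # small: tiny change, tiny (negative) effect
])
```

Run over a real adjustment log, a report like this makes visible whether the "small adjustments are a waste of time" pattern holds in your own data.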

Collaborative forecasting success tends to be strongly related to the percentage of business that the collaborating company represents to its partner. Partners that have less at stake tend not to put effort into providing timely and beneficial forecasts.

Dealing with Reality

This is a hard pill to swallow, but not all people in the organization who touch the forecast are trying to improve its accuracy, or have the domain expertise to improve it. In fact, it is quite surprising how many companies allow so much manual adjustment into the forecast without tracking the adjustments’ impact on forecast accuracy. (Although, to be fair, many applications do not make this a focal point; one of the few that does is Right90, which specializes in sales forecasting software.) The goal should not be simply to increase the number of participants, but to include only those members who add value to the forecasting process.

Consensus Forecasting and Judgement Forecasting Methods and Bias

For those of you that have read other posts on this blog, you will know I am a big fan of the book Demand-Driven Forecasting by Charles Chase. Charles has the following things to say about consensus forecasting vs. statistical forecast methods.

“Judgement methods are not as robust as quantitative methods when it comes to sensing and predicting the trend, seasonality and cyclical elements… Unfortunately judgement techniques are still the most widely used forecasting methods in business today. Such methods are applied by individuals or committees in a consensus process to gain agreement and make decisions… However, judgmental methods tend to be biased toward the individual or the committee developing the forecast. They are not consistently accurate over time due to their subjective nature… However, over my 20 years as a forecasting practitioner, I have found that quantitative methods have been proven to outperform judgement methods 90 percent of the time due to the structured unbiased approach.”

This statement is supported by research, which is rarely discussed in the industry.

Statistical Methods and Bias

Statistical methods have less bias.

In fact, many research studies going back decades demonstrate that when demand history exists, statistical methods are preferable to consensus forecasting or judgment forecasting methods. This information from the research does not seem to effectively reach industry as many companies are pinning their hopes on consensus forecasting, without having a plan for adjusting for the inherent bias in judgment methods.

The Issue with Political Management of Bias Control

Forecasting is an intensely political activity at most companies. Let me rephrase. Forecasting is an intensely political activity at all companies. However, some companies are able to keep political bias from entering the forecast, and most are not.

Whose Interests are Reflected in the Consensus Forecast?

The fact that different groups in a company have different incentives is well documented. These incentives can cause the forecast to be set higher or lower than it rationally would be. The issue is that while this is known, not enough companies do enough to reduce this known bias. Those who control the final forecast must understand that, as pointed out by Michael Gilliland, not every person who adjusts the forecast has forecast accuracy as an objective.

Some want a lot of stock available in the system.

The Focus of Forecasting in Books

If you read most forecasting books, they tend to focus very heavily on the mechanics of forecasting. They talk about forecasting methodologies (simple exponential smoothing, regression, etc.); however, not enough get into the business process or how forecasts are used in real life.

When I was younger and less experienced and read these types of books, I came away with the impression that forecasting was mostly about following a rational process and selecting an algorithm. That way of thinking misses an entire dimension of forecasting, which typically resides in the manual overrides to the system.

This emphasis is getting even more lopsided. This is because the current concept or trend in forecasting is that statistical methodologies are less important to the forecast than creating a consensus forecast that incorporates many inputs from within the company, and collaborative forecasting, which includes forecast input from external business partners.

Errors in Cognition

It is very common for the past to be misinterpreted and for a new “paradigm” to arise based upon an incomplete analysis of history. Errors in interpreting the history of forecasting include the following:

  1. Statistical Forecasting: There was never any evidence that sophisticated algorithms would significantly increase forecast accuracy. Vendors and consultants pushed this idea on industry.
  2. Consensus Forecasting: There has never been evidence that statistical methods of forecasting provide poor output compared to consensus forecasting and collaborative forecasting methods.

The Origins of this Myth on Consensus Forecasting

These two myths are at the heart of the current trend in forecasting, which is to try to get more inputs into the forecast. The first myth was based simply on the incentives of software companies and consultants to sell to the business. These groups were able to convince industry to invest in complex software with the promise of better forecasts, but without ever actually proving that the more complex solutions would improve forecasts.

The second myth is related to the first myth.

The argument goes that since the more advanced methods did not improve the forecast, those statistical methods are now less important than getting more groups to have input into the forecast.

Why Political Bias is Unmeasured

The problem with the current myth is that it does not account for political bias. Getting more inputs on the forecast does not address the incentives of these groups. It also tends to assume that all the groups forecast at a similar accuracy, which cannot be true. Even if different groups had equivalent forecasting capability, their incentive structures would still skew their inputs consistently above or below an unbiased forecast.

The current concept regarding collaborative and consensus forecasting is not wrong, but it is incomplete, which can lead to the same outcome as being wrong. The issue is one of measurement. That is, the accuracy of all the groups providing forecasts to the process must be measured, and over time the groups with more accurate forecasts should be weighted more heavily than those with a poorer record. This would seem to me the best way to manage the political dimension of the forecasting process.
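One simple way to implement this weighting, sketched below: make each group's weight proportional to the inverse of its historical MAPE, so a better track record earns more influence over the consensus number. The group names, MAPE values, and the inverse-error scheme itself are illustrative assumptions, not a prescribed method.

```python
def combine_by_accuracy(inputs, historical_mape):
    """inputs: {group: forecast}; historical_mape: {group: MAPE as a fraction}.
    Weights are proportional to 1 / MAPE, normalized to sum to 1."""
    raw = {g: 1.0 / historical_mape[g] for g in inputs}
    total = sum(raw.values())
    weights = {g: w / total for g, w in raw.items()}
    consensus = sum(weights[g] * inputs[g] for g in inputs)
    return consensus, weights

# Hypothetical inputs: sales and marketing forecast high, demand planning
# has the best historical accuracy and therefore dominates the consensus.
consensus, weights = combine_by_accuracy(
    inputs={"sales": 130, "marketing": 120, "demand_planning": 100},
    historical_mape={"sales": 0.40, "marketing": 0.30, "demand_planning": 0.15},
)
```

The point of the sketch is that the weighting is mechanical and auditable, which is exactly what makes it politically defensible.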

Of course, simply instituting a forecasting process where poorer-performing forecasting groups have their weighting decreased is itself a political activity.

The Popularity of Consensus Forecasting

When you look closely at something, it can be surprising to find that it is far less popular than you imagined. Working in companies as a supply chain consultant, I hear the term consensus forecasting used frequently.

In fact, I have written articles on how the pendulum has swung from overemphasizing statistical methods to overemphasizing consensus forecasting. So imagine my surprise when I performed a search in the Google AdWords Keyword Tool and found a volume level so low that it is not measured by Google. See the image below, which shows how CBF scored.


The next closest term, collaborative forecasting, is far more popular, and about equal in popularity to the term “sales and operations planning.” Of course, collaborative forecasting is not the same as CBF, but the concept is similar. There can be confusion among different supply chain terms, so it is possible that some people meant CBF when they typed collaborative forecasting into Google.


More Evidence on LinkedIn

Another example of the low profile that CBF has in the public consciousness is the LinkedIn groups related to the topic. On LinkedIn, there are two groups for people who want to keep up with collaborative forecasting.

These two groups have a combined membership of 127 members. The only group on LinkedIn for CBF is the group I started a few weeks ago, and it has only three members, including me.

Consensus Forecasting Books?

I was checking in on the consensus forecasting group that I created on LinkedIn and noticed that after roughly four months I did not have a single request to be added to the group. Other groups I have created do tend to get requests to be added. I next decided to check Amazon.com for books on consensus forecasting, as this tends to provide an insight into how mature a topic is and how much it is on people’s radar.

How Important is Consensus Forecasting Again?

I find it very strange that I work with different clients where many people say how important consensus forecasting is, and yet there are no books directly on the topic. In fact, there are even articles that declare that statistical forecasting is no longer where the focus should be, and that most of the real potential for improving forecast accuracy resides in consensus forecasting.

I don’t know if this is true and have never seen any research to substantiate the claim. In fact, the claim makes no sense because each forecasting category provides opportunities for improvement and I don’t know how one would begin to quantify where the majority of opportunity resides. However, I bring this up to demonstrate that consensus forecasting is certainly on people’s minds.

Below are the books related to consensus forecasting.

Interestingly, S&OP forecasting, a type of forecasting that is a subset of consensus forecasting, has quite a few titles on Amazon.com.

The mind map of forecasting and how the different categories are related is listed below.

There is no shortage of books on statistical forecasting. Here you can see the number of books on statistical methods; in fact, statistical forecasting methods dominate the forecasting category.

Does This Mean that Consensus Forecasting is Not Covered?

No, it does not, but it is covered lightly. Consensus forecasting is covered here and there, as a number of forecasting books have some coverage of it. In several that I checked, consensus forecasting was only briefly described in a half-page summary before moving on to another topic, such as collaborative forecasting.

Important information related to the organizational challenges and the controls related to forecasting quality is often left out. There is quite a lot more to successful consensus forecasting than simply obtaining forecast input from the right parties.

The Solution

The business community needs to gain an appreciation for how important specialized consensus forecasting software is to supporting their process before the vendors that provide the best solutions in this space can take off. So collectively the vendors and consulting companies need to begin to raise the profile of consensus forecasting software so that companies can learn about the great tools in this space.

After an objective has been set, the next question is always where to allocate time and resources. In my view, much of the emphasis in education should be on explaining why consensus-based forecasting success should not be expected from using standard demand planning software that was never designed for the task.

What is One of the Most Surprising Things About Collaborative Forecasting?

Many forecasts that are shared between companies are never used.

Ways of Performing Collaborative Forecasting

  • With documents (EDI, XML, Excel)
  • With collaborative applications (SAP SNC, E2Open, etc.)

The Depressing News About Consensus Forecasting

Consensus forecasting is a major focus area that many companies want to implement in the coming years. However, whenever a major initiative is undertaken, it’s important to analyze the history of the software type that is to be implemented. Relevant history appears in “The Fortune Sellers,” a book that takes a very realistic view of forecasting effectiveness in multiple areas, such as weather forecasting and financial forecasting.

In fact, the book does not focus on supply chain planning forecasting but still has many lessons that can be generalized to the supply chain space. The book has the following to say about research into the improvement that can be expected from using consensus forecasting methods.

“Consensus forecasts offer little improvement. Averaging faulty forecasts does not yield a highly accurate prediction. Consensus forecasts are theoretically slightly more accurate than the predictions of individual forecasters by only a few percentage points, due to the average effect that evens out the egregious errors that individual forecasters periodically make. But consensus forecasts are no more likely to predict key turning points in the economy than the individual forecasts on which they are based, and the few extra points of accuracy gained by averaging do not necessarily make them superior to the naive forecast.” – William A. Sherden

I will have to do more research myself as to why, which will go into a future book on the history of supply chain planning. However, this is consistent with other research into the Delphi method, which was designed by the RAND Corporation.

How Executives Increase Demand Forecasting Error

There are several ways in which executives reduce demand forecasting accuracy. They include the following:

  • Directly reducing accuracy through direct manual forecast adjustments.
  • Setting unrealistic forecast goals, which causes inaccuracies in forecast measurement.
  • Selecting inappropriate demand forecasting software, and marginalizing the users in the software selection process.
  • Not hiring the right people, and not spending the necessary money to bring expert forecasting knowledge into the company. This connects to poor software selection: without in-house expertise, the company cannot properly differentiate between vendor claims and therefore often purchases software primarily based on the brand.

Executive Forecast Adjustment

While the present concept in companies is that getting more people to touch the forecast is a good thing, the large body of research and practical experience on forecasting input shows that only individuals with the combination of domain expertise and limited bias should be allowed to adjust the demand forecast. Individuals need to earn the right to adjust the forecast by having their input sandboxed.

That is, the input is taken and analyzed over time to see what its effect on the forecast would have been. If the individual’s input would have improved the demand forecast, then after enough observational time, they can be allowed to adjust the forecast. Their input must continue to be monitored over time to ensure it stays at a high quality level.
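The sandbox can be sketched in a few lines: record each proposed adjustment alongside the statistical forecast and the eventual actual, without applying the adjustment, and grant live adjustment rights only once the shadow track record beats the statistical forecast often enough. The minimum observation count and hit-rate threshold below are illustrative assumptions, not established policy values.

```python
def would_have_improved(stat, proposed, actual):
    """True if the proposed adjustment was closer to the actual demand
    than the untouched statistical forecast."""
    return abs(proposed - actual) < abs(stat - actual)

def grant_adjustment_rights(shadow_log, min_observations=12, min_hit_rate=0.6):
    """shadow_log: list of (statistical_forecast, proposed_forecast, actual).
    Returns True once enough sandboxed inputs beat the statistical forecast."""
    if len(shadow_log) < min_observations:
        return False  # not enough evidence yet, keep the input sandboxed
    hits = sum(would_have_improved(s, p, a) for s, p, a in shadow_log)
    return hits / len(shadow_log) >= min_hit_rate
```

The same function can keep running after rights are granted, so a contributor whose accuracy degrades is returned to the sandbox.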

While they have the authority to change forecasts, executives are not experts in product demand, and they also tend to have a strong bias. This means that they are not good candidates to provide input to the forecast. Executives’ input has been tested over time and has not been found to improve the forecast.

This is described by Michael Gilliland in “Worst Practices in Forecasting.”

“This style of report should be easy to understand. We see that the overall process is adding value compared to the naïve model, because in the bottom row the approved forecast has a MAPE of 10 percentage points less than the MAPE of the naïve forecast. However, it also shows that we would have been better off eliminating the executive review step, because it actually made the MAPE five percentage points worse than the consensus forecast. It is quite typical to find that executive tampering with a forecast just makes it worse.”
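A report of the kind Gilliland describes can be sketched as a forecast value added (FVA) calculation: compute the MAPE of each stage of the process (naive, statistical, consensus, executive-approved) against the same actuals, and show each stage's change versus the previous stage. The stage names and all numbers below are hypothetical.

```python
def mape(forecasts, actuals):
    """Mean absolute percentage error, as a fraction."""
    errors = [abs(f - a) / abs(a) for f, a in zip(forecasts, actuals)]
    return sum(errors) / len(errors)

def fva_report(stages, actuals):
    """stages: ordered {stage_name: forecasts}. Returns each stage's MAPE
    and its change vs. the previous stage (negative = that step helped)."""
    report, prev = {}, None
    for name, forecasts in stages.items():
        m = mape(forecasts, actuals)
        report[name] = {"mape": m, "vs_previous": None if prev is None else m - prev}
        prev = m
    return report

# Hypothetical two-period example: the statistical step helps,
# the executive-approved step makes the MAPE worse again.
report = fva_report(
    {"naive": [120, 160], "statistical": [110, 190], "approved": [90, 230]},
    actuals=[100, 200],
)
```

Because every stage is scored against the same actuals, the report makes the cost of each handoff visible rather than hiding it inside one final accuracy number.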

Who can tell executives that their input does not improve demand forecasting? Their employees won’t tell them, and consultants won’t tell them, as they want to sell more services, and criticism is not the way to accomplish that goal. Criticizing people lower in the organization, however, is fine. In fact, poorly selected software that was never a good fit for the company is typically blamed on users being “resistant to change.” Therefore, executive input into the forecast continues.

Setting Unrealistic Goals

Because companies don’t baseline their forecasts against the naive forecast, they don’t know what a good forecast accuracy is for their products. Executives tend to be type A personalities who want improvement. What should the improvement goal be? Arbitrary goals don’t work very well, as described by Michael Gilliland of SAS in “Worst Practices in Forecasting”:

“Your goal is now set to 60 percent forecast accuracy or you will be fired. So what do you do next? Given the nature of the behavior you are asked to forecast – the tossing of a fair coin – your long-term forecast accuracy will be 50 percent, and it is impossible to consistently achieve 60 percent accuracy. Under these circumstances, your only choices are to resign, stay around and get fired, or figure out a way to cheat!”
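Gilliland's coin-toss point can be checked directly: however you forecast a fair coin, long-run accuracy converges to 50 percent, so a 60 percent target is structurally unreachable. A minimal simulation of a fixed "always heads" strategy, with an arbitrary seed:

```python
import random

def coin_forecast_accuracy(n_tosses, guess="heads", seed=0):
    """Fraction of fair-coin tosses matching a fixed guess."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_tosses)
               if ("heads" if rng.random() < 0.5 else "tails") == guess)
    return hits / n_tosses

# Over many tosses, accuracy settles near 0.5 regardless of the guess.
accuracy = coin_forecast_accuracy(100_000)
```

The practical lesson is the baseline: an accuracy target only means something relative to what the underlying demand process makes achievable.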

Selecting Inappropriate Demand Forecasting Software

Most companies have selected poorly performing and inappropriate forecasting software for their needs, often purchased due to brand association rather than the application’s ability to improve forecast accuracy. That is a bold statement, so what evidence do I have for it?

It’s simple really.

I repeatedly run into clients who can’t do forecasting activities that I can perform easily in some of the best-of-breed applications that I have access to (which are also much less expensive than the applications they own). When I demonstrate this software, the response I get is “well, our software can do that.”

I then ask how long they have been trying to get it to work, and they say…

“a year and a half.”

Rather than taking this as evidence that they have made a poor software selection, they will tell me

“this is a major vendor; we just can’t believe it can’t do (fill in the blank).”

The second piece of evidence is that it is common to see consensus forecasting attempted in a statistical forecasting application. This means that many executives are not familiar with the specialized software available for the distinct forecasting processes, a topic that I address in significant detail in my upcoming book Supply Chain Forecasting Software.

Who Gets to Influence Demand Forecasting Software Selection?

Forecasting software selections are often limited to the executives, the vendors, and a large consulting company (which is trying to get the company to use its services and is typically trained only in the large brands). Analyst firms like Gartner are not in the room but have influence, and Gartner tends to write from the strategic perspective, not directly at the application level. (They also derive more income from large vendor contributions.)

Their strategic perspective also tends to favor the biggest vendors. Every influencing party in the room is aligned with the major vendors. So executives often purchase a big brand, often with lagging functionality, which can be beneficial if the forecasting application fails to deliver much value (which is often the case with forecasting software from mega-vendors), as the executive can always say,

“We did our due diligence, we bought SAP, it’s a major brand, what else can one do?”

It turns out quite a bit more.

Perhaps I am naive, have a different perspective on risk, and don’t understand executive politics very well, but I would prefer a successful implementation with a best-of-breed vendor to a failed implementation that then needs to be defended by explaining that a major brand was selected. It’s a sad state of affairs when buying an uncompetitive application based upon brand is considered the lowest-risk option, while buying from a best-of-breed vendor requires “bravery” on the part of an executive.

When executives buy software in the standard way, planners get an application that is difficult to use and unable to help them get their job done. It is strange that executives should want to remove input from the very people they will hold accountable for using the system. Executives should see their role as putting the best tool in the hands of planners, one that allows them to improve forecast accuracy. Myself, I am unconcerned with the software that other people use, as long as they like it and can use it effectively to meet their objectives.

Not Hiring the Right People to Perform Demand Forecasting

People with deep forecasting expertise tend not to work permanently for companies, but rather for best-of-breed vendors or consulting companies. This is a matter of budget. A company needs several genuine forecasting experts of its own, precisely because the consultants and vendors it deals with are always selling something.

A company needs in-house experts who can be relied upon to provide feedback on ideas and concepts for forecast improvement. Large companies can easily afford to staff this type of person but tend not to. That is a mistake that is easily corrected.

A List of the Best Consensus Forecasting Vendors

I am occasionally asked whether I am aware of a list of the best consensus-based forecasting vendors. In this article I provide a list of three vendors: two that only do consensus forecasting, and one that can do both statistical and consensus-based forecasting (depending upon the requirement). However, in order to triangulate with other sources, it is desirable for companies to find other lists as well.

I have not found another such list that I can recommend.

One reason is that consensus forecasting is still not interpreted as requiring a specialized forecasting solution by either the large consulting companies or analysts like Gartner or Forrester. The problems large consulting firms have in selecting software are thoroughly explained here. Currently, the large consulting companies are giving out incorrect information both on what consensus forecasting is and on what solutions exist to meet the needs of a consensus forecasting process. Because the large consulting firms do not have relationships with these best-of-breed consensus forecasting vendors, and do not have trained staff for these applications, they simply won’t recommend them.

Financial Bias in Software Selection

Consulting firms usually don’t publish lists of the best software because they have no unbiased research capability. To them, the best software for their clients is software for which they have trained resources they can bill. The question is more pertinent for analysts. However, these analysts see consensus forecasting as simply one way of using any forecasting system. Unfortunately, this is one of the biggest mistakes that companies make when beginning a consensus forecasting project. One of the major reasons that so many consensus forecasting projects produce mediocre results is that an inappropriate solution is used. Using the wrong tool, companies have many challenges even getting input from the necessary departments, much less adjusting the results. If companies continue to do this, consensus forecasting will eventually decline as a trend, and the pendulum will swing back toward statistical forecasting.

I could easily see this happening, with all of the evidence that consensus forecasting is overrated being based upon companies that used statistical packages to perform consensus forecasting. That is literally how unscientific this process can be.

Where Major Vendors Stand with Consensus Forecasting

Consensus-forecasting-specific software is an area of innovation in the supply chain software market. Innovative areas of the market, such as consensus forecasting or inventory optimization and multi-echelon planning, are a concern to the major vendors, because their model is to put as little development effort as possible into their applications and to adjust the perception of their products through slick marketing. Any area that is new, they will attempt to co-opt, often successfully, with salesmanship, marketing, and leverage of their pre-existing client relationships. This is the luxury enjoyed by large vendors: they can wait for innovation to be generated by smaller vendors, and then co-opt it after it begins to become popular. They want to convince their customers that anything new and innovative is already in their suite, and that customers should not look elsewhere. If they can convince companies that a one-size-fits-all approach works in every domain of the supply chain, then they win.

Why The Analysts Don’t Differentiate Consensus Forecasting Solutions

I think there are a few reasons that these analyst firms take this approach. However, the single most important factor is most likely that they are paid by vendors. SAP alone pays them several million dollars per year, and generally, the larger the vendor, the more it can afford to pay. The analysts do not disclose or publish the fact that they are paid by vendors, and their business model is similar to that of the financial rating agencies. The difference is that, unlike rating agencies, which are paid exclusively by those who want their products rated, analysts are paid by both the producers and the consumers, so their income sources are more balanced.

However, while they present themselves as having one customer (those who buy their research), in fact they have two clients, the vendors being the second. Among these vendors, the biggest pay the most, so the research results are slanted in their direction. The companies that I think are the best in consensus forecasting are small and don’t pay Gartner to be listed. Secondly, if consensus forecasting were perceived and explained as a separate solution requiring separate software, the big vendors would lose, because their solutions are extremely weak in consensus forecasting, requiring the company to adjust around the application rather than the application being inherently usable for the consensus forecasting process.

Consensus-Based Forecasting and Statistical Forecasting in One Application

The statement regarding the specialization of consensus forecasting applications requires some further clarification. I am aware of two pure CBF applications that I would be comfortable implementing. There is one application that has its “heritage” in statistical forecasting but that I would be comfortable using as a consensus forecasting application.

However, this is the only dual (stat and consensus forecasting) application that I am comfortable saying this about. The other statistical applications I have worked with do not have any ability to add value to the consensus forecasting process.

This is not a criticism so much as a recognition that it is simply not their design. Therefore, the question of pure consensus forecasting versus combined stat-consensus forecasting is a bit less cut and dried than a rule that only consensus forecasting applications should be used for consensus forecasting, because there is at least one exception. However, the selection between these applications must be driven by the way the company intends to perform consensus forecasting. If consensus forecasting is closer to an S&OP process, where input is taken from other teams by conference calls and the consensus items are adjusted by one group or person, then the combined stat-consensus forecasting application would work fine. If instead the process requires direct system input, then a pure consensus forecasting application should be selected.

The List

High-Level Consensus

  • Inkling Markets
  • Consensus Point

Detailed Consensus

  • Demand Works Smoothie
  • Forecast Pro

Conclusion

On its face, it seems like a strange result, as domain expertise is often distributed across multiple individuals. Also, S&OP, which is a type of consensus forecasting, absolutely has to bring together individuals to make a shared forecast, because no group would allow just operations, or just finance, to create the overall forecast. One point of weakness of the research may also be the software that is used. Many companies attempt to perform consensus forecasting with statistical software, and few focus on reducing bias. Therefore, as implemented by companies that don’t really focus on a high-quality implementation, it is easy to see how consensus forecasting can show so little improvement.

The Necessity of Fact Checking

We ask a question that anyone working in enterprise software should ask.

Should decisions be made based on sales information from 100% financially biased parties (consulting firms, IT analysts, and vendors) by companies that do not specialize in fact checking?

If the answer is “No,” then perhaps there should be a change to the present approach to IT decision making.

In a market where inaccurate information is commonplace, our conclusion from our research is that software project problems and failures correlate with a lack of fact checking of the claims made by vendors and consulting firms. If you are worried that you don’t have the real story from your current sources, we offer the solution.

Search Our Other Statistical Forecasting Content

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products’ forecasts to improve? What if the forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach to forecast error management. The Brightwork Explorer does not calculate MAPE; instead, it calculates a monetized forecast error improvement from one forecast to another. We calculate that value for every product-location combination, and the two forecasts can be any that you feed the system:

  • The first forecast may be the constant or the naive forecast.
  • The first forecast can be the statistical forecast and the second the statistical + judgment forecast.

It’s up to you.
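The monetized comparison described above can be sketched roughly as follows. This is an illustrative Python sketch under assumptions, not the Brightwork Explorer’s actual calculation: the field names, the unit-cost weighting, and the use of absolute error are all assumptions made for the example.

```python
# Illustrative sketch of a monetized forecast error comparison
# (hypothetical logic; not the actual Brightwork Explorer calculation).
# For each product-location combination, the absolute error of two
# competing forecasts is weighted by unit cost, and the improvement of
# forecast B over forecast A is expressed in currency terms.

def monetized_improvement(rows):
    """rows: list of dicts with keys 'product_location', 'actual',
    'forecast_a', 'forecast_b', 'unit_cost'."""
    results = {}
    for r in rows:
        error_a = abs(r["actual"] - r["forecast_a"]) * r["unit_cost"]
        error_b = abs(r["actual"] - r["forecast_b"]) * r["unit_cost"]
        # Positive value: forecast B reduced the monetized error.
        results[r["product_location"]] = error_a - error_b
    return results

rows = [
    {"product_location": "P1-DC1", "actual": 100, "forecast_a": 120,
     "forecast_b": 105, "unit_cost": 2.0},
    {"product_location": "P2-DC1", "actual": 50, "forecast_a": 55,
     "forecast_b": 70, "unit_cost": 10.0},
]
print(monetized_improvement(rows))
# {'P1-DC1': 30.0, 'P2-DC1': -150.0}
```

Note how the inexpensive item question from above falls out of the unit-cost weighting: a large percentage error on a cheap product can matter less, in currency terms, than a small error on an expensive one.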

The Brightwork Forecast Explorer is free to use in the beginning.


Sales Forecasting Book


Sales and Statistical Forecasting Combined: Mixing Approaches for Improved Forecast Accuracy

The Problems with Combining Forecasts

In most companies, the statistical and sales forecasts are poorly integrated, and in fact, most companies do not know how to combine them. Strange questions are often asked, such as “does the final forecast match the sales forecast?”, without appropriate consideration of the accuracy of each input.

Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented – which comes down to the inputs that best improve forecast accuracy.
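One simple way to ground “right to be represented” in measured accuracy is to weight each input by its historical error. The sketch below is an illustrative Python example, not a method prescribed by the book; the inverse-MAPE weighting scheme and the sample data are assumptions for the sake of the example.

```python
# Illustrative sketch: weight forecast inputs by historical accuracy
# (hypothetical inverse-MAPE weighting; not a method from the book).

def mape(actuals, forecasts):
    """Mean absolute percentage error over a history of periods."""
    return sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

def accuracy_weights(actuals, inputs):
    """inputs: dict of input name -> list of historical forecasts.
    Returns weights proportional to 1 / MAPE, so that the more
    accurate inputs earn more 'right' to be represented."""
    inv = {name: 1.0 / mape(actuals, fcst) for name, fcst in inputs.items()}
    total = sum(inv.values())
    return {name: v / total for name, v in inv.items()}

actuals = [100, 110, 90, 105]
inputs = {
    "statistical": [98, 108, 95, 100],
    "sales": [120, 130, 80, 90],
}
weights = accuracy_weights(actuals, inputs)
print(weights)  # the more accurate statistical input earns the larger weight
```

The point of the sketch is only that the question “how much should each input count?” becomes answerable once each input’s accuracy is actually measured, rather than negotiated.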

Is Everyone Focused on Forecast Accuracy?

Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always at the top of the agenda for all parties involved in forecasting.

By reading this book you will:

  • See the common misunderstandings that undermine being able to combine these different forecast types.
  • Learn how to effectively measure the accuracy of the various inputs to the forecast.
  • Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
  • Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, the consensus forecast, and how to find the winning approach per forecasted item.
  • Learn how CRM supports (or does not support) the sales forecasting process.
  • Learn the importance of the quality of statistical forecast in improving the creation and use of the sales forecast.
  • Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.
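The competition between forecasting approaches mentioned in the bullets above can be illustrated with a short sketch. This is a hypothetical Python example: the method names, the sample data, and the choice of mean absolute error as the metric are assumptions, and the book’s actual procedure may differ.

```python
# Illustrative sketch of a per-item forecast competition
# (hypothetical data and metric; the book's actual procedure may differ).

def best_method_per_item(history):
    """history: item -> (actuals, {method_name: forecasts}).
    Picks, per item, the method with the lowest mean absolute error."""
    winners = {}
    for item, (actuals, methods) in history.items():
        def mae(fcst):
            return sum(abs(a - f) for a, f in zip(actuals, fcst)) / len(actuals)
        winners[item] = min(methods, key=lambda m: mae(methods[m]))
    return winners

history = {
    "item_a": ([100, 120], {"best_fit_stat": [105, 118],
                            "sales": [130, 140],
                            "consensus": [110, 125]}),
    "item_b": ([40, 50], {"best_fit_stat": [60, 70],
                          "sales": [42, 49],
                          "consensus": [45, 55]}),
}
print(best_method_per_item(history))
# {'item_a': 'best_fit_stat', 'item_b': 'sales'}
```

The design point is that the winner is chosen per forecasted item, not globally: in the toy data above, the statistical forecast wins for one item while the sales forecast wins for the other.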

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
  • Chapter 3: The Common Problems with Statistical Forecasting
  • Chapter 4: Introduction to Best Fit Forecasting
  • Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
  • Chapter 6: Sales Forecasting
  • Chapter 7: Sales Forecasting and CRM
  • Chapter 8: Conclusion