How to Understand Best Fit Forecast Model Selection

Click Link to Jump to Section

Executive Summary

  • How best-fit functionality works and the steps involved in implementing it.
  • Why the best-fit historical accuracy is higher than the future forecast accuracy will be.
  • Important things to know about best-fit forecasting and model selection.


Best-fit forecasting is a procedure found in most supply chain forecasting applications. It is a procedure that:

  • Compares all of the forecasting models within the application for each item being forecast.
  • Automatically calculates the error for each model.
  • Assigns a forecast model to the forecasted item (the product-location combination, the product, the product group: whatever is being forecasted).

The Definition of Best Fit

Best fit is a software procedure that is available in most supply chain forecasting applications. It generally works in the following way, although the particulars differ per application.

  1. The procedure fits the history using different forecasting models.
  2. It then ranks the various models based on their distance from the actuals, or their forecast error.

The forecast model with the lowest overall error (a combination of all the individual errors for every period) is then selected, and that is the “best fit.”
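The steps above can be sketched in a few lines of Python. The candidate models, the error measure, and the demand history here are hypothetical stand-ins; a real application would compare a much larger library of models.

```python
# Sketch of a best-fit procedure: fit several candidate models to the
# demand history, score each by mean absolute deviation (MAD), and pick
# the model with the lowest overall error.

def naive(history):
    # Next-period forecast = last observed value.
    return [history[i - 1] for i in range(1, len(history))]

def moving_average(history, n=3):
    # Forecast = average of the previous n periods.
    return [sum(history[i - n:i]) / n for i in range(n, len(history))]

def mad(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(forecasts)

def best_fit(history):
    candidates = {
        "naive": (naive(history), history[1:]),
        "moving_average_3": (moving_average(history), history[3:]),
    }
    errors = {name: mad(act, fc) for name, (fc, act) in candidates.items()}
    return min(errors, key=errors.get), errors

history = [100, 104, 99, 103, 110, 108, 112, 115]
winner, errors = best_fit(history)
```

Real best-fit implementations differ mainly in which models they try and which error measure they rank by.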

How is it Run?

Best-fit functionality can either be automated within the application (that is, run as soon as the application is loaded with data) or require the best fit to be run interactively or as part of a batch job. SAP DP, for example, requires the best fit to be initiated in one of these two ways.

How Common is Best Fit?

Almost all statistical forecasting applications have the best fit procedure. However, how much it is deployed is an entirely different question.

Things to Know about Best Fit Forecast Model Selection

Issue #1: Overestimation of Best Fit

Best fit, while it sounds nice, is only the best fit among the models that are contained within the application. There are many cases where the best model to use is not within the application. I have also found cases where the system does not seem to select the best model even though that model is in its database.

Issue #2: Will Best Fit Find the Best Forecast Model?

Best fit will not find the best forecast model in many circumstances.

  • If the product’s demand characteristics have changed very significantly, the model determined by best fit is not helpful. For instance, there are cases where the unit of measure for a product changes over time. If a product that was sold in one unit of measure switches to another, and twice as many units are then recorded as sold, the model selected by best fit will not catch this: it cannot see the difference between a temporary and a permanent change.
  • The forecast model selected by best fit will often underestimate new products. This is because demand for new products tends to build very rapidly.

Issue #3: Applications That Make Best Fit Difficult to Use

Some applications, like SAP DP, make best fit tough to use, and therefore most of the companies that have SAP DP do not take advantage of its best-fit functionality. I have developed an approach of running best fit in other forecasting applications, taking the model selections those applications produce, and building the corresponding custom models within SAP DP.

This requires patience in the phase of the project where tuning is performed.

An Important Rule Around Best Fit

Best-fit procedures can only say what the best forecast model would have been in the past; the model they select cannot be said definitively to be the best model to use in the future.

Common Problems with Best Fit Forecasting Functionality

The common difficulties that companies have with best-fit forecasting functionality are mostly related to poorly designed software, or at least software that is poorly designed from a usability perspective. Best-fit forecasting functionality must also be controlled to prevent what is known as over-fitting.

The best fit procedure that a forecasting application uses (and different applications have different mathematics that controls the procedure’s outcome) can only tell the user what forecasting models would have worked in the past. This is sometimes the forecasting model that should be used in the future, but in many cases, it should not be used. Fitting history is easy; the tricky part is forecasting accurately.
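A quick way to see this effect is to tune a model on the history and then score it on a holdout period it never saw. The numbers below are made up, and simple exponential smoothing stands in for whatever model library an application uses.

```python
# Illustrative sketch: tune simple exponential smoothing (SES) on the
# history, then score it on a holdout "future" period. The in-sample
# (fit) error is typically much lower than the holdout error.

def ses(history, alpha):
    # One-step-ahead forecasts plus the final smoothed level.
    level = history[0]
    forecasts = []
    for actual in history[1:]:
        forecasts.append(level)
        level = alpha * actual + (1 - alpha) * level
    return forecasts, level

def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

history = [100, 102, 98, 101, 99, 103, 97, 100]  # fitted period
holdout = [150, 155, 148]                        # demand then shifts

# Pick the alpha with the best historical fit.
fit_mape, alpha = min((mape(history[1:], ses(history, a / 10)[0]), a / 10)
                      for a in range(1, 10))
level = ses(history, alpha)[1]
holdout_mape = mape(holdout, [level] * len(holdout))
# fit_mape is a few percent; holdout_mape is roughly 30 percent.
```

The gap here is driven by a demand shift the model could not have seen, which is exactly the situation Gilliland's quotation below describes.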

This is emphasized by the well-regarded author Michael Gilliland of SAS.

Michael Gilliland is one of the best sources of practically oriented information on forecasting. He has outlined this practical approach, which Brightwork Research & Analysis supports, in a series of books and articles. In this quotation, he brings up the topic of the historical fit versus the future forecast.

“Historical fit is virtually always better than the accuracy of the forecasts generated. In many situations the historical fit is better than the accuracy of the forecasts. Any one of you who has done statistical forecasting knows this. You might have a MAPE of 5 percent in your historical fit, but a MAPE of 50 percent in your forecasts – that would not be at all unusual. As a practical and career-extending suggestion in communicating with your management, don’t tell them the MAPE of your historical fit – they don’t need to know it! Knowing the MAPE of your historical fit will only lead to unrealistic expectations about the accuracy of your future forecasts.”

Managing Best Fit Comparisons

A best-fit forecast model should first be compared to the naive forecast, and secondly against the current forecasting models to see which accuracy is higher. When a new best-fit forecast is created for a product database, there must be a way of holding it separate from the other forecasts, of comparing each of these predictions, and of performing analytics on them to understand the differences in forecast accuracy.

In my experience, this often means exporting the different forecasts to Excel (which can now hold millions of records) or a database to perform the comparisons.
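Once the forecasts are side by side, the comparison itself is simple. The figures below are hypothetical; in practice, the columns would come from the export described above.

```python
# Hold the naive, current, and best-fit forecasts side by side (as one
# would after exporting them to Excel or a database) and rank by MAPE.

def mape(actuals, forecasts):
    return 100 * sum(abs(a - f) / a for a, f in zip(actuals, forecasts)) / len(actuals)

actuals = [120, 115, 130, 125]
forecasts = {
    "naive":    [118, 120, 115, 130],  # prior period's actual
    "current":  [110, 112, 118, 121],  # model currently in production
    "best_fit": [119, 116, 128, 126],  # model chosen by best fit
}

# Sort the forecast sets from most to least accurate.
ranking = sorted(forecasts, key=lambda name: mape(actuals, forecasts[name]))
```

A best-fit selection that cannot beat the naive forecast on this kind of comparison should not be promoted into production.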

Why Can More Complex Methods Be Made to Better Fit the Forecast History?

More sophisticated methods can often be made to fit the history better than simpler methods. This is because they can be tuned to match history more closely than simple forecast methods can. But this does not mean that they necessarily produce a better forecast.

Applying Best Fit Forecast Model Selection

There is an important distinction between running the best-fit procedure as a test and implementing best fit as a standard part of the production system. One of the best systems I have come across for the implementation of best-fit forecasting is Demand Works Smoothie.

Smoothie Virtual Hierarchy Example 2

Smoothie does not require any particular activity to be performed to run a best fit procedure. As soon as the data is loaded into the application, the best fit procedure is performed. One has to override the best fit selection to choose something other than the best fit. Furthermore, the best fit always runs every time the forecast is run.

Smoothie is so good at running the best fit forecast that I often use it to troubleshoot SAP DP, and could use it to troubleshoot other systems. How SAP DP performs best fit is an interesting contrast to how best fit is executed in Smoothie — and it shows the wide variety of options that are available within forecasting applications.

Best Fit in SAP DP

Unlike Smoothie, SAP DP does not declare a general univariate forecast model to be selected as an output of the best fit process.

Auto Model Selection

SAP DP instead only assigns the Auto Model Selection 2 model with a set of associated parameters (as can be seen in the screenshot below).

Auto Model Selection Parameters

These parameters are only useful if you can run Auto Model Selection in a background job.

The problem with this is that Auto Model Selection 2 is not recommended to be run in production because it will often give strange results. It will also give different results depending upon whether Auto Model Selection 2 is run interactively (that is, within the Planning Book) or scheduled as part of a batch job or Process Chain. There are some rumours that this has been improved in the latest version of SAP APO; however, I have not yet come across a client with the most recent version of SAP APO, so I have not had an opportunity to test this. The current issues with SAP DP apply to most companies, as the most recent version of APO has not yet been widely adopted.

How The Best Applications Provide Low Maintenance Best Fit

For instance, Smoothie selects the actual model to be used — for example, Level, Level Seasonal, etc. Smoothie optimizes the parameters internally and does not show the parameter values or the historical usage (how many months back) on any screen. This works very well for Smoothie, but less well when one wants to take the best-fit selected model and incorporate it into another system. This is because these values need to be configured into the other system to use the right “flavor” of the model. However, it takes a lengthy tuning process to configure the forecast models in SAP DP. This is a point of frustration for many companies, as they cannot understand why it takes so long to perform this activity.


Choosing the right model for SAP DP requires patience and multiple runs of the forecasting system, going back and tuning the system and then rechecking the results. The best way to do this is to use the Univariate view to choose a product or a group of products and make the adjustments.

Auto Model Selection

Groups of products can be selected so that a forecast model and the adjustments within the Forecast Profile can be checked for all products in the product group. While grouping products can be helpful for diagnostic work, without best-fit functionality each forecasted product must be checked individually for actual assignment. This is a tedious but necessary process for most products when using SAP DP.

Best fit can be run from this Univariate view (this is called running it interactively), but as I have already pointed out, best fit in DP does not choose a particular model that can be efficiently run as part of a batch job.

Assignment of Forecast Profiles can be performed in the Univariate view, and these will produce a forecast. However, it often is not feasible to keep Auto Model Selection 2 assigned, as it will usually produce erroneous results; that is, the same pattern will not be repeatedly created.

In most cases, the forecast error can be checked, and this allows one to select the better forecasting model type, or Forecast Profile. It is a time-consuming process: it takes too long and is not particularly precise, yet this is a limitation of the application that those who buy SAP DP must learn to live with.

Manual vs Automated Best Fit Forecast Model Selection

With DP, there was never much choice. You can either use Auto Model Select 1 (which I don’t think ever worked) or Auto Model Select 2 (which does work, but often gives different results depending upon whether the routine is run in batch or interactively, i.e., in the Planning Book).

Sometimes the Best Fit functionality in SAP DP can appear broken; however, if it is run in a certain way, it can be made to work. The open question is whether companies can be satisfied running it in just this limited way.


Many SAP DP customers have not been able to get SAP DP best fit to work correctly and, as a consequence, are eager to get the functionality working. However, there are some good reasons why we have yet to gain exposure to a client that has implemented SAP best fit in a production environment.

Facts About SAP “Best Fit”

SAP has two different best-fit methods, which are named Auto Model Selection 1 and Auto Model Selection 2. What each does is rather involved and takes about 2.5 pages of text and formulas to explain. However, the synopsis is that it is running a series of checks to select the best possible future forecast model given the demand history.

How SAP Best Fit Forecasting is Configured

This essentially sets up different supply chain forecasting methods in competition with one another per demand history trend line. The software goes back in history and uses earlier periods to forecast more recent periods for which the actual demand is known. By running multiple forecast methods and comparing the error between the forecast and the actual, the software picks the forecast methodology that “best fits” the historical trend.

In SAP APO DP, this can be done by selecting Auto Model Sel. 1 or 2 on the Model tab of the planning book.

The best fit is selected either with Auto. Model Sel. 1 or Auto. Model Sel. 2. 

This can be found by going to the options button in the planning book. 

Univariate Forecast Profile

The other way to set this is in the Univariate Forecasting Profile. This can be found off the SAP Easy Access Menu:

This takes us right into the Univariate Forecast Profile

Next, we want to select the “Forecast Strategy” drop-down, which will show the following options. Two of the options are the best-fit options, although it is not obvious which of them are the best-fit procedures, which is why I have highlighted them below: 

It can also be selected from within profile maintenance. 

The only decision the planner has to make is the time horizon over which the best-fit calculation will be made. This is controlled under the Horizons tab in the planning book. Obviously, different time selections can yield different results. However, at account after account, I am finding that this functionality does not work: it returns the constant model, and so clients are not able to use it. Here are two samples.

Best Fit Forecasting Samples

As you can see, the blue line (the forecast) does not match the yellow line (the historical pattern). My client noted:

“Auto Selection Model seems to always choose a constant model for the statistical forecast, even when there is a clear seasonal and trend pattern. This does not meet the business’s need to account for seasonality and trend.”

I have told another forecasting vendor about the problems with best-fit supply chain forecasting in SAP DP, and they find it almost impossible to believe that a commercial forecasting system would have such a problem performing a best fit analysis. I get that response a lot with SAP DP.

There are two automodel, or best fit, procedures in SAP DP: Automodel 1 and Automodel 2. A best-fit procedure is a logical test that is used to determine which statistical model to apply to a time series. Both procedures are covered below.

Understanding Automatic Model Selection Procedure 1

Automatic model selection procedure 1 is used in forecast strategies 50, 51, 52, 53, 54, and 55.


  • The system checks whether the historical data shows seasonal effects by determining the autocorrelation function (see below) and comparing it with a value Q, which is 0.3 in the standard system.
  • Similarly, the system checks for trend effects by carrying out the trend significance test (formula below).

The formula for the Autocorrelation Coefficient

The formula for the Trend Significance Test


Step One

The system first tests for intermittent historical data by determining the number of periods that do not contain any data in the historical key figure. If this is larger than 66% of the total number of periods, the system automatically stops model selection and uses the Croston method.
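This intermittency gate is easy to express directly. The 66% threshold comes from the description above, while the function name is my own.

```python
# If more than 66% of the historical periods contain no demand, stop
# model selection and fall back to the Croston method.

def is_intermittent(history, threshold=0.66):
    empty = sum(1 for demand in history if demand == 0)
    return empty / len(history) > threshold

history = [0, 0, 5, 0, 0, 0, 3, 0, 0, 0, 0, 2]  # 9 of 12 periods empty
use_croston = is_intermittent(history)          # 75% > 66%, so Croston
```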

Step Two

The system conducts initialization for the specified model and the test model (for example, in forecast strategy 54, the specified model is seasonal, and the test model is a trend model).

For initialization to take place, a sufficient number of historical values needs to be present in the system: at least two seasons for the seasonal test and three periods for the trend test.

If not enough historical values are present in the system for initialization to take place, the model selection procedure is cancelled, and a forecast is carried out on the basis of the specified model (in strategy 54, this would be a seasonal model). If the forecasting strategy is one in which no model is specified (for example, strategy 51), a forecast is created using a constant model.

The exception to this rule is strategy 53, which tests for both trend and seasonal models; if sufficient historical values exist to initialize a trend test but not a seasonal test, only a trend test is carried out.

Step Three

In forecast strategies 50, 51, 53 and 54, a seasonal test is carried out:

  • Any trend influences on the historical time series are removed.
  • An autocorrelation coefficient is calculated.
  • The coefficient is tested for significance.
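Since the autocorrelation and significance formulas themselves are not reproduced above, the sketch below is an illustrative stand-in: it detrends with a first difference, computes the autocorrelation coefficient at the seasonal lag, and compares it with Q = 0.3.

```python
# Seasonal test sketch: detrend, autocorrelate at the seasonal lag,
# and compare against the threshold Q (0.3 in the standard system).

def autocorrelation(series, lag):
    n = len(series)
    mean = sum(series) / n
    num = sum((series[i] - mean) * (series[i - lag] - mean) for i in range(lag, n))
    den = sum((x - mean) ** 2 for x in series)
    return num / den

def seasonal_test(history, season_length, q=0.3):
    # First-difference detrending; SAP's exact detrending is not shown above.
    detrended = [history[i] - history[i - 1] for i in range(1, len(history))]
    return autocorrelation(detrended, season_length) > q

seasonal_history = [10, 20, 30, 20] * 4  # clear season of length 4
noisy_history = [100, 98, 103, 99, 101, 100, 97, 102, 100, 99, 103, 98]
```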

Step Four

In forecast strategies 50, 52, 53 and 55, a trend test is carried out.

  • Any seasonal influences on the historical time series are removed.
  • A check parameter is calculated as in the formula above.
  • The system determines whether the historical data reveals a significant trend pattern by checking against a value that depends on the number of periods.
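The trend significance formula is likewise not reproduced above, so the following uses a common stand-in: a t-statistic on the slope of a least-squares line, compared against a critical value. The 2.0 used here is an assumption, not SAP's period-dependent value.

```python
# Trend test sketch: a significant least-squares slope means a trend.

def trend_test(history, critical=2.0):
    n = len(history)
    xm = (n - 1) / 2
    ym = sum(history) / n
    sxx = sum((x - xm) ** 2 for x in range(n))
    sxy = sum((x - xm) * (y - ym) for x, y in enumerate(history))
    slope = sxy / sxx
    residuals = [y - (ym + slope * (x - xm)) for x, y in enumerate(history)]
    sse = sum(r * r for r in residuals)
    if sse == 0:
        return slope != 0  # a perfect line is trivially a trend
    se = (sse / (n - 2) / sxx) ** 0.5  # standard error of the slope
    return abs(slope / se) > critical
```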

Step Five

  • If neither the seasonal test nor the trend test is positive, the system uses the constant model.
  • If the seasonal test is positive, the seasonal model is used with the specified parameters.
  • If the trend test is positive, the trend model is used with 1st order exponential smoothing and the specified parameters.
  • If both tests are positive, the seasonal trend model is used with the specified parameters.
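The four outcomes in Step Five amount to a small decision table:

```python
# Step Five as a decision table: the two test results determine the model.

def select_model(seasonal_positive, trend_positive):
    if seasonal_positive and trend_positive:
        return "seasonal trend"
    if seasonal_positive:
        return "seasonal"
    if trend_positive:
        return "trend (1st-order exponential smoothing)"
    return "constant"
```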

Important Note!

The Periods per Season value in the forecast profile is very important. For instance, if your historical data has a season of 7 periods, and you enter a Periods per Season value of 3, the seasonal test will probably be negative. No seasonal models are then tried; only trend and constant models.

Overview of Strategies that Use Automatic Model Selection 1

Automodel Logic 1

O – default method

+ – a method that is used in the test is positive


You need at least two seasonal cycles and three periods as historical values to initialize the model. However, if fewer are available, the procedure will still run, but models that require more initialization periods, such as seasonal trend, are not used.


The procedure conducts a series of tests used to determine which type of forecast model (constant, trend, seasonal, and so on) to use. The system then varies the relevant forecast parameters (alpha, beta, and gamma) in the intervals and with the increments you specified in the forecast profile.

If you do not make any entries, the system uses default values (0.1 in all cases). It uses these parameters to execute a forecast. It then chooses the parameters that lead to the lowest value of the error measure defined in the forecast profile; the default is MAD.
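The parameter variation described above is a grid search. The sketch below varies only alpha for simple exponential smoothing and ranks by MAD; beta and gamma would be varied the same way, and the interval and increment values are stand-ins for what the forecast profile would supply.

```python
# Grid search over alpha for simple exponential smoothing, ranked by MAD.

def ses_forecasts(history, alpha):
    level = history[0]
    out = []
    for actual in history[1:]:
        out.append(level)
        level = alpha * actual + (1 - alpha) * level
    return out

def mad(actuals, forecasts):
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(forecasts)

def tune_alpha(history, start=0.1, stop=0.9, step=0.1):
    best_alpha, best_err = None, float("inf")
    alpha = start
    while alpha <= stop + 1e-9:
        err = mad(history[1:], ses_forecasts(history, alpha))
        if err < best_err:
            best_alpha, best_err = alpha, err
        alpha += step
    return round(best_alpha, 1), best_err

best_alpha, best_err = tune_alpha([100, 110, 120, 130, 140, 150])
# a strongly trending history rewards the highest alpha in the grid
```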

Important Note!

For procedure 2, bear in mind that when you use outlier correction, the results are not comparable with the results of the individual procedures, since a different procedure can be selected for the outlier correction than for the final forecast.


The system first tests for intermittent historical data by determining the number of periods that do not contain any data in the historical key figure. If this is larger than 66% of the total number of periods, the system automatically stops model selection and uses the Croston method.

  1. The system then checks for white noise. White noise means that the system cannot find a model that fits the historical data because there is too much scatter. If it finds white noise, it automatically uses the constant method.
  2. If both tests are negative, the system proceeds to test for seasonal and trend effects.
  3. The system first eliminates any trend that it finds. To test for seasonal effects, the system determines the autocorrelation coefficient for all possible numbers of periods (from Number of Periods – Length Variation to Number of Periods + Length Variation). If the largest value is larger than 0.3, the seasonal test is positive.
  4. The system then tests for trend effects. If no seasonal effects have been found, it executes this test for the number of historical periods (as determined in the forecast profile) minus 2. If seasonal effects have been found, the system executes the test for the number of periods in a season plus 1.

Important Note!

Since the results of these two tests determine which models the system checks in the next stage, the Periods per Season value in the forecast profile is very important. For instance, if your historical data has a season of seven periods, and you enter a Periods per Season value of 3, the seasonal test will probably be negative. No seasonal models are then tried; only trend and constant models.

  5. The system then runs forecasts with the models selected (see table below), calculating all the measures of error. For models that use forecast parameters (alpha, beta, gamma), these parameters are varied in the ranges and with the step size specified in the forecast profile.

Automodel Logic 2

X – The model is used if the test is positive

A – The model is used if all tests are positive

o – The model is used if this test is negative

The constant model always runs; the one exception to this is when the sporadic data test is positive. In this case, only the Croston model is used (which is a special type of constant model).
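Croston's method itself can be sketched briefly: it smooths the nonzero demand sizes and the intervals between them separately, and the per-period forecast is size divided by interval. The alpha value here is an arbitrary illustration.

```python
# Croston sketch: smooth nonzero demand sizes and the intervals between
# demands separately; the per-period forecast is size / interval.

def croston(history, alpha=0.1):
    size = None       # smoothed demand size
    interval = None   # smoothed inter-demand interval
    periods_since = 0
    for demand in history:
        periods_since += 1
        if demand > 0:
            if size is None:
                size, interval = demand, periods_since
            else:
                size = alpha * demand + (1 - alpha) * size
                interval = alpha * periods_since + (1 - alpha) * interval
            periods_since = 0
    return size / interval if size is not None else 0.0

rate = croston([0, 0, 6, 0, 0, 6, 0, 0, 6])  # demand of 6 every 3 periods
```

For a perfectly regular pattern like the one above, the forecast settles at the average demand per period, which is why Croston can be thought of as a special type of constant model.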

The system then chooses the model with the parameters that result in the lowest measure of error as chosen in the Error measure field of the forecast profile.

Recommendation on SAPExpert

On SAPExpert, an article recommends using a macro to perform the best-fit functionality. However, this is completely unacceptable. Best-fit functionality is “core functionality” for any enterprise forecasting software, and the fact is that it should simply work.

However, I have found that best fit can be made to work in SAP DP, but it must be run in batch mode. When it can be made to work, SAP DP has good mathematics, but problems ensue when the procedure is run interactively. And there is much more to an effective best-fit capability than mathematics. Overall, with the availability of best fit in other forecasting applications, it is hard to justify using best fit in DP. When I work on DP recovery projects (and most DP implementations typically need a full recovery, or at least a final tune-up post-implementation), I use a separate prototype environment and perform best-fit forecasting there. The next step is mapping the right forecasting model to the right product and location combination. However, best fit in DP could never be run as part of the normal forecast planning run anyway, so this is not much of a change from how one would run best fit in DP.

Quote From SAPExpert

We found this quote of interest; it is one of the few quotations, aside from ours, that describes the issues with SAP DP best fit or auto model selection.

“Many people have asked me, “Why don’t you use the SAP Advanced Planning & Optimization (SAP APO) functionality for automatic model selection?” This question is easy to answer. SAP APO offers the forecast strategy 50 (automated model selection), but a planner cannot influence the system by choosing a certain strategy since the system decides by itself which strategy to use. In my experience this strategy doesn’t find the best fitting model as it checks only a small set of possibilities and parameters. Wouldn’t it be better if you could select a set of best fitting models in an early project phase and have the system check which model fits the best? SAP APO macros, forecast errors, and process chains can help you solve this problem. First, predefined SAP APO forecast models help you calculate statistical forecasts for all relevant planning objects. Then, the macros allow you to calculate the mean absolute percentage error (MAPE) for all these possible forecasts. Later you compare the different forecasts by checking the MAPE for each model. Finally, you can release the whole calculation as a background calculation by using SAP APO process chains. These process chains can then calculate the forecast models, compare the MAPEs, and find the best fitting model. While there are several ways to approach this situation, this article describes a solution I prefer. The solution I describe here offers several advantages. First, for the user, the whole calculation and the model comparison are done in the background, so you don’t have to worry about finding the best forecast model. Next are the technical benefits. In SAP APO each key figure is stored in SAP liveCache. For every third planning object, key figure, and time bucket, one object in the liveCache must be reserved. These objects are known as Time Series. My method helps to reduce the number of Time Series in the liveCache. Here’s how. Demand planning (DP) often takes place on different aggregation levels. 
These could be, for example, a material group (bundle of materials with similar characteristics, such as products that are produced on the same capacity line) and the planning material itself (e.g., the different products of a company). One main technical recommendation for SAP APO systems is to keep the planning objects and therefore the amount of Times Series as low as possible. Thus, my recommendation is to do the planning on an aggregated level wherever possible. The process chain functionality I describe here requires at least SAP SCM 4.0″

Our Analysis

While we applaud SCMExpert for bringing up the issue, in our testing using the AutoModel 1 selection (one of the best-fit models) run in batch mode, we found SAP DP to provide a superior result to Demand Works Smoothie.

However, on the other hand, it is much more difficult to run SAP best fit, and as a consequence, most companies do not run either AutoModel 1 or AutoModel 2, except during initial testing, and sometimes to choose parameters for things like seasonality models. However, AutoModel 2 in SAP is completely unusable and has been since we began working in DP years ago. Secondly, AutoModel 1 does not give the same results when run interactively as it does in batch, and this is a known problem.

Macros for Best Fit as a Recommendation?

On the topic of SCMExpert’s recommendation to use macros to replace the best-fit functionality, we have a serious philosophical problem with this recommendation. Macros should not be used for what is really core functionality in an application. They are designed to provide calculated values in the Planning Book and to extend the basic functionality in DP, not replace it. If companies dislike DP’s best-fit functionality and want something easier to use, we would instead recommend using an inexpensive prototype environment to perform this function.

The Problem(s)

The problem with DP’s two best-fit models is that they both have flaws that can make them appear inoperable. In fact, one of them is inoperable, but the other works if run in just one way.

  1. Auto Model Selection 1 always produces a very low forecast regardless of the demand history.
  2. Auto Model Selection 2 has consistency issues. It provides different results depending upon whether the procedure is run interactively or in batch. The solution is very simple: never allow planners to run Auto Model 2 interactively, because if they are allowed to do so, the system’s credibility will be undermined.
  3. When run in batch, Auto Model Selection 2 often provides what appears to be a copy of the demand history with a slight increase or decrease based on the trend.

The Curious Case of Different Results for Interactive Versus Batch

The fact that SAP DP does this is obviously a concern. If a planner manually intervenes in the processing, it is perfectly fine for the result of the interactive best fit to be different; however, without intervention, the batch and interactively run procedures should not return different results. We have never been able to get to the bottom of why DP has this significant flaw. Demand planning is very different from, say, the more sophisticated methods used in supply planning, where, if optimization is run, there are location interdependencies.

Therefore, running cost optimization for one product location combination would result in a different output than if all the product locations in the network were run at once. In fact, running an optimization for one product location would not even make sense. On the other hand, no such complexities affect demand planning. Demand planning has no location interdependencies and should provide the same results if run for one item or the entire product database.

Issues with Best Fit and Parameter Optimization

A common issue on different DP projects concerns the forecast parameters (for instance, alpha, beta, and gamma; see this article for a primer on alpha, beta, and gamma). Best-fit functionality in DP will choose from among different models, which have different parameters. However, these models must then be selected and assigned to product-location combinations manually. Very few companies use Auto Model 2 constantly; they use it primarily for initial forecast prototyping. In this way, DP can accurately be described as not very adaptive, because the most adaptive functionality in the application is not usable enough to run as part of the normal forecasting run.


Issues For Which DP Best Fit is Blamed but Which are Not Its Fault

In addition to the real issues described above, DP best fit is often blamed for things that are not its fault. The most common is that Auto Model 2, when run in batch, will return a constant model for items that have an erratic demand history. This result is typical of all forecasting systems with best-fit functionality and, in fact, is the logical thing for any system to do in this circumstance. The result offers a clue to companies regarding what to do with these products, a cue that most individuals and businesses have not picked up on.


What to do about SAP DP Best Fit can depend on the client. However, there are also some standard things that can be done to improve it. Best fit is almost always a problem in companies that use DP, and it has been since DP was first introduced.

(As a note, this article was updated in 2018, and nothing has changed since this statement was first made. DP is currently at the end of life with new development going into IBP, so it will likely not change in the future.)

It is important that a company triangulate the output of Best Fit with the results of a prototype environment to know the results are correct. We also work with planners to provide them with more transparency into the best fit results, to allow them to pick the SKU-Locations that should use the best fit selected forecasting method and those that should not.

Our Recommendation

This and several other experiences have led me to prefer to run best fit outside of SAP, in software that is much better at it and that gives my projects more visibility and a better teaching tool. Many companies run themselves ragged attempting to run best fit in SAP DP when inexpensive applications can do a better job with much less effort. Companies are often disappointed to hear this, but SAP APO often needs help from other applications and often cannot do all the heavy lifting itself. Clients often want everything done in SAP, but in fact all APO projects involve some enhancement. Whether you call it an enhancement and code it yourself, or use a third-party application, the fact is that SAP is not doing all the work.

If the client must use DP, we can code the selected forecasting method into SAP DP after the fact. This allows best fit to be run much more often and much more efficiently: it does not have to run for every forecast and can instead run periodically.
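The idea of hardcoding the selected method and refreshing it only periodically can be sketched as follows. The assignment store, the quarterly refresh cadence, and the `select_best_fit` function are all assumptions made for illustration, not part of any DP interface.

```python
import datetime as dt

# Hypothetical assignment store: {sku: (model_name, date_last_selected)}
assignments = {}

REFRESH = dt.timedelta(days=90)  # assumed quarterly re-selection cadence

def model_for(sku, history, today, select_best_fit):
    """Return the forecasting model to use for this SKU, re-running the
    (expensive) best-fit selection only when the stored assignment is stale."""
    entry = assignments.get(sku)
    if entry is None or today - entry[1] > REFRESH:
        assignments[sku] = (select_best_fit(history), today)
    return assignments[sku][0]
```

Between refreshes, every forecast run simply reads the hardcoded assignment, which is what makes the periodic approach so much cheaper than re-selecting on every run.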

Other Forecasting Products

  1. Demand Works Smoothie starts off with best fit as the default, and there is nothing to configure.
  2. MCA Solutions (a best-of-breed service parts solution) has an ingrained best fit (when using statistical forecasting) that selects on the basis of the length and mean of the observed demand history, the trend, and so on. The selection can be configured through a single parameter: whether you want to hard-code the forecast methodology or have it “Computed,” which means the best-fit algorithm makes the selection.

Comparative Design Rating

So I would rate both of the above solutions as roughly equal concerning the ease with which best fit can be initiated.

With regard to the setup for SPP, my intuition is that this configuration is over-engineered and that clients would prefer something simpler in this area. Knowing what I know about the more restricted budgets of service parts accounts, I would have made SPP simpler, with fewer areas to customize than DP and SNP, not more complicated.


Best fit is not universally applicable to all products in a forecasting database. Some applications make using best fit (when applicable) very smooth. SAP DP makes using best fit difficult, which is especially problematic because SAP DP forecast profiles take a while to tune. This not only stretches the patience of the business but also burdens a business that is most often not funded to support an application like DP, which involves so much maintenance.

At one time, it was thought that best fit could always be relied on to make the right selection. This was promoted not only by SAP but by many software vendors, and it is quite untrue. Several clients I have worked with enabled, and then disabled, best-fit forecasting in SAP DP. I cover this topic in this article. When I perform best-fit forecasting, I don’t use DP, as DP’s best fit is not worth using. The only people who use best fit in DP are those who don’t have another application and feel they have to use it in order to be “SAP compliant.”

The Problem: Restricted Ability to Compare Forecast Errors Beyond Item by Item

DP is not designed for the initial analysis of data or the initial model selection. DP is problematic for any comparison or for figuring out what to do. It was never designed for this and compares poorly to many far less expensive forecasting applications.

For this, we recommend buying another inexpensive application. For forecast error measurement, however, we recommend our application. It allows for the semi-automation of forecast error measurement, which is the number one constraint when trying to find the best models to assign.

It took me years to figure out the best way to manage around SAP ERP’s shortcomings; doing so was part of an overall design I developed to mitigate the general shortcomings in forecast error measurement that exist in all of the forecasting systems I have tested. I have not tested every forecasting application on the market, as new ones appear all the time, but I have reviewed all of the major forecasting applications, and they normally work the same way with respect to forecast error measurement. I cover six forecasting applications in the book Supply Chain Forecasting Software.

Being Part of the Solution: Monetized Forecast Error Measurement

Forecasting is much more involved than just producing forecasts. For example, when I produce forecasts with any number of forecasting applications, I spend relatively little time on forecast creation; most of the time goes to checking the error of each forecast. Yet the vast majority of forecasting applications are oriented more toward creating forecasts than toward measuring forecast error. Once the external forecast error measurement has been performed, the forecasting models that have proven most effective can be hardcoded into SAP ERP’s forecasting sub-module. This hardcoding is temporary and can be changed the next time the error checking is performed.
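As a sketch of what monetized, external error measurement can look like (the record layout, product locations, and unit costs below are invented for illustration): absolute forecast error is weighted by unit cost so that it is expressed in currency, and the cheapest model per product location is then hard-assigned.

```python
from collections import defaultdict

# Hypothetical unit cost used to express error in currency rather than units.
unit_cost = {"P1@DC1": 25.0, "P2@DC1": 3.0}

# Hypothetical per-period records: (product_location, model, forecast, actual)
records = [
    ("P1@DC1", "constant", 100, 120), ("P1@DC1", "constant", 110, 105),
    ("P1@DC1", "trend",    140, 120), ("P1@DC1", "trend",    150, 105),
    ("P2@DC1", "constant", 500, 300), ("P2@DC1", "constant", 480, 310),
    ("P2@DC1", "trend",    320, 300), ("P2@DC1", "trend",    330, 310),
]

def monetized_assignments(records, unit_cost):
    """Sum |forecast - actual| * unit cost per (location, model), then
    hard-assign the model with the lowest monetized error to each location."""
    cost = defaultdict(float)
    for loc, model, forecast, actual in records:
        cost[(loc, model)] += abs(forecast - actual) * unit_cost[loc]
    best = {}
    for (loc, model), c in cost.items():
        if loc not in best or c < best[loc][1]:
            best[loc] = (model, c)
    return {loc: model for loc, (model, _) in best.items()}

print(monetized_assignments(records, unit_cost))
# {"P1@DC1": "constant", "P2@DC1": "trend"}
```

Weighting by unit cost changes the ranking compared to a plain unit-count error: a cheap, high-volume item can tolerate larger unit errors than an expensive item, which is the point of monetizing the measurement.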

The application I use for this external error measurement is the Brightwork Explorer. It has been designed around automating error measurement, and therefore the determination of which forecasting method should be assigned to each product location combination. To learn more, see the link below.




I cover best-fit forecasting in the following book.

Forecasting Software Book


Supply Chain Forecasting Software

Providing A Better Understanding of Forecasting Software

This book explains the critical aspects of supply chain forecasting. It is designed to help the reader get more out of their current forecasting system, and it explains some of the best functionality in forecasting, which may not be resident in the reader’s current system but can be accessed at low cost.

The book breaks down what is often taught as a complex subject into simple terms and provides information that practitioners can put to use immediately. It is one of the only books to showcase a variety of supply chain forecasting vendors.

Getting the Leading Edge

The book also gives the reader a look at the forefront of forecasting. Several of the concepts covered, while currently available in forecasting software, have yet to be widely implemented or even written about. The book moves smoothly from ideas to screenshots and descriptions of how the filters are configured and used, giving the reader some of the most intriguing areas of functionality within a variety of applications.


  • Chapter 1: Introduction
  • Chapter 2: Where Forecasting Fits Within the Supply Chain Planning Footprint
  • Chapter 3: Statistical Forecasting Explained
  • Chapter 4: Why Attributes-based Forecasting is the Future of Statistical Forecasting
  • Chapter 5: The Statistical Forecasting Data Layer
  • Chapter 6: Removing Demand History and Outliers
  • Chapter 7: Consensus-based Forecasting Explained
  • Chapter 8: Collaborative Forecasting Explained
  • Chapter 9: Bias Removal
  • Chapter 10: Effective Forecast Error Management
  • Chapter 11: Lifecycle Planning
  • Chapter 12: Forecastable Versus Unforecastable Products
  • Chapter 13: Why Companies Select the Wrong Forecasting Software
  • Chapter 14: Conclusion
  • Appendix A:
  • Appendix B: Forecast Locking
  • Appendix C: The Lewandowski Algorithm