How to Understand the Availability Maximization Spare Parts Management Method

Executive Summary

  • The basic part selection logic of the Availability Maximizer incorporates the cost of the part.
  • Service parts management must be able to deal with zero demand situations.

Introduction

This approach and program were written expressly for a low demand environment at the dealer level. Most independent demand inventory algorithms (EOQ, POQ, Silver-Meal, Part-Period Balancing, etc.) rely on the statistics of high demand per replenishment period to determine inventory holding and purchasing amounts. This raises the question of whether such methods are applicable to environments with low demand per replenishment period.

The Problem of Applicability of Stable Demand Methods to Spare Parts

For environments with relatively stable demand, these methods work reasonably well. Spare parts demand, in contrast, is generally both low per part and highly variable from period to period. As we will find later, low demand and demand variability are related characteristics.

A second limiting factor is that while spare part demand is low per item, the average spare parts depot must carry many times more parts than a manufacturing depot to generate comparable fill levels.

Many Years or Versions of Parts

Spare parts operations must carry not only this model year’s inventories but inventories going back decades.

These three characteristics of the spare parts environment:

  1. Low demand
  2. Highly erratic demand
  3. Massive parts databases

All three present difficult challenges to a company dedicated to order fulfillment.

Who Owns the Dealership Network?

A second feature of the client’s environment was the lack of ownership of the dealership network by the client.

In fact, many of the dealers sold competing agricultural and construction equipment brands and stocked spare parts for those brands alongside the OEM spare parts. Because the dealers would not allow the OEM to manage their inventories with the Availability Max model unless they saw significant benefits, any system used would have to both increase fill and reduce the inventory carrying amounts.

Given the environmental challenges described above, it should be clear that the OEM needed a very different inventory system, one designed to address the characteristics specific to its parts business.

The Basic Part Selection Logic of the Availability Maximizer

The entire logic within the Availability Maximizer is based on the following inventory goals.

  1. Limited Resources for Inventory
  2. Capital Required to Carry Inventory Over Order Intervals
  3. Physical space in the stockroom at the dealer
  4. The Company’s Interest in Filling as Many Customer Orders as Possible

The first three items reduce to a single resource constraint, so these considerations are incorporated into the Availability Max as two inventory goals:

  1. The Cost of the Part (limited resources)
  2. Expected Additional Demand Satisfied (the company’s interest in filling customer orders)

Why the Cost of the Part?

As was noted in the first section of this paper, due to the low demand per part, the erratic nature of that demand, and the vast number of parts in a typical spare parts database, a depot would have to carry many times as much inventory as a manufacturing operation of the same general size in order to generate a comparable order fill.

In practice, no depot does this, because carrying that much inventory would be prohibitively expensive.

The typical spare parts operation must accept a certain level of stock out on some parts, and a 100% stock out on the lowest demand parts in its database. Even after significant inventory intelligence has been applied, at some point the fill level is determined by the aggregate inventory dollars the dealer is willing to commit to order fulfillment. Beyond that point, the customer is no longer willing to subsidize higher fill levels with higher part prices.

Inventory Investment

Therefore, inventory dollar investment is a key component of order fill. Inventory investment is composed of the aggregate of all parts in inventory, and a given budget can buy either many less expensive parts or fewer more expensive parts. Therefore, in the Availability Max algorithm, the cost of the part is the denominator of the objective function that the model attempts to maximize for each part.

Why the Additional Expected Demand Satisfied?

Consider yourself in the situation of the parts manager at an OEM dealership. Every week you must decide which parts to order. You presumably want to order parts that will sell quickly, so that your new purchases take up less space on your shelves, free up money for further purchases, and please more customers.

But, which parts are the best parts to order?

You could purchase whatever you sold the past week, and that would get you part of the way there. Or you could analyze the past year’s demand and, with statistical methods, determine the probability of demand for different parts. This analysis would yield the Expected Additional Demand Satisfied (EADS) given a certain order amount. The EADS will always be smaller than, or in rare instances the same as, the amount you chose to order. As it is impossible to satisfy demand for parts you do not have, EADS can never be larger than the order amount. The calculation of the EADS is very simple.

The Current Inventory Position Versus the Yearly Demand

The current inventory position is compared to the yearly demand of the part to determine the probability of additional demand in some multiple of the order size. If the beginning inventory position is small relative to the lead time demand, then there is a probabilistically larger chance of unfulfilled demand than if the beginning inventory position were larger than the lead time demand. (Later in this paper, the specific probability distributions used and their calculations will be expanded upon.)

Remember that the second basic objective for which the model is built is the company’s interest in filling as many customer orders as possible.

EADS is simply the following formula.

Basic Rule of The Incremental Benefit of Ordering Order Amount (Q)

EADS = % of Q

The Objective Function

The objective function is where the two mathematical expressions of the two inventory goals are put to use. The objective function is the goal that is to be optimized by the Availability Max. In this case, we want the objective function to be maximized.

This allows the model to select parts that have a high EADS relative to their cost.

Objective function:

Maximize  (Expected Additional Demand Satisfied) / (unit cost)     

Determining the numerator of the objective function, the EADS, is where the majority of the effort in the Availability Max model is expended. The cost of a part relative to its probability of being subject to a customer demand determines its ranking as either a high or low opportunity part.[1] [2] For two equivalently priced parts, the higher opportunity part is the one with the higher EADS as a percentage of its order amount. The Availability Max applies the objective function above iteratively.

This means that, beginning from the current inventory position, it calculates the objective function for every part on the parts order as many times as is necessary. After a single iteration in which high opportunity parts are selected for purchase, the purchase amount is added to the current inventory, and the objective function is calculated again. For the parts purchased on the previous iteration, their opportunity is reduced to reflect the new, higher stocking position of those items.

It is important to remember that no part remains a high opportunity part through all model calculations. At some point, sufficient inventory has been purchased through prior iterations that the part is no longer an attractive candidate for additional inventory. To provide perspective, for the average dealer used in the development of this model, it is common for the model to perform 7,000 iterations for purchases and returns before arriving at the optimum holding position.

Example 1 shows how the model would choose the parts at different iterations with the demand and cost characteristics in Table 1.

Example 1

Below are the demand probabilities and costs for part A and part B:

[1] In practice, since there is no strong correlation between a part’s cost and its demand history, the more expensive parts are at a disadvantage and are typically the last to be purchased by the Availability Max. The degree to which it purchases mostly medium to less expensive parts depends upon the desired overall service level used as an input. The higher the desired service level, the higher the model will purchase on the cost scale.

[2] The model is technically defined as an optimizer. This is because it iteratively compares every part until it finds the optimum combination given the objective function, or until it hits a constraint. The constraints are set by the user and include total inventory dollars, new inventory purchases, individual service level, global service level, and an iteration cap.

          Demand of 1   Demand of 2   Demand of 3   Part Cost
Part A    .4            .2            .1            $5
Part B    .6            .1            .05           $8

First Iteration:   .4/5 > .6/8, carry 1 of Part A

Second Iteration:  .2/5 < .6/8, carry 1 of Part B

Third Iteration:   .2/5 > .1/8, carry another of Part A

Final total after three iterations: carry 2 of A and 1 of B

In the first iteration, the probability of a Demand of 1 of Part A is divided by Part Cost A and compared with the probability of a Demand of 1 of Part B divided by Part Cost B. Notice, however, that after the first iteration the relevant question becomes the probability of a Demand of 2 on Part A vs. the probability of a Demand of 1 for Part B. This is because 1 of Part A has already been purchased.[1] Therefore, the Availability Max model asks, “What is the incremental probability of moving from 1 to 2 units of demand for Part A vs. the probability of moving from 0 to 1 unit of demand for Part B?” It is important not to skim over this paragraph, as it is the basic operating logic of the model.
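The three iterations of Example 1 can be reproduced with a short greedy loop. This is an illustrative sketch, not the model’s actual code; it uses the ratio of (probability of the next unit of demand) to (part cost) described in the text.

```python
# Hypothetical sketch of the Availability Max greedy selection for Table 1.
# The probabilities and costs come from the example; all names are illustrative.
def greedy_select(parts, iterations):
    """parts: {name: (probabilities_by_demand_level, cost)}."""
    carried = {name: 0 for name in parts}
    picks = []
    for _ in range(iterations):
        # A part's opportunity is the probability of one more unit of
        # demand beyond what is already carried, divided by its cost.
        def opportunity(name):
            probs, cost = parts[name]
            level = carried[name]  # units already selected
            p = probs[level] if level < len(probs) else 0.0
            return p / cost
        best = max(parts, key=opportunity)
        carried[best] += 1
        picks.append(best)
    return picks, carried

parts = {"A": ([0.4, 0.2, 0.1], 5.0), "B": ([0.6, 0.1, 0.05], 8.0)}
picks, carried = greedy_select(parts, 3)
```

Running this reproduces the Example 1 result: Part A, then Part B, then Part A again, leaving 2 of A and 1 of B.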

The Specifics of the Objective Function for Part Purchases

As explained above, the EADS is generated by comparing the current inventory position to the demand of the past year. Given a certain level of demand and a certain level of inventory, there is a section under the curve which is left uncovered by the current inventory holding position. Graph 1 displays a situation with a lead time demand of 4 units and an average inventory of 5 units. Clearly, demands of 6, 7, and 8 units would be stocked out by 1 unit (6-5), 2 units (7-5), and 3 units (8-5) respectively. Any demand up to 5 will be covered by the current inventory.

Graph 1

Probabilities of Demands Above the Inventory Level 5

[1] Two control sets of data were run through the Availability Max with the (1 - cumulative service level) opportunity calculation. In one input file, all fields but the cost field were kept constant, and in the other input file, all fields but the demand field were kept constant. In both cases the model’s output was consistent: it ordered more of the higher demand parts and more of the low-cost parts, with the consistency and magnitude that would be expected.

The logic of the model used the formula:

(1-Cumulative probability of (beginning inventory))

This statement would be the mathematical expression of the situation in Graph 1. The output from this equation provides the right side of the distribution (from 5 units and higher) while the following formula would provide the left side of the distribution (from 5 units and lower):

(Cumulative probability of (beginning inventory))

By minimizing the right side of the distribution, (1 - Cumulative probability of (beginning inventory)), the first version of the model was using the correct concept, but it was minimizing Expected Demand Not Satisfied rather than maximizing Expected Additional Demand Satisfied. For execution purposes, the basic equation of (1 - Cumulative probability of (n)) was altered to be more robust for the operational version. The resulting improvement to the basic formula is called the Expected Additional Demand Satisfied purchasing equation.
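The two cumulative expressions can be evaluated numerically for the situation in Graph 1 (a lead time demand of 4 units and an inventory of 5 units). This sketch assumes Poisson demand, the distribution the model applies to low demand parts; the helper names are illustrative, not the model’s own.

```python
import math

def poisson_pmf(k, mean):
    # P(D = k) for Poisson-distributed lead time demand.
    return math.exp(-mean) * mean**k / math.factorial(k)

def poisson_cdf(n, mean):
    # Cumulative probability of (n): P(D <= n).
    return sum(poisson_pmf(k, mean) for k in range(n + 1))

# Graph 1's situation: lead time demand of 4 units, inventory of 5 units.
mean_demand, inventory = 4.0, 5
uncovered = 1 - poisson_cdf(inventory, mean_demand)  # right side of the curve
covered = poisson_cdf(inventory, mean_demand)        # left side of the curve
```

Here the uncovered right tail comes to roughly 21% of the probability mass, which is the portion of demand the 5-unit inventory position leaves unprotected.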

EADS Purchasing Equation

Q = Incremental increase in inventory (order size)

n = beginning inventory

EXPECTED ADDITIONAL DEMAND SATISFIED = EADS

EADS = -[(Q-1)*PROB(n+1) + (Q-2)*PROB(n+2) + … + 1*PROB(n+Q-1)] + Q*[1 - CUMPROB(n)]

Objective Function = Max ( EADS/(Cost of Part * Q) )[1]

The EADS is a variation of the original formula, in that the right side of the equation, Q[1 - CUMPROB(n)], is identical to the original formula. The difference lies in the left side of the equation, which subtracts the current step (1, 2, 3, 4, etc.) from the order size (Q) and multiplies this number by the probability of a demand equal to the beginning inventory (n) plus that step. The equation is evaluated step by step until the outcome is sufficiently close to 0; the model is set such that a computation of less than .000001 triggers it to cease calculating this equation.[2]
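As a concreteness check, the EADS purchasing equation can be sketched in Python under a Poisson demand assumption (the distribution the model uses for low demand parts). The function names are illustrative, not the model’s own code. Algebraically, the equation works out to the expected number of additional units sold, which is why EADS can never exceed Q.

```python
import math

def poisson_pmf(k, mean):
    # P(D = k) for Poisson-distributed lead time demand.
    return math.exp(-mean) * mean**k / math.factorial(k)

def poisson_cdf(n, mean):
    # CUMPROB(n): P(D <= n).
    return sum(poisson_pmf(k, mean) for k in range(n + 1))

def eads(q, n, mean):
    """Expected Additional Demand Satisfied for an order of q units
    on top of a beginning inventory of n units."""
    left = sum((q - i) * poisson_pmf(n + i, mean) for i in range(1, q))
    return -left + q * (1 - poisson_cdf(n, mean))

# Graph 1's situation: lead time demand of 4, beginning inventory of 5.
value = eads(3, 5, 4.0)  # EADS for ordering 3 more units
```

A brute-force check, summing min(max(d - n, 0), q) over the demand distribution, gives the same number, and the result always stays below the order quantity q.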

Rather than attempting to identify the uncovered or unprotected portion of the distribution curve (the right side in Graph 1), the EADS formula determines the size of the incremental increase in probability of demand (the portion of probability between the lines from a demand of 5 units to a demand of 6 units in Graph 2).

This addition has benefits for model operation, as well as for the accuracy of the probability estimation.

Graph 2

Incremental Probability Added to Fill Rate by Moving from an Inventory of 5 to an Inventory of 6

[1] This EADS formula serves as the basis for two other derivations in the Avail Max decision system: one which handles returns and a second which estimates order fill.

[2] .000001 was selected as it is reasonably close to 0. By setting this parameter, we save the model computation time which can be better utilized on relevant calculations. This parameter is especially important when the model is dealing with parts with higher demand patterns.

Fill Rate Estimation

A second alteration made to the Avail Max was to the way in which the model estimates fill rate. Originally, the Availability Max estimated fill rate by simply adding the probabilities of demand for the inventory units that were in stock.

For instance, if the lead time demand was 4 units, and an inventory position of 3 units was chosen as the optimum holding amount, then the probabilities of demands of 1, 2, and 3 units were added together to arrive at the estimated fill rate. If, for instance, the probabilities of demands of 1, 2, and 3 were .20, .25, and .15 respectively, then the model would report a 60% fill rate for that particular part.

The Availability Max’s fill rate estimate was improved to a useful approximation of order fill by modifying the EADS formula explained in the previous section. This new fill estimation mimics how one would calculate fill rates on a spreadsheet. Table 2 provides an example of just such a spreadsheet fill rate estimation.

Table 2

Inventory   Demand   Probability   Demand Not Filled   Prob. * Demand Not Filled   Prob. * Demand
3           1        .10           0                   0                           .3
3           2        .25           0                   0                           .75
3           3        .35           0                   0                           1.05
3           4        .15           1                   .15                         .45
3           5        .10           2                   .2                          .3
3           6        .05           3                   .15                         .15
Totals                                                 .5                          3

% not filled = .5/3 = 0.16667

% filled =    1-.1667 = .8333
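The spreadsheet calculation above can be sketched in a few lines of Python (an illustrative sketch; none of these names come from the model). One caveat: the probability-weighted mean of the demands listed in Table 2 is 3.05, so this sketch yields roughly 83.6%, while the worked totals above round the denominator to 3 and report 83.33%.

```python
# Demand probabilities for the part in Table 2 (inventory position of 3).
inventory = 3
probs = {1: 0.10, 2: 0.25, 3: 0.35, 4: 0.15, 5: 0.10, 6: 0.05}

# Expected units not filled: probability-weighted shortfall beyond inventory.
expected_unfilled = sum(p * max(d - inventory, 0) for d, p in probs.items())

# Probability-weighted mean demand, the denominator of the percentage.
mean_demand = sum(p * d for d, p in probs.items())

fill_rate = 1 - expected_unfilled / mean_demand
```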

The Zero Demand Situation

A third area of improvement to the Availability Max was the model’s recognition of situations in which there is no demand for a part. With any part, no matter how high or low the past period’s demand, there is always the possibility that the part will experience zero demand. For the vast majority of parts in a spare parts database, this probability of zero demand is significant, as most parts have demands of less than two units over a two week lead time.

For example, a part with a Poisson demand distribution and a lead time demand of 2 units the previous year would have a 13.5% chance of not being subject to any demand in the comparable period the following year (given use of the naive forecast). This does not translate into a 13.5% fill rate estimate for that part: if the part experiences zero demand, any attempt at fill rate estimation is an illegitimate endeavor. The Availability Max therefore added the probability of zero demand into its fill calculation.

The fill rate estimation is simply a modification of the EADS formula used to purchase parts. The same algorithm is used with 0 used as the beginning inventory variable (n) and the ending inventory substituted for the order amount (Q). The fill rate is then estimated by dividing the EADS by the mean demand of the past year.

Fill Rate Estimation

Q = ending inventory

n = 0

EXPECTED ADDITIONAL DEMAND SATISFIED = EADS

EADS = -[(Q-1)*PROB(n+1) + (Q-2)*PROB(n+2) + … + 1*PROB(n+Q-1)] + Q*[1 - CUMPROB(n)]

Fill Rate =    EADS / Mean Demand[1]
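The fill rate estimation, together with the zero demand point above (a Poisson lead time demand of 2 units implies roughly a 13.5% chance of no demand at all), can be sketched as follows. This is an illustrative sketch assuming Poisson demand; the function names are hypothetical, not the model’s.

```python
import math

def poisson_pmf(k, mean):
    return math.exp(-mean) * mean**k / math.factorial(k)

def poisson_cdf(n, mean):
    return sum(poisson_pmf(k, mean) for k in range(n + 1))

def eads(q, n, mean):
    # The EADS purchasing equation, assuming Poisson demand.
    left = sum((q - i) * poisson_pmf(n + i, mean) for i in range(1, q))
    return -left + q * (1 - poisson_cdf(n, mean))

def fill_rate(ending_inventory, mean_demand):
    # Fill rate estimation: n = 0, Q = ending inventory, divided by mean demand.
    return eads(ending_inventory, 0, mean_demand) / mean_demand

# The zero demand example: a lead time demand of 2 units implies roughly
# a 13.5% chance of seeing no demand at all.
p_zero = poisson_pmf(0, 2.0)
```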

Factoring in Returns

A further change made to the Availability Maximizer was to the method by which the model chose to return parts. When first presented, the model used the same logic that was in the original part purchase equation (EADS). However, this logic did not run well in reverse (for returns): it displayed a tendency to minimize the right side (the uncovered, unprotected portion) of the distribution as the current inventory was reduced by the order size. For the EADS modification, (Q), this time the incremental decrease in inventory, is subtracted from the current inventory (n) to generate (z), the substitute factor for (n) to enter into the modified EADS equation.

[1] The Availability Max model contains both a global and individual fill rate cap which can be entered into the model’s screens before the model is run. The OEM wanted to achieve a global fill rate of 85%. This was entered before the model ran, and in addition, individual caps were set somewhat higher than that level. However, the minimums and package quantities, in whose increments the model was forced to purchase, meant that the individual fill levels were rarely close to the global or individual cap. It is important, when analyzing the model’s output file, to remember that the caps do not limit the fill which an individual part can attain. They only prevent the model from purchasing additional pieces if the estimated fill rate is above the cap on a particular iteration.

Graph 3

From Graph 3, it is clear that the probability gained by moving from an inventory of 3 units to an inventory of 5 units (a purchase quantity of 2) and the probability lost by moving from an inventory of 5 units to an inventory of 3 units (a return quantity of 2) are identical. Therefore, it is only necessary to modify the part purchase formula to generate the probability lost on a return. This is done by changing the meaning of (Q) in the equation from order amount to return amount: the return amount is subtracted from the current inventory (n), and the output of this operation (which we call (z)) is entered as a substitute for the current inventory (n). The new output can then be called the Expected Demand Lost (EDL), as opposed to the EADS.

The EADS Equation (EDL) Modified for Returns

Q = incremental decrease in inventory

n = current inventory

z = n – Q

EXPECTED DEMAND LOST = EDL

EDL = -[(Q-1)*PROB(z+1) + (Q-2)*PROB(z+2) + … + 1*PROB(z+Q-1)] + Q*[1 - CUMPROB(z)]

Objective function = Min( EDL/(Part Cost * Q) )
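The symmetry claimed for Graph 3 can be checked directly: for a return of Q units from an inventory of n, the EDL equation is just the EADS equation evaluated at z = n - Q, so the probability lost by returning 2 units from an inventory of 5 equals the probability gained by purchasing 2 units onto an inventory of 3. A sketch assuming Poisson demand, with illustrative names rather than the model’s code:

```python
import math

def poisson_pmf(k, mean):
    return math.exp(-mean) * mean**k / math.factorial(k)

def poisson_cdf(n, mean):
    return sum(poisson_pmf(k, mean) for k in range(n + 1))

def eads(q, n, mean):
    # EADS purchasing equation from earlier in the paper.
    left = sum((q - i) * poisson_pmf(n + i, mean) for i in range(1, q))
    return -left + q * (1 - poisson_cdf(n, mean))

def edl(q, n, mean):
    # EDL: substitute z = n - Q for the current inventory n.
    z = n - q
    left = sum((q - i) * poisson_pmf(z + i, mean) for i in range(1, q))
    return -left + q * (1 - poisson_cdf(z, mean))
```

For any demand mean, edl(2, 5, mean) and eads(2, 3, mean) return the same value, which is the symmetry Graph 3 illustrates.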

Is This Logic The Correct Logic to Use for Service Parts Inventory Management?

Dr. Hau Lee, Professor of Industrial Engineering at Stanford University, viewed the Availability Max model in operation and recognized it as an application of the greedy heuristic.[1] As it happens, Dr. Lee had jointly published a paper on the greedy heuristic’s use in inventory management in which he supports its use for situations with large numbers of parts (a large number of parts, in his view, being over a thousand). In the experimental results of that paper, Multi-Item Service Constrained (s,S)[2] Policies for Spare Parts Logistics Systems, published in Naval Research Logistics, Lee, Kleindorfer, Pyke, and Cohen used a multi-item algorithm with a Poisson distribution for both high and low demand types.

Two hundred and fifty periods were simulated in order to reduce random error. The results showed that the greedy heuristic approximation was very accurate, with average errors ranging from .0006 to .031 for low service level requirements, and from .005 to .008 for high service level requirements. The following quote is from the Naval Research Logistics article.

“It is possible to apply a greedy heuristic to both S (order up to level) and s (order point) incrementing with either S or s, for the part and control variable that provides the largest incremental increase in service for the minimum cost.” (570)

The Poisson, the Normal and the Compound Poisson Distribution Assumptions and the Problem of Specification[3]

In order to develop the probabilities of different demands for different items for use in the EADS and EDL, it is necessary to choose a probability distribution which will closely fit the future expected demand. The Normal distribution is used when demand volume is sufficient that the law of large numbers allows for accurate forecasting. (A graphical representation of the Normal distribution can be found in Graph 2, a few pages up.) However, for service parts, only the smallest minority of parts fit this description. For the rest, either a Poisson, Gamma, or Compound Poisson distribution is conventionally believed to offer the correct approximation.[4] The Poisson and Gamma are very similar positively skewed probability distributions, with their mass concentrated toward the left. Graphical representations of both follow in Graph 4.

Graph 4

[1] The Poisson and Gamma are both positively skewed distributions (positively skewed means that the longer tail is in the positive number direction). They are typically used when there is a high degree of randomness in the historical data pattern. Both can be used to predict events like the timing of customers arriving at a bank teller window or trucks arriving at a dock, in addition to the demand pattern for C items. The Poisson distribution has been extensively tested and found to be most effective at approximating future demand when the average lead time demand is below 10 units over the test period.[2] [3] The Compound Poisson distribution is used when the demand is both random and extremely “lumpy.”

This distribution is especially applicable when items experience demand in conjunction with one another, for instance, the demand for a left shoe with a right shoe, or the demand for complementary repair parts. The problem with the Compound Poisson lies in its calculation, which is complex. In most low demand situations, either the Poisson or the Compound Poisson can be used effectively, and ease of computation was the deciding factor in favor of the Poisson for the Availability Maximizer model.

When the model was first presented, it used only the Poisson distribution. Later, the Normal distribution was added for parts with a historical demand of more than 10 units over the replenishment lead time. The Normal distribution is calculated in the Availability Maximizer through the polynomial approximation displayed below, which is simply a method for approximating the Normal distribution given a certain normalized value of k.

Polynomial Exponential Approximation for the Normal Distribution

k = ((beginning inventory + Q) – mean demand)/ (standard deviation of demand)

for

(0 <= k <= infinity)

1 - .5(1 + .196854*k + .115194*k^2 + .000344*k^3 + .019527*k^4)^(-4)
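The polynomial approximation can be checked numerically against the exact Normal CDF. In the widely published form of this approximation (from Abramowitz and Stegun’s handbook of mathematical functions), the polynomial in k is raised to the power -4 before being subtracted; the sketch below uses that form, with illustrative function names.

```python
import math

def normal_cdf_poly(k):
    # Polynomial approximation of the standard Normal CDF for k >= 0.
    # Coefficients as given in the text; the -4 exponent follows the
    # published Abramowitz and Stegun form of this approximation.
    poly = 1 + 0.196854 * k + 0.115194 * k**2 + 0.000344 * k**3 + 0.019527 * k**4
    return 1 - 0.5 * poly ** -4

def normal_cdf_exact(k):
    # Exact CDF via the error function, for comparison.
    return 0.5 * (1 + math.erf(k / math.sqrt(2)))
```

Across the range of k values the model would see, the approximation tracks the exact CDF to within a few parts in ten thousand, which is ample precision for ranking parts.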

Minimums and Package Quantities and Return Thresholds

The model’s logic for choosing parts to buy and hold is known as the “greedy heuristic.” However, while it is single-minded in its search for the best opportunity, it may create purchasing scenarios that are uneconomical. For this reason, a minimum order quantity was added to the input file. The minimum order quantity was based on an EOQ with an order cost of $5 and a holding cost of 24% per year.[4] Also, to guarantee orders consistent with the client’s system, a package quantity column was added to the input file.[5] Both minimums and package quantities are used when deciding how much to buy: the first purchase will always be in the minimum order amount, and successive purchases will be in increments of the package quantity.

When returning parts, however, the minimum field is not used. To ensure that the model did not return parts that might be needed at another time, a third column was added to the input file.[6] This column was generated as a nine-month supply based on yearly demand and was called the return threshold field.[7] [8]
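The EOQ-based minimum order quantity described above can be sketched as follows, using the $5 order cost and 24% annual holding rate from the text. The function name and the rounding rule (at least one unit) are illustrative assumptions, not the model’s actual logic.

```python
import math

def eoq_minimum(annual_demand, unit_cost, order_cost=5.0, holding_rate=0.24):
    # EOQ-based minimum order quantity: sqrt(2 * A * D / (r * c)), with the
    # $5 order cost (A) and 24% annual holding rate (r) from the text.
    if annual_demand <= 0:
        return 1
    eoq = math.sqrt(2 * order_cost * annual_demand / (holding_rate * unit_cost))
    return max(1, round(eoq))
```

For a part with an annual demand of 100 units and a $5 unit cost, the EOQ works out to about 29 units, which illustrates the paper’s concern that the minimums may be large relative to typical spare parts demand.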

Forecasting

The focus of the project on which the Availability Max model was developed was to test inventory replenishment logic for the purpose of selecting a professional software package that would perform functions similar to the Availability Max. The team members decided that the model would be fed a naive forecast, and that when the software for inventory replenishment was selected, a software package for forecasting would be chosen as well.

This basic naive approach was further augmented to capture the seasonal nature of the client’s parts, as the following paragraphs explain.

For parts with average annual dollar volume x <= $10

For a 6 month forecast window, look at the same 6 months one and two years ago and use the total of those 12 months of demand divided by 2 to generate the bi-monthly demand forecast.

For parts with average annual dollar volume $10 < x <= $300

For a 3 month forecast window, look at the same 3 months one and two years ago and use the total of those 6 months of demand divided by 2 to generate the bi-monthly demand forecast.

For parts with average annual dollar volume x > $300

For a 2 month forecast window, look at the same 2 months one and two years ago and use the total of those 4 months of demand divided by 2 to generate the bi-monthly demand forecast.

When the forecasting software is finally chosen, this methodology will no longer be used. However, spare parts databases present forecasting challenges which must be dealt with. The vast majority of parts would be classified as C items under traditional inventory theory, and according to Silver and Peterson, C parts do not lend themselves to anything but naive forecasts.

However, a small segment of the database contains parts which can be forecasted reasonably well. By “reasonably,” we mean better than a 25% forecast error.

Conclusions

In testing, the Availability Max purchased both inexpensive parts and higher demand parts. Spreadsheets which mimicked the logic in the Availability Max were used to test the ordering and return amounts as well as the corresponding fill rate calculations. These tests of the model’s operating logic indicated the model was selecting parts in conformance with its programming. As of this writing, the largest open issue is the size of the order minimums. After preliminary runs, the model appeared to be ordering up to the minimum level for the majority of parts. There is evidence that these minimums may be set too high, even though the order cost used is only $5 per line.

During the development of the Availability Max, it was a common occurrence for extra requirements to be projected upon the model. While it may often be intuitively appealing to attempt to include all inventory considerations into the model through the addition of parameters, there are two drawbacks to this approach.

  • First, the attempted optimization of more than a few basic parameters can lead to a “middling effect” whereby the parameters tend to neutralize one another.
  • Second, each additional parameter adds a level of complexity to the modeling process. This is undesirable both because it requires additional resources from the development team and because, in a day-to-day operational inventory management system, simplicity of execution is a necessity.


References and Footnotes

[1] Continuous Distributions – specified outcomes cannot be defined, but the range of outcomes can be defined

Discrete Distributions – specified outcomes can be defined, and a range of outcomes can be defined.

[2] Archibald, B., E. A. Silver, and R. Peterson (1974). “Selecting the Probability Distribution of Demand in a Replenishment Lead Time.” Working Paper No. 89, Department of Management Sciences, University of Waterloo.

[3] The Availability Max model does not operate under any lead time parameters. It simply analyzes the demand it is fed as demand over some interval; the manipulation to adjust for lead time is performed on the input file. The project team is currently using a baseline of a two week total lead time (review + replenishment), which means that all parts with demand of less than 234 units per year fall under the Poisson assumption. This means that for a typical dealer, fewer than 100 parts will fall into the Normal calculations in the model.

[4] Order costs (A) and holding costs (r) are generally difficult to pinpoint. For this reason, Silver and Peterson recommend creating exchange curves displaying the effect on order frequency and cycle stock dollars of various A/r fractions. At the OEM, while the 24% holding cost is uncontroversial, the order cost is subject to discussion.

[5] For the model to operate correctly, minimums are always entered as a multiple of package quantities.

[6] During the project, the OEM voiced a need for the model to deal with non-quantitative issues, or issues which were not feasible to put into a mathematical form. These included substitutions, multi-substitutions, and unit of measure issues. The substitution issue dealt with the transfer of demand data from an old part which had been in some way improved and thus given a new part number. In some cases, one part may be re-engineered into two parts, or two parts re-engineered and combined into one part; these are defined as multi-substitutions. As for unit of measure issues, it was common for the dealer and the OEM to have incompatible data records. For instance, if a hose is regularly sold in 50-foot lengths, the demand data may be corrupted when a sale of one 50-foot length is reported as a sale of “50,” which may be interpreted as a sale of fifty 50-foot lengths. These types of issues were left to “post processing,” in which the data from the output file would be analyzed on an exception basis.

[7] One outcome of all of these changes is that the model was altered to better fit the client’s day to day needs. A second outcome is that the degree of optimization was effectively reduced as more constraints were placed on the outcome of inventory purchases and returns. The fill rates between individual parts became more staggered: there were many parts with 99% fill rates reported, and fewer parts with midrange fill estimations of 83%, 86%, 92%, etc. With these added constraints, the model chose to leave many parts with no fill rate and others with fill rates well beyond the 85% target.

[8] The model has no time horizon or time orientation. It accepts whatever demand it reads from the input file as the demand over the interval it is calculating; if demand over five years were on the input file, then the model would calculate an optimal purchase quantity for a five year period. As we have assumed a two week total lead time (one week for review and one week for replenishment), the yearly demand was divided by 26 to arrive at the demand over lead time. Also, the standard deviation, which is used in computing the probability of demand for the higher demand parts, was available from the client’s information systems on a monthly basis. To convert the monthly standard deviation to a bi-weekly standard deviation, it was divided by the square root of 2.

[1] Lee, Pyke, Kleindorfer, and Cohen. “Multi-Item Service Constrained (s, S) Policies for Spare Parts Logistics Systems.” Naval Research Logistics Vol. 39 pp. 561-577 (1992)

[2] In the (s, S) notation, s = reorder point and S = order-up-to point

[3] The Problem of Specification is defined as the attempt to fit a historical pattern to a probability distribution for the purpose of using statistical methods on the data. There are a few quantitative techniques, such as the Lilliefors test for normality, but more frequently the problem of specification is resolved by applying probability distributions recommended for different situations in published works.

[4] Another popular distribution is the Negative Binomial, which is useful for modeling counts arising from repeated binary events. However, as the Compound Poisson is very similar to it, only the Compound Poisson will be analyzed in this paper.

How to Best Understand a Heuristic Algorithm for Service Parts

Executive Summary

  • What is a heuristic algorithm, how does a heuristic compare against an optimization algorithm, and what is a meta-heuristic?

Introduction to Heuristic Algorithms

This post documents an email discussion between myself and Wayne Fu regarding the heuristic algorithm.

Question for Wayne Fu

“What is a heuristic based optimization algorithm, or a heuristic algorithm?

I thought that heuristics were one form of problem solving, and optimization was another. How is a heuristic-based algorithm or heuristic algorithm different from a non-heuristic-based algorithm? That would help me and readers out a lot.” Shaun Snapp

The Answer

Optimization can be classified as deterministic or stochastic; in deterministic optimization, all inputs are constants. Inventory-related optimization is definitely stochastic, since demand is never a constant but a given distribution. The most classic deterministic optimization method is linear programming.

Another name for stochastic optimization is meta-heuristic. Meta-heuristics are a vast topic and used very broadly, because they are much more flexible and contingent, and can even yield a better result than deterministic methods when the inputs are deterministic.

Heuristics in Major Solvers

Solvers like ILOG’s CPLEX are very robust linear programming solvers, but eventually, when determining a solution, they use heuristics. i2 Technologies used to use CPLEX in master planning to provide draft outcomes, and then MAP as the heuristic solver to fine-tune the solution.

A Metaphor for Comparing a Heuristic Versus Optimization

One extremely simplified way to see the deterministic and heuristic approaches is like searching for a house. Using a deterministic approach would be like zooming out a couple of thousand miles away from Earth and then picking the location you think is best, given all the criteria you can check at that distance. A heuristic would be like standing in front of a train station, asking the people around you or checking the local newspaper to figure out where the better place to live is. Then you move over there, check around again, and narrow the scope further, or even jump to the next place.

Meta-Heuristics

So, inventory optimization is meta-heuristic. METRIC uses marginal analysis as its heuristic criterion.

It starts by searching for the part which provides the best value from increasing its inventory, then the next one, and the next, in the belief that when we stop at some point, that will be the optimal inventory position overall.
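
This kind of marginal analysis can be sketched as a greedy loop: at each step, buy one unit of the part whose fill-rate gain per dollar is largest, until the budget runs out. The single-location Poisson demand model, the fill-rate criterion, and the part data below are illustrative assumptions, not Sherbrooke’s actual METRIC formulation:

```python
import math

def poisson_cdf(k, lam):
    # P(D <= k) for Poisson demand with mean lam
    return sum(math.exp(-lam) * lam ** i / math.factorial(i)
               for i in range(k + 1))

def marginal_analysis(parts, budget):
    """Greedy marginal analysis: repeatedly buy one unit of the part
    with the largest fill-rate gain per dollar until the budget is
    exhausted. `parts` is a list of (name, mean_demand, unit_cost)."""
    stock = {name: 0 for name, _, _ in parts}
    spent = 0.0
    while True:
        best, best_ratio, best_cost = None, 0.0, 0.0
        for name, lam, cost in parts:
            if spent + cost > budget:
                continue  # cannot afford another unit of this part
            s = stock[name]
            # Fill-rate gain from one more unit: P(D <= s+1) - P(D <= s)
            gain = poisson_cdf(s + 1, lam) - poisson_cdf(s, lam)
            if gain / cost > best_ratio:
                best, best_ratio, best_cost = name, gain / cost, cost
        if best is None:
            return stock, spent
        stock[best] += 1
        spent += best_cost
```

Running `marginal_analysis([("A", 2.0, 100.0), ("B", 0.5, 50.0)], budget=300.0)` spreads the budget across both parts rather than filling one part first, which is the essential behavior of the marginal-analysis criterion.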

Most people who work in this area are familiar with the term heuristics, but much less so with the term “metaheuristics.” Metaheuristics are important for problems that are computationally infeasible to solve with optimization.

In computer science, metaheuristic designates a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Metaheuristics make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics do not guarantee an optimal solution is ever found. Many metaheuristics implement some form of stochastic optimization. – Wikipedia
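
As an illustration of that definition, the sketch below is a minimal simulated annealing loop, one common metaheuristic: it iteratively perturbs a candidate solution, always accepts improvements, and occasionally accepts worse moves to escape local minima. The toy cost function, cooling schedule, and parameters are all illustrative assumptions:

```python
import math
import random

def simulated_annealing(cost, start, step, n_iter=20000, temp0=2.0, seed=42):
    """Perturb a candidate solution repeatedly; always accept
    improvements, sometimes accept worse moves (with probability that
    shrinks as the temperature cools). No optimality guarantee."""
    rng = random.Random(seed)
    x = start
    fx = cost(x)
    best, fbest = x, fx
    for i in range(n_iter):
        temp = temp0 * (1.0 - i / n_iter) + 1e-9   # linear cooling
        cand = x + rng.uniform(-step, step)        # perturb candidate
        fcand = cost(cand)
        if fcand < fx or rng.random() < math.exp((fx - fcand) / temp):
            x, fx = cand, fcand
            if fx < fbest:
                best, fbest = x, fx
    return best, fbest

# A toy cost with many local minima; the global minimum is at x = 0.
def bumpy(x):
    return x * x + 3.0 * abs(math.sin(5.0 * x))
```

A pure hill climber started at `x = 4.0` would get stuck in the nearest local dip of `bumpy`; the occasional uphill acceptances are what let the search work its way toward the global minimum.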

Optimization?

Optimization is a word with a number of meanings. In operations research, it means to meet an objective function, usually within some constraints. To the layman, optimization has often been used to mean simply to “improve.” Many people consider it normal that optimization is always possible, or that finding an optimal solution is always possible. However, that is not the case. Some problems, of course, are not worth optimizing, and some problems are so complex that they do not bear optimization easily. This leads to an interesting quote.

In this book we refer to evolutionary algorithms and metaheuristics as improvement methods. In standard business software finding the optimum of a nonlinear or hard to solve problem is often approached by using evolutionary algorithms /iterated search which – after a pre-set maximum calculating time – in a wide variety of cases encountered in business optimization return an acceptable solution in a vicinity of a local optimum (hopefully) close to the global optimum. – Real Optimization in SAP APO

This describes methods that, while they do not produce an optimal result, can get reasonably close to the global optimum.

One of the complicating factors in understanding the difference between heuristics and optimization is that they are often taught as separate methods. A generalization is that an optimizer has an objective function, while a heuristic does not.

However, in practice and in many important foundational research papers, heuristics are in fact combined with optimization. I think you provided an excellent explanation of meta-heuristics. It enables a person who reads METRIC (an acronym for Sherbrooke’s foundational Multi-Echelon Technique for Recoverable Item Control) to understand it much better.

Author Thanks:

I wanted to thank Wayne Fu for his contribution.

Interviewee Profile

Wayne Fu is a Senior Product Manager at Servigistics. With an operations management background, Wayne has worked in the service parts planning domain for more than a decade. At Servigistics, he has led the research and development of areas such as install-base (provisioning) forecasting, inventory optimization, and distribution planning. Currently, he is focusing on the effectiveness of forecast techniques in Last Time Buy situations.

References

Kallrath, Josef, and Thomas I. Maindl. “Real Optimization with SAP APO.” Springer Press, 2006

Intermittent Demand and Service Parts Databases

Our Solution for Managing Intermittent Demand

The number of service parts companies that actually use service parts software is small. We offer some of the most important features of managing service parts in an easy to use SaaS application that can be used to improve the management of any ERP system for service parts. It’s free until it receives “serious usage” and is free for students and academics to access. Select the image below to find out more.

 

Why SAP SPP Continues to Have Implementation Problems

Executive Summary

  • SAP created a partnership with MCA that was designed to get into the service parts planning market.
  • We cover the outcome from this partnership.

Introduction

The path has not cleared for SPP, as the successes have been few and far between. However, there is a solution.

Bringing Up SAP SPP in the Market

SPP has been a long haul for SAP. First of all, this product was an attempt to bring service parts planning into the mainstream. Rightly so, SAP identified service parts planning as a key underinvested-in area in the enterprise.

SAP thought it could grow this business and combined part of the code bases of SAP Demand Planning, SAP Supply Network Planning and then added service specific capability that had been sitting in other best of breed applications for some years. These include:

  • Inventory Rebalancing
  • Leading Indicator Forecasting
  • Repair Buy Functionality
  • Partial Service Level Planning (planning low on the service level hierarchy)
  • For more details on the service level hierarchy, see the link.

SAP even surprised me by coming up with, in my opinion, the best interface for planning in all of SAP SCM: the DRP Matrix. This helped address a historical weakness in the SCM modules (at least for one module). However, the initial problems began when SAP approached clients and explained the SPP solution to them. Instead of focusing on just SPP, clients were shown a demo that included a smorgasbord of SCM functionality which brought many different modules into the solution (such as GATP), and even the SAP Portal.

This was a mistake because even the biggest service organizations have far less money to spend on software, so getting them just to purchase SPP would have been a success. Furthermore, service organizations are far further down the capability totem pole than the finished goods side of the business, so their ability even to implement the solution that SAP presented to them would have been unlikely. I have spoken to SAP product management about this, and they have re-stated that this is their strategy and that they think it is gaining purchase with clients.

The Partnership with MCA

The second part of their strategy was to partner with the best of breed service parts planning company MCA Solutions and create an “xApp” which combined the forecasting functionality of MCA SPO with the supply planning portion of SPP. I have written previously that I am very much opposed to these types of arrangements, for many reasons.

There are several thorny issues with these partnerships.

It’s unclear that vendors should be selecting vendors for clients. The large vendor may not select the smaller vendor that is best for clients, but rather the one that is best for the larger vendor. These partnerships also allow SAP to claim functionality that it did not originate while claiming extraordinary IP rights vis-à-vis the smaller software company.

SAP’s partnership agreements require that the smaller vendor declare their IP and that IP that is undeclared can be taken by SAP. This was rather shocking, and I think shameful that such an agreement would even be drafted.

Unequal partnerships like this are inherently inconsistent with the type of economy that a lot of Americans say they believe in. The Federal Trade Commission has a role, which it no longer seems to take very seriously, in preventing overconcentrations of power in any industry, and that includes software.

However, as luck would have it, the xApp program is currently dying or dead (the xApp program includes something like 140 different application vendors that SAP has “partnered with”), and by and large the xApps have not caught on. MCA and SAP’s contract for the xApp program was not renewed.

SPP Project Problems

Despite their missteps, SAP was able to get several companies to buy and implement SPP. However, the two biggest implementation sites of SPP, Caterpillar Logistics and the US Navy, are, after a number of years and significant expense, not anywhere close to done. The Navy is not live with SPP and is unlikely ever to go live. This is something the folks over at the Navy don’t like to talk about much, as a whole lot of US taxpayer dollars went to Deloitte and IBM for very little output. The blame does not lie squarely with SAP, even though SPP does not work properly. I plan to write a future article entitled “I Follow Deloitte,” which describes how every post-Deloitte SAP SCM project I seem to work on is barely functional. However, Deloitte continues to get accounts somehow, because too many corporate decision makers are not performing their research.

How About Ford?

Another major implementation of SPP is Ford, but they have seen little value from their SPP implementation. The best prediction I receive from those who have worked on the project is that Ford will eventually walk away from SPP. However, they cannot do this publicly because they have invested at least nine years and huge amounts of money in the implementation. Therefore, SPP now has no large reference accounts. A hybrid of SPP has been implemented at Bombardier. However, this is the old SIO architecture, where MCA Solutions performs most of the heavy lifting. Therefore, it also cannot be considered a live SPP implementation.

None of this surprises me; after working with SPP, I find it is not possible to take the application live without custom development work or combining it with functional service parts planning applications. This approach turns SPP into a shell, which can make some executives happy, as it means they are using SAP, but the work is done by a different application.

Reference Accounts for SPP?

This is a problem because these were to be the major reference accounts for selling into other accounts. The problems at Caterpillar are particularly galling, as SPP was developed at Caterpillar. Caterpillar Logistics is plastered all over a large amount of SAP marketing literature and is the gold reference account for the solution. Yet there is not much to reference, unless as a potential client you are willing to wait that long to bring a system live. And secondly, the degree to which Cat is live is a matter of dispute.

Cat will do what it can to maintain the impression that it has at least some functionality live, because to walk away would mean a PR problem. What would be interesting is to see whether SPP can be implemented without a large consulting firm, as neither IBM nor Deloitte has had success with SPP. SAP should consider backing a smaller firm or doing the work themselves, as they need a success in the SPP space. At this point, the biggest referenceable account for SPP is Ford.

Where Do We Go From Here?: The Blended Approach

SAP’s Product Management Approach with SPP

Some decisions made by SPP product management have been very poor. I think the major consulting companies are out of their depth in implementing SPP, and the product needs to be radically improved to make more of its functionality effective. A significant amount of functionality that is in the release notes is simply broken or does not work properly.

I have performed SPP consulting and would like to see the module, and service parts planning in general, become more popular and widely implemented than it is. However, it’s important to consider that SPP only introduced some of the functionality that brings it partially up to par with other best of breed solutions in the current version (7.0). Before 7.0, SPP was not really competitive, and it can take several versions for SAP’s newest functionality to work correctly.

For this reason, and given my personal experiences configuring SPP, it would be difficult for me to recommend relying upon SPP exclusively. I think the experiences at Caterpillar Logistics, Ford, and the US Navy lend credence to the idea that going 100% with SPP is a tad on the risky side.

To fill in the areas of SPP that are lacking, I would recommend a best of breed solution. Some things like leading indicator forecasting need to be improved. Furthermore, if you want to perform service parts planning with service level agreements (SLAs), there is no way around a best of breed solution. There are a number of very competitive solutions to choose from, and it all comes down to matching the way they operate vs. the company needs.

Simulation Capability Enhanced with Best of Breed

I will never be a fan of performing simulation entirely in SCM. The parameters in SAP SCM are too time-consuming to change, and the system lacks transparency. However, several of the best of breed service parts planning solutions are very good at simulation. While it may be comforting to use a single tool, it’s generally a bad idea to try to get software to do something it’s not good at. For simulation, I would recommend going with a hosted solution from a best of breed service parts planning vendor.

As few companies want to invest in staffing a full-time simulation department (planners are often too busy, and lack the training to perform simulation), it makes a lot of sense to host the application with the vendor. As the experts in the application, they can make small tweaks to the system and provide long-term support to the planning organization. All of this can be built into the hosted contract at a reasonable rate.

Conclusion

It only makes sense to use the history of an application to adjust future implementations. In doing so, it is most advisable to pair SPP with a best of breed vendor that best meets the client requirements. The additional benefit of this approach is that you get access to consultants who have brought numerous service parts projects live. And those consultants primarily reside in the best of breed vendors.

We were recently contacted by a major consulting company to support them at a client that is looking at SPP (we don’t work for consulting companies). The consulting company was simply focused on getting the client to implement SPP; knowing the company, it is not difficult to imagine the stories that were told, and what was covered up, to get the client to sign on the dotted line.

Companies interested in the full story on SPP’s functionality and how it compares to what else is available can contact us by selecting the button below.


References

On the underinvestment in service parts:

https://www.servigistics.com/solutions/parts.html

On the precise date the SPP initiative was kicked off at Caterpillar Logistics:

https://logistics.cat.com/cda/components/fullArticle?m=115228&x=7&id=382143