How to Understand the Availability Maximization Spare Parts Management Method

Executive Summary

  • The basic part selection logic of the Availability Maximizer incorporates the cost of the part.
  • Service parts management must be able to deal with zero demand situations.


This approach and program were written expressly for a low-demand environment at the dealer level. Most independent-demand inventory algorithms (EOQ, POQ, Silver-Meal, Part-Period Balancing, etc.) rely on the statistics of high demand per replenishment period to determine inventory holding and purchasing amounts. This raises the question of whether these methods are applicable to environments with low demand per replenishment period.

The Problem of Applicability of Stable Demand Methods to Spare Parts

For environments with relatively stable demand, these methods work reasonably well. Spare parts demand, in contrast, is generally both low per part and highly variable from period to period. As we will find later, low demand and demand variability are related characteristics.

A second limiting factor is that while demand per item is low, the average spare parts depot must carry many times more parts than a manufacturing depot to generate comparable fill levels.

Many Years or Versions of Parts

Spare parts operations must carry not only this model year’s inventories but inventories going back decades.

These three characteristics of the spare parts environment:

  1. Low demand
  2. Highly erratic demand
  3. Massive parts databases

all present difficult challenges to a company dedicated to order fulfillment.

Who Owns the Dealership Network?

A second feature of the client’s environment was that the client did not own the dealership network.

Many of the dealers sold competing agricultural-construction equipment brands and maintained spare parts for these brands in their stockrooms in addition to the OEM spare parts. Because the dealers would not allow the OEM to manage their inventories with the Availability Max model unless they experienced significant benefits, any system used would have to both increase fill and reduce inventory carrying amounts.

From the environmental challenges described above, it should be clear that the OEM needed an inventory system quite different from standard approaches in order to address the characteristics specific to its parts business.

The Basic Part Selection Logic of the Availability Maximizer

The entire logic within the Availability Maximizer is based on the following inventory goals:

  1. Limited resources for inventory: both the capital required to carry inventory over order intervals and the physical space in the stockroom at the dealer
  2. The company’s interest in filling as many customer orders as possible

These two inventory goals are incorporated into the Availability Max as the following:

  1. The cost of the part (limited resources)
  2. Expected Additional Demand Satisfied (the company’s interest in filling customer orders)

Why the Cost of the Part?

As was noted in the first section of this paper, due to the low demand per part, the erratic nature of demand, and the vast numbers of parts in a typical spare parts database, a depot would have to carry many times as much inventory as a manufacturing operation of the same general size to generate a comparable order fill.

Carrying that much inventory would be prohibitively expensive.

The typical spare parts operation must therefore accept a certain level of stock-out on some parts, and a 100% stock-out on the lowest-demand parts in its database. Even after significant inventory intelligence has been applied, the fill level is ultimately based upon the aggregate inventory dollars the dealer is willing to commit to order fulfillment. At some point, the customer is no longer willing to subsidize higher fill levels with higher part prices.

Inventory Investment

Therefore, inventory dollar investment is a crucial component of order fill, with inventory investment composed of the aggregate of all parts in inventory. The inventory system can either buy more of the less expensive parts or fewer of the more costly parts. Therefore, in the Availability Max algorithm, the cost of the part is the denominator of the objective function, which the Availability Max attempts to maximize for each part.

Why the Additional Expected Demand Satisfied?

Consider yourself in the situation of the parts manager at an OEM dealership. Every week you must decide which parts to order. You presumably want to order parts that will sell quickly, which would mean your new purchases would take up less space on your shelves, free up money for further acquisitions, and please more customers.

But, which parts are the best parts to order?

You could purchase whatever you sold the past week, and that would get you part of the way there. Or, you could analyze the previous year’s demand and, with statistical methods, determine the probability of demand on different parts. This analysis would yield the Expected Additional Demand Satisfied (EADS), given a certain order amount. The EADS will always be smaller than, or in rare instances the same as, the amount that you chose to order. As it is impossible to satisfy demand for parts you do not have, EADS will never be bigger than the order amount. The calculation of the EADS is simple.

The Current Inventory Position Versus the Yearly Demand

The current inventory position is compared to the yearly demand of the part to determine the probability of additional demand in some multiple of the order size. If the beginning inventory position is small relative to the lead time demand, then there is a probabilistically larger chance of unfulfilled demand than if the starting inventory position were larger than the lead time demand. (Later in this paper, the specifics of the probability distributions used and their calculations will be expanded upon.)

Remember that the second primary objective for which the model is built is the company’s interest in filling as many customer orders as possible.

EADS is simply the following formula.

Basic Rule of The Incremental Benefit of Ordering Order Amount (Q)

EADS = % of Q

The Objective Function

The objective function is where the two mathematical expressions of the two inventory goals are put to use. The objective function is the goal that is to be optimized by the Availability Max. In this case, we want the objective function to be maximized.

This will allow the model to select parts that have a high EADS relative to their cost.

Objective function:

Maximize (Expected Additional Demand Satisfied) / (unit cost)

Determining the numerator of the objective function, the EADS, is where the majority of effort in the Availability Max model is expended. The relative cost of a part compared to its probability of being subject to a customer demand determines its ranking as either a high or low opportunity part.[1] [2] For two equivalently priced parts, the higher opportunity part is the one with the highest EADS as a percentage of its order amount. The Availability Max performs the objective function above iteratively.

This means that, beginning from the current inventory position, it calculates the objective function for every part, as many times as is necessary. After a single iteration in which high opportunity parts are selected for purchase, the purchase amount is added to the current inventory, and the objective function is calculated again. For the parts purchased on the previous iteration, their opportunity is reduced to reflect the new, higher stocking position of those items.

It is important to remember that no part will remain a high opportunity part for all model calculations. At some point, sufficient inventory has been purchased through prior iterations that adding more stock of the part is no longer attractive. To provide perspective, for the average dealer used in the development of this model, it is common for the model to perform 7,000 iterations for purchases and returns before arriving at the optimum holding position.

Example 1 shows how the model would choose the parts at different iterations with the demand and cost characteristics in Table 1.

Example 1

Below are the demand probabilities and costs for part A and part B:

[1] In practice, since there is no strong correlation between part cost and demand history, the more expensive parts are at a disadvantage and are typically the last to be purchased by the Availability Max. The degree to which it purchases mostly medium- to lower-priced parts depends upon the desired overall service level used as an input. The higher the desired service level, the higher on the cost scale the model will purchase.

[2] The model is technically defined as an optimizer. This is because it iteratively compares every part until it finds the optimum combination given the objective function, or until it hits a constraint. The constraints are set by the user and include total inventory dollars, new inventory purchases, individual service level, global service level, and iteration cap.

         Demand of 1   Demand of 2   Demand of 3   Part Cost
Part A   .4            .2            .1            $5
Part B   .6            .1            .05           $8

First Iteration: .4/5 > .6/8, carry 1 of Part A

Second Iteration: .2/5 < .6/8, carry 1 of Part B

Third Iteration: .2/5 > .1/8, carry another of Part A

Final total after three iterations: carry 2 of A and 1 of B

In the first iteration, the probability of a demand of 1 of Part A is divided by Part A’s cost. This is compared with the probability of a demand of 1 of Part B divided by Part B’s cost. However, notice that after the first iteration, the relevant question becomes the probability of a demand of 2 on Part A vs. the probability of a demand of 1 for Part B. This is because 1 of Part A has already been purchased.[1] Therefore, the Availability Max model asks, “What is the incremental probability of moving from 1 to 2 units of demand for Part A vs. the probability of moving from 0 to 1 unit of demand for Part B?” It is important not to skim over the preceding paragraph, as it is the primary operating logic of the model.
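The iteration-by-iteration logic above can be reproduced in a few lines of code. The sketch below (Python, purely illustrative; the source describes no implementation) covers only the greedy selection step from Example 1: at each iteration, stock one more unit of whichever part offers the largest incremental demand probability per dollar.

```python
# Greedy selection sketch for Example 1 (Table 1 data).
# Opportunity = probability of the *next* unit of demand / part cost.
parts = {
    "A": {"probs": {1: 0.4, 2: 0.2, 3: 0.1}, "cost": 5.0},
    "B": {"probs": {1: 0.6, 2: 0.1, 3: 0.05}, "cost": 8.0},
}

stock = {name: 0 for name in parts}
picks = []
for _ in range(3):  # the three iterations walked through above
    best = max(
        parts,
        key=lambda p: parts[p]["probs"].get(stock[p] + 1, 0.0) / parts[p]["cost"],
    )
    stock[best] += 1
    picks.append(best)

print(picks)  # ['A', 'B', 'A']
print(stock)  # {'A': 2, 'B': 1}
```

This reproduces the final holding of 2 of Part A and 1 of Part B after three iterations.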

The Specifics of the Objective Function for Part Purchases

As explained above, the EADS is generated by comparing the current inventory position to the demand of the past year. Given a certain level of demand and a certain level of inventory, there is a section under the curve which is left uncovered by the current inventory holding position. Graph 1 displays a situation with a lead time demand of 4 units and an average inventory of 5 units. Demands of 6, 7, and 8 units would be stocked out by 1 unit (6-5), 2 units (7-5), and 3 units (8-5), respectively. Any demand up to 5 units will be covered by the current inventory.

Graph 1

Probabilities of Demands Above Inventory Level 5

[1] Two control sets of data were run through the Availability Max with the (1 - cumulative service level) opportunity calculation. On one input file, all fields but the cost field were kept constant, and on the other input file, all fields but the demand field were kept constant. In both cases, the model’s output was consistent: it ordered more of the higher-demand parts and more of the low-cost parts, and it ordered parts with the consistency and magnitude that would be expected.

The logic of the model used the formula:

(1-Cumulative probability of (beginning inventory))

This statement would be the mathematical expression of the situation in Graph 1. The output from this equation provides the right side of the distribution (from 5 units and higher) while the following formula would provide the left side of the distribution (from 5 units and lower):

(Cumulative probability of (beginning inventory))

By minimizing the right side of the distribution, (1 - Cumulative probability of (beginning inventory)), the first version of the model was using the correct concept for reducing Expected Demand Not Satisfied, but not for maximizing the Expected Additional Demand Satisfied. For execution purposes, the basic equation of (1 - Cumulative probability of (n)) was altered to be more robust for the operational version. The following improvement to the basic formula is called the Expected Additional Demand Satisfied purchasing equation.

EADS Purchasing Equation

Q = Incremental increase in inventory (order size)

n = beginning inventory


EADS = -[(Q-1)*PROB(n+1) + (Q-2)*PROB(n+2) + … + 1*PROB(n+Q-1)] + Q*[1 - CUMPROB(n)]

Objective Function = Max( EADS / (Cost of Part * Q) )[1]

The EADS is a variation of the original formula, in that the right side of the equation is identical to the original formula. The difference lies in the left side of the equation, which subtracts the current iteration index (1, 2, 3, 4, etc.) from the order size (Q) and multiplies this number by the probability of a demand equal to the beginning inventory (n) plus that iteration index. This equation is performed for iterations from 1 to infinity until the outcome from the equation is sufficiently close to 0. The model is set such that a computation of less than .000001 triggers the model to cease calculating this equation.[2]
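As a concrete sketch of the purchasing equation, the function below evaluates EADS under a Poisson demand assumption (the distribution the model applies to low-demand parts, as discussed later in this paper). The function and variable names are illustrative, not the model's, and the finite loop stands in for the "iterate until below .000001" rule.

```python
import math

def poisson_pmf(k, mean):
    """PROB(k): Poisson probability of a demand of exactly k units."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

def eads(n, q, mean):
    """Expected Additional Demand Satisfied when raising inventory
    from n units to n + q units, per the purchasing equation above."""
    cumprob_n = sum(poisson_pmf(d, mean) for d in range(n + 1))  # CUMPROB(n)
    left_side = sum((q - i) * poisson_pmf(n + i, mean) for i in range(1, q))
    return q * (1.0 - cumprob_n) - left_side
```

The result equals the expected number of extra units of demand the q additional units would satisfy; dividing it by (cost of part × Q) gives the objective function value for one part.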

Rather than attempting to identify the uncovered or unprotected portion of the distribution curve (the right side in Graph 1), the EADS formula determines the size of the incremental increase in the probability of demand (the portion of probability between the lines from a demand of 5 units to a demand of 6 units in Graph 2).

This addition has benefits regarding the model operation, as well as the increased accuracy of probability estimation.

Graph 2

Incremental Probability Added to Fill Rate by Moving from an Inventory of 5 to an Inventory of 6

[1] This EADS formula serves as the basis for two other derivations of the Avail Max decision system: one which handles returns, and a second which estimates order fill.

[2] .000001 was selected as it is reasonably close to 0. By setting this parameter, we save the model computation time, which can be better utilized on relevant calculations. This parameter is especially important when the model is dealing with parts with higher demand patterns.

Fill Rate Estimation

A second alteration to the Avail Max concerned how the model estimates fill rate. The Availability Max originally estimated the fill rate by simply adding the probabilities of the demands covered by the units in stock.

For instance, suppose the lead time demand was 4 units, and an inventory position of 3 units was chosen as the optimum holding amount. The probabilities of demands of 1, 2, and 3 units would be added together to arrive at the estimated fill rate. If, for instance, the probabilities of demands of 1, 2, and 3 were .20, .25, and .15 respectively, then the model would report a 60% fill rate for that particular part.

The Availability Max improved its fill rate estimate to a useful approximation of order fill by modifying the EADS formula explained in the previous section. This new fill estimation mimics how one would calculate fill rates on a spreadsheet. Table 2 provides an example of just such a spreadsheet fill rate estimation.

Table 2

Inventory | Demand | Probability | Demand Not Filled | Probability × Demand Not Filled | Probability × Demand

% not filled = .5/3 = .1667

% filled = 1 - .1667 = .8333
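Since Table 2's row data did not survive formatting, the sketch below reproduces the spreadsheet method with illustrative numbers (an inventory of 3 units and a made-up demand distribution), not the original table's figures.

```python
# Spreadsheet-style fill rate estimation, mirroring Table 2's columns.
# The demand distribution here is hypothetical, not the original table's.
inventory = 3
rows = [(1, 0.30), (2, 0.25), (3, 0.20), (4, 0.15), (5, 0.10)]  # (demand, probability)

expected_demand = sum(d * p for d, p in rows)                        # Σ probability × demand
expected_unfilled = sum(max(d - inventory, 0) * p for d, p in rows)  # Σ probability × demand not filled

pct_not_filled = expected_unfilled / expected_demand
pct_filled = 1.0 - pct_not_filled
```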

The Zero Demand Situation

A third area of improvement to the Availability Max was the model’s recognition of situations where there is no demand for a part. With any part, no matter how high or low the prior period’s demand, there is always the possibility that the part will experience zero demand. For the vast majority of parts in a spare parts database, this probability of zero demand is significant, as most parts have demands of less than two units over a two-week lead time.

For example, a part with a Poisson distribution to its demand pattern, which had a lead time demand the previous year of 2 units, would have a 13.5% chance of not being subject to demand the following year (given use of the naive forecast). This should not translate into a 13.5% fill rate estimation for that part. If the part experiences zero demand, any attempt at fill rate estimation is meaningless, so the Availability Max added the probability of zero demand into its fill calculation.
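The 13.5% figure follows directly from the Poisson probability of zero events; a quick check (in Python, for illustration):

```python
import math

# Poisson P(D = 0) = e^(-mean); the naive forecast sets the mean to
# last year's lead time demand of 2 units.
mean_demand = 2.0
p_zero_demand = math.exp(-mean_demand)
print(round(p_zero_demand, 3))  # 0.135, i.e., about 13.5%
```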

The fill rate estimation is simply a modification of the EADS formula used to purchase parts. The same algorithm is used with 0 used as the beginning inventory variable (n) and the ending inventory substituted for the order amount (Q). The fill rate is then estimated by dividing the EADS by the mean demand of the past year.

Fill Rate Estimation

Q = ending inventory

n = 0


EADS = -[(Q-1)*PROB(n+1) + (Q-2)*PROB(n+2) + … + 1*PROB(n+Q-1)] + Q*[1 - CUMPROB(n)]

Fill Rate =    EADS / Mean Demand[1]
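The fill rate estimation can be sketched by reusing the EADS calculation with n = 0 and Q set to the ending inventory, again under a Poisson assumption (the names and the Poisson choice are illustrative, not the model's actual code).

```python
import math

def poisson_pmf(k, mean):
    """PROB(k): Poisson probability of a demand of exactly k units."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

def eads(n, q, mean):
    """EADS purchasing equation under a Poisson demand assumption."""
    cumprob_n = sum(poisson_pmf(d, mean) for d in range(n + 1))
    left_side = sum((q - i) * poisson_pmf(n + i, mean) for i in range(1, q))
    return q * (1.0 - cumprob_n) - left_side

def fill_rate(ending_inventory, mean_demand):
    """Estimated fill rate: EADS with n = 0, Q = ending inventory,
    divided by the mean demand of the past year."""
    return eads(0, ending_inventory, mean_demand) / mean_demand
```

Because the calculation starts from n = 0, the probability of zero demand is automatically reflected: a part with a mean demand of 2 units and 3 units on hand estimates roughly an 89% fill, not 100%.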

Factoring in Returns

A third change made to the Availability Maximizer was to the method by which the model chose parts to return. When first presented, the model used the same logic as the original part purchase equation (EADS). However, this logic did not run well in reverse (for returns): it displayed a tendency to minimize the right side (the uncovered, unprotected portion) of the distribution as the current inventory was reduced by the order size. In the EADS modification, (Q), this time the incremental decrease in inventory, is subtracted from the current inventory (n) to generate (z), the substitute for (n) to enter into the modified EADS equation.

[1] The Availability Max model contains both a global and an individual fill rate cap, which can be entered into the model’s screens before the model is run. The OEM wanted to achieve a global fill rate of 85%. This was entered before the model ran, and in addition, individual caps were set somewhat higher than that level. However, the minimums and package quantities, in whose increments the model was forced to purchase, meant that individual fill levels were rarely close to the global or individual cap. It is essential, when analyzing the model’s output file, to remember that the caps do not limit the fill which a single part can attain. They only prevent the model from purchasing additional pieces if the estimated fill rate is above the cap on a particular iteration.

Graph 3

From Graph 3, it is clear that the probability gained by moving from an inventory of 3 units to an inventory of 5 units (a purchase quantity of 2) and the probability lost by moving from an inventory of 5 units to an inventory of 3 units (a return quantity of 2) are identical. Therefore, it is only necessary that the formula for a part purchase be modified to generate the probability lost on a return. This is produced by changing the semantics of (Q) in the equation from the order amount to the return amount. This is performed by subtracting the return amount from the current inventory (n) and using the output (which we call (z)) as a substitute for the current inventory (n). This new output can then be called the Expected Demand Lost (EDL), as opposed to the EADS.

The EADS Equation (EDL) Modified for Returns

Q = incremental decrease in inventory

n = current inventory

z = n – Q


EDL = -[(Q-1)*PROB(z+1) + (Q-2)*PROB(z+2) + … + 1*PROB(z+Q-1)] + Q*[1 - CUMPROB(z)]

Objective function = Min( EDL/(Part Cost * Q) )
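A sketch of the returns variant follows: the same EADS machinery evaluated at z = n − Q, again assuming Poisson demand (the names are illustrative, not the model's).

```python
import math

def poisson_pmf(k, mean):
    """PROB(k): Poisson probability of a demand of exactly k units."""
    return math.exp(-mean) * mean ** k / math.factorial(k)

def edl(n, q, mean):
    """Expected Demand Lost by returning q units from a current
    inventory of n: the EADS formula evaluated at z = n - q."""
    z = n - q
    cumprob_z = sum(poisson_pmf(d, mean) for d in range(z + 1))  # CUMPROB(z)
    left_side = sum((q - i) * poisson_pmf(z + i, mean) for i in range(1, q))
    return q * (1.0 - cumprob_z) - left_side
```

By the symmetry noted for Graph 3, `edl(5, 2, mean)` equals the probability a purchase from 3 up to 5 units would have gained.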

Is This Logic The Correct Logic to Use for Service Parts Inventory Management?

Dr. Hau Lee, Professor of Industrial Engineering at Stanford University, viewed the Availability Max model in operation and recognized it as an application of the greedy heuristic.[1] As it happens, Dr. Lee had jointly published a paper on the greedy heuristic’s use in inventory management in which he supports its use for situations with large numbers of parts (a large number of parts, in his opinion, being over a thousand). In experimental results taken from that paper, “Multi-Item Service Constrained (s, S)[2] Policies for Spare Parts Logistics Systems,” published in Naval Research Logistics, Lee, Kleindorfer, Pyke, and Cohen used a multi-item algorithm with a Poisson distribution for both high and low demand types.

Two hundred and fifty periods were simulated to reduce any random error. The results were that the greedy heuristic approximation was very accurate, with average errors ranging from .0006 to .031 for low service level requirements, and from .005 to .008 for high service level requirements. The following quote is from the Naval Research Logistics article.

“It is possible to apply a greedy heuristic to both S (order up to level) and s (order point) incrementing with either S or s, for the part and control variable that provides the largest incremental increase in service for the minimum cost.” (570)

The Poisson, the Normal and the Compound Poisson Distribution Assumptions and the Problem of Specification[3]

To develop the probabilities of different demands for different items for use in EADS and EDL, it becomes necessary to choose a probability distribution which will closely fit the future expected demand. The Normal distribution is used when the demand is sufficient in volume such that the law of large numbers allows for accurate forecasting. (A graphical representation of the Normal distribution can be found in Graph 2, a few pages up.) However, for service parts, only the smallest minority of parts fit this description. For the rest, either a Poisson, Gamma, or Compound Poisson distribution is conventionally believed to offer the correct approximation.[4] The Poisson and Gamma are very similar positively skewed probability distributions. The graphical representations for both follow in Graph 4.

Graph 4

[1]The Poisson and Gamma are both positively skewed distributions (positively skewed means that the longer tail is in the positive number direction). They are typically used when there is a high degree of randomness in the historical data pattern. Both can be used to predict events like the timing of customers arriving at the bank teller window, trucks arriving at a dock, in addition to the demand pattern for C items. The Poisson distribution has been extensively tested and found to be most effective at approximating future demand when the average lead time demand is below ten units over the test period.[2] [3] The Compound Poisson distribution is used when the demand is both random and extremely “lumpy.”

This distribution is especially applicable when items experience demand in conjunction with one another, for instance, the demand for a left shoe with a right shoe, or the demand for complementary repair parts. The problem with the Compound Poisson is that its calculation is complex. In most low demand situations, either the Poisson or Compound Poisson can be used effectively, and it was ease of computation that was the deciding factor in favor of the Poisson for the Availability Maximizer model.

When the model was first presented, it only used the Poisson distribution. Later, the Normal distribution was added for parts with a historical demand of more than 10 units over the replenishment lead time. The Normal distribution is calculated in the Availability Maximizer through the polynomial approximation displayed below, which is simply a method for approximating the Normal distribution given a specific normalized value k.

Polynomial Exponential Approximation for the Normal Distribution

k = ((beginning inventory + Q) – mean demand)/ (standard deviation of demand)


(0 <= k <= infinity)

1 - .5(1 + .196854k + .115194k² + .000344k³ + .019527k⁴)^(-4)
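A sketch of this approximation follows; note that raising the polynomial to the power of −4 follows the standard Abramowitz and Stegun form of this formula, which uses exactly these coefficients.

```python
def normal_cdf_approx(k):
    """Approximate standard Normal CDF for k >= 0 using the
    polynomial approximation (Abramowitz & Stegun form)."""
    if k < 0:
        raise ValueError("approximation is valid for k >= 0")
    poly = (1 + 0.196854 * k + 0.115194 * k ** 2
              + 0.000344 * k ** 3 + 0.019527 * k ** 4)
    return 1.0 - 0.5 * poly ** -4
```

At k = 0 this returns .5, and at k = 1.96 roughly .975, matching Normal tables to about three decimal places.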

Minimums and Package Quantities and Return Thresholds

The model’s logic for choosing parts to buy and hold is known as the “greedy heuristic.” However, while it is single-minded in its search for the best opportunity, it may create purchasing scenarios that are uneconomical. For this reason, a minimum order quantity column was added to the input file. The minimum order quantity was based on an EOQ with an order cost of $5 and a holding cost rate of 24% per year.[4] Also, to guarantee orders consistent with the client’s system, a package quantity column was entered into the input file.[5] Both minimums and package quantities are used when deciding how much to buy. The first purchase will always be in the minimum order amount, and then successive purchases will be in increments of the package quantity.
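The minimum order quantity can be sketched with the standard EOQ formula, using the $5 order cost and 24% annual holding rate mentioned above (the function name and parameter defaults are our assumptions, not the project's code):

```python
import math

def min_order_qty(annual_demand_units, unit_cost,
                  order_cost=5.0, holding_rate=0.24):
    """EOQ-based minimum order quantity: sqrt(2 * D * A / (r * c)),
    with A = $5 order cost and r = 24% annual holding rate."""
    holding_cost = holding_rate * unit_cost  # dollars per unit per year
    return math.sqrt(2 * annual_demand_units * order_cost / holding_cost)
```

For a $10 part selling 100 units a year, this gives a minimum of about 20 units, which illustrates why the testing section notes the minimums may be set too high for low-demand parts.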

However, when returning parts, the minimum field is not used. To ensure that the model did not return parts that might be needed at another time, a third column was added to the input file.[6] This column was generated as a nine-month supply based on yearly demand, and was called the return threshold field.[7] [8]


The focus of the project on which the Availability Max model was developed was to test inventory replenishment logics in order to select a professional software package which would perform functions similar to the Availability Max. The team members decided that the model would be fed a naive forecast, and that when the software for inventory replenishment was selected, a software package for forecasting would also be chosen.

This basic naive approach was further augmented to capture the seasonal nature of the parts of the client. The naive approach was supplemented, as the following paragraphs explain.

For parts with average annual dollar volume x <= $10:

Look forward over the same 6-month window one and two years ago, and use the total of 12 months of demand divided by 2 to generate the bi-monthly demand forecast.

For parts with average annual dollar volume $10 < x <= $300:

Look forward over the same 3-month window one and two years ago, and use the total of 6 months of demand divided by 2 to generate the bi-monthly demand forecast.

For parts with average annual dollar volume x > $300:

Look forward over the same 2-month window one and two years ago, and use the total of 4 months of demand divided by 2 to generate the bi-monthly demand forecast.
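The three dollar-volume bands above can be sketched as follows; the function names and the list-of-monthly-demands input format are our assumptions, not the project's actual implementation.

```python
def window_months(annual_dollar_volume):
    """Length (in months) of the forward-looking window, chosen by
    the part's average annual dollar volume band."""
    if annual_dollar_volume <= 10:
        return 6
    if annual_dollar_volume <= 300:
        return 3
    return 2

def seasonal_naive_forecast(window_one_year_ago, window_two_years_ago):
    """Sum the same forward-looking window from one and two years ago
    (each a list of monthly demands) and divide by 2."""
    return (sum(window_one_year_ago) + sum(window_two_years_ago)) / 2.0
```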

When the forecasting software is finally chosen, this methodology would no longer be used. However, the spare parts databases promise challenges that must be dealt with. The vast majority of parts would be classified as C items under traditional inventory theory, and according to Silver and Peterson,  C parts do not lend themselves to anything but naive forecasts.

However, for a small segment of the database, some parts can be forecasted reasonably well. When we say “reasonably” we mean better than a 25% forecast error.


In testing, the Availability Max purchased both inexpensive parts and higher-demand parts. Spreadsheets that mimicked the logic in the Availability Max were used to test the ordering and return amounts as well as the corresponding fill rate calculations. These tests of the model’s operating logic indicated the model was selecting parts in conformance with its programming. As of the time of this writing, the largest issue is the size of the order minimums. After preliminary runs, the model appeared to be ordering up to the minimum level for the majority of parts. There is evidence that these minimums may be set too high, even though the order cost used is only $5 per line.

During the development of the Availability Max, it was a common occurrence for extra requirements to be projected upon the model. While it may often be intuitively appealing to attempt to include all inventory considerations into the model through the addition of parameters, there are two drawbacks to this approach.

  • First, the attempted optimization of more than a few basic parameters can lead to a “middling effect,” whereby the parameters tend to neutralize one another.
  • Second, each additional parameter adds a level of complexity to the modeling process. This is undesirable both because it requires additional resources from the development team and because, in a day-to-day operational inventory management system, simplicity of execution is a necessity.


References and Footnotes

[1] Continuous Distributions – specified outcomes cannot be defined, but the range of outcomes can be defined

Discrete Distributions – specified outcomes can be defined, and a range of outcomes can be defined.

[2] Archibald, B., E. A. Silver, and R. Peterson (1974). “Selecting the Probability Distribution of Demand in a Replenishment Lead Time.” Working Paper No. 89, Department of Management Sciences, University of Waterloo.

[3] The Availability Max model does not operate under any lead time parameters. It merely analyzes the demand it is fed as demand over some interval, the manipulation to adjust for lead time is performed on the input file. The project team is currently using a baseline of a two week total lead time ( review + replenishment ), which means that all parts with demand less than 234 per year fall into the Poisson assumption. This means that for a typical dealer, less than 100 parts will fall into the Normal calculations in the model.

[4] Variable order costs (A) and holding costs (r) are recommended, as it is generally difficult to pinpoint actual costs. For this reason, Silver and Peterson recommend creating exchange curves displaying the effect on order frequency and cycle stock dollars of various A/r ratios. At the OEM, while the 24% holding cost is uncontroversial, the order cost is subject to discussion.

[5] For the model to operate correctly, minimums are always entered as multiple package quantities.

[6] During the project, the OEM voiced a need for the model to deal with non-quantitative issues, or issues which were not feasible to put into a mathematical form. These included substitutions, multi-substitutions, and unit-of-measure issues. The substitution issue dealt with the transfer of demand data from an old part which had been in some way improved and thus given a new part number. In some cases, one part may be re-engineered into two parts, or two parts re-engineered and combined into one part. These are defined as multi-substitutions. As for the unit-of-measure issues, it was common for the dealer and the OEM to have incompatible data records. For instance, if a hose is regularly sold in 50-foot lengths, the demand data may be corrupted when a sale of one 50-foot length is reported as a sale of “50,” which may be interpreted as a sale of fifty 50-foot lengths. These types of issues were left to “post-processing,” in which the data from the output file would be analyzed on an exception basis.

[7] One outcome from all of these changes is that the model was altered to fit the clients’ day to day needs better. A second outcome is that the degree of optimization was effectively reduced as more constraints were placed on the result of inventory purchases and returns. Between individual parts, the fill rates became more staggered, there were many parts with 99% fill rates reported, and fewer parts with midrange fill estimations of 83, 86, 92%, etc… With these added constraints, the model chose to leave many parts with no fill rate and others with fill rates well beyond the 85% target.

[8] The model has no time horizon or time orientation. It accepts whatever demand it reads from the input file as the demand over the interval it is calculating. If demand over five years were on the input file, then the model would calculate an optimal purchase quantity for five years. As we have assumed a two-week total lead time (one week for review and one week for replenishment), the yearly demand was divided by 26 to arrive at the demand over lead time. Also, the standard deviation, which is used in computing the probability of demand for the higher-demand parts, was available to us from the client’s information systems on a monthly basis. To scale the monthly standard deviation down to the two-week interval, it was divided by the square root of 2.

[1] Lee, Pyke, Kleindorfer, and Cohen. “Multi-Item Service Constrained (s, S) Policies for Spare Parts Logistics Systems.” Naval Research Logistics Vol. 39 pp. 561-577 (1992)

[2] In the notation (s, S), s is the reorder point and S is the order-up-to point.

[3] The problem of specification is defined as the attempt to fit a historical pattern to a probability distribution in order to use statistical methods on the data. There are a few quantitative techniques, such as the Lilliefors test for normality. More frequently, however, the problem of specification is resolved by applying probability distributions recommended for different situations in published works.

[4] Another widespread distribution is the Negative Binomial, which models the number of failures occurring before a given number of successes in a sequence of binary trials. However, as the Compound Poisson is very similar to it, only the Compound Poisson will be analyzed in this paper.

How MCA Solutions Should be Remembered

Executive Summary

  • MCA has been acquired.
  • What are some of MCA Solutions’ critical contributions to supply chain planning software, and what will happen to the MCA product?


Servigistics recently acquired MCA Solutions. This is a significant development as the two companies were the top two software vendors in the service parts planning space. Some articles will undoubtedly cover the strategic angle of what this merger means for the service parts planning software market. However, in this article, I wanted to focus on some of the significant contributions for which MCA Solutions should be remembered.

My Exposure to MCA Solutions

I first attended MCA training in 2007, a month or so after my first introduction to the company. After attending training at their headquarters in Philadelphia, I worked on an MCA implementation for a year. During that year, I learned quite a bit about their application: I used their software, read through their documentation, and interacted with MCA consultants. My interaction with MCA’s people and product was how I first became educated in inventory optimization and multi-echelon inventory optimization (MEIO). This is a topic this blog also covers, and I have a book coming out that highlights several vital features of MCA’s product that help demonstrate concepts related to MEIO (MCA screenshots are included in the book, though they will now be described as Servigistics screenshots).

What Will Happen to MCA’s Application?

The MCA Solutions product will eventually be discontinued, and some of the functionality will be ported to Servigistics’ service parts planning product. Because the MCA application will not exist as a product far into the future, I wanted people who had not worked with the product to know some of the critical contributions of MCA Solutions.

A Sampling of Their Ideas and Contributions

MEIO Innovation

MCA was one of the first MEIO applications. MCA was founded by Morris Cohen, a highly regarded academic and sometime consultant; along with the people he brought in, he was able to implement in a commercial product something that had previously been primarily of academic interest.

A High Degree of Control Over the Supply Plan

MCA developed one of the most powerful supply planning applications, in either service parts planning or finished goods planning, that I have used (MCA’s solution also performed forecasting in a way customized explicitly for service parts). A few of the reasons that MCA’s application was so powerful are listed below:

  1. By leveraging MEIO, which is more powerful and controllable than other supply planning methods (MRP/DRP, heuristics, allocation, and cost optimization), the application was able to control the supply plan very precisely.
  2. The application interface was compact, with easy access to different screens.
  3. The application’s parameter management was one of the easiest to review and change of any application I have worked with. Parameter maintenance is one of the most underrated areas of supply chain application usability, and a major maintenance headache in many applications; however, MCA made developing a straightforward way to adjust configuration data look easy. I have wondered several times why more companies don’t copy it.

MCA’s solution had an excellent combination of a mathematically sophisticated back end and an easy-to-use front end. This is one of the primary goals of advanced supply chain planning software generally, and it is infrequently accomplished.

Alerts and Recommendations in One View

MCA developed a capability that I had never seen before: the Network Proposed View. This view, which is shown in the upcoming book, sorted the recommendations by their contribution to the service level. It combined a straightforward analytical view of the application’s recommendations (procurement orders, so-called “New Buys”; repair orders; stock transfers, so-called transshipments; and allocations) with an alert system, in that it told planners where to focus. It also required no configuration and was an out-of-the-box capability.


MCA had mastered redeployment, something that all service parts planning clients need, and that many finished goods companies also need (but often refuse to admit; the comment on this topic is often “if we improve our forecast, we won’t need to redeploy”). MCA’s redeployment was also highly customizable and could be very precisely tuned.

Simplified Simulation

MCA’s application provided an excellent simulation environment. It displayed the results of two planning runs right next to each other in the user interface. This allowed a planner to keep one result, make adjustments, and rerun the optimizer with new service level or inventory parameters. The planner could then perform a direct comparison between the old and new runs. If the new run was not an improvement, a few more changes could be made, the optimizer rerun, and the simulation overwritten. This provided simulation capability in the same screen as the active version and made it very easy to use.

This is another area that many vendors have a hard time making user-friendly, and one which MCA had mastered.

Optimizing Service Level or Inventory Investment

The MCA MEIO optimizer could be run bi-directionally: it could maximize service level subject to a cap on inventory investment, or minimize inventory investment subject to a service level floor. While inventory optimization is best known for controlling service levels, by capping inventory investment MCA allowed companies to stock their network based on their inventory budget. This is quite realistic, as companies do track their inventory investment and are given objectives to reduce it as much as possible. With MCA, one could manage the inventory investment quite specifically.

Clear and Highly Educational Documentation

MCA’s documentation on its solution was top-notch. Through accumulating research papers, books, and other sources, I have a vast library of MEIO documentation, and MCA’s Principles of Operation, in particular, may be my favorite MEIO document. I still frequently refer to MCA documentation when I have a question about how an MEIO or service parts concept can be implemented in software. MCA had both functional and technical documentation, and all of it was extremely helpful and written with great attention to detail. Many vendors could learn from how MCA documented their products.


From any angle one would wish to view these items, these are significant contributions, and this is not even the full list.


Things change. However, I will miss MCA Solutions. They were a real innovator; they had a great vision and executed on it exceptionally well. MCA showed the benefits of focusing on one area. Many of their consultants were not only experts in MCA software; they also knew service parts planning inside and out. Their software and their people got me thinking differently about a variety of topics. While MCA did not exist as an independent entity for very long (although software companies tend to have shorter lives than most other companies), their innovations should be remembered.



How to Use Order Fill Rate Versus Backorder as a Service Measurement

Executive Summary

  • One can use backorders or the order fill rate.
  • We compare two papers’ use of backorder service measurement and explain why the backorder is superior for military applications.


The vast majority of articles on this website that discuss service measurement focus on the order fill rate, as this is the most popular service level measurement method in business. In this article, you will learn how the order fill rate compares to the backorder.

Backorders or Order Fill Rate

The majority of the early work on inventory optimization and multi-echelon planning, which began in the late 1950s and now drives the best of breed service parts planning software applications, was designed around backorders as a service measurement. This is because the research was primarily paid for by the Air Force and carried out by the RAND Corporation; the focus was squarely on solving the problem of managing military service parts networks.

Therefore, it is interesting to compare and contrast two quotations from research papers that focused on minimizing backorders. The first is from Craig Sherbrooke’s METRIC (an acronym for Multi-Echelon Technique for Recoverable Item Control) paper, written in 1966.

Sherbrooke Explanation

This is how Sherbrooke explains his use of backorders over order fill rates in his paper.

“Order Fill Rate: defined as the fraction of demands that are immediately fulfilled by supply when the requisitions are received — concentrates nearly all stock at the bases. The result is that when a non order fill occurs, the backorder lasts a very long time. Similarly, the order fill rate behaves improperly in allocating investment at a base when the item repair times are substantially different. Consider two items with identical characteristics except that one is base-reparable in a short time, and the other is depot reparable with a much longer repair time. Assume that our investment constraint allows us to purchase only one unit of stock. In that case, the order fill rate criterion will select the first item, and the backorder criterion the second.

The fill rate possesses an additional defect. A fill is normally defined as the satisfaction of a demand when placed. But if we allow a time interval T to elapse, such as a couple of days, on the grounds that some delay is acceptable, the policy begins to look substantially different. As longer delays are explored, the policy begins to resemble the minimization of expected backorders.

In summary, the backorder criterion seems to be the most reasonable. The penalty should depend on the length of the backorder and the number of backorders; linearity is the simplest assumption. This is the criterion function most often employed in inventory models.” – Craig Sherbrooke

The Superiority of the Backorder Versus Fill Rates for Service Measurement

Sherbrooke explains that he considers backorders superior for his purposes of service measurement due to the following:

  • Order fill rates tend to concentrate stock at the bases (bases in Sherbrooke’s papers correlate to DCs in industry-speak, with the depot being the regional DC, or RDC)
  • Order fill rates measure the satisfaction only at the point of initial delay and do not measure how late a fulfillment occurs.

Therefore, as part of METRIC, Sherbrooke designed a penalty that multiplies the length of each backorder by the number of backorders.
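For Poisson lead-time demand, the expected-backorder criterion can be computed directly. The sketch below is an illustration of the idea, not code from the METRIC paper, and the numbers are made up:

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """P(X = k) for a Poisson random variable with mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

def expected_backorders(stock: int, lam: float, cutoff: int = 100) -> float:
    """EBO(s) = sum over x > s of (x - s) * P(X = x): the average number
    of unfilled demands outstanding at a stock level s, under Poisson
    lead-time demand with mean lam. cutoff truncates the infinite sum."""
    return sum((x - stock) * poisson_pmf(x, lam)
               for x in range(stock + 1, cutoff))

# With no stock, expected backorders equal mean demand itself; each
# added unit of stock then reduces EBO by P(X > s), a shrinking amount.
for s in range(4):
    print(s, round(expected_backorders(s, lam=1.5), 4))
```

The shrinking marginal reduction is what makes this criterion attractive for allocating an investment budget unit by unit.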

Leanard Laforteza gives similar reasoning for selecting backorders as a service measurement in his paper designing a multi-echelon system for supplying Marine military deployments.

Leanard Laforteza on Fill Rate

“Fill rate is the percentage of demands that can be met at the time they are placed, while backorders are the number of unfilled demands that exist at a point in time. In commercial retail, if customer demand cannot be satisfied, a customer either goes away or returns at a later time when the item is restocked. The first case can be classified as lost sales while the second case creates a backorder on the supplier or manufacturer. In military applications, especially in most critical equipment, any demand that is not met is backordered. The backorder is outstanding until a resupply for the item is received, or a failed item is fixed and made available for issue.

These two principal measures of item performance – order fill rate and backorders – are related, but very different. Commercial retailers are more interested in order fill rate than in backorders because fill rate measures customer satisfaction at the time each demand is placed. Not only is fill rate easy to calculate, but it also helps retailers form a picture of how well they are meeting customer demand. Experience may tell them that a 90% fill rate on an item is not acceptable and will create customer complaints. On the other hand, backorders are not as easy to compute as fill rate. Unlike commercial retail business, the military is not concerned with lost sales. The military measures performance not in terms of sales, but in terms of equipment availability.

In terms of supply support service measurement, we recommend tracking backorders. Although fill rate tends to have clearer meaning to commercial suppliers, the rate does not have the same meaning in military applications. Using the concept of backorders, a unit can determine the status of its supply support not just when the order is placed, but up to the time the item was received.” – Leanard D. Laforteza

Here Laforteza does an excellent job explaining why backorders are more relevant for military applications than the order fill rate.
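The difference between the two measures can be made concrete with a toy demand trace. The example below is hypothetical (not taken from either paper) and assumes unmet demand is backordered rather than lost, with no replenishment arriving during the trace:

```python
def service_measures(demands, on_hand):
    """Contrast fill rate with backorders on a simple demand trace.

    demands: units demanded in each period; on_hand: starting stock.
    Returns (fill rate, backorder-periods). Fill rate only sees the
    instant each demand is placed, while backorder-periods keeps
    growing for as long as a shortage remains outstanding."""
    filled, backorder_periods = 0, 0
    for d in demands:
        filled += min(d, max(on_hand, 0))   # demand met immediately from stock
        on_hand -= d                        # negative on_hand = open backorders
        backorder_periods += max(-on_hand, 0)
    total = sum(demands)
    return (filled / total if total else 1.0), backorder_periods

# Three demands against two units of stock, then two quiet periods:
print(service_measures([1, 1, 1, 0, 0], on_hand=2))  # (0.666..., 3)
```

The fill rate stops changing after the third period, but the backorder measure keeps accruing in the quiet periods, which is exactly Sherbrooke's point about measuring how long a shortage lasts.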

The Greater Market for MEIO Applications

However, as the greater market for MEIO applications is civilian, vendors added order fill rates, and the order fill rate is now the dominant method in MEIO implementations. MCA Solutions, a service parts planning vendor with a substantial military client base, can measure service level by order fill rate or by availability (i.e., the uptime of equipment). While it does not measure service by backorder as Sherbrooke’s METRIC or Laforteza’s approach does, MCA allows backordering to be set flexibly for different locations. MCA allows for the following settings:

  1. All locations to have a backorder applied
  2. Only the root locations to have a backorder applied
  3. No locations to have a backorder applied (which is the default).

MCA describes its management of backorders in the following way:

“A Location is called backorderable if the unmet demand at that Location gets backordered at that Location and waits until the inventory is available at that Location. A Location is not backorderable (also referred to as lost-sales) if the unmet demand is passed to another Location or outside the supply chain. In backorderable models, preference is given to destinations that do not have enough inventory position to meet their child Location needs.” – MCA Solutions

Introduction to the (S-1, S) Inventory Policy

The (S-1, S) inventory policy has been discussed in several previous articles.

For this reason, it made sense to create an article that explains what it does and how it can be used to improve inventory management.

The following was written by Wayne Fu.

Muckstadt and the (s, S) Inventory Policy

In section 1.1.2 of Muckstadt’s book, the (s, S) inventory policy is briefly explained.

  1. s is the reorder point
  2. S is the order-up-to level

When inventory position (which is on hand plus on-order minus backorder) falls to or below s, it triggers an order to raise the inventory position to S.

(S-1, S) as a Specialized Form

  • (S-1, S) is simply a specialized form of (s, S) in which the reorder point is fixed at s = S-1.
  • In section 1.2, Muckstadt states the fundamental assumption of his model: the parts are costly enough that they should be managed by an (S-1, S) policy.
  • (S-1, S) is an ordering policy that says: whenever the inventory position drops one unit below S (that is, to S-1), place an order to bring the inventory position back up to S. In other words, every unit of demand triggers a one-for-one replenishment order.

It is very commonly used in long lead-time environments such as aerospace.
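As a minimal sketch (the function name and numbers are ours, not Muckstadt's notation), the ordering rule and its one-for-one special case look like this:

```python
def order_quantity(inv_position: int, s: int, S: int) -> int:
    """(s, S) policy: when the inventory position (on hand + on order
    - backorders) falls to or below the reorder point s, order up to S;
    otherwise order nothing."""
    return S - inv_position if inv_position <= s else 0

# General (s, S): reorder point 2, order-up-to level 6.
print(order_quantity(2, s=2, S=6))   # orders 4 units

# (S-1, S) is the special case s = S - 1: any one-unit drop in the
# inventory position triggers a one-for-one replenishment order.
print(order_quantity(5, s=5, S=6))   # orders 1 unit
print(order_quantity(6, s=5, S=6))   # orders nothing (prints 0)
```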

Author Thanks

I wanted to thank Wayne Fu for his contribution. I was not aware of many of the details which are described above, and I think this should be of interest to anyone who practices in this field.

Author Profile

Wayne Fu is a Senior Product Manager at Servigistics. With an operations management background, Wayne has worked in the service parts planning domain for more than a decade. At Servigistics, he has led the research and development of various areas such as install-base (provisioning) forecasting, inventory optimization, and distribution planning. Currently, he is focusing on the effectiveness of forecasting techniques for the Last Time Buy.


The service measurement selected must fit the application. The early MEIO research papers were centered on military applications and thus used the backorder, often computed as the number of backorders multiplied by the average backorder duration, as the service level measure. Civilian applications, however, generally require the order fill rate as the service level measure.



“METRIC: A Multi-echelon Technique for Recoverable Item Control,” C.C. Sherbrooke, RAND Corporation, 1966

“Inventory Optimization of Class IX Supply Blocks for Deploying in U.S. Marine Corps Combat Service Support Elements,” Leanard D. Laforteza, Naval Postgraduate School Monterey, California, June 1997

Principles of Operation, MCA Solutions, 2007

How to Best Understand a Heuristic Algorithm for Service Parts

Executive Summary

  • What is a heuristic algorithm, how does a heuristic compare with optimization, and what is a meta-heuristic?

Introduction to Heuristic Algorithms

This post documents an email discussion between myself and Wayne Fu regarding the heuristic algorithm.

Question for Wayne Fu

“What is a heuristic based optimization algorithm, or a heuristic algorithm?

I thought that heuristics were one form of problem solving, and optimization was another. How is a heuristic-based algorithm or heuristic algorithm different from a non-heuristic-based algorithm? That would help me and readers out a lot.” – Shaun Snapp

The Answer

Optimization can be classified as deterministic or stochastic; in deterministic optimization, all inputs are constant. Inventory-related optimization is stochastic, since the demand is never a constant but a given distribution. The most classic deterministic optimization method is linear programming.

Another name for this stochastic approach is the meta-heuristic. Meta-heuristics are a vast topic and are used very broadly, because they are much more flexible and contingent, and can even yield a better result than deterministic methods when the inputs are deterministic.

Heuristics in Major Solvers

Consider ILOG’s CPLEX: it is a very robust linear programming solver, but eventually, when it tries to determine a solution, it uses heuristics. i2 Technologies used to use CPLEX in master planning to provide draft outcomes, and then MAP as the heuristic solver to fine-tune the solution.

A Metaphor for Comparing a Heuristic Versus Optimization

One extremely simplified way to see the difference between deterministic methods and heuristics is to think of searching for a house. Using a deterministic approach would be like zooming out a couple of thousand miles from the earth and then picking the location you think is best, given all the criteria you can check at that distance. A heuristic would be like standing in front of a train station, asking the people around you or checking the local newspaper to figure out where the better place to live is, then moving there, checking around again, and narrowing the scope further, or even jumping to the next location.


So, inventory optimization is a meta-heuristic. In METRIC, marginal analysis is used as the heuristic’s criterion.

It starts by searching for the part that provides the best value from adding one unit of inventory, then the next, and the next, in the belief that when we stop at some point, the result will be the optimal inventory position overall.
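This greedy marginal-analysis loop can be sketched as follows. It is a simplified, single-location illustration with made-up part data, not the actual METRIC algorithm (which handles multiple echelons and repairables):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    return math.exp(-lam) * lam**k / math.factorial(k)

def ebo(stock: int, lam: float, cutoff: int = 100) -> float:
    """Expected backorders at a given stock level under Poisson demand."""
    return sum((x - stock) * poisson_pmf(x, lam) for x in range(stock + 1, cutoff))

def marginal_allocation(parts: dict, budget: float):
    """Greedy marginal analysis: repeatedly buy one unit of whichever part
    gives the largest expected-backorder reduction per dollar, stopping
    when no further unit fits in the remaining budget.

    parts maps a part name to (mean lead-time demand, unit cost)."""
    stock = {name: 0 for name in parts}
    spent = 0.0
    while True:
        best, best_ratio = None, 0.0
        for name, (lam, cost) in parts.items():
            if spent + cost > budget:
                continue  # this unit no longer fits in the budget
            gain = ebo(stock[name], lam) - ebo(stock[name] + 1, lam)
            if gain / cost > best_ratio:
                best, best_ratio = name, gain / cost
        if best is None:
            return stock, spent
        stock[best] += 1
        spent += parts[best][1]

# Hypothetical data: a cheap fast-mover and an expensive slow-mover.
parts = {"seal": (2.0, 10.0), "pump": (0.3, 80.0)}
stock, spent = marginal_allocation(parts, budget=150.0)
print(stock, spent)
```

Note that the same loop supports both directions discussed elsewhere on this site: stopping at a budget cap, as here, or stopping once a service target is reached.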

Most people who work in this area are familiar with the term heuristics, but much less so with the term “metaheuristics.” Metaheuristics are important for problems that are computationally infeasible to solve with optimization.

In computer science, metaheuristic designates a computational method that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Metaheuristics make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics do not guarantee an optimal solution is ever found. Many metaheuristics implement some form of stochastic optimization. – Wikipedia


Optimization is a word with several meanings. In operations research, it means to meet an objective function, usually within some constraints. To the layman, “optimization” has often been used simply to mean “improve.” Many people consider it normal that optimization is always possible, or that finding an optimal solution is always possible. However, that is not the case. Some problems, of course, are not worth optimizing, and some are so complex that they don’t bear optimization easily. This leads to an interesting quote.

In this book we refer to evolutionary algorithms and metaheuristics as improvement methods. In standard business software finding the optimum of a nonlinear or hard to solve problem is often approached by using evolutionary algorithms/iterated search which – after a pre-set maximum calculating time – in a wide variety of cases encountered in business optimization return an acceptable solution in a vicinity of a local optimum (hopefully) close to the global optimum. – Real Optimization with SAP APO

This describes methods that, while they do not result in an optimal result, can get reasonably close to the global optimum.

One of the complicating factors in understanding the difference between heuristics and optimization is that they are often taught as separate methods. A generalization is that an optimizer has an objective function, while a heuristic does not.

However, in practice, and in many important foundational research papers, heuristics are combined with optimization. I think Wayne provided an excellent explanation of meta-heuristics; it enables a person who reads METRIC (an acronym for Sherbrooke’s foundational Multi-Echelon Technique for Recoverable Item Control) to understand it much better.

Author Thanks

I wanted to thank Wayne Fu for his contribution.

Interviewee Profile

Wayne Fu is a Senior Product Manager at Servigistics. With an operations management background, Wayne has worked in the service parts planning domain for more than a decade. At Servigistics, he has led the research and development of various areas such as install-base (provisioning) forecasting, inventory optimization, and distribution planning. Currently, he is focusing on the effectiveness of forecasting techniques for the Last Time Buy.



“Real Optimization with SAP APO,” Josef Kallrath, Thomas I. Maindl, Springer Press, 2006

Why SAP SPP Continues to Have Implementation Problems

Executive Summary

  • SAP created a partnership with MCA that was designed to get SAP into the service parts planning market.
  • We cover the outcome of this partnership.


The pathway has not been clearing for SPP, as the successes have been few and far between. However, there is a solution.

Bringing Up SAP SPP in the Market

SPP has been a long haul for SAP. This product was an attempt to bring service parts planning into the mainstream, and SAP rightly identified service parts planning as a key underinvested area in the enterprise.

SAP thought it could grow this business, so it combined parts of the code bases of SAP Demand Planning and SAP Supply Network Planning, and then added service-specific capabilities that had been available in other best of breed applications for some years. These include:

  • Inventory Rebalancing
  • Leading Indicator Forecasting
  • Repair Buy Functionality
  • Partial Service Level Planning (planning low on the service level hierarchy)
  • For more details on the service level hierarchy, see the link.

SAP even surprised me by coming up with what is, in my opinion, the best interface for planning in all of SAP SCM: the DRP Matrix. This helped address a historical weakness in the SCM modules (at least for one module). However, the initial problems began when SAP approached clients and explained the SPP solution to them. Instead of focusing on just SPP, clients were shown a demo that included a buffet of SCM functionality, which brought many different modules into the solution (such as GATP) and even the SAP Portal.

This was a mistake: even the biggest service organizations have far less money to spend on software, so getting them to purchase just SPP would have been a success. Furthermore, service organizations are far lower on the capability totem pole than the finished goods side of the business, so their ability even to implement the solution SAP presented to them was unlikely. I have spoken to SAP product management about this, and they have restated that this is their strategy and that they think it is gaining traction with clients.

The Partnership with MCA

The second part of their strategy was to partner with the best of breed service parts planning company MCA Solutions and create an “xApp” that combined the forecasting functionality of MCA SPO with the supply planning portion of SPP. I have written previously that I am very much opposed to these types of arrangements, for many reasons.

There are several thorny issues with these partnerships.

It’s unclear that vendors should be selecting vendors for clients. The large vendor may not choose the smaller vendor that is best for clients, but rather the one that is best for the larger vendor. These partnerships also allow SAP to claim functionality it did not originate, while claiming extraordinary IP rights vis-a-vis the smaller software company.

SAP’s partnership agreements require that the smaller vendor declare their IP, and any IP left undeclared can be taken by SAP. This is rather shocking, and I think it shameful that such an agreement would even be drafted.

Unequal partnerships like this are inherently inconsistent with the type of economy that many Americans say they believe in. The Federal Trade Commission has a role, which it no longer seems to take very seriously, in preventing overconcentrations of power in any industry, and that includes software.

However, as luck would have it, the xApp program is currently dying or dead (the program includes something like 140 different application vendors that SAP has “partnered” with), and by and large, these apps have not caught on. MCA and SAP’s contract for the xApp program was not renewed.

SPP Project Problems

Despite these missteps, SAP was able to get several companies to buy and implement SPP. However, two of the biggest implementation sites of SPP, Caterpillar Logistics and the US Navy, are, after several years and significant expense, not anywhere close. The Navy is not live with SPP and is unlikely ever to go live. This is something the folks over at the Navy don’t like to talk about much, as a whole lot of US taxpayer dollars went to Deloitte and IBM for very little output. The blame does not lie squarely with SAP, even though SPP does not work correctly. I plan to write a future article entitled “I Follow Deloitte,” which describes how every post-Deloitte SAP SCM project I work on is barely functional. Yet Deloitte continues to win accounts somehow, because too many corporate decision makers are not performing their research.

How About Ford?

Another major SPP implementation is Ford, but they have seen little value from it. The best prediction I have received from those who have worked on the project is that Ford will eventually walk away from SPP. However, they cannot do so publicly because they have invested at least nine years and vast amounts of money in the implementation. Therefore, SPP now has no large reference accounts. A hybrid of SPP has been implemented at Bombardier; however, this is the old SIO architecture, in which MCA Solutions performs most of the heavy lifting, so it cannot be considered a live SPP implementation either.

None of this surprises me: after working with SPP, I do not think it is possible to take the application live without custom development work or combining it with a functional service parts planning application. This approach turns SPP into a shell, which can make some executives happy, as it means they are using SAP, while the work is done by a different application.

Reference Accounts for SPP?

This is a problem because these accounts were to be used as the major references for selling into other accounts. The issues at Caterpillar are particularly awkward, as SPP was developed at Caterpillar. Caterpillar Logistics is plastered all over a large amount of SAP marketing literature and is the gold reference account for the solution. Yet there is not much to reference unless, as a potential client, you are willing to wait that long to bring a system live; and secondly, the degree to which Cat is live is a matter of dispute.

Cat will do what it can to maintain the impression that it has at least some functionality live, because walking away would mean a PR problem. It would be interesting to see whether SPP can be implemented without a large consulting firm, as neither IBM nor Deloitte has had success with SPP. SAP should consider backing a smaller firm or doing it themselves, as they need a success in the SPP space. At this point, the biggest referenceable account for SPP is Ford.

Where Do We Go From Here? The Blended Approach

SAP’s Product Management Approach with SPP

Some of the decisions made by SPP product management have been deplorable. I think the major consulting companies are out of their depth in implementing SPP, and it needs to be radically improved to make more of its functionality effective. A significant amount of the functionality in the release notes is simply broken or does not work correctly.

I have performed SPP consulting and would like to see the module, and service parts planning in general, become more popular and more widely implemented than it is. However, it’s essential to consider that only the current version (7.0) introduced functionality that brings SPP partially up to par with other best of breed solutions. Before 7.0, SPP was not competitive, and it can take several versions for SAP’s newest functionality to work correctly.

For this reason, including my personal experiences configuring SPP, it would be difficult for me to recommend relying upon SPP exclusively. I think the experiences at Caterpillar Logistics, Ford, and the US Navy lend credence to the idea that going 100% with SPP is a tad on the risky side.

To fill in the areas of SPP that are lacking, I would recommend a best of breed solution. Some things, like leading indicator forecasting, need to be improved. Furthermore, if you want to perform service parts planning with service level agreements (SLAs), there is no way around a best of breed solution. There are some very competitive solutions to choose from, and it all comes down to matching the way they operate vs. the company’s needs.

Simulation Capability Enhanced with Best of Breed

I will never be a fan of performing simulation entirely in SAP SCM. The parameters in SAP SCM are too time-consuming to change, and the system lacks transparency. However, several of the best of breed service parts planning solutions are very good at simulation. While it may be comforting to use a single tool, it's generally a bad idea to try to get software to do something it's not good at. For simulation, I would recommend going with a hosted solution and a best of breed service parts planning vendor.

As few companies want to invest in staffing a full-time simulation department (planners are often too busy and lack the training to perform simulation), it makes a lot of sense to host the application with the vendor. As experts in the application, they can make small tweaks to the system and provide long-term support to the planning organization. All of this can be built into the hosted contract at a reasonable rate.


It only makes sense to use the history of an application to adjust future implementations. In doing so, it is most advisable to pair SPP with a best of breed vendor that best meets the client requirements. The additional benefit of this approach is that you get access to consultants who have brought numerous service parts projects live. And those consultants primarily reside in the best of breed vendors.

We were recently contacted by a major consulting company to support them with a client that is evaluating SPP (we don't work for consulting companies). The consulting company was simply focused on getting the client to implement SPP. Knowing the company, it is not difficult to imagine the stories that were told, and what was covered up, to get the client to sign on the dotted line.

Companies interested in the full story on SPP’s functionality and how it compares to what else is available can contact us by selecting the button below.

Search Our Other Service Parts Planning Content

Intermittent Demand and Service Parts Databases

Our Solution for Managing Intermittent Demand

The number of service parts companies that actually use service parts software is small. We offer some of the most important features of managing service parts in an easy to use SaaS application that can be used to improve the management of any ERP system for service parts. It’s free until it receives “serious usage” and is free for students and academics to access. Select the image below to find out more.




How to Best Understand The Target Stocking Level and Minimum Stock Level

Executive Summary

  • The target stocking level is the target per product location combination and is a very important concept.
  • The target stocking level is different from the maximum stock level.
  • We cover how different supply planning methods can incorporate a target stock level value.

Introduction: Using a Target Stocking Level

The target stocking level is rarely discussed in companies but is a critical supply chain planning concept. You will learn the components of the TSL as well as how the TSL can be calculated by external systems and used in ERP systems.

What is the Target Stocking Level?

I have listed a short definition from MCA’s SPO Glossary, which I think is quite good.

“TSL is the quantity available to meet demand within the lead time and thus becomes the basis for computing the customer service levels. The TSL for each Location part is determined on the impact of what the TSL will have on the service level.” – MCA Glossary

TSL can be considered the target inventory to be held at a product location combination. Stock is, of course, continually fluctuating with issues and receipts. It is still an excellent practice to have a target inventory level for every product location in the supply network.

What Does Inventory Optimization Optimize?

Inventory optimization does not optimize the safety stock but optimizes what MCA Solutions has coined the “target stocking level” (TSL), and does so for the entire supply network.

Safety stock, on the other hand, is calculated independently at each location product combination.

How is the ISL Derived?

Safety stock is only a subcomponent of the TSL. The main functionality in MEIO goes toward the calculation of the initial stocking level (ISL). The safety stock is then derived from the ISL, and by combining the ISL with the safety stock, the TSL is obtained.

The relationship is as follows:

TSL Components

TSL = ISL + Safety Stock

Therefore, the best way of thinking of TSL is as the total stock target at a location product combination. Safety stock, on the other hand, is simply the specialized subcomponent of the TSL quantity that accounts for the variability in supply and demand.

Safety stock is represented conceptually by the following formula:

Safety Stock = (ISL x Supply Variability) + (ISL x Demand Variability)
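The two formulas above can be expressed as a short sketch. This is our own illustration, not MCA's implementation; the variability factors here are invented placeholders expressed as fractions of the ISL.

```python
# Illustrative sketch of the TSL composition described above:
#   Safety Stock = (ISL x Supply Variability) + (ISL x Demand Variability)
#   TSL = ISL + Safety Stock
# The variability factors are hypothetical inputs, not MCA's actual math.

def safety_stock(isl: float, supply_var: float, demand_var: float) -> float:
    """Safety stock as the ISL scaled by supply and demand variability."""
    return isl * supply_var + isl * demand_var

def target_stocking_level(isl: float, supply_var: float, demand_var: float) -> float:
    """TSL = ISL + Safety Stock."""
    return isl + safety_stock(isl, supply_var, demand_var)

# Example: ISL of 100 units, 25% supply variability, 50% demand variability
# (high demand variability is typical of spare parts).
tsl = target_stocking_level(100, 0.25, 0.5)
print(tsl)  # 175.0
```

The point of the sketch is that safety stock is a subcomponent derived from the ISL, not an independently optimized quantity.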

What It Means When One Says That “Safety Stock is Optimized”

Safety stock calculated with a MEIO application will be lower than the safety stock calculated by any other supply planning method. However, this is not due to MEIO’s inventory optimization functionality, but rather to its multi-echelon functionality. Multi-echelon functionality can both see and interpret the relationships between locations that non-multi-echelon systems cannot.

Therefore, while it is true to say that safety stock is optimized by MEIO applications, the way it accomplishes this is a bit circuitous.

Socializing This Concept on MEIO Projects

Explaining this fact and validating the understanding of it is integral to the success of MEIO projects, because implementing MEIO is, as with other supply planning methods, about more than setting up the system and ensuring it works properly. It is also about educating the users of the system so that they can make sense of the results. I have witnessed several projects where MEIO has either not been adequately explained or the knowledge provided was not accepted, and this invariably leads to ineffective use of the MEIO planning output. When this understanding has not been socialized within the company, planners and higher-ups will view the recommendations created by the MEIO application as suspect. This typically leads to system output being overwritten manually, an action that can be initiated directly by planners or by supply chain directors or vice presidents.

TSL in Common Usage

A search through the web shows that this term is not very common. However, it is not hard to find it listed in books through Google Book Search.

We found a formula for it in the book Best Practices in Inventory Management by Tony Wild, which we have listed below.

TSL = [Usage Rate * (Lead Time + Review Period)] + Safety Stock

The safety stock component of this formula is calculated as follows:

Safety Stock = Customer Service Factor * MAD * SQRT(Lead Time + Review Period)
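The two Tony Wild formulas above can be combined into a short sketch. This is our own rendering of the book's formulas; the input values are invented for illustration.

```python
import math

# Sketch of the formulas from Best Practices in Inventory Management:
#   Safety Stock = Customer Service Factor * MAD * sqrt(Lead Time + Review Period)
#   TSL = [Usage Rate * (Lead Time + Review Period)] + Safety Stock
# Time units must be consistent (e.g., weeks for both lead time and usage rate).

def wild_safety_stock(service_factor: float, mad: float,
                      lead_time: float, review_period: float) -> float:
    """Safety stock from the customer service factor and the MAD of demand."""
    return service_factor * mad * math.sqrt(lead_time + review_period)

def wild_tsl(usage_rate: float, lead_time: float, review_period: float,
             safety_stock: float) -> float:
    """Target stocking level: demand over the exposure period plus safety stock."""
    return usage_rate * (lead_time + review_period) + safety_stock

# Example: 3-week lead time, 1-week review period, 20 units/week usage,
# MAD of 8 units, and a service factor of 1.25 (all hypothetical values).
ss = wild_safety_stock(service_factor=1.25, mad=8.0, lead_time=3.0, review_period=1.0)
tsl = wild_tsl(usage_rate=20.0, lead_time=3.0, review_period=1.0, safety_stock=ss)
print(ss, tsl)  # 20.0 100.0
```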

TSL and Target Inventory in SAP

We had never run into the concept of TSLs in SAP until we searched for it in SAP Help. We found it in the following areas:

  • SAP ERP – The concept exists as a “range” within purchasing
  • SAP SCM Forecasting and Replenishment

TSL in SAP SNC and Minimum Stock Level and Maximum Stock Level

The one area where the TSL concept is actively used in a live module is SAP Supply Network Collaboration (SNC). Interestingly, it is not called a TSL but is instead expressed as a minimum stock level and a maximum stock level. We quote from the book Supplier Collaboration with SAP SNC.

“The projected stock and actual stock on hand are compared with the minimum and maximum stock levels agreed upon by customer and supplier for a location product. If the threshold values are not reached, or are exceeded, alerts are generated.” – Mohamed Hamedy and Antia Leitz

This concept of a minimum stock level and a maximum stock level is quite useful. It sets thresholds around the target that allow one to determine when stocking levels are out of bounds. However, planning systems usually don't have this concept of a minimum and maximum stock level. At first blush, the reorder point may seem like the minimum stock level; however, the reorder point triggers before the minimum stock level is reached. The maximum stock level has no corollary whatsoever.

The Concept of Planning Alerts

In planning systems, alerts are often used for supply planning. A typical alert would be to generate an alert when the stock reaches a particular level. However, the trick is in calculating what the minimum stock level and maximum stock level should be.

There is no perfect answer as to how to calculate these levels. However, they should be calculated externally to the system (as few systems will calculate them for you), and of course, there should be a mathematical formula that calculates the values for the entire product location database. Calculating a minimum and maximum stock level is essentially the same kind of exercise as calculating obsolete inventory thresholds.
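The min/max alerting described above can be sketched as a simple comparison pass over the product location database. This is a hypothetical illustration of the concept, not SNC's implementation; all field names and threshold values are invented.

```python
# Hypothetical sketch of min/max stock alerting: compare projected stock
# against the agreed minimum and maximum levels per location product and
# emit an alert whenever a boundary is crossed.

def stock_alerts(positions):
    """positions: iterable of dicts with keys product, location,
    projected_stock, min_level, max_level. Returns alert tuples."""
    alerts = []
    for p in positions:
        if p["projected_stock"] < p["min_level"]:
            alerts.append((p["product"], p["location"], "BELOW_MIN"))
        elif p["projected_stock"] > p["max_level"]:
            alerts.append((p["product"], p["location"], "ABOVE_MAX"))
    return alerts

positions = [
    {"product": "P1", "location": "DC1", "projected_stock": 40,  "min_level": 50, "max_level": 200},
    {"product": "P2", "location": "DC1", "projected_stock": 120, "min_level": 50, "max_level": 200},
    {"product": "P3", "location": "DC2", "projected_stock": 260, "min_level": 50, "max_level": 200},
]
print(stock_alerts(positions))
# [('P1', 'DC1', 'BELOW_MIN'), ('P3', 'DC2', 'ABOVE_MAX')]
```

The hard part, as noted, is not this comparison but calculating sensible min and max values for every product location combination in the first place.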

Integrating a Pre-existing Stock Level with Different Supply Planning Methods

Often when a company implements a new supply planning method, it is faced with a pre-existing stock-level-setting process, one the company has usually invested in significantly and is comfortable using.

I will take two supply planning methods as examples, CTM (allocation) and inventory optimization multi-echelon planning (MEIO).

  • MEIO can calculate target stocking levels very intelligently by co-planning inventory using the mathematics of effective lead time.
  • The target stocking level is then published to the ERP system or even the advanced planning system. Either system can then be set to respect these TSLs.

At several clients that I worked with, the concept of multi-echelon planning was not well socialized (a common problem on optimization projects). The VP and director of the supply chain required that the planners maintain the same days of supply that they had before the MEIO implementation.

Interestingly, when I asked the team that managed the MEIO application, if they thought the solution had been socialized, they said “yes.” However, when I asked the business the same question, they said it had not.

Therefore, in this example, the company had two incompatible methods of maintaining a target stock level:

  • The old way, through days of supply
  • The new way, through MEIO

What about another method? For instance, what if a company wants to integrate its current target stock level with SAP CTM? Below we can see where the TSL is set in the product location master.
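The conflict between a legacy days-of-supply rule and a MEIO-published TSL can be made concrete with a small numerical sketch. This is our own construction; all values are invented for illustration.

```python
# Illustration of why two target-setting methods conflict: a legacy
# days-of-supply rule and a MEIO-published TSL generally imply different
# targets for the same location product, so one of them ends up ignored
# or manually overwritten.

def dos_target(avg_daily_demand: float, days_of_supply: float) -> float:
    """Legacy rule: target stock = average daily demand * days of supply."""
    return avg_daily_demand * days_of_supply

avg_daily_demand = 12.0
legacy_target = dos_target(avg_daily_demand, days_of_supply=30)  # 360 units
meio_tsl = 210.0  # hypothetical value published by a MEIO application

# Maintaining both targets means one of them is not actually being followed.
print(legacy_target > meio_tsl)  # True: the legacy rule carries 150 extra units
```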


Target stocking level is not yet a common term in the industry, but several books cover and work with the concept.

The concept is a powerful one in that it distills all of the complex inputs of stock determination into a single number or number range. This number can then be compared to actual stock values to develop stock transfers or unserviceable-item repairs (for service parts), or it can be communicated to suppliers or customers using a collaboration tool like SAP SNC.

  • Meshing a pre-existing target stock level with a new supply planning method can be a challenge. A TSL connects differently depending upon the supply planning method that is used.
  • As discussed, keeping a pre-existing days of supply value in addition to a TSL calculated by a MEIO application makes little sense. However, a further question is whether the currently used target stock system makes sense at all. This is a question that must be analyzed on a case-by-case basis.

What We Do and Research Access



Research Access

  • Do You Need to Access Research in this Area?

    Put our independent analysis to work for you to improve your spend.


Inventory Optimization and Multi echelon Book

What is MEIO?

This book explains the emerging technology of inventory optimization and multi-echelon (MEIO) supply planning. The book takes a complex subject and effectively communicates what MEIO is about in plain English terms. This is the only book currently available that describes MEIO for practitioners, rather than for mathematicians or academics.

The Interaction with Service Levels

This book explains how inventory optimization allows the entire supply plan to be controlled with service levels, and how multi-echelon technology answers the question of where to locate inventory in the supply network.
This is the only book on inventory optimization and multi-echelon planning that compares how different best-of-breed vendors apply MEIO technology in their products. It also explains why this technology is so important for supply planning and why companies should be actively investigating this method.
The book moves smoothly between concepts, screenshots, and descriptions of how the screens are configured and used. This exposes the reader to some of the most intriguing areas of functionality within a variety of applications.
  • Chapter 1: Introduction
  • Chapter 2: Where Inventory Optimization and Multi-Echelon Planning Fit within the Supply Chain Planning Footprint
  • Chapter 3: Inventory Optimization Explained
  • Chapter 4: Multi-Echelon Planning Explained
  • Chapter 5: How Inventory Optimization and Multi-Echelon Planning Work Together to Optimize the Supply Plan
  • Chapter 6: MEIO Versus Cost Optimization
  • Chapter 7: MEIO and Simulation
  • Chapter 8: MEIO and Service Level Agreements
  • Chapter 9: How MEIO is Different from APS and MRP/DRP
  • Chapter 10: Conclusion
  • References
  • Vendor Acknowledgements and Profiles
  • Author Profile
  • Abbreviations
  • Links in the Book
  • Appendix A: MEIO Visibility and Analytics
  • Appendix B: The History of Development of MEIO Versus MRP/DRP

How to Understand Why Auto Parts Distribution is So Inefficient

Executive Summary

  • How the aftermarket car parts market in the US works.
  • The major problems with automotive dealer networks for aftermarket car parts.


In our previous post, we discussed the problems with how automotive service parts websites are dominated by dealers. We also explained how this is inefficient and why these websites should be centralized and either managed by the manufacturer or simply outsourced to a company that has this as a focus.

However, after further research, it turns out automotive service networks have even bigger problems than this. This quote is from the HBR article called Winning in the Aftermarket:

Some years ago, when we studied the after sales network of one of America’s biggest automobile manufacturers, we found little coordination between the company’s spare parts warehouses and its dealers. Roughly 50% of consumers with problems faced unnecessary delays in getting vehicles repaired because dealers didn’t have the right parts to fix them. Although original equipment manufacturers carry, on average 10% of annual sales as spares, most don’t get the best out of those assets. People and facilities are often idle, inventory turns of just one to two times annually are common and a whopping 23% of parts become obsolete every year. – HBR

Improper Parts Planning

When consultants for the aftermarket car parts planning software company MCA Solutions go into an account and use its SPO software to perform inventory re-balancing, they often find that parts are kept too low in the supply network (i.e., at the dealers). This is usually because fill rates are only being managed locally, and local managers are attempting to move parts to where they will eventually be consumed. The problem with this is that transferring parts from one forward location to another forward location is less efficient than moving parts from the parts depot to the forward location. Secondly, there is no reason to move a part to a forward location unless there is a high probability of consumption, or unless transportation lead times are particularly long.

This analysis of where parts in the field should be located goes by several names, including multi-echelon inventory optimization, re-distribution, and inventory re-balancing.
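The forward-deployment logic described above can be sketched as a simple decision rule. This is a hedged illustration of the reasoning, not MCA's SPO logic; the thresholds are invented.

```python
# Hypothetical sketch of the rule described above: only position a part at a
# forward location (e.g., a dealer) when local consumption is likely, or when
# the transport lead time from the parts depot is long. Thresholds invented.

def deploy_forward(consumption_prob: float, depot_lead_time_days: float,
                   prob_threshold: float = 0.7,
                   lead_time_threshold: float = 5.0) -> bool:
    """Return True when a part should be stocked at the forward location."""
    return (consumption_prob >= prob_threshold
            or depot_lead_time_days >= lead_time_threshold)

print(deploy_forward(0.9, 2))   # True: high probability of local consumption
print(deploy_forward(0.2, 10))  # True: depot replenishment is too slow
print(deploy_forward(0.2, 2))   # False: keep the part at the depot
```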

The independent dealer model continues to work against rational inventory pooling. AMR Research (now part of Gartner) makes a good point on this in their paper Service Parts Planning and Optimization.

During the course of this research, we found SPP applications tended to be very tactical nature, solving specific inventory, fill rate, or service-level goals. Oftentimes service is still being viewed as a cost center, and SPP applications are not necessarily viewed as the keys to a greater world of service nirvana. One explanation is that the buyers of SPP software tend to be planning managers or director-level planners who have no jurisdiction over service and repair or other areas of the SLM model. Other reasons include outsourcing, where OEMs have outsourced the service process but retain the planning aspects, or the fact that the company was never in charge of service in the first place—think of an auto OEM and the dealers that actually provide the service. – AMR Research

Better Aftermarket Car Parts Planning Begins with Cooperative Planning

Rather than having every dealer attempt to manage its own inventory, a much more rational and effective setup is for the dealers to pool their parts at a local depot and for the depot to handle the parts for them. Daily local “milk runs” would ensure part flow to the dealers and would improve the poor inventory turns of parts held at the dealer location.

A series of these depots could then be large enough to be electronically connected and to have their inventory represented in a web order fulfillment system that can better match supply and demand than a series of disconnected dealers all trying to manage smaller amounts of inventory locally. Honda (for instance) could manage this themselves, or could outsource the management to a company that knows how to produce transactional websites and how to match supply and demand. This would be vastly superior to the current arrangement, where small dealers attempt to manage their own aftermarket car parts websites (and where it took us two hours of searching various dealer sites to find that we would have to call in to order a part).

What is happening in the dealerships is disinterest in making changes or in becoming flexible enough to adopt new technologies. Companies can make a lot of money in the short term by simply living off of monopoly power. GM was the poster child for inept management, inward thinking, abusive supplier relations, and unresponsiveness to customers. A good catchphrase for management consultants could be “Don’t Be Like GM.” While Honda’s quality is better than GM’s ever was, Honda’s dealer network is not much different or better in its aftermarket car parts management. Most manufacturers seem to employ the same inefficient system, one that gets little coverage from media outlets. This demonstrates the restrictive influence of the dealership system: no matter how good the car company, the dealer system remains anachronistic.

It often seems that the large American car companies have little interest in their service operations. Instead, they prefer to spend their money on advertising. They have lost the battle for the aftermarket, and this is reflected in their new-car sales, although they seem unable to make the connection.

To quote again from the HBR article Winning in the Aftermarket:

In the automobile industry, for example, there’s a distinct correlation between the quality of after sales service and customer intent to repurchase. Brands like Lexus and Saturn inspire repeat purchases by providing superior service, and, consequently, they have overtaken well established rivals like Ford and Chrysler. – HBR

Why Car Dealerships are Mostly Useless for Car Service

The degree to which dealers are “taking it easy” is evident in the latest Consumer Reports survey where despite the overwhelming advantages of being part of a dealer network, dealers, on average, provide a customer experience that is 7% lower than that of independent maintenance shops.

However, it gets a lot worse when actual repairs are needed. For those that required repairs, only 57% of customers were satisfied with dealers vs. 75% who were satisfied with independents.

The Consumer Reports survey is a severe condemnation of automotive dealers.

Why Do Dealers Perform So Badly?

So the natural question is why dealers are performing so poorly in car service.

The traditional view is that dealers provide better, though more expensive, car service, and that they offer better service for the following reasons:

  • They are trained by the manufacturer.
  • They have information available from the manufacturer.
  • They are more expensive.
  • They know the cars better because they work on the same make, over and over again.

The outcome (service performance) does not match the bullet points above, and in fact, some of those points are dated. For instance, Honda stopped sending its mechanics to its internal training program several years ago, a program which by most accounts was excellent, and has instead outsourced its mechanic training to a trade school in Arizona, which is nowhere near as good and which does not specialize in Hondas. Knowing little beyond basic repair, mechanics are now increasingly reliant upon Honda’s remote service technicians, available by phone out of Honda’s Southern California main office. Honda at one time had a sterling reputation in service maintenance and no longer does. If you bring your Honda to a dealer now, you can expect a technician trained by a generic trade school that was a low-cost bidder to Honda.

The Monopoly Explanation

Most likely, automotive dealers are not better because they do not need to be to survive. This is the best explanation when everything else is tilted in the dealer’s favor, and they still cannot perform in a manner competitive with companies with far fewer advantages. And this does not even account for the prices that dealers charge, which are widely known to be exorbitant and far higher than those of independent shops.

Car Service Parts Website Incompetence

I first found out firsthand how bad dealers’ performance is when I tried to find service parts for online purchase on their websites. See this link for the full article. As I recount there, I had a devil of a time finding a simple part: the right interior door handle cover for a ’97 Honda Accord. After visiting many sites over several hours, I can state with confidence that dealers have no idea how to put together a service parts transaction website. Why they even try to create individual transaction sites is beyond me, and the overall industry is in drastic need of “Amazonification.”

Furthermore, I question the logic of having manufacturers outsource their parts management to dealers when manufacturers are much more capable of doing it themselves or of outsourcing it to companies that know how to manage extensive service parts inventories and create a service parts shopping site. Is this a strategy designed around enhancing customer service, or a compromise thrown to the dealers to increase dealer profits? What makes this story even worse is that the automobile manufacturers, or OEMs (Toyota, GM, etc.), do not even make the service parts that they sell. Their value-add is even less when one accounts for this fact, as can be read at this post.

A Better Model

Cars should be built-to-order items, ordered online from small showrooms that stock only test models. Combine this with the fact that dealers can neither maintain websites nor provide car service superior to independent shops, and the dealer’s value-add to the car buying and maintenance cycle is not apparent.

Dealerships typically have magnificent buildings and interiors. However, aside from architectural flair, dealers are not a value-added part of the purchasing or service chain. Wise automobile manufacturers of the future will offer their cars directly from their websites (or from small retail outlets with test models), saving vast amounts of money through reduced inventory (not having cars sitting around on lots).

This would allow independent shops to flourish, both by providing a top-notch service parts website (for both dealers and customers) and by offering extensive service documentation through a service parts portal that publishes and builds on maintenance information by allowing mechanics in the field to contribute to its content.

A Dealer-less Model

Any car company that operated under the dealer-less model would be extremely cost-competitive with the current manufacturers running the cost-heavy and inefficient dealer model, a model that must base many of its decisions not on what is right for the customer but on what makes the dealers happy. Such a company could not be effectively competed against on cost, service parts management, or overall service level.

Why Auto Parts Websites Are a Problem

It is always amazing to come upon a technology that is so underutilized. This is the case for online service parts databases.

The Story

I needed a door handle assembly part for a 1997 Honda Accord. First, I started with eBay, which had a pretty small inventory. I could only find the door handle assembly for a four-door, not for a two-door; this was a dealer-only item. The trouble began when we started looking through dealer websites for the item. The experience got us thinking that the dealer value-add is seriously in question.

Dealers are not necessary to buy cars (they could be purchased online and tested at a manufacturer-sponsored center in a mall that stocks just a few models). The car could then be either transshipped from a different location or built to order. Instead, we have this medieval auto dealer system that holds massive amounts of inventory so that buyers will make impulse purchases “that day.”

Service Databases

When looking through dealer websites, navigating them was frustrating. Most of the sites are caught in a time warp and exhibit the worst of web navigation and design. Some of them ask for contact information so they can treat the desire to purchase parts as a “lead.”

San Francisco Honda, like 99% of dealerships, seems to seriously misunderstand what the web can do and how it can help automate transactions. Now we will be calling the dealer, just as we would have back in 1940.

Why Has Online Parts Supply Demand Matching Been Decentralized to Dealers?

Why does Honda allow dealers, who lack the interest or the size to develop competent transactional websites, to sell auto parts online? Why are Honda and other major manufacturers not managing this with a single site and a national network? It appears as if the dealer network (a way for manufacturers to sell franchises and not have to worry about retail) is interfering with the new realities and efficiencies of the web.

Automobiles may have to be serviced locally, but there is no reason, with our fast shipping network, for parts to be managed at dealer locations. And when a customer wants to order a part, there is absolutely no reason they should have to call a dealer to do so. It does not have to be this way. The fulfillment could be performed by dealers, but Honda could manage the front-end, much like Amazon does.

Learning from Amazon

The lesson from Amazon is that web-based supply-demand matching no longer needs to be performed by the same organization that conducts fulfillment. See this article on Amazon and how they serve as a supply-demand matcher.

IT and Monitoring Competence and Fourth Party Logistics Providers

The concept of multi-partner coordination enabled by monitoring tools is a concept in logistics called fourth party logistics and is covered in this post. It’s a sad fact that there is simply not a lot of thinking going on in the management of service parts.

Structure of the Auto Industry

What we learned from the book Who Made Your Car, by Thomas H Klier and James Rubenstein, is the following interesting tidbits of information:

  • 70% of the parts in automobiles are made by suppliers.
  • Manufacturers are now primarily assemblers of sub-assemblies produced by vendors.
  • Much of the intellectual property and complex component manufacturing is owned and provided by the supplier/component manufacturers.


Suppliers Actually “Make” the Car

Vendors produce most of the car and provide many different manufacturers with similar items. This is explained in the graphic below, which provides great insight into the many places that a car’s major components come from. The sourcing pattern seems similar to, although far more complex than, that of laptop manufacturers (although laptop manufacturing is even more outsourced, with contract manufacturers producing HP and some other major brands out of the same factory and sometimes on the same production line).


From Automotive Weekly

We took the example of one vendor called Dura. A visit to their website demonstrates that they make numerous automotive components, which they sell to many different manufacturers.


Dura’s Part Distribution Model

Dura does not sell parts directly to retail customers, but they do sell to dealers and independent shops. (However, dealers have a stranglehold on the industry, and many parts are carried only by dealers.) This is one of several areas where businesses oppose “free markets” and instead prefer tying agreements and monopolistic competition. I keep hearing about how many people and companies are for free markets, but when it comes to real-life examples, monopolistic arrangements seem to be the preference.


Why Doesn’t eBay Own the Auto Aftermarket?

eBay is the largest service parts database in the world. However, eBay is not prominent in automotive service parts. The fact that automotive service parts are expensive, yet only a modest service component market has developed on eBay, indicates that there are significant restrictions on who can get access to parts, and likely substantial restrictions on parts suppliers, in the context of their agreements with manufacturers, as to whom they may sell parts in the aftermarket. No such limitation exists for computer components, where anything can be found and purchased on eBay.


Even the hardest-to-find service parts for computers are available at low cost on eBay.

What This Means For Service Parts Network Design

What this means is that the dealer system for distribution is even less efficient than we initially thought. People go to dealers to get parts they believe are made by the manufacturers (Honda, Toyota, etc.) but that are actually made by suppliers. All of these middlemen could, and should, be eliminated from the system. The suppliers are the creators of these components; they should not be controlled by the manufacturers, much less have to go through dealers so that dealers or independent repair shops can add an extra markup with no value add. If a company does not produce a product, it should not be able to claim ownership of it or be its sole source, especially if it does a poor job and charges a high price for doing that job badly.

Why OEMs Should Stop Controlling Service

It has become apparent, after reviewing several of our previous articles, that the less control OEMs (original equipment manufacturers: companies like Ford, Apple, Cisco, etc.) and service organizations have, the better it is for consumers.

Automotive Service Restrictions to Competition in OEM Parts

In our article, Why Automotive Parts Networks Are a Mess, I cover how automotive dealers are holding back the development of service parts businesses through their monopoly over many "dealer only" parts. These parts are not even made by the manufacturers, but instead by the manufacturer's supplier base (on average, 70% of a car is not made by the name on the vehicle).

The only reason this situation exists is that OEMs compel parts manufacturers to sign exclusive contracts that restrict the sale of parts to the OEM or its dealer network.

For this reason, OEM parts are generally exorbitantly priced, but the OEM does not, in most cases, even manufacture the OEM parts. OEM parts are a major way for OEMs to overcharge their customers.

This is bad for consumers in a couple of different ways.

  • Dealers lack the competence or interest to create service part websites, and thus most dealer parts cannot be purchased online in any way.
  • Consumers have to pay a significant premium for their parts because of the control exerted by dealers and their antiquated supply chain and inventory systems. That is, they are forced to purchase OEM parts when they could have more options.

OEM Service

There are some types of products so rare that they can only be fixed by OEM service providers. However, in the automotive field, where independent shops compete with OEM dealers, Consumer Reports finds much higher customer satisfaction with the independent service providers.

Yet again, OEM service is another example of how OEMs gouge their customers. An OEM service or dealer network is set up to create a monopoly over service. This works less well in automotive, but OEM service monopolies are more effective with other types of products. Interestingly, the lack of competitiveness of OEM service is little discussed.

Unprincipled OEM Tying Agreements

There is something ethically wrong with these types of agreements. If a company does not make an item, it is hard to see how it has the right to determine how that item is sold and distributed. Not only is the item not made by the OEM, but the technical knowledge and intellectual property are not the OEM's either; they reside with the parts supplier. There are laws in the US against what are referred to as "tying agreements."

It is typically applied to an OEM pressuring a retailer to sell one of the OEM’s new or less popular items in exchange for gaining access to the right to sell another more established item. I don’t see why the tying arrangement law could not be applied by the Federal Trade Commission to break up exclusive OEM distribution arrangements with their parts suppliers.

Video Repair Guides and Information Exchange

In our article, Using Online Videos for Service and Repair, I discuss how, in the case of repair guides and repair information, OEMs have historically restricted information to users, and how it is the user community that is doing the OEM’s work for them by making repair videos available on YouTube.

OEMs have done remarkably little innovating and have put little effort into creating quality instructional material for the servicing of their items. Their manuals are belabored, sleep-inducing to read, and unnecessarily expensive to produce relative to the benefit consumers obtain. OEMs may find this topic incidental or a non-issue, but it wastes a lot of consumer time. If a regulatory body placed a label listing the average number of hours required to assemble or repair an item right on the product packaging, OEMs would start taking repair information a lot more seriously. Again, many would call this an unnecessary restriction of the market. However, they would be basing this on a flawed understanding of what makes an efficient market. A market cannot develop without information. Here is an example.

Market Information Example

Let us say a consumer is looking at two items in a store. They have identical features, and both are from reputable manufacturers, but one is $15 less. Would it make economic sense for the consumer to buy the lower-cost item? Not necessarily. What if the lower-priced item, because of a bad manual or bad design, takes an hour longer to assemble and two more hours to maintain over the life of the item? Furthermore, let us say the consumer values his time at $20 per hour.

In this example, the buyer would in actuality be paying $45 more by buying the less expensive item: (3 hours x $20/hour) – $15 = $45. However, if the consumer is not made aware of this information, they cannot make a rational choice. Thus the current information model – which provides no information about long-term service costs – encourages manufacturers to compete on price, to not invest in designing effective instructional material, and to make less serviceable items. This results in a less efficient market.
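The arithmetic above can be sketched in a few lines of Python. The function name and figures are just the illustrative values from the example, not anyone's published model.

```python
# Hypothetical helper: the cheaper item saves money up front but costs
# extra hours of the buyer's time, valued at an hourly rate.
def true_extra_cost(price_savings, extra_hours, hourly_value):
    """Net extra cost of the lower-priced item once time is priced in."""
    return extra_hours * hourly_value - price_savings

# $15 sticker savings vs. 3 extra hours valued at $20/hour.
print(true_extra_cost(price_savings=15, extra_hours=3, hourly_value=20))  # 45
```

A positive result means the "cheaper" item is actually the more expensive choice once the buyer's time is counted.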

Dumping Manuals

The ineffectiveness of manuals is well researched. The vast majority of users never read them. They also lack effectiveness because, unlike a video, they cannot show the manipulation of items in a 3-dimensional space. Several YouTube videos for each product could probably replace most of the instruction manuals for products that are sold.

More complex products would require more videos, which is fine. They are cheap to produce and take less skill to create than written manuals. To write a good manual, one has to be a good writer in addition to possessing the technical knowledge. However, to make an excellent instructional video, one only needs very basic filming skills and simply to perform the activity on camera. Videos can show the entire assembly and disassembly of an item, providing maximum reproducibility.

Unapproved Uses

Service organizations and OEMs are losing control over the information about their products. While they controlled this information in the past, it was never theirs to begin with. In a free society, anyone can publish whatever they like about whatever product or service they use. History shows that users will come up with many shortcuts and other uses that OEMs never thought of. In a way, this is similar to the benefits of open source software.

While threats like “voiding warranties” have been used to limit the user’s customization of products, a person has a right to do whatever they like to the products that they buy. Users are posting videos for doing unapproved things (such as replacing iMac hard drives) to their items. What has changed is that users now have the distribution mechanism – YouTube specifically, but the web, more generally, to provide their content. Much of this content is of outstanding quality, and this demonstrates that content like this is not that difficult to produce, and of great benefit to users.


The current dealer-centric automotive service distribution system is an anachronism and is probably one of the reasons that dealerships have such high costs. Instead of attempting to reduce these costs, dealers are merely passing on their inefficiency to the consumer. However, dealers should be wary. While they have used political finagling to prevent web-based car purchases, this will eventually come to pass. The only thing dealers are truly necessary for is providing local service. They should do what they can to make their service operations, including aftermarket car parts planning and management, as efficient as possible. A big part of the answer is to begin planning cooperatively or centrally and to pool inventory.

Parts Hub

The parts hub concept has also been proposed by John Snow, at Enigma, which is a software company focused on parts procurement decision support. The post on this topic can be found here.


Service Parts Planning and Optimization, ARM Research 2007

Is SAP PLM for Real?

Executive Summary

  • How SAP has been busy pushing a solution that does not exist as a distinct product.
  • An analysis of SAP PLM.

Pushing SAP PLM

For some time, SAP has been promoting its product lifecycle management (PLM) solution. PLM is a problematic term that more than a few companies have had a problem defining.

Analyzing SAP PLM

When I performed an analysis of SAP PLM for a client, I learned that SAP PLM was not an actual product but was, in fact, a "solution." What this means is that various pre-existing modules have been clustered around the material master to meet PLM requirements. This is much like SAP's non-existent digital asset management solution, where digital media are entered as materials into SAP. Digital asset management and PLM have a lot in common because both require a lot of functionality regarding multimedia files.

For PLM, these files take the form of images and schematics, while in digital asset management, the files take the form of images, music, and video. However, the material master functionality in SAP is not designed to manage these files or make them easy to find or reference. There is no big surprise why. The material management functionality was intended to hold textual data on products for accounting and supply chain management. Repurposing this functionality to meet the needs of asset and document management is no easy task. When one goes through the BOM functionality and compares it to a real BOM management solution, the difference is night and day.

Problems with Managing Changing Materials in SAP ERP

If a company makes changes to its material masters and has SAP ERP, it has maintenance problems. SAP ERP has limited methods for making adjustments to materials, with the consequence that new materials are created as copies of old ones, and the old materials have no real way of being connected to the new ones.

The overall maintenance problems with SAP mean that materials that are no longer used, or little used, clog up the system. This severe limitation in material master management was one of the motivations for bringing out SAP PLM. However, instead of bringing out a new "product," SAP should have addressed the underlying material management functionality of the existing software its customers had already purchased. It, of course, did not do this. And while BOM management is one part of PLM, the solution SAP presents is much more encompassing than just BOM.

Lifecycle Planning in APO

The confusing part about PLM, which SAP does not adequately explain, is that SAP lifecycle planning exists in the supply chain planning suite offered by SAP. For instance, in Demand Planner, which is the forecasting module of SAP APO, lifecycle planning is incorporated. DP allows you to introduce an existing product at a different location – using profiles to base historical data from current locations. Phase in profiles enables the reduction of the forecast for the period of introduction. (more details)

However, this capability in SAP DP, and the product interchangeability functionality available in other modules of the APO suite (notably SNP, CTM, PPDS, and GATP), is quite different from the integrated SAP PLM solution that SAP presents to clients. Again, this gets back to the problem with the SAP PLM solution: confusing messaging from SAP, and functionality that is PLM-related but does not exist within the official SAP PLM solution. PLM functionality can exist in different areas of supply chain applications. However, this does not mean that SAP offers excellent bill of material management functionality, which would include:

  • Multimedia file management
  • Document management
  • Engineering change management
  • Collaboration management (between marketing, engineering, and production)

SAP Has Had Its Shot in PLM

SAP PLM has not taken off, and it does not appear to be an area where SAP has put, or intends to put, real development effort. However, they still make their white papers on the topic available. The main effect of SAP's entry into the PLM market has been to discourage companies from implementing real PLM solutions and to damage PLM's image more generally, due to the problems SAP has in bringing SAP PLM live on accounts. Part of this is due to the limitations of the solution, and another part is related to SAP's positioning and messaging around the solution.

Here you can see one of the main graphics for SAP PLM (listed under Life-Cycle Data Management at the top). However, a major flaw in this diagram is apparent. PLM is based upon document management, but SAP does not have any serious document management capability. The best evidence of this is the state of SAP Solution Manager, which is causing project heartburn on SAP projects globally.

PLM and Service Parts

PLM is, of course, imperative for service parts. Many of the service parts planning applications have built-in control fields in the form of things like shelf life, and of course, supersession is a manifestation of product lifecycle needs (out with the old – in with the new). While doing some research on PLM for service parts, I came upon a company called Arena Solutions, and I have tested their software extensively. I think it’s time more companies gave it a try. It is incredibly easy to use, offers hosted solutions, and just has tons of PLM functionality.

I have, over time, interacted with Arena Solutions and always come away impressed with their solution and their people. I have written a series of articles and video interviews with them. You can find these articles at this link. However, the market is mostly immune to such information; it continues to believe SAP offers a solution where it has none, and to pass over "real" answers in this space because they don't have a major brand name attached.

PLM and PDSs

SAP has an object in SAP APO called the Production Data Structure (PDS) that proposes to have PLM capabilities and ties with the rest of SAP’s “PLM” functionality. However, as companies do not use SAP for PLM, it makes little sense to use the PDS for that purpose. However, SAP still advises using this object on projects. Read about SAP’s message to clients in this post.


SAP has some disadvantages when it comes to competing in the PLM market. One is that the material master is not an effective object for BOM lifecycle management. The material master lacks the functionality and is extremely far behind the best-of-breed Arena Solutions in everything related to change and collaboration. Attempting to bring lifecycle capabilities, as well as collaboration, to the material master is stretching it beyond its original design.

Secondly, SAP's messaging is confusing: it does not account for the PLM functionality that is distributed throughout many applications, including SAP APO/SCM, and does not explain how SAP PLM leverages or even interacts with them. Many analysts who write in the PLM/BOM management space seem to have little grasp of the subject matter and cannot help clients differentiate quality solutions from vaporware.

In summation, PLM is a very high-risk solution to implement, and companies evaluating it must be extra careful to verify what is actually there.


Since this article was written, SAP has attempted a reboot of its PLM solution. This article is still relevant to read, because, in many ways, things have not changed. However, to read the latest on this topic, see this post.




How to Perform Service Parts MTBF Calculation and Forecasting

Executive Summary

  • MTBF, or mean time between failure, is how service parts are forecasted.
  • Service parts demand can be treated as dependent demand.
  • It is not that common for companies to use MTBF.

Introduction to MTBF

Service parts for products can be predicted based upon installed base and usage.

The mean time between failure or MTBF calculation or forecasting is a subset of causal forecasting.

This is seen in the graphic below.

MTBF Calculation and Dependent Demand

All service part demand is dependent demand. That is, the need for service parts is based upon purchases that have already been made.

Service parts can be forecasted using simple demand history, as with finished goods, or they can take advantage of the installed base and usage of the equipment that is in the field.

For some products, only the population information is available. (Population information is, of course, much easier to attain; only large and expensive equipment like airplanes or construction and heavy industrial equipment has its usage tracked.)

The bigger and more valuable the asset, the easier it is to get usage information, and the easier it will be to perform the MTBF calculation. But even aerospace and defense are known to have a shortage of good MTBF data.

However, usage data would not be available for many consumer items.

How Does the MTBF Mean Time Between Failure Fit In?

MTBF is one particular modality of causal forecasting. Most causal forecasting simply uses one or many independent variables to predict the future dependent variable. Causal forecasting with MTBF in service parts, however, uses a developed failure rate for the item in the field.

  • MTBF of 6 months for the service part to be forecasted
  • 100 serviceable items in the field
  • 100 x (12 months / 6 months) = 200 service parts (annual forecast)

This is a simple example, but it captures how MTBF prediction works.
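The calculation above can be sketched in Python. The function name is my own; the values are the ones from the example.

```python
def mtbf_forecast(installed_base, mtbf_months, horizon_months=12):
    """Expected service part demand over the horizon, assuming each unit
    in the field fails on average once every `mtbf_months` months."""
    return installed_base * (horizon_months / mtbf_months)

# 100 serviceable items in the field with a 6-month MTBF -> 200 parts/year.
print(mtbf_forecast(installed_base=100, mtbf_months=6))  # 200.0
```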

Combining Mean Time Between Failure and Other Methods

Often the different prediction categories are thought of as only being used independently. That is, if you use one for a product or group of goods, you cannot also use another. MCA Solutions allows you to use both a time series and an MTBF calculation or forecast. They call this the composite forecast, and it can give different weights to each forecast type. For instance, you could weight the MTBF at 70% and the time series forecast at 30%, or any other set of percentages that you wanted.
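The weighting described above amounts to a simple weighted blend. This is a generic illustration of the idea, not MCA Solutions' actual implementation.

```python
def composite_forecast(mtbf_fc, time_series_fc, mtbf_weight=0.7):
    """Blend an MTBF-based forecast with a time-series forecast,
    giving the MTBF forecast a weight of `mtbf_weight`."""
    return mtbf_weight * mtbf_fc + (1 - mtbf_weight) * time_series_fc

# 70% weight on an MTBF forecast of 200, 30% on a time-series forecast of 160.
print(round(composite_forecast(200, 160), 2))  # 188.0
```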

Prevalence of MTBF Data and the Usage of This Type of Forecasting

Many companies talk about forecasting using the mean time between failure data, but few of them are interested in doing the work to maintain the data. What is unfortunate here is that the data is not that difficult to manage.

There is no single level of granularity that companies must reach to use causal methods. They can get benefits from using just a basic high-level value of their installed base. This should be available even for consumer items, by taking previous sales data and applying degeneration percentages (for items that fall out of service) to develop a basic installed base number. Once this number is attained, it can be used for mean time between failure calculation or forecasting.
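As a sketch of that approach, assuming a constant yearly fall-out (degeneration) rate and purely illustrative sales figures:

```python
def installed_base(yearly_sales, attrition_rate):
    """Estimate units still in service. yearly_sales[0] is the most
    recent year; each earlier year's survivors shrink by attrition_rate."""
    survival = 1.0
    total = 0.0
    for sold in yearly_sales:
        total += sold * survival
        survival *= 1 - attrition_rate
    return total

# Three years of sales, 20% of units falling out of service per year:
# 1000*1.0 + 1200*0.8 + 900*0.64 = 2536 units still in service.
print(round(installed_base([1000, 1200, 900], attrition_rate=0.20)))  # 2536
```

The resulting number can then be fed into an MTBF-style calculation as the installed base.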

Part Breakage Prediction Math

Some basic mathematical estimation can get companies close to real values. Once these basic installed base numbers are generated, it opens a new opportunity to begin managing the service forecasting process differently.

It would be nice to report that the causal method is only underused in service organizations. However, this is not the case. The underuse extends to most supply chain organizations. See this article for details.

How to Get Public MTTF, MTTR Statistics for Hard Drives

It is unusual for companies to maintain causal information (such as aircraft landings or installed base) that could be used to perform causal forecasting. After the hard drive in our iMac went out, I performed a search for the most reliable model to replace it with.

We learned that MTTF (mean time to failure) and MTTR (mean time to repair) figures are not available for even the most commonly purchased items, whether bought by companies or individuals. Mean time between failure, mean time to failure, and mean time to repair are the standard terms used in the service industry.

Mean time between failure and mean time to failure describe essentially the same thing: the length of time before the failure of a part or component on a product. Mean time to repair, by contrast, is the time required to restore the item after a failure. These are, of course, estimates provided by the engineering department of the company that manufactures the item.

Mean Time to Failure Service Parts Forecasting

Two pieces of data are necessary to perform causal forecasting for service parts, which is critical for service parts planning:

  1. The MTTF or MTTR of the part
  2. The installed base or other causal value

With just the MTTF, consumers and organizations can make informed purchase decisions. With both values, companies can use service parts planning software to drive their forecasting and stocking.

At this point, it is well known that the official MTTF statistics published by vendors are unreliable, pure fantasy. Because there is no objective third party that compares drives across vendors and publishes the results, there is no reliable source for failure information (if anyone knows of one, please comment on this post).

I do know that companies, especially those that purchase and deploy scores of disks, may keep their own statistics. When asked about this topic, vendor spokespeople move into a degree of doublespeak that would make Henry Kissinger green with envy.

Where are the Studies on MTTF or MTTR for Hard Disks?

According to a white paper by Wiebetech, a drive enclosure maker, manufacturers are reluctant to give out real-world statistical information.

All of the drive vendors do what they can to obscure any differences between their drives regarding quality, MTTF, or MTTR. This allows them to compete on retail box design and marketing, as well as personal business-to-business relationships, which appears to be their preference. A quote from a recent article on this topic in PC World reinforces how much OEMs like to dance around the issue of reliability and failure. Several drive vendors declined to be interviewed.

“The conditions that surround true drive failures are complicated and require a detailed failure analysis to determine what the failure mechanisms were.”

..said a spokesperson for Seagate Technology in Scotts Valley, Calif., in an e-mail.

“It is important to not only understand the kind of drive being used, but the system or environment in which it was placed and its workload.”

This is hilarious.

Hard drives, apparently, are the only thing for which MTBF statistics cannot be developed. Interestingly, companies like Google, or any company with a large number of servers, have this information, because they own many drives, and their drives fail as time passes. As the drives are all in servers in the same building, the usage is similar and therefore comparable.

Vendor Studies on MTTF and MTTR from Russia

One of the few vendor studies on the failure rate of hard drives was performed by a company in Russia.

This image shows the most reliable drives with Hitachi leading all producers.

The drives I use most often are by Western Digital, but, interestingly, I can expect around 3.5 years of life from them, which squares with my experience after owning many Western Digital drives. This statement is of great interest, as it cautions against buying very high-capacity drives.

The remaining 41% exceeded 500 GB. Due to their construction and additional platters, these larger models are less durable, exhibiting an average lifespan of only 1.5 years. – Tom’s Hardware

The Costs of Publishing the Truth

It's easy to publish positive information about vendors, but a massive headache to release negative information, and vendors consider mean time to failure and mean time to repair negative information.

I know from experience. I tested backup software several years ago and published my results online. My general finding was that PC backup software was very unreliable and difficult to use. Norton Ghost, and in particular Acronis True Image, never actually recovered a computer image correctly after ten attempts. After publishing this, I was contacted by a representative from Acronis who told me I did not know the software and that my findings were wrong. They then offered to send me the newest version, which I took a lot of time to test, and which also failed. Publishing negative information like this is even more challenging if you rely on advertising revenue. This is one of the reasons so few companies do it. CNET will publish on the different merits of products but won't touch the issue of reliability, nor will 98% of other publications.

Consumer Reports is one of the few that does. While their publication is a trailblazer in the area of reliability studies, they have to keep a legal team ready because they are often sued. However, they do not publish at the level of detail of MTTR or other failure statistics. Something more is needed.

Planned Obsolescence and Why Items Are Becoming Less Serviceable

There is not much I own that I like better than our 24-inch iMac, but my sudden understanding of its core un-serviceability has been a real disappointment. iMacs are not the only things with low serviceability.

What You Learn When Your iMac Breaks

We recently had the hard drive in our iMac go out.

PC World recently wrote an interesting article on drive reliability. This article reinforces what we have experienced firsthand: the MTBF numbers produced by drive manufacturers are false. Carnegie Mellon's lack of differentiation among vendors in its study indicates the research was likely polluted by vendor pressure and/or contributions.

What we learned is that iMacs are not designed to be serviced by users. The design of the iMac looks great but has a strange assembly that makes it even harder to work on than a laptop. The iMac has no screws or other fasteners on the case (except on the bottom for memory replacement). A hard drive is a major sub-component in a computer and tends to be one of the more problematic. It is something that should not only be designed to be easily replaced but should be designed to be swappable. As with media like CDs, there is no reason a door could not be added to any computer, and different hard drives could be added and removed to give the user maximum flexibility in booting to different disks. With a spare disk, this would mean that no computer could be brought down due to drive failure.


“Swappable” drives have been used in servers for some time, and are now available for home disk centers (which allow for RAID configurations) such as the Acer model above.

However, while no personal computer makes it as easy as we think it should be, Apple has designed a case with no entry through the back. The user or service technician must pull off the glass cover with a suction cup, then (delicately) remove the display to expose the hard drive. Several specialized tools are required for the task. Waiting for tools to arrive from eBay, along with the Apple Store's $420 quote for the work, is why our iMac sits unused at the time of writing.

The Long Term Trend in Planned Obsolescence

This is part of a long-term trend in consumer items of hiding the fasteners to increase the "coolness factor." This trend extends across many categories. If one looks back at the cars of the 1930s, one can see that they were more modular, and the rivets, pins, screws, and other fasteners were more apparent. This meant that cars had higher serviceability.


The Bentley Speed Six was a very serviceable car. The engine was easy to get to, the fenders were easily replaceable, and the exposed fasteners allowed the replacement of many parts by shade tree mechanics.

By the 1950s, almost all cars had moved to integrate the trunk and fenders into the body, and fasteners were no longer observable from the outside. This resulted in a smoother look, but also in a more complicated design and more expensive automobile to work on.


The 1950s Cadillac Series 62 was representative of cars from this era in that it had an integrated body and hidden fasteners. The bodywork on this type of car is more time-consuming and expensive and must be done by professionals. Since then, cars have become far more complex and, as a result, have far lower serviceability.

Serviceability Trend

The long-term trend is to decrease the serviceability of items. While this may be good for company profits, it is terrible for the consumer and bad for the environment. The more challenging and expensive it is to repair items, the more quickly they are simply replaced with new ones. The problem is that companies do not seem to have an incentive to build long-lasting and easily serviced items. The finance side of the business appears to think serviceability reduces sales of new products (which it does), and new product design and marketing seem to think it reduces the "coolness" factor of goods.

Marketing and finance have come to dominate US corporations, so it is no surprise that their values have become the values of American business. This is not going unnoticed. According to industrial designer Victor J. Papanek, the following holds.

That while American products once set industrial standards for quality, consumers of other nations now avoid them due to shoddy American workmanship, quick obsolescence and poor value.

What is Planned Obsolescence?

Planned obsolescence is when a manufacturer makes deliberate changes to the design and manufacture of an item that reduces its longevity. Planned obsolescence can be seen as an extension of the engineering lifecycle model, where some components are reduced in quality to match the overall expected life of the item. But in this case, it is designed to bring the entire useful life of the product down. Planned obsolescence is probably best identified with US automakers who engaged in this activity when they had a near monopoly on the US auto market.

Historical View on Planned Obsolescence

There is a common impression, reinforced by advertising, that this year's model is better than last year's and that, in general, we are on a continual upward slope. This is not actually the case. Many business practices and products were "better" – better for the consumer and better for the environment – in the past.

In addition to serviceability, many products half a century ago were simply designed to last longer. As an example, there is a lively market on eBay for classic toasters from the 1950s.

This 60-plus-year-old toaster still works, because toasters were built to last. An item lasting that long is unheard of today.

This 1950s Sunbeam is still working and, adjusted for inflation, probably sells for more on eBay than it did in a store in the 1950s. Why can't more items be built to last and constructed to be serviceable?

In the book The Waste Makers, published in 1960, the approach to planned obsolescence was laid out.

Beyond all these factors of quality debasement and by repairmen there were several objective factors about modern appliances that helped make them expensive to maintain and that helped increase the business volume of servicing agencies or replacement-parts manufacturers, and, in some cases, the manufacturers hoping to sell new replacement units. There were more things to go wrong. Those added luxury accessories that so delight copy writers were adding to the problems of products to break down. The rush to add extras on washing-machines in the form of cycle control, additive injector, increased the number of things that can develop ailments. The Wall Street Journal wrote: “Parts and accessory dealers naturally are pleased with the added extras put on new cars.” They should be. I have two neighbors who bought station wagons in 1958. One bought a model with power steering, power brakes, automatic shifting, and power windows. The other—a curmudgeon type who doesn’t think that shifting gears and raising windows by hand are too much of a strain—bought a car without any of the extras. His years of ownership of the car have been relatively trouble-free. (And by spurring the extras he saved several hundred dollars at the outset.) The other neighbor who bought the car with all the extras moans that he got a “lemon.” His car, he states, has been laid up at the garage seven times, usually because of malfunctioning of the optional equipment. Replacement parts were costing more. The gizmoed motorcar was a good case in point. A creased fender that in earlier years could be straightened for a few dollars was now, with integral paneling” and high-styled sculpturing, likely to cost I $100 to correct. The wrap-around windshield was likely to last three to five times as much to replace as the unbent/ windshields that motor cars had before the fifties. Ailing parts were increasingly inaccessible. 
In their preoccupation with gadgetry and production short cuts, and perhaps obsolescence creation, manufacturers often gave little thought to the problem of repairing their products (or made them hard to repair). Sales Management complained that “products are not designed for service.” It told of a steam iron that could be repaired only by breaking it apart and taking out the screws. Some toasters were riveted together such that a repairman had to spend nearly an hour just getting to the right part, only to replace a ten- or fifteen-cent spring. Product analysts at Consumers Union told me that air-conditioning units in automobiles often cluttered up the engine compartment so badly that it took an hour or two to remove a rear spark plug. Built-in appliances, which were being hailed as the wave of the future, had to be disengaged from the wall before repair work could begin. Many of these built-ins were simply standard.

I suppose the question to ask is, what has changed? How did American businesses go from offering many durable products with high serviceability to offering products designed to be thrown away?

Secondly, how did both American and international consumers become habituated to this new consumption pattern? Thirdly, does anyone think that this trend can be reversed by “the market”? On broader goals such as environmentalism, to which the life-span of products is a contributing factor, it would appear that the market will continue to drive product development in the opposite direction, towards planned obsolescence.

People need to have a better understanding of the relationship between product serviceability and sustainability. It’s hard to see companies focusing on product serviceability without more pressure from consumers. However, consumers have become so habituated to disposable products that most don’t know where to begin to ask for this level of build quality.

Planned Obsolescence Redefined

Planned obsolescence describes the active reduction in reliability by manufacturers to increase future sales. However, not all obsolescence is of this type. Some “planned obsolescence” is instead a lack of market pressure to make the sort of trade-offs that would result in higher-quality items. With the current move to outsourcing manufacturing to China, manufacturing costs drop drastically, but so do factors like long-term usability. The profit incentives of companies overwhelm all other considerations.

In fact, many luxury-brand items today are made on assembly lines in developing nations, where labor is vastly cheaper. I saw this firsthand when I visited a leather-goods factory in China, where women 18 to 26 years old earn $120 a month sewing and gluing together luxury-brand leather handbags, knapsacks, wallets and toiletry cases. One bag I watched them put together — for a brand whose owners insist is manufactured only in Italy — cost $120 apiece to produce. That evening, I saw the same bag at a Hong Kong department store with a price tag of $1,200 — a typical markup.

How do the brands get away with this? Some hide the “Made in China” label in the bottom of an inside pocket or stamped black on black on the back side of a tiny logo flap. Some bypass the “provenance” laws requiring labels that tell where goods are produced by having 90 percent of the bag, sweater, suit or shoes made in China and then attaching the final bits — the handle, the buttons, the lifts — in Italy, thus earning a “Made in Italy” label. Or some simply replace the original label with one stating it was made in Western Europe.

To please customers looking for the “Made in Italy” label, several luxury companies now have their goods made in Italy by illegal Chinese laborers. Today, the Tuscan town of Prato, just outside of Florence and long the center for leather-goods production for brands like Gucci and Prada, has the second-largest population of Chinese in Europe, after Paris. More than half of the 4,200 factories in Prato are owned by Chinese entrepreneurs, some of whom pay their Chinese workers as little as two Euros ($3) an hour.

What is Perceived Obsolescence?

We have covered planned obsolescence, which is entirely the work of the producer. What about perceived obsolescence, a more complicated form of obsolescence that is the work of both the producer and the consumer?

  • Fashion is probably the most obvious example of perceived obsolescence. Fashion goods have a shelf life: wearing out-of-date fashions carries social stigma. By employing perceived obsolescence, the fashion industry receives considerably higher sales than it would otherwise.
  • Cell phones and other technological items that are replaced because they are not up to date, rather than because they no longer adequately meet the needs of the user, are another example of perceived obsolescence.

Perceived obsolescence is in part due to the messaging put out by the producer. But it also has its origin with users and their social interactions. If a person uses a car or phone that works perfectly well but is seen not to be of the appropriate status, or that otherwise has negative social repercussions or a negative emotional feeling for the user, this is perceived obsolescence. Messaging, including advertising from the producer that builds up the new item at the expense of the old, is part of perceived obsolescence.

Perceived obsolescence and planned obsolescence work together to reduce the longevity of products and to increase waste.


There can be no causal forecasting without causals. This type of data should be elementary to maintain, but it often is not. John Snow, in his Uptime Blog, which is associated with Engima, provides some real insight below as to why. It seems that the natural inclination of many service departments is to focus on quickly getting equipment back in service, with less concern for proper equipment maintenance and calibration.

During a break-fix event (unscheduled maintenance), this is a rational response: the equipment is down, revenue generation has stopped, so get the machines working again. However, even during scheduled service events, mechanics can become overly focused on speed. This is an example of reacting to the urgent rather than resolving the outstanding. The problem is that service departments are often measured more on productivity than on quality.

See the full article here.


This is an interesting article on planned obsolescence in hard drives. We quote from it below.

Performance Based Logistics, Rolls Royce and Power by The Hour

Executive Summary

  • What is Performance Based Logistics?
  • We cover why Performance Based Logistics is unlikely to happen based upon the institutional incentives and orientations of the entities involved.


Performance Based Logistics (PBL) is a much-discussed concept in the military and among defense contractors. We will discuss a range of issues with PBL, from its status as a trend to an example of it in the Rolls-Royce Power by the Hour program. However, this post questions whether it is an authentic trend based upon the incentive structures of the military and their suppliers.

Performance-Based Logistics as a Trend

PBL has become a strong trend among the management class of companies in the A&D environment. Performance-based logistics builds upon a kernel of truth that may or may not spread beyond management conferences. This article discusses PBL and makes some educated guesses as to where Performance Based Logistics might be 5 years from now.

Performance Based Logistics is often introduced as a way to improve service levels and increase suppliers’ responsibilities for service parts management and, in some cases, service parts operations. In this way, it may be viewed as a form of outsourcing where part planning and management are moved from the client to the suppliers. In cases where the military is the customer, it can be seen as a light form of military privatization.

Supporting Case Studies

The excellent case study for Performance Based Logistics in the A&D environment is Rolls Royce. While not called “PBL,” Rolls’ TotalCare engine service program is a long-term service contract where Rolls controls the engine service parts inventory and, in a way, goes beyond Performance Based Logistics by offering direct guidance and instruction when certain parts are due for maintenance. Rolls actively monitors over 3,000 engines, aggregating a healthy level of service intelligence about engine maintenance. Rolls has, by most accounts, leveraged this capability to grow its market share, take business from larger competitors, and reinforce the premium reputation of its industry-leading engines.

Deviations Between the Strong Case Study and Other PBL Clients and Environments

It would be a mistake to assume that the success at Rolls can be duplicated by every A&D supplier or can be generalized to areas outside of engines. However, this is a common oversimplification made by the business press. By comparison, there were distinct organizational differences between Toyota and US manufacturing firms, as well as geographic differences between the locations of the suppliers that make up the supply base in Japan vs. the US, that prevented other companies from ever duplicating Toyota’s success with JIT, regardless of decades of attempts across thousands of factories. (Secondly, many of the concepts of Japanese manufacturing were not explained to US executives, as they would have been unappealing to them, which is described in this post.)

What this means is that the case for PBL at Rolls must be examined regarding how Rolls as a company, and Rolls’ business, is different from other companies that want to implement PBL-type programs.

Some of the differences are listed below:

  1. Rolls is only managing a small proportion of the overall service parts of an airplane. They provide 100% of the parts for the engines under the TotalCare program. This means that a 95% parts availability does mean a 95% availability for the engine, as there are no other suppliers. However, this is not true of companies that provide the entire airplane. Therefore, it must be considered that Rolls is solving a much simpler problem than a supplier of the whole aircraft would be.
  2. Rolls appears to be on the outer edge of competence within the industry. Secondly, this is not a new philosophy for Rolls. Their “Power by the Hour” program, which is substantially similar to the TotalCare program, dates back at least to the 1930s. This means that Rolls has been organizationally oriented towards service for generations. (This has also been known in the industry for ages, so attempts to present it as something new miss the historical facts.) This competence is not necessarily distributed across other A&D suppliers.
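The arithmetic behind point 1 is worth making explicit. When each supplier is held to an availability target and the subsystems fail independently, the availability of the whole system is roughly the product of the subsystem availabilities. The sketch below is illustrative only; the ten-subsystem figure is invented for the example and does not come from the article:

```python
def system_availability(subsystem_availabilities):
    """Availability of a chain of independent, serially-required
    subsystems: the product of the subsystem availabilities."""
    result = 1.0
    for a in subsystem_availabilities:
        result *= a
    return result

# A supplier covering only the engine: 95% parts availability
# translates directly to 95% engine availability.
engine_only = system_availability([0.95])

# A hypothetical whole-aircraft supplier with ten independent
# subsystems, each at 95%: overall availability collapses to ~60%.
whole_aircraft = system_availability([0.95] * 10)
```

This is why a whole-aircraft supplier must hold each subsystem far above 95% to deliver 95% for the aircraft, a materially harder inventory problem than the one Rolls faces.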

The Cultural and Business Model Changes Required

We have found several articles on how well this new concept fits into the existing culture of A&D suppliers. The consensus is that a great deal of cultural change will be required to move A&D providers to a PBL environment. However, less discussed is how PBL fits into A&D, and particularly defense contractors’, business models. There is probably a good reason for this: the reality of the defense service parts business model would not be popular if it were widely known. Enough documentation is available to demonstrate a long and consistent pattern in service parts pricing.

How The Government is Price Gouged

Service parts are priced at the beginning of the program, and in subsequent years the service parts prices rise steeply. This applies to so-called unique parts but is also true of what appear to be commodity items. Many of the oversight bodies for regulating these price increases were removed in the past six years in particular, though the erosion began even before then, as the public furor over military contractor overcharging died down from a decade and a half ago. Therefore, the part price escalation continues. This is simply how the industry has worked for many decades.

If this is a strategy of defense contractors, and there is real evidence that it is, it’s hard to see why they would want to move towards a performance-based logistics environment. A performance-based logistics contract would undoubtedly run for several years. If defense contractors intend to continue their price increases, the PBL contract price would need to reflect that year-to-year increase. This would raise flags, so again it’s not something a defense contractor would want to do. The entire scenario is illogical.
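To see why multi-year escalation would be conspicuous inside a fixed PBL contract, consider how even a moderate annual increase compounds. The numbers below are hypothetical, chosen purely to illustrate the compounding, not drawn from any actual defense contract:

```python
def escalated_price(initial_price, annual_increase, years):
    """Price of a part after compounding annual price increases."""
    return initial_price * (1 + annual_increase) ** years

# Hypothetical: a $100 part escalating 15% per year more than
# quadruples over a ten-year program.
p10 = escalated_price(100.0, 0.15, 10)
```

A PBL price quote that baked in this trajectory up front would make the escalation visible in year one, which is precisely the exposure a contractor pursuing this pricing strategy would want to avoid.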

The Actual Power Dynamic

There is a hidden assumption in the discussion of performance-based logistics for defense. It presumes the DLA (Defense Logistics Agency, which negotiates with suppliers) is very powerful vis-à-vis its suppliers. Indeed, as a monopsony (one buyer, many sellers), the DLA could be a powerful actor if it wanted to be. However, there is a good deal of evidence that it does not want to be. There are strong relationships between the DLA and the defense contractors. Many at the decision-making level of the DLA and, more broadly, in the procurement decision-making apparatus of the armed forces look to defense contractors for their next job.

This results in the military being less willing to press its claims and hold defense contractors accountable. The evidence for this is the large year-to-year increases in service parts costs the military accepts, the significant cost overruns in weapons systems it tolerates, and its policy of not going back to defense contractors to ask for refunds when parts break far before their stated expected lifetimes. If the military will not confront defense contractors on these more fundamental issues, it is hard to see how it would punish contractors for missing the service level targets that enforcing a PBL contract requires. Yet this enforcement is the central thesis of PBL.

Coming Trends Limiting Performance Based Logistics’s Adoption for Defense

The US is at the high end of a cyclical spending upswing due to a highly pro-military administration and wars in two countries. However, some bills are coming due for these wars that have not been fully funded.

There is significant evidence that veterans’ health care and long-term care have been substantially under-funded. Secondly, a large amount of equipment that is neither serviceable nor economically repairable has not been written off the books. When these costs become apparent, the US military may move from a strategy of having contractors provide “PBL” back to doing this work itself (to save money). An extended service contract is a luxury product; Rolls is considered an excellent provider of “PBL”-type service, but it is also widely recognized as expensive. This is to say that the US military may move away from PBL when it has less money to spend.

For the reasons given above, PBL does not appear to be a trend with any staying power. The many articles on this topic are mainly a waste of time. Much of it has to do with consultants and executives publicly strutting and making self-important statements about how they subscribe to this or that leading-edge concept.

PBL and Alternatives

“Performance Based Logistics is a strategy for system support. Instead of goods and services a supplier is paid for a guaranteed level of performance and system capability. The supplier often has to guaranty the performance at lesser costs but has more control over all logistics elements. The performance is declared in Performance Based Agreements.” – Wikipedia

Performance-Based Logistics can be found in the commercial area of A&D or in the government/military. A quote from the 2006 Quadrennial Defense Review Report indicates the orientation of the Department of Defense regarding PBL.

“There is a growing and deep concern in the Department of Defense’s senior leadership and in the Congress about the acquisition processes. This lack of confidence results from an inability to determine accurately the true state of major acquisition programs when measured by cost, schedule and performance. The unpredictable nature of Defense programs can be traced to instabilities in the broader acquisition system. Fundamentally reshaping that system should make the state of the Department’s major acquisition programs more predictable and result in better stewardship of the U.S. tax dollar.”

Power by the Hour

Under Rolls Royce’s Power by the Hour program, those who purchase Rolls Royce engines are said to pay only per hour of engine usage.

  • This Power by the Hour program puts the burden of service on the producer rather than the customer.
  • Power by the Hour is one of the ways a manufacturer can show its faith in its product.
  • Power by the Hour is somewhat unique in that, while it is much discussed, the strategy has few other adopters.

The Expertise Required

Developing a PBL contract requires more than the capability to run an advanced service parts planning system like MCA or Servigistics. It also requires a way to cost the PBL contract, so the firm can determine the profitability of each contract and use this information to adjust future contracts. SAP Project Systems is SAP’s software solution for costing the transactions associated with a contract. The difficulty comes in tying the specific operation to the particular contract in question. To understand SAP Project Systems more, see this post.
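At its simplest, costing a power-by-the-hour style contract means netting the revenue that accrues per flying hour against the costs booked to that contract. The sketch below is a minimal illustration of that idea; the rate, hours, and cost categories are all invented for the example, and a system like SAP Project Systems does this at a far finer transaction level:

```python
def contract_margin(flying_hours, rate_per_hour, costs):
    """Profitability of a per-flying-hour service contract:
    revenue accrues per hour flown; costs are the parts, labor,
    and logistics spend booked against the contract."""
    revenue = flying_hours * rate_per_hour
    total_cost = sum(costs.values())
    return revenue - total_cost, revenue, total_cost

# Hypothetical contract: 3,000 flying hours at $250/hour,
# with costs collected by category.
margin, revenue, cost = contract_margin(
    3000, 250.0,
    {"parts": 420_000, "labor": 180_000, "logistics": 60_000},
)
```

Tracking margin at this per-contract grain is what lets the supplier see which contracts are profitable and reprice future ones accordingly.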



Intermittent Demand and Service Parts Databases

Our Solution for Managing Intermittent Demand

The number of service parts companies that actually use service parts software is small. We offer some of the most important features for managing service parts in an easy-to-use SaaS application that can improve the management of any ERP system for service parts. It’s free until it receives “serious usage” and is free for students and academics to access. Select the image below to find out more.
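For readers wondering what handling intermittent (frequently zero) demand involves algorithmically, the classic starting point, not named in this article but standard in the service parts literature, is Croston’s method: smooth the nonzero demand sizes and the intervals between demands separately, and forecast their ratio. A minimal sketch:

```python
def croston(demand, alpha=0.1):
    """Croston's method for intermittent demand: exponentially
    smooth nonzero demand sizes and inter-demand intervals
    separately; the per-period forecast is size / interval."""
    z = None      # smoothed nonzero demand size
    p = None      # smoothed inter-demand interval
    q = 1         # periods since the last nonzero demand
    forecasts = []
    for d in demand:
        if d > 0:
            if z is None:             # initialize on first demand
                z, p = float(d), float(q)
            else:
                z += alpha * (d - z)  # update size estimate
                p += alpha * (q - p)  # update interval estimate
            q = 1
        else:
            q += 1
        forecasts.append(z / p if z is not None else 0.0)
    return forecasts
```

Unlike simple exponential smoothing, this does not drift toward zero during long runs of zero demand, which is exactly the situation the article describes at the dealer level.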


On the lack of funding for the war

On the need for procurement reform