How to Understand IT Risk Management Basics

Executive Summary

  • The basics of IT risk management are routinely ignored on projects.
  • We cover what they are and how they can be followed.

Introduction

To understand how to manage IT project risks, we must first level-set on what we mean by risk. There are many different degrees of success or failure on a project. Any attempt to declare a single value of success is inherently interpretive, placing an implementation on a continuum. At one end is the best possible outcome: a system that is universally loved by users and drives continuous business improvement. At the other, abject-failure end, the software must be reimplemented from scratch, and the consulting company is either fired or fired and sued.

The Commonly Quoted Failure/Success Rates for Software Projects

The failure/success rate of enterprise software projects is frequently quoted. Articles on enterprise software risk or success will often list a statistic that around 50% of enterprise software projects are considered successful, and therefore 50% are considered failures. Another frequently quoted statistic is that only around 35% of projects are generally considered to show quantifiable business benefits.

These statistics are frequently quoted, but it is seldom discussed how they were arrived at in the first place. As background research, I reviewed the literature on project success and failure. It turns out that the quoted statistics make the research sound far more conclusive than it actually is when one reads it. In effect, journalists are quoting previous journalists rather than going back and reading the original research, and therefore many of the authors on this topic lack an authentic understanding of the research they are quoting. This would not be a problem if the original research were unambiguous, but it is, in fact, quite ambiguous. The following quotation is one example that illustrates how ambiguous much of this research actually is.

“According to the estimation of the Standish Group International, 90% of SAP R/3 ERP projects runs late, another SGI study of 7400 IT projects revealed that 34% were late or over budget, 31% were abandoned, scaled back or modified, and only 24% were completed on time and on a budget. One explanation for the high ERP project failure rate is that managers do not take prudent measures to assess and manage the risks involved in these projects.” – Risk Assessment in ERP Projects

These statistics are guidelines, but I don’t consider them anywhere near as useful as many journalists in this area seem to. Of course, how useful they are depends on whether those journalists actually reviewed anything beyond the statistic as it appeared in a magazine. Some important questions to ask concerning studies on project success are the following:

Question #1: What is the Definition of Success?

Different studies have different definitions of what makes an implementation successful. Furthermore, many of these classification statistics depend upon how the survey questions are asked. The surveys also assume that the respondents themselves actually know whether the project was successful (which I addressed earlier). And what exactly is “successful?” For instance, if a project showed a 15% ROI but could easily have shown a 20% ROI with relatively simple changes, was the project successful? If the software that was replaced was truly horrible, is that the same kind of success as when the new software was selected simply because a newly hired CIO was more comfortable with a different vendor?
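The point above can be made concrete with a small sketch: the same project comes out as a “success” under one definition and a “failure” under another. The definitions, field names, and thresholds below are illustrative assumptions, not taken from any of the studies discussed.

```python
# Hypothetical sketch: one project, three definitions of "success".
# All field names and thresholds are illustrative assumptions.

project = {
    "on_time": True,
    "on_budget": False,
    "roi_pct": 15,             # realized ROI
    "achievable_roi_pct": 20,  # ROI achievable with simple changes
}

definitions = {
    "on time and on budget": lambda p: p["on_time"] and p["on_budget"],
    "positive ROI": lambda p: p["roi_pct"] > 0,
    "ROI near its potential": lambda p: p["roi_pct"] >= 0.9 * p["achievable_roi_pct"],
}

for name, is_success in definitions.items():
    verdict = "success" if is_success(project) else "failure"
    print(f"{name}: {verdict}")
```

A survey using the first definition would count this project as a failure, while one using the second would count it as a success, which is one reason the quoted failure rates vary so widely.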

Question #2: The Percentage of Functionality Used

Another frequently used but ultimately unhelpful measure of software implementation success is the percentage of the functionality the company uses. If a company uses 10% of the functionality that an application offers, that sounds bad, doesn’t it? However, if that is all the functionality the company needs from the application, and if the ROI is still positive, this is not a problem. There is simply no way to determine success based on some percentage of functionality used, and even assigning a percentage of functionality used is a tricky exercise.

Question #3: Meeting Deadlines

The sad conclusion of the research into rating enterprise software implementation success is that most companies primarily rely upon whether an implementation met its project deadlines to judge whether it was successful. The following quotation explains this.

“According to Parr and Shanks (2000) “ERP project success simply means bringing the project in on time and on budget.” So, most ERP projects start with a basic management drive to target faster implementation and a more cost-effective project… Summarizing, the project may seem successful if the time/budget constraints have been met, but the system may still be an overall failure or vice versa. So these conventional measures of project success are only partial and possibly misleading measures when taken in isolation (Shenhar and Levy, 1997)” – A Framework of ERP Systems Implementation Success in China: An Empirical Study.

It should not be a difficult concept to accept that whether a project meets its deadlines is a completely different measurement from whether the project is successful. Take the analogy of building a house: a house can be built even faster than its project plan predicted, yet with various leaks in the roof that only become apparent after the buyer has moved in. Beating deadlines is a nice side benefit, but the primary benefit is a well-implemented system.

Unfortunately, whether something like a software implementation is successful is much more difficult to determine than in the example of a poorly constructed house. Some systems can fail to accept new users – the Healthcare.gov website being one of the most famous recent examples of this problem. However, most system problems or shortcomings are much more subtle than this. Systems may seem to operate properly, but only experienced analysis by those with the domain expertise can say for sure if the system’s output is correct. I have personally analyzed systems that have been performing very poorly for the client, but the client was not aware of it.

When millions of individuals attempted to sign up for health care through the HealthCare.gov website, the site was not operating properly. When I signed up, I received a permanent “In Progress” status for weeks.

Are You Following a Broken Software Implementation Risk Model?

This post will cover an area where companies spend a great deal of time and resources, but much effort goes for naught. This is the area of enterprise software risk management. Paradoxically, most companies think they have this area well managed.

Confidence for No Reason?

Many companies do things that they think minimize their enterprise software risk, and these things are generally accepted as reducing risk. However, my research into the area and my consulting experience have led me to conclude that many of these risk management approaches do nothing to minimize risk and keep the company from focusing on where the real risk reduction opportunities lie.

The Problem with the Present Practice of Risk Management

Risk management for enterprise software tends to be dominated by risk management exercises that involve the use of spreadsheets where risks are enumerated, and then mitigation strategies are developed.

Anyone with experience in software implementation has been in these types of meetings. It all seems very responsible. These risk mitigation meetings typically leave most attendees thinking that they did the right thing by listing risks and coming up with mitigation strategies. However, since those who implement are often required to compile these lists, risk management is pushed down to the level of the implementers or of the project or program manager.

Misunderstanding the Location of the Risk

Unfortunately, this is not where most of the opportunities to reduce risk actually reside. Rather, the real opportunities reside at the level of the executive decision-makers.

Decisions made at the top are the most important to a project’s outcome – not the tactical approaches used at the levels below. For instance, no implementation team can make up for a bad software selection or the wrong implementation partner being utilized – and this is often what happens. The poor quality of the advice received is evidenced by the fact that in most cases, the buyer will not purchase the software that is the best match for their business requirements.

Reaching Out to High Risk

Most companies will tend to stretch to implement functionality they find desirable, rather than moderating their desires to match their capabilities and budget.

Furthermore, some of the riskiest software is some of the most popular – something which is covered at my site Brightwork Research & Analysis, where each application has a risk rating.

Bad Strategies

Approaches such as using ERP systems where they are weak — as I describe in this article concerning production planning and scheduling — are not only risky; they are, in fact, unlikely to be successful under any circumstance. Yet they are often justified by the flawed logic of “getting more out of the ERP system.” If this is such a common approach, how much is really known about minimizing enterprise software risks within companies?

Application risk is just one of the categories of risk on a project. I use several categories of risk, including the preparedness of the implementing company. Each application being evaluated should be assigned a risk during the software selection. Applications cannot be treated as if they all have the same risk profile, which is often what happens; doing so stacks the deck against the lower-risk applications.
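One minimal way to avoid stacking the deck is to discount each candidate’s requirements-fit score by its assessed risk. The sketch below is a hypothetical illustration; the category names, weights, and scores are assumptions for the example, not the risk ratings published at Brightwork Research & Analysis.

```python
# Hypothetical sketch: discounting a requirements-fit score by weighted risk.
# All category names, weights, and scores are illustrative assumptions.

def risk_adjusted_score(fit_score, risk_scores, weights):
    """Discount a 0-100 requirements-fit score by a weighted risk total.

    risk_scores: risk category -> 0.0 (low risk) .. 1.0 (high risk)
    weights:     risk category -> relative importance (should sum to 1)
    """
    total_risk = sum(weights[c] * risk_scores[c] for c in weights)
    return fit_score * (1.0 - total_risk)

weights = {"application": 0.5, "vendor_accuracy": 0.3, "implementer_readiness": 0.2}

candidates = {
    "Popular ERP module": (60, {"application": 0.7, "vendor_accuracy": 0.6,
                                "implementer_readiness": 0.5}),
    "Best-of-breed app": (85, {"application": 0.2, "vendor_accuracy": 0.3,
                               "implementer_readiness": 0.4}),
}

for name, (fit, risks) in candidates.items():
    print(f"{name}: {risk_adjusted_score(fit, risks, weights):.1f}")
```

Treating both candidates as equally risky would mean comparing raw fit scores only; assigning risk explicitly per application makes the lower-risk option’s advantage visible.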

Inaccurate Reporting on Enterprise Software Risk

All of this is covered up in the reporting on project failures. For instance, the reason for an implementation failure is often attributed to training; very rarely will the topic of inappropriate software selection be brought up, as this points the finger at the executive decision-makers and their advisors.

On one recent project I analyzed, which had experienced problems for at least 5 years, training, which was given as the problem, was most likely not the primary problem. The primary problem, after such a long period of time, was poor software selection and poor advice. First, the implementation company was a poor choice, placing billing hours above all else; then the software vendor took over the prime role to much fanfare, but with little improvement in the outcome. The software vendor taking over can sometimes help — if, and this is a big if, the software met most of the requirements in the first place. Of course, the software vendor and the consulting company would like the public to think that there was some training mishap, but I would remind anyone that the vendor and the consulting company are part of the training program as well.

Excuses Galore

Regardless, for some reason, a “lack of training” has become the go-to get-out-of-jail-free card for software that was either a poor fit for requirements and/or oversold to clients. However, you won’t hear any of this in the major IT media outlets. This is because the massive vendors and big consulting companies take out the most advertising in IT publications. Their narratives are the dominant viewpoints that get through; accuracy is not the issue. The IT media outlets have to survive, and they can’t do it on reader subscription revenue alone.

This all contributes to the focus being on managing risks after poor decisions have already been made and after the options have narrowed. This orientation is generally unquestioned and came across very strongly in both the books and the literature that I reviewed before writing my book Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects. It may be comforting and career-protecting to some, but it is not analytical risk management.

Real Risk Management

Unfortunately, few executive decision-makers understand how to minimize the risk of enterprise software selection and implementations, and in fact, this is not entirely their fault.

There is no course one can take and no authoritative source that explains what a minefield enterprise software is, or how difficult it is to obtain the objective information needed to make the decisions that result in risk minimization.

The 2002 movie Catch Me If You Can is the cinematic version of the book of the same name, which chronicles the activities of the renowned check-fraud criminal Frank Abagnale Jr, who did everything from impersonating an airline pilot to leading the FBI on an international chase, all while he was between 15 and 21 years old. Given the logic explained to me through several discussions around software sales ethics over the years, this title seemed quite appropriate. In this article, I will cover the similarity between the concept of “catch me if you can” and another phrase, or line of logic, that is commonly used to justify lying by those in the enterprise software space.

The Amount of Lying in Enterprise Software Sales

While difficult to quantify, it is quite safe to say that the amount of lying in enterprise software sales is large. Let us review some examples for those who have less experience in the industry.

  1. Exaggerated Success Ratios: Software vendors routinely exaggerate the success ratio of their software. In my debates with software vendors, which are normally initiated by my observing the actual success level of software implementations, the vendors’ most common reply cites their most effective implementations, not their average implementation (which, of course, includes failures).
  2. Multidimensional Lies: SAP lies about the performance of their database, the number of users that are live on nearly all their products (see the article How SAP Controls Perceptions with Customer Numbers), the integration between their applications, the duration of their implementations, and virtually every aspect of their software.
  3. Exaggerating Contributions Versus the Public Domain: Nearly all vendors exaggerate their contributions versus how much they pull from the public domain. This extends to consulting. While working as a subcontractor to a consulting company, I was told to remove references to other forecasting writers so that it would appear the consulting company had created the items I was presenting. While at Accenture, I was told not to tell the client I had obtained an inventory calculation from a well-known book on inventory management.
  4. Pretending to Have More Experienced Resources Than They Do: It is the natural state of affairs that there is a shortage of consulting experience in recently developed applications. Quite obviously, it takes implementations to develop that experience. Yet neither vendors nor consulting companies let this on during the sales process. A perfect example of this from first-hand experience was participating in sales initiatives that included SAP S/4HANA. I found that S/4HANA implementation experience had been added to resumes when it was, in fact, demo experience. When I brought up the problems in experience to the VP of sales, I was told not to worry about it and not to bring it up. I was told that asking this question about implementation readiness was tantamount to “boiling the ocean.”
  5. Cloud Washing: SAP, IBM, and Oracle are renowned for their cloud washing, each of them massively exaggerating their cloud capabilities. Oracle’s use of the term cloud concerning Exadata is so deceptive that it renders the term meaningless.

And this is just a sampling. I will stop there to keep from being accused of beating a dead horse. Hopefully, this illustrates that lying is a big part of enterprise software sales. So now, let us move into the perceptions of others on this exact topic.

Ethics in Software Sales Commentaries

Stefan de Kok published the article Ethics in Software Sales, and I found the following quotation of interest.

“Reality has shown time and time again however, that large vendors trail the best-of-breed vendors in both benefit and value creation in either case by a large margin, and that the largest ones generally fail to provide any value at all. The biggest publicly exposed lie told in marketing and during sales cycles is that large vendors acquired the best-of-breed solution and integrated it into the rest of the suite to make it best of both.”

This is quite true.

The largest software vendors try to move into a position where they compete minimally. Each of the largest vendors has a tail of consultants that will repeat uncritically what they say. They have the largest marketing budgets, which, of course, is a nifty fit for our coin-operated IT media system. Companies often add intangibles to their valuations. The largest vendors can add the ability to lie with impunity as an intangible asset to theirs.

Lying as Part of Capitalism?

Lying has repeatedly been justified to me as part of a functioning capitalist system. It is unclear whether these same proponents believe that there is no lying in other economic systems, as we never seem to get to that point in the discussion. Still, it has been proposed on several occasions that lying is part of capitalism. This is curious, as capitalism is supposedly based upon functioning or efficient markets, and one of the foundational elements of a functioning market is quality information. In fact, this is part of something called the efficient market hypothesis, which is a primary component of modern economics. Modern economists, who have been greatly influenced by elite interests, enjoy promoting efficient markets because doing so minimizes regulation. However, food labels, for instance, would not have occurred without government regulation (food companies opposed them, stating that consumers did not have a right to know the contents of food and that labeling would impose onerous overhead upon them). Things like transparent pricing are either a function of government regulation or of competition.

To argue that lying is a natural part of capitalism, the proponents of lying need to demonstrate that it improves market efficiency. If it reduces the efficiency of markets, then lying is maladaptive.

Imagine a person visiting a farmers market who comes back with orange juice past its expiry date and a bag of rotten melons; that is not a functioning market. The concept of markets is that informed consumers have access to quality information sufficient for their decisions to actually mean something: they don’t waste effort ending up with the wrong thing, and the demand signal is directed to the most efficient supplier. However, if the suppliers, or a large share of them, are lying, how does that support market efficiency?

Far from promoting market efficiency, lying about software leads to ill-fitting software being used and non-functional software being purchased. We routinely are informed of projects that kick off that we know will fail before they begin. They are destined to fail because either the software selected is of inferior quality, or the application was released too early and is too immature to be implemented. But this information does not get to customers because they are often “double-teamed” by the vendor and the consulting company. 

For example, we called out SAP SPP years ago as a product that could not be implemented, in the article Why SAP SPP Continues to Have Implementation Problems. This conclusion came from the first-hand experience of testing the software. Yet SPP was implemented in any case. Every SPP implementation has bombed and has either already been written off or will be written off. All of this was avoidable.

Probably the single easiest way to increase the success ratio of IT projects is to stop selecting applications that can’t be taken live or are so weak they will never be viable products.

Lying as Part of the Stock Market?

Another argument proposed to me is that executives are under great pressure from the stock market; therefore, to attain their bonuses, they are in effect forced to lie and forced to pressure their salespeople into lying. This is an interesting defense of lying, which amounts to:

“Wall Street made me lie.”

The existence of public stock markets has all manner of negative externalities, including pushing companies to focus on short-term results and the fact that executive compensation is in many cases off the books (salaries are not fully declared; instead, stock options are granted, which has the effect of understating executive compensation). Now, it appears, we can add the promotion of lying to the other ills of the stock market.

Lying as a Function of Too Many Salespeople?

Ahmed Azmi brings up the following sobering point about sales staffing.

“Some studies show the average tenure in sales now is 18 to 14 months. Less than 50% make their quota. The sales model has been broken for decades. Compensation drives unethical behavior and mad target setting by territory “leaders” accentuates unethical behavior.

What’s worse is the destruction of value. In the midst of this mad rat race, sales reps have no time to learn anything. I have been in this industry for 25 years and never seen anything like what’s happening nowadays. Sales reps reading from scripts and mindlessly repeating marketing collateral. Ask a basic question and you discover they have not seen the software they’re selling.”

This is partly the fault of the individual salespeople but is as much (in my view) the responsibility of the vendor’s leadership. In the same way that mortgage quality can determine default rates, too many salespeople chasing too few leads reduces the average sale quality. And there are quite obviously too many people working in software sales, which leads to salespeople closing marginal deals. The idea of “qualifying out” only works if you can afford to give up the deal and keep your job as a rep.

Too often, marginal prospects are kept in the ridiculously low-accuracy CRM pipeline because removing them would put the salesperson’s job in jeopardy. Vendors hire too many salespeople relative to their quality leads, and then complain about the poor quality of their CRM information.

Ahh…The Get Out of Jail Free Card of Caveat Emptor

Customers should be aware that many vendors see lying as justified to attain sales goals. The logic goes that in a properly functioning capitalist system, it is the provider’s role to make any statement that they see fit to make, and it is the buyer’s responsibility to catch them. This is the “catch me if you can” version of the argument more generally known as “caveat emptor,” which is Latin for “let the buyer beware.”

One sales rep we worked with said that she needed to engage in the shenanigans because I did not understand how difficult customers can be. This is essentially the Oracle defense. Oracle’s sales reps have a long history of seeing customers as their opponents. The internal culture is to cut your customer; the idea is that you have to cut them before they cut you. This philosophy flows from the top of Oracle and is spread throughout the Oracle sales organization. And it is tough to take seriously anything said by a company that makes a 94% margin on its support. The entire model is highly elitist, particularly when one looks at where most of the compensation at Oracle goes: to sales, marketing, and executives, as Oracle is well known to allocate its resources to these specific groups. So customers have to be lied to so that these groups can get wealthy…or wealthier. Larry Ellison has stated on several occasions that Oracle does not care much whether customers use the software they purchase, but Oracle does want to be paid for the software one way or another. This culture is why we score Oracle so low on the accuracy of information provided to customers. It is also why many Oracle salespeople complain about their customers not responding when they reach out.

There is something critical to understanding the caveat emptor argument: it is only used in private. That is, of the many sales engagements I have been part of, I never recall any of them beginning with the statement:

“A lot of what we tell you today will be lies. But some of it will be true. So caveat emptor.”

Just imagine how long that meeting would last. In fact, the presentation is quite the contrary. Both the software vendor and the consulting company talk about being..

“trusted advisors”

..and how they are..

“100% focused on the client”

..and so on. The vendors employ salespeople who are experts at creating superficial human bonds as quickly as possible. One salesperson I knew used the pictures on the desk in a prospect’s office to create those bonds. If the prospect had a picture of their daughter in a softball uniform, then the prospect loved softball, perhaps played softball, and that became the way in. I became impatient one day when a salesperson I accompanied to an account was triggered by a photo of the prospect’s horse to talk about the “beautiful pony” that she grew up with and how much “she loved that pony.” So not only do software vendors not say..

“Watch out, we are going to lie, and you have to catch us.”

Caveat Emptor or Best Friends Forever?

Instead, the salespeople go beyond proposing that the vendor will tell them the truth and move into the fiction that the salesperson and the prospect will become friends.

Salespeople often have memberships to a sailing club or other such upper-end and desirable hobbies. A generalized feeling is given that the salesperson and the prospect will forge a friendship that will lift the prospect’s lifestyle. In short, everything possible is done to get the prospect to lower their guard and to trust the vendor. I find it quite disreputable to then claim that the environment is “caveat emptor.”

In fact, one is led to ask: which is it? Is it a warm, fuzzy, trusting connection between the sales rep and the prospect, or a brutal festival of lying where the prospect has to fact-check everything said?

The answer, of course, is “both.” When the sales rep wants to make a sale, it is all about trust and relationships, but when lies must be justified after they have been exposed, it is necessary to switch to “caveat emptor.”

Conclusion

When the HealthCare.gov site did not allow individuals to sign up, that was a clear sign of failure. However, this really meant that the project had missed its deadline; it did not mean that the website would not eventually be a success (after I initially wrote this, it was widely reported that the website’s performance had been substantially improved). On the other hand, if the site had been functional, allowing people to sign up, but had resulted in worse or more expensive coverage than they had been able to access previously, then this would have been a more serious failure, and one more difficult to determine, because it would have required analyzing the output. HealthCare.gov did in fact result in higher premiums for many people who bought their own insurance, but this was not related to the logic of the site (once it finally became available); rather, it was because the regulations of the Affordable Care Act essentially raised the bar on what type of insurance could be purchased, placing restrictions on what were clear abuses by insurance companies. This topic brings up a litany of complexities and differing viewpoints on price changes, which I believe makes the point regarding the complexity involved in measuring success. Clearly, a website that provides no output at all is easier to recognize as not a success.

Lying is a big part of enterprise software sales. And the larger software vendors have a competitive advantage in lying, because their lies are endorsed by a long tail of consultants, and because, with more coins to place into IT media entities, they have a superior ability to get their lies out and less concern that they will be called out on them.

The caveat emptor argument for software sales falls on its face twice over: it conflicts with the capitalist need for quality information to support efficient markets, and it is the absolute height of hypocrisy, considering that sales reps spend so much effort building interpersonal bonds designed to get prospects to lower their guard and to not spend resources fact-checking them.