How to Understand the Brightwork Risk Component Model

Executive Summary

  • We cover the risk model we use to assign risk to applications.
  • This risk model is made up of components that are combined to create an overall score.

Introduction

To manage risk, it is important to define the most important components of risk that an enterprise software implementation is likely to face and then focus one’s attention on these components. It is also important to properly define “risk.” This is not always as straightforward as it may appear, as is explained in the following quotation.

If a program is behind schedule on releases of engineering drawings to the fabricator, this is not a risk, it is an issue that has already emerged and needs to be resolved and might be complicated.  Other examples of issues related to failure of components under test or analysis show a design shortfall. This practice tends to mask true risks, and it serves to track rather than resolve or mitigate risk. -Software Risk Management

I should say, I find the way risk is discussed on projects to be nonsensical. Often the company will go through a list of open issues, list the risks, and list the risk-mitigating approach for each. However, the risks listed tend to be quite tactical. Typical risks on a list like this are “The development team will not finish in time” or “Not hearing back on a workaround from the software vendor.” The entire exercise makes it appear as if the project team is mitigating the most important risks. Meanwhile, the risks to the project were, for the most part, already determined by the executive decision-makers.

Listening to the Wrong Advisors

The risks of listening to the wrong advisor, buying from the wrong vendor, or using the wrong consulting company have the greatest impact on the project’s overall risk, as they essentially define the severity of the risks that will follow. I have never seen “the executives have not touched enterprise software in 15 years and have no idea what they are buying” or “the consulting company we selected and are relying upon for objective advice is not a fiduciary and has no legal requirement to place the client’s interests ahead of its own” listed as a risk on any of these lists. I worked at one client transitioning to a suite of software that was completely inappropriate for them and would never provide an improvement in their operations over what they already had. Several posters placed around their office made promises of efficiency improvements from the software’s implementation that they had no hope of ever realizing.

The Unexamined Role of Advisors in Project Risk

Implementing companies do not make enterprise software decisions alone – instead, they rely heavily on advisors – and these advisors are generally considered to be the experts in all areas related to enterprise software implementation. Most books will not address this issue because many authors don’t like controversy, and the advisors happen to be very influential and prestigious. Reducing the risk of software implementations and increasing the probability of success means doing some non-conformist thinking. I will provide all the information in this book; all you have to do as the reader is consider the logic of what I have written and be open to the message. If you do, you will be far ahead of your counterparts who are not open to different thinking and are simply following the consensus viewpoint in this area. The consensus viewpoint is conformist, and it is not doing enterprise software implementations any favors. In fact, the entities from which you will receive information in the enterprise software space rely upon the fact that you will think in a conformist manner, which allows them to most effectively control your thinking and guide your decision-making in a manner that is optimal for them and suboptimal for you.

Experience with IT Project Entities

I have worked with, and sometimes for, all of the entities that are risk elements in a project. I tell the real story about how these entities work and operate, and about the motivation and orientation behind the information they provide, so that you can reduce the risk on your project. By knowing these entities so well, I can provide insight into how they think, the logic they will present to you, and what to do with this information.

Implementation Risk Management Versus Software Development Risk Management

Much of the academic literature on risk management studies software development rather than software implementation. It’s unclear why this is, but it is overwhelmingly true. Interestingly, there are few quantitative studies on software risk management, with most papers being qualitative studies and anecdotes of practices deemed to manage risk. One of the most influential writers in software risk management is Barry Boehm. Many of the research papers that I reviewed for this book referenced Barry Boehm at some point in the document. However, again, his background is software development rather than implementation. There are similarities between software development projects and software implementation projects. However, there are important differences as well.

The Software Recommended by Major Consulting Companies

This software was recommended by a major consulting company, which never analyzed, and did not much care about, the business requirements of their clients, and which recommended the software they were familiar with implementing. Throughout the project, we had risk meetings where we went over tactical risks of the type I have already mentioned. The project management and executive leadership at the client felt good that they were “mitigating risk.” However, no matter how many tactical risks they mitigated, the project’s outcome had been set when they picked the wrong software and the wrong consulting company. No tactical risk management can overcome poor decision-making on the part of executives. The executives at this company did not understand the software they purchased and were never in any position, from a knowledge perspective, to make a good decision. For example, this company used a single forecasting method (although a highly flexible one) in the system they were migrating from, which they wanted to use in the new system. It took them until quite some time after they purchased the system to realize that this forecasting method did not exist in the new system. They then added the new system’s lack of this (very complex) forecasting method as a project risk! I have to be so bold as to ask – is this actually a risk, or a mistake of software selection that has been magically transformed into a risk?

Risk management starts at the top, and decisions made at the top are the most important to the project’s outcome. However, executive decision-making is seldom the topic of risk management. Instead, the focus tends to be on managing risks once poor decisions have already been made and after the options have narrowed enormously. That is not analytical risk management.

Defining The Components of Risk

No risk component model can account for every risk on every enterprise software project. Furthermore, it does not make sense to focus on every risk; the most important risks deserve the most attention.

At Brightwork Research & Analysis, we rate enterprise software project risk on the following factors. Some of these factors are related to the application and vendor, and others are related to the other parties involved in implementation.[1]

Application Related Risk Categories

  1. Functionality
  2. Implementability
  3. Usability
  4. Maintainability

Vendor Related Risk Categories

  1. Quality of Information Provided
  2. Implementation Capabilities
  3. Support Capabilities
  4. Internal Efficiency
  5. Current Innovation Level

Project and Client Related Risk Categories

  1. The Complexity of Specific Functionality to be Implemented
  2. The Complexity of the Specific Installation (number of languages, number of instances, number of teams supporting instances, etc.)
  3. Preparedness of External Implementing Entities
  4. Preparedness of the Buyer’s Implementation Team

Of these risk categories, the Brightwork Research & Analysis website produces ratings for the Application and Vendor Related Risk Categories as a self-service offering. This serves as a starting point for determining the risk of the entire project. Buyers that use this risk estimation can complete the estimate for the client-specific risks themselves, using it as a beginning point, or can have Brightwork Research & Analysis complete the estimate. The latter includes an interview and an analysis of the rest of the Project and Client Related Risk Categories.

We assign each application a specific combined application and vendor risk, which is simply the likelihood of success versus failure. Assigning a likelihood of failure is actually quite unusual. However, it allows buyers to understand the risk profile of the applications they are evaluating and, of course, to stay away from higher-risk applications. As long as the present method of assuming an identical risk level for all applications and all vendors persists, buyers will continue to buy high-risk applications – and higher-risk applications than they had any idea they were purchasing.

This risk estimate is an aggregated calculation from multiple application and vendor criteria to which we assign values. Our final risk value is not simply a straight average of all the input components but rather a weighted average. We have found some components to be more critical – even among this important group – than other risk factors.
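The weighted-average aggregation described above can be sketched as follows. The component names, the weights, and the 0-to-1 scale are illustrative assumptions for the sketch, not Brightwork’s actual values:

```python
# Sketch of a weighted-average risk aggregation. Component names, weights,
# and the 0-1 scale (0 = low risk, 1 = high risk) are hypothetical.

weights = {
    "functionality": 0.20,
    "implementability": 0.15,
    "usability": 0.10,
    "maintainability": 0.15,
    "info_quality": 0.10,
    "implementation_capabilities": 0.10,
    "support_capabilities": 0.10,
    "internal_efficiency": 0.05,
    "innovation_level": 0.05,
}

def combined_risk(scores: dict[str, float]) -> float:
    """Weighted average of component risk scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[k] * scores[k] for k in weights)

# Example: every component at mid risk yields a combined risk of 0.5.
print(combined_risk({k: 0.5 for k in weights}))
```

Because the weights sum to 1, a component with weight 0.20 moves the combined score twice as much as one with weight 0.10 – which is the sense in which some components are "more critical" than others.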

The descriptions of these items are listed below:

Application Related Risk Categories Descriptions

Functionality

This is how well the application’s functionality has the potential to match the business process, along with the functionality’s reliability. This is a composite score because it includes one score for functionality quality and one for functionality scope.

The Brightwork Research & Analysis website explains how well each application scores on each subcategory of functionality. While many – particularly larger – software vendors would prefer that people believed functionality scope trumped functionality quality, this is not borne out in our studies of actual projects. One of the most important lessons from enterprise software is that just because an application lists functionality in its release notes or marketing literature does not mean that the functionality is equal to the same functionality from other vendors. This sounds completely obvious; in fact, I cringe somewhat when writing it for making such an elementary statement. However, I feel it’s necessary to discuss this point because many companies behave as if functionality between applications is equal.

Determining the application functionality score takes a detailed analysis of the application in terms of the real ability to leverage its functionality. It also means making value judgments as to how frequently the functionality can actually be put into action.

Implementability

Any application can be scored for how easily it can be implemented. Many factors go into this. One factor is master data parameter maintenance; another is how difficult the application is to configure. Implementability is generally not measured, but it is, in fact, specifically measurable.

Some of the lowest-scoring applications regarding implementability are tier 1 ERP systems and BI-heavy applications – and unsurprisingly, they tend to have the longest implementation timelines in enterprise software. Older applications also tend to be less implementable than newer applications. SaaS applications are generally more implementable than those delivered on-premises, which is due to several factors – a significant one being that much of the complexity of setting up and maintaining the application’s hardware is taken care of by the vendor. The more control the software vendor has over the application, the better the implementability, which is why SaaS scores so well in this regard.

Usability

Applications that rank high in usability see users naturally gravitate to them. They require less training and are inherently easier to understand and troubleshoot when things go wrong – even if the complexity of what the application does under the covers is high. Highly usable applications don’t need to be forced on users, as users naturally want to access them to do their jobs more efficiently.

At Brightwork Research & Analysis, we rate applications sometimes by using the applications ourselves – and other times by requesting that the software vendor demonstrate menus and functionality that we then compare against other applications. We do not adjust or normalize the usability factor per software category, as we think this would obscure the comparison. Some software categories simply tend to have higher or lower average usability than others.

Maintainability

This score is related to the implementability score – but looks at longer-range factors. Applications differ drastically in maintainability – and the maintainability of an application greatly affects its total cost of ownership. According to the TCO analysis database at Brightwork Research & Analysis, roughly 60% of the TCO of an application is related to its maintenance costs. It is the highest, yet probably the least emphasized, of all the costs. And while other costs tend to be loaded towards the beginning of the buyer’s interaction with the application, maintenance costs run until the system is decommissioned.
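A rough illustration of this arithmetic: the 60% maintenance share comes from the passage above, while the dollar figures and the assumption that upfront spend makes up the remaining 40% are hypothetical:

```python
# Rough TCO illustration. The 60% maintenance share is from the Brightwork
# TCO discussion; the dollar figure and the 40% upfront share are hypothetical.

maintenance_share = 0.60
upfront_costs = 4_000_000  # hypothetical license + implementation spend

# If upfront costs are the remaining 40% of TCO, total TCO and the
# maintenance portion follow directly:
total_tco = upfront_costs / (1 - maintenance_share)
maintenance_costs = total_tco * maintenance_share

print(f"Total TCO:       ${total_tco:,.0f}")        # $10,000,000
print(f"Maintenance TCO: ${maintenance_costs:,.0f}")  # $6,000,000
```

The point of the sketch is that a purchase which looks like a $4M decision is, under this maintenance share, really a $10M decision – most of it paid out after go-live.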

Vendor Related Risk Categories Descriptions

The vendor-related risk categories rate the vendor across the criteria that we consider the most important for choosing a software vendor. Enterprise software purchases create a long-term relationship between the buyer and the vendor – and vendor quality should be a heavily weighted criterion.

Generally speaking, application quality and vendor quality are strongly related to one another – but not always. One of the most common situations where the application scores high but the vendor scores low involves acquired applications. The most common situation where the application scores low but the software vendor scores high is when the application is not within the core of what that vendor does.

The Quality of Information Provided

Enterprise software vendors differ greatly in the quality of information they provide to customers. Factors that influence the quality of information provided by the sales part of the software vendor include the sales approach, how the software vendor motivates and compensates salespeople, how well their salespeople know the application, and how saturated their market is relative to the number of resources they deploy into sales, among other factors. The quality of the information in terms of documentation depends on the software vendor’s emphasis on developing everything from their user manuals to their marketing literature. The best compliment we can pay to a software vendor is when they are either a thought leader in their space or make a genuine effort to educate with their documents. Another factor influencing the quality of information score is how clear the messaging is from the software vendor. Some vendors can simply explain what their application does and how it can be used better than others. At Brightwork Research & Analysis, an overall score is produced for the quality of the software vendor’s information.

Implementation Capabilities

The large software vendors tend to outsource most of their consulting to the major consulting companies in order to be recommended by them. Therefore, the role of the implementation consultants at the large software vendors becomes to partially support the major consulting company’s implementation resources and to provide value to the end client.

Smaller software vendors tend to staff much more of the overall external implementation team. Many factors work into how effective the software vendor’s implementation capabilities are, including how long the consultants have worked for the software vendor, their motivation, and the software vendor’s internal fairness concerning how they treat their employees.

Another factor, which is frequently overlooked, is how much authority consulting actually has versus the sales group. In many software vendors, the sales division is far too powerful relative to consulting/implementation. This is the kind of statement that will immediately get me accused of bias by experienced salespeople – as I am an implementation resource myself and have never worked in sales. However, if this is the case at a software vendor, it means that information provided by the consulting arm will be censored. This is done to stay in line with earlier false information provided by sales that was used to close the sale. While this may help the software vendor get the sale, it’s hard to see how any logic could be proposed that this is good for the buyer. As I often say – as much as we try to prevent its occurrence, eventually, reality happens.

This topic is covered in the following article, What is an SAP Platinum Consultant? 

Support Capabilities

Obviously, support is a critical measurement of any software vendor. However, the only place we see this measured is at crowdsourcing sites like G2 Crowd.

After the implementation is live, vendor support is the horse’s mouth on the software. Support has resources with many years of experience in the application, and the buyer will be going to the source when its internally trained resources cannot figure out the answer.

It should be understood that poorly designed software cannot be overcome with effective support. Poorly designed software is a losing situation even if a great deal is invested in support, because that support is expensive to supply. This is why software selection using the criterion of maintainability is so important. When an application is well designed, the vendor’s support personnel can figure out what is wrong more quickly.

Internal Efficiency

There are enormous differences between software vendors in terms of their internal efficiency, which is generally inversely related to their bureaucracy level. I have been in many software companies, and the feeling is very different depending on the particular vendor. Smaller and highly innovative software vendors are fun places to work, and even across different software categories, they have a similar feel to them. Meetings tend to be kept to a minimum – except for the senior members – and tend to have an informal feel about them. Larger software vendors have lost much of this culture – and in fact employ more conservative individuals (more conservative individuals tend to be attracted to their stability). The conversations tend to orbit much more around business than around software.

It is no big secret that as companies enlarge, they become more bureaucratic, and their efficiency goes down. They make up for this with market power. However, market power really only helps with marketing and gaining acceptance for an application, not with things that actually support development or implementations. Mega-vendors like SAP and Microsoft have extreme difficulty innovating – and I consider them more as marketing entities and stewards of software that was originally developed or purchased some time ago than as originators of anything new.

However, while there is a strong correlation between vendor size and bureaucracy, bureaucracy is not identical for all vendors of similar size. Some small vendors have a shocking amount of bureaucracy.

It is generally not discussed – but really should be – that the software vendor’s bureaucracy imposes a high cost on its customers. Once a buyer implements an enterprise software application, they are quite dependent upon the software vendor for support, upgrades, training, etc. When a software vendor tends towards more bureaucracy, questions take longer to get answered, requests get lost, it becomes increasingly doubtful who can actually make important decisions on topics, and politics ends up determining what answers are received rather than what is technically true or false. I have experience working with both high- and low-efficiency software vendors, and the differences are stark. For this reason, I consider the bureaucracy level of software vendors to be one of the most underestimated risks and costs when choosing among vendors during software selection. Interestingly, I have never once seen bureaucracy listed as a criterion in any software selection exercise by any major consulting company – perhaps because they rate very highly in bureaucracy themselves.

Current Innovation Level

All software vendors go through a lifecycle where they are small and innovative and then tend to calcify and become more marketing- and financially-driven entities while their development productivity drops significantly. At the end of their lifecycle, they may do almost no innovation and spend most of their energies on marketing, acquisitions, and chasing their tails in bureaucracy and management intrigue. Therefore, determining a software vendor’s current innovation level is important for corporate buyers because enterprise software is a long-term commitment. Even SaaS application purchases, which hypothetically can be canceled within a month, still carry significant lock-in and costs for transferring to a new application, related to retraining, data migration, becoming comfortable with a new software vendor, etc.

A buyer will typically use any enterprise application for at least seven years, and the buyer will normally upgrade throughout the lifetime of the application’s usage. Therefore, the buyer is buying the software in both its present state and its future state. This rating provides the buyer with an idea of the future potential of the software vendor’s applications. We do not consider the historical level of innovation of the software vendor because this cannot be used to project the future. As stated, software vendors go through a lifecycle of innovation, where they are very innovative in the beginning and become less and less innovative over time. The software vendor’s innovation level in previous years is not relevant to its predicted future innovation level.

Project and Client Related Risk Categories Description

The Complexity of Specific Functionality to be Implemented

Applications in every enterprise software category can be implemented using any portion of their functionality. Accessing more advanced areas of functionality increases project risk along at least three dimensions. First, the functionality itself is more complex, which tends to mean less reliability. Second, complex functionality can stretch the skill level of the consultants. Third, complex functionality can be more difficult for users and the buyer’s decision-makers to understand.

The Preparedness of External Implementing Entities

It isn’t easy to obtain good consulting support outside of software vendors. The major consulting companies generally only have resources trained in the software from the largest vendors – which is why they continually recommend this software. Keeping a bench of consultants trained on five of the major applications in any one software category would be challenging, and the likelihood that a resource would be available when the opportunity presents itself would be low. For this reason, the large consulting companies prefer as few applications as possible – and typically only specialize in a few brands, normally the same brands across the various software categories.

One easy way to improve the implementing team’s overall preparedness is to add more consultants from the software vendor versus from consulting entities. Consultants from software vendors add far more value on implementations than consultants from consulting companies – at least on average. Many projects have run into problems because the consulting company partner was overly focused on maximizing billing hours, which they did by replacing better-qualified vendor consultants with their own consultants – leaving the project without the skills to work effectively.

Software vendors have the advantage of providing value both because they know the software better and because they are typically less focused on maximizing billing hours, as they make more of their money from software sales. They have a higher incentive than any consulting company to get the software live (so they can use the client as a reference account for future software sales). This is doubly advantageous because, in addition to charging a lower billing rate, software vendors do not have the incentive to stretch out the project as consulting companies do.[2]

The Preparedness of the Buyer’s Implementation Team

The buyer must assign the right resources, and for the right amount of time. One of the biggest issues concerning the internal team arises when internal resources are assigned to the software implementation but still carry some of their normal job responsibilities.

As was stated earlier, these are not all of the risk factors, but we consider them the most important ones. For example, notice that we have no category for “The Preparedness of the Buyer’s IT.” This is because, typically, the buyer’s IT department will install the software, patch software, acquire the hardware, etc. Because almost all IT departments can do this, we don’t weight this highly. However, if the IT department is actually performing the implementation – which occasionally happens if the buyer has hired IT personnel with the necessary implementation experience – then this moves over to the category “The Preparedness of External Implementing Entities,” in that the IT department is now acting as the consulting entity. In that case, its preparedness must be evaluated.[3]

[1] This website can be found at https://www.brightworkresearch.com/softwaredecisions/

[2] However, sometimes the consulting company will state that without a certain number of its own consultants, it can’t “guarantee the project.” I recall IBM once saying this – basically a mini tantrum over not getting its way on a project. However, one should recognize that consulting companies provide no guarantees. Check the consulting contract; the consulting company is only obligated to provide consulting services in some good-faith capacity. Essentially, unless something outrageous happens, the consulting company has no real legal liability. Also, any project complaint from a senior member of any major consulting company on any topic can be neatly translated into “This does not maximize my billing hours and allow me to meet my quota.”

[3] While this is just an example, it should be stated that we have not seen this scenario work very well. IT departments tend to be better at maintenance than at new application implementation. This approach can be driven by a desire to save money but will often result in poor skills. At one client we consulted with, they hired an experienced consultant who had related skills but no experience with the application. This consultant took lower compensation as a full-time employee, implemented the software very poorly, used the experience to build his resume, and then left the company before they even went live. He did this because the implementation had zero chance of success – he had no idea what he was doing – and he needed to leave before it all came crashing down. This was an example of being penny-wise but pound-foolish in resourcing the project. One of the reasons consultants are paid more is that they can offer the exact skill desired (and can be quickly replaced if they do not fit the bill). However, independent consultants offer far better value and are a better way to reduce sticker shock – though they must be found directly through sources like LinkedIn. If one goes through a recruiter, there will be some cost savings compared to obtaining consultants from a consulting company, but not very much.