The Hidden Issue with the SD HANA Benchmarks

Executive Summary

  • SAP has published SD benchmark results for other databases for years.
  • Four years into S/4HANA, SAP has published no SD benchmarks for S/4HANA or for ECC running on HANA.

Introduction

SAP has used the SD benchmark for years to test transaction processing performance, but something peculiar happened when SAP released HANA and it came time to benchmark S/4HANA.

In this article, we will cover the hidden issues with SD benchmarks.

The SD Benchmark for HANA

Here are the SD benchmarks listed on the SAP website for HANA.

This looks like an impressive list of benchmarks across different hardware, operating systems, and database releases. At the time we reviewed this data, we counted 1,066 benchmarks.

The Databases

Now notice the databases that were used for the benchmarks.

Each benchmark is typically run against a single database version or a variant of a version, and there are many versions and variants. For instance, there is a separate benchmark for Oracle 10g and for Oracle 10g with Real Application Clusters.

The databases are as follows.

  1. DB2 (41 Benchmarks)
  2. Adabas (1 Benchmark)
  3. SQL Server (17 Benchmarks)
  4. SAP ASE (4 Benchmarks)
  5. Informix (7 Benchmarks)
  6. MaxDB (4 Benchmarks)
  7. SAP DB (4 Benchmarks)
  8. Oracle (26 Benchmarks)

The Operating Systems

  1. IBM AIX
  2. Red Hat Enterprise
  3. IBM OS
  4. Solaris
  5. SUSE Linux
  6. Windows (2000, .NET, NT, Enterprise Server, Enterprise Edition, etc.)

The operating system with the most benchmarks was Windows Enterprise Server 2003, with 135.

ECC/R/3 Versions

The benchmarks span 52 different versions of ECC/R/3. The most frequently benchmarked versions were the following.

  1. ECC EHP 5 for ERP 6.0 (242 Benchmarks)
  2. R/3 4.6C (124 Benchmarks)
  3. SAP EHP 4 for ERP 6.0 (118 Benchmarks)

Hardware Environment

  1. Bare Metal (984 Benchmarks)
  2. Cloud (44 Benchmarks)
  3. Virtualized (36 Benchmarks)

This means that roughly 92% of the benchmarks, the overwhelming bulk, were run on bare metal and on premises.
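As a quick sanity check on that figure, here is a minimal sketch in Python using the counts listed above (note that the three categories sum to 1,064, slightly short of the 1,066 total we counted, presumably because a few entries were uncategorized):

    # Hardware-environment counts as listed above
    counts = {"bare metal": 984, "cloud": 44, "virtualized": 36}

    classified = sum(counts.values())  # 1,064 benchmarks with a classification
    total_reviewed = 1066              # total SD benchmarks in our review

    for env, n in counts.items():
        print(f"{env}: {n} ({n / classified:.1%} of classified)")
    # bare metal: 984 (92.5% of classified)

    print(f"bare metal share of all reviewed: {counts['bare metal'] / total_reviewed:.1%}")
    # bare metal share of all reviewed: 92.3%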

Hardware Vendors

The benchmarks used hardware from 28 different hardware vendors, plus AWS (not a hardware vendor but a cloud services provider that uses its own open-specification hardware, as we covered in How the Open Compute Project Reduced the Power of Proprietary Vendors).

The largest number of benchmarks per hardware vendor was as follows:

  1. Dell (86 Benchmarks)
  2. Fujitsu (57 Benchmarks)
  3. Fujitsu Siemens (106 Benchmarks)
  4. Hitachi (49 Benchmarks)
  5. HP/HPE (209 Benchmarks)
  6. IBM (201 Benchmarks)
  7. NEC (48 Benchmarks)
  8. Sun Microsystems (64 Benchmarks)
  9. Cisco Systems (37 Benchmarks)

AWS had only 14 benchmarks, but this list of SD benchmarks stretches back years: the first benchmark was performed in 1995, and the list runs all the way up to 2019. AWS did not get its first published benchmark until 2013.

As an example from recent years, 32 benchmarks were performed in 2018.

The Natural Question That Arises

S/4HANA was introduced in 2015 and recently passed its fourth anniversary; HANA is approaching its eighth. Within that context, let us take note of the following facts about SAP’s benchmarking.

  • There is not a single benchmark for S/4HANA SD.
  • There is not a single benchmark for HANA for any ERP or other transaction processing system.
  • The only HANA benchmarks SAP has published are for BW (both the BWH and the BWAML benchmarks).

An SD benchmark on HANA could have been run without S/4HANA, as HANA can run under ECC in a configuration called Suite on HANA. However, this setup was never benchmarked either.

Conclusion

SAP made enormous claims for both HANA and S/4HANA. However, it has published exactly zero transaction processing benchmarks for HANA with ECC or for HANA with S/4HANA. The only published HANA benchmark that included a comparison was performed by Lenovo, and its problems are covered in the article The Problems with the Strange Lenovo HANA Benchmark.

Information reported to us from the field, which we covered in the article HANA as a Mismatch for S/4HANA and ERP, indicates that HANA is a weak performer in transaction processing. Our information shows that HANA underperforms the previous databases used for ECC at transaction processing, which is the dominant processing type in ERP systems.

It seems quite likely that SAP has published no ERP benchmarks for HANA because they would show that SAP’s statements about mastering both OLTP and OLAP with one database are false.

Brightwork Disclosure

Financial Bias Disclosure

No article on the Brightwork website, including this one, is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.

References

https://www.sap.com/dmc/exp/2018-benchmark-directory/#/sd

https://www.sap.com/about/benchmark/measuring.html

“SAP Application Performance Standard (SAPS) is a hardware-independent unit of measurement that describes the performance of a system configuration in the SAP environment. It is derived from the Sales and Distribution (SD) benchmark, where 100 SAPS is defined as 2,000 fully business processed order line items per hour.

In technical terms, this throughput is achieved by processing 6,000 dialog steps (screen changes), 2,000 postings per hour in the SD Benchmark, or 2,400 SAP transactions.

In the SD benchmark, fully business processed means the full business process of an order line item: creating the order, creating a delivery note for the order, displaying the order, changing the delivery, posting a goods issue, listing orders, and creating an invoice.”
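As a hedged worked example of this definition (the throughput figure below is invented for illustration, not taken from any published result), the conversion from measured throughput to a SAPS rating is straightforward:

    # Ratios from the SAPS definition quoted above: per 100 SAPS, per hour.
    ITEMS_PER_100_SAPS = 2000         # fully business processed order line items
    DIALOG_STEPS_PER_100_SAPS = 6000  # screen changes
    TRANSACTIONS_PER_100_SAPS = 2400  # SAP transactions

    def saps_from_line_items(items_per_hour: float) -> float:
        """Convert measured order-line-item throughput into a SAPS rating."""
        return items_per_hour / ITEMS_PER_100_SAPS * 100

    # Invented example: a system fully processing 300,000 line items per hour
    rating = saps_from_line_items(300_000)
    print(rating)  # 15000.0 SAPS

    # The same rating expressed in the other units of the definition:
    print(rating / 100 * DIALOG_STEPS_PER_100_SAPS)  # 900000.0 dialog steps per hour
    print(rating / 100 * TRANSACTIONS_PER_100_SAPS)  # 360000.0 SAP transactions per hour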

https://www.sap.com/about/benchmark.html

AWS and Google Cloud Book

How to Leverage AWS and Google Cloud for SAP and Oracle Environments

Interested in how to use AWS and Google Cloud for on-premises environments, and why this is one of the primary ways to obtain more value from SAP and Oracle? See the link for an explanation of the book, which provides an overview that no one interested in the cloud for SAP and Oracle should go without reading.

The Four Hidden Issues with SAP’s HANA Analytics Benchmark

Executive Summary

  • SAP developed a new benchmark to make HANA look good.
  • We cover the problems with creating a benchmark for BW.

Introduction

After HANA was released in 2011, SAP created the BW-EML benchmark (since renamed the BWAML) and the BWH benchmark. Both of these benchmarks were for SAP’s BW application. SAP has published no HANA benchmarks for any other SAP application since 2011.

In this article, we will cover the hidden issues with SAP’s HANA benchmarks.

The Setup of the BW Benchmark for HANA

SAP describes the BW-EML benchmark as follows.

“To ensure that the database can efficiently use both InfoCubes and DataStore Objects (DSOs) for reporting, the data model for the BW-EML benchmark consists of three InfoCubes and seven DSOs, each of which contain the data produced in one specific year. The three InfoCubes contain the same data (from the last three years) as the corresponding DSOs. Both object types include the same set of fields. The InfoCubes include a full set of 16 dimensions, which comprise 63 characteristics, with cardinalities of up to 1 million values and one complex hierarchy. To simulate typical customer data models, the InfoCube is made up of 30 key figures, including those that require exception aggregation. In the data model of the DSOs, the high-cardinality characteristics are defined as key members, while other characteristics are modeled as part of the data members.”
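To make the layout described in that quote easier to picture, here is a minimal sketch of our reading of it in Python (the year values are invented; the benchmark description names none):

    # Our reading of the BW-EML data model, not an official SAP artifact:
    # seven DSOs hold one year of data each, and the three most recent
    # years are duplicated into three InfoCubes.
    years = list(range(2005, 2012))  # illustrative years only

    dsos = {f"DSO_{y}": {"year": y} for y in years}
    infocubes = {
        f"CUBE_{y}": {"year": y, "dimensions": 16,
                      "characteristics": 63, "key_figures": 30}
        for y in years[-3:]  # same data as the corresponding DSOs
    }

    # A benchmark query can be answered from either object type, which is
    # what "efficiently use both InfoCubes and DSOs" is testing.
    cube_years = {c["year"] for c in infocubes.values()}
    dso_years = {d["year"] for d in dsos.values()}
    assert cube_years <= dso_years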

The first problem with this benchmark is what is unsaid. This is brought up by Oracle.

“SAP is now promoting HANA as the database of choice for their applications and clearly has a conflict of interest when it comes to certifying benchmark results that show better performance than HANA. Of the 28 SAP standard application benchmarks, SAP has chosen to only publish results for HANA on the BW-EML benchmark (emphasis added).”

Hidden Issue #1: How About the Missing Benchmarks?

SAP simply does not mention that there are missing benchmarks, and after all the exaggerations on HANA, SAP has chosen to publish just one benchmark.

Why?

It is the one benchmark in which they can get HANA to perform well. SAP clearly has a policy of hiding any benchmark in which HANA cannot perform well, which is why the entity running a benchmark should never have a horse in the race.

Hidden Issue #2: SAP Crowning HANA, i.e. Contestant + Judge = Unbiased Outcomes?

Yes, this should go without saying, but you cannot be both a contestant and a judge.

What would happen if, say, Miss Hawaii were also the only judge in a beauty pageant? Who, under those circumstances, would be most likely to win? Is there perhaps some reason we don’t allow competitors to also judge competitions? Clearly, this requires much research with the best minds working on it.

Yet note that SAP has a different view.

“To help the market easily and quickly make these judgments, SAP offers standard application benchmarks. When used consistently, these benchmarks provide impartial, measurement-based ratings of standard SAP applications in different configurations with regard to operating system, database, or hardware, for example. Decision makers trust these benchmarks to provide unbiased information about product performance.”

A Problem With Translating the Word “Unbiased” into German?

Interesting. SAP might want to look up the term “unbiased” in the dictionary, as it is apparently not translating properly into German. Either that, or SAP is saying something quite inaccurate in this quote. I looked up “unbiased” in Google Translate and came up with the German word.

“Unvoreingenommen”

I then found these synonyms in the German-English dictionary.

“dispassionately {adv} [impartially]
impartial {adj}
candid {adj}
dispassionate {adj}
unprejudiced {adj}
detached {adj} [impartial]
impartially {adv}
nonpartisan {adj}
unbiassed {adj} [spv., especially Br.]
unjaundiced {adj}
fair-minded {adj}
open-minded {adj}
without bias {adj}”

So translation does not seem to be the problem.

These are just the first of the hidden issues with this benchmark.

But let us get to the next hidden issue: the inconsistency between InfoCubes (cubes) and a column-oriented database.

Hidden Issue #3: Why Are InfoCubes Still Being Used for a Database with Column-Oriented Capabilities?

I have been working on SAP DP projects for over a decade. DP uses the same data administration area as BW, except that DP runs forecasting and has a forecasting front end on top of the data backend. HANA is supposed to eliminate the need for cubes, as cubes are aggregation devices used to improve performance on a row-oriented database.

But in the BW-EML benchmark cubes are still used, as we can see from the quote above.

Why?

Because companies don’t want to decompose the cubes they already built for the pre-column-oriented design? Quite possibly, yes: companies will keep using the cubes they built for many years. In fact, much of BW is made obsolete by putting it on top of a database with column-oriented capabilities.

Nowhere in the BW-EML benchmark materials is it pointed out that a primary benefit of a column-oriented design is the obsolescence of cubes.
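To make the architectural point concrete, here is a minimal sketch (ours, not SAP’s; the table and sizes are invented for illustration) of why a column store weakens the case for pre-built aggregates such as cubes:

    # Illustrative only: a tiny "sales" table stored two ways.
    rows = [
        {"order_id": i, "region": "EMEA" if i % 2 else "APJ", "revenue": float(i)}
        for i in range(100_000)
    ]

    # Row orientation: aggregating one measure still walks every whole row,
    # which is why row-store data warehouses pre-compute aggregates (cubes).
    total_row_store = sum(r["revenue"] for r in rows)

    # Column orientation: each field is a contiguous array, so an aggregate
    # reads only the one column it needs; no pre-built cube is required.
    revenue_column = [r["revenue"] for r in rows]  # built once, at load time
    total_column_store = sum(revenue_column)

    assert total_row_store == total_column_store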

Hidden Issue #4: The Problem with Benchmarking an Incompetent Application

How important is such benchmarking on BW in the first place? I ask because I perform forecast testing for full production data sets for clients on a laptop.

I have a best-of-breed forecasting application that handles hierarchies far better than DP; I can do things on the laptop with my inexpensive application that no customer I have ever seen can do in DP. Neither DP nor other forecasting applications perform the type of forecast error measurement we want, so we created the Brightwork Explorer, which we cover in How to Access Monetary Forecast Error Calculation. We put this on AWS and can apply any number of resources to it, making benchmarking studies like the BW-EML of little relevance.

  • The Brightwork “Hardware”: I have a decently powered laptop and it is all that I need to run the forecasting application. In fact, we would have liked to have purchased a more powerful one, but we were under time pressure as we were performing testing and an unfortunate Windows 10 install screwed up our previous laptop for a while. Therefore we went with a reasonably well-powered laptop that was available for purchase at a Costco across the street from our client at the time.
  • Why A Laptop is Just Fine: While I certainly could, I don’t even worry about buying a desktop and I perform repetitive testing with this setup. This means that I perform much more processing than a typical client because they normally do not perform testing but run the forecast on a weekly basis. However, I am performing forecast simulation (that is repeatedly performing forecasting jobs, but without passing them to a receiving system). This means that the load is far higher than the production server receives at my clients.

All of this illustrates the other problem with benchmarking. If the application is incompetently written and highly inefficient in how it manages resources, as DP and BW are, database benchmarking becomes a bit of a lost cause, because BW and DP consume so much of the hardware and database processing capacity while they flail about. With such bad applications, one of the primary answers is simply to apply giant resources to them.

We have not once heard this topic raised, because neither SAP nor Oracle nor IBM has any interest in critiquing the application. Why? Their job is to sell databases to support the SAP application; the quality of the SAP application’s code is irrelevant to the message they want to bring across. Customers have already decided to buy an awful application; now the only question is what database and hardware you want to power your awful application.

I am not aware of what tricks the developer of the application I used employed to make everything run so quickly and smoothly and to make such flexible hierarchies possible. All I was told was that they paid special attention to how the star schema was created, which SAP obviously did not, a point confirmed in conversations with other developers familiar with BW and DP.

Oh, and this application was developed by a single developer. That has probably changed by now, as the company has grown since I first used it, but the application I used was built by just one developer. And he ran circles around SAP’s large team of developers.

The BW-EML benchmark has since been renamed the BWAML. There are 17 benchmarks here, and the only database benchmarked is HANA.

The second BW benchmark is called the BWH. There are 52 of these published at SAP’s benchmark site. The same issue applies: the only database benchmarked is HANA. The other database vendors have been excluded from this benchmark.

BW is the only application that SAP has benchmarked HANA for. Both the BWAML and the BWH are BW benchmarks. SAP has refused to benchmark ECC on HANA or S/4HANA on HANA, which we cover in the article The Hidden Issue with the SD HANA Benchmarks.

Conclusion

Benchmarking can’t be interpreted in a vacuum, but it normally is. The issues specific to the BW-EML benchmark that we pointed out in this article are the following:

  • BW and DP are extremely poorly designed data warehouses (DP’s backend is BW) that consume large amounts of computing resources.
  • Many decision-makers may read this benchmark without considering the fact that BW and DP are both inefficient resource consumers. If a more efficient data application were used, the database and hardware would not have to be so overpowered.
  • In testing against far less expensive applications, BW and DP lose, even when given far more resources to work with. Again, my comparisons used a consumer-grade but reasonably powerful laptop, which beat a server that SAP told my clients they needed to buy. The Brightwork “hardware” for forecast testing fits in a bag.
  • SAP serves as both a contestant and a judge in its own benchmarks, where HANA is set up as the winner before the competition begins.
  • None of the database vendors competing have any interest in the performance of the application versus other applications. They are there to sell databases.
  • It is highly unlikely that we could get SAP to certify our benchmarking that shows how inefficient BW and DP are versus other similar applications. SAP customers we have had as clients cannot be told that BW and DP are bad applications, so we are required to tiptoe around the issue to not make them feel bad about their poor investments. The primary benchmark in any IT environment is how good the IT department can be made to look. All other benchmarks are secondary to this primary benchmark.

The Broader Issues with Application and Database Benchmarking

There is no independent benchmarking entity for applications or databases in the enterprise software space. (Some might point to the TPC, but it is a benchmark-specification-setting body, not a benchmarking entity.)

  • Each participant runs and publishes benchmarks only to increase sales of their items.
  • Every entity that runs a benchmark ends up, in a rather peculiar way, winning that benchmark (surprise, surprise).
  • Independent benchmarks are also dissuaded. Oracle demanded that an independent benchmarker be fired for publishing a benchmark that showed Oracle performing poorly (the DeWitt case; see the reference below on the DeWitt clause).
  • The commercial database vendors have clauses in their licenses that prevent independent companies from publishing benchmarks.
  • Open source databases do not have these clauses.

Overall, there are multiple dimensions to the presentation of the BW-EML/BWAML benchmark by SAP that hide information from the reader, such as the fact that SAP clearly did not release the benchmarks in which HANA was unable to perform well. HANA was supposed to perform 100,000 times faster than any competing database (McDermott), as we covered in How Accurate Was SAP About HANA Being 100,000x Faster Than Any Other Database. It was supposed to reduce the workday to roughly six seconds (Lucas), as we covered in How Accurate Was SAP About HANA Enabling People to Work 10 to 10,000 Times. Yet when it came to proving these claims, SAP rigged its benchmarks to keep HANA from being compared to any other database. SAP often uses the term “AnyDB.” But perhaps the right explanation of SAP’s behavior is that SAP fears any objective comparison to “AnyDB,” or simply to any DB.

“Coming Up with Solutions… Not Just Problems”

After publishing an article like this, readers sometimes ask that we offer solutions rather than simply analyzing issues that go unpublished elsewhere.

Here the lesson should be straightforward enough.

IT departments should not take the word of SAP or SAP’s consulting ecosystem on the performance or other characteristics of HANA or any other item without evidence. The lesson for any business users reading this article is that the IT departments that purchased and implemented HANA never looked for any evidence that HANA was able to meet the claims made for it. SAP conveniently skirted the issue and rigged its benchmarks specifically to prevent HANA from being compared to any other database. No IT media outlet or IT analyst ever called SAP out for this deception, and no company that purchased HANA ever bothered to check, preferring to base its purchase on the claims of SAP and its compliant consulting ecosystem. If these companies had done their research, it is unlikely they would have gone forward with a purchase of HANA. We say this repeatedly to clients that we advise on SAP: whatever the SAP sales rep says is only a starting point. Everything stated by SAP must be fact-checked. And there is no reason to assume that something SAP says is true.

References

https://blogs.saphana.com/2015/03/19/behind-sap-bw-eml-benchmark/

https://www.springer.com/cda/content/document/cda…/9783319202327-c2.pdf

https://www.itconductor.com/blog/will-hana-dominate-in-sap-performance-over-oracle

https://dam.sap.com/mac/preview/a/67/mnPymWPAmmE7yyyXPglwXXl8OnyEAMlAXggXJlJlUDxlyPUv/41356_GB_40939_enUS.htm

https://www.linkedin.com/pulse/does-truth-matter-in-memory-benchmarks-sap-oracle-kuen-sang-lam/

https://blogs.oracle.com/oraclemagazine/the-undisputed-database-champ

http://www.tpc.org/tpcc/default.asp

https://www.brentozar.com/archive/2018/05/the-dewitt-clause-why-you-rarely-see-database-benchmarks/

https://www.sap.com/about/benchmark.html

The Problems with the Strange Lenovo HANA Benchmark

Executive Summary

  • Lenovo published a benchmark of ECC on AnyDB versus S/4HANA on HANA.
  • We cover the problems with this benchmark.

Introduction

On April 10, 2018, Lenovo published the technical paper Lenovo SAP S/4HANA Scale out – Cycle 1.

This paper included a number of inaccuracies, which should not be surprising, as Lenovo is an SAP partner with a series of HANA-connected products it is trying to sell.

Notice the hardware specification below from the Lenovo paper.

The HANA box has a higher hardware specification, but this is hidden because the AnyDB/ECC server configuration is not called out.

Some natural questions arise.

  • Where is the hardware configuration listing on the AnyDB/ECC server?
  • How is the reader to know if the hardware is comparable? As we pointed out in the article How Much of Performance is HANA?, many of the improvements attributed to HANA are due to the testing entity comparing HANA against older hardware running previous versions of AnyDB.

Either Lenovo does not know how to list the specifications of the box or, more likely, Lenovo is excluding the AnyDB/ECC listings to obscure the fact that the hardware is not remotely comparable. How can this go unnoticed by those who read these studies?

Software Compared

Lenovo seems very interested in hiding which database was used on the ECC side of the comparison. Why the hesitation in identifying it? It was not “AnyDB”; it was a specific database. This again goes to the honesty of the study. Lenovo is intent on publishing benchmarking information in a way that offends the fewest possible parties. This is not an honest way to publish a benchmark, and Lenovo’s covering up of various aspects continues further into the benchmark.

Compression of HANA

The compression claimed in the paper contradicts all of the data points that we have on HANA, which show average compression of 30 to 40%. Of course, this value depends upon how much data is archived, which SAP attributes to compression. SAP has claimed compression of 97.5%, which we covered in Is Hasso Plattner and SAP Correct About Database Aggregates?

This is suspicious because we have never encountered any compression even close to this.
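As a hedged illustration of how far apart these figures are (the 10 TB starting size below is an invented example, not from the Lenovo paper), compare what each compression claim implies:

    # Invented example: what each compression claim implies for a 10 TB database.
    uncompressed_tb = 10.0

    for label, ratio in [("field-reported 30%", 0.30),
                         ("field-reported 40%", 0.40),
                         ("SAP-claimed 97.5%", 0.975)]:
        remaining_tb = uncompressed_tb * (1 - ratio)
        print(f"{label}: {uncompressed_tb} TB -> {remaining_tb:.2f} TB")

    # field-reported 30%: 10.0 TB -> 7.00 TB
    # field-reported 40%: 10.0 TB -> 6.00 TB
    # SAP-claimed 97.5%: 10.0 TB -> 0.25 TB (a 40x reduction vs. roughly 1.4x to 1.7x)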

Data Model Simplification

SAP has made many claims about the database or data model being simplified under S/4HANA, and here Lenovo holds to this line. However, the claim is false: while the data model is simplified in some ways, it is more complex in others, as we covered in the article How Accurate Was SAP About S/4HANA and a Simplified Data Model. Furthermore, there is a great deal of work involved in switching existing ECC customers to the new data model, as we covered in the article Why It Is Important to Pull Forward S/4HANA Code Remediation. This is in part because all of the adapters and customizations have to be adjusted.

This is another claim that calls into question the information that Lenovo is publishing. Moreover, you can’t measure data model complexity by simply counting objects.

Transaction Processing Performance

The lack of a hardware listing for the ECC/AnyDB system has already ruined the study, but let us continue to see what it says.

First, let us look at the transaction processing.

Why are the exact comparisons redacted? Second, how is anyone to know how much of the transaction improvement comes from the HANA box’s higher hardware specification? And SAP promised massive improvements in performance across the board, so why are any of the transactions negative?

The redaction is quite odd, as there is no reasonable explanation for why this should be.

Lenovo probably put its best resources into the benchmark test. Being a hardware vendor, Lenovo has the best hardware money can buy, and its best engineers were working with the best SAP resources, yet the result is that 21% of the existing transactions show slower performance.

Analytics on S/4HANA?

Now, one could construct a scenario in which a large amount of analytics is performed in ECC/S/4HANA. SAP has stated this as its vision: the concept is that all analytics would be performed inside the ERP system.

This is compared against an older version of, most likely, either Oracle or DB2, which did not have multimodel capability (that is, column-oriented storage alongside row-oriented storage). However, why are any of the analytic scores slower than those of the older database versions?

  • Observations from the field show HANA underperforming all of the competitor databases, even SQL Server, even in analytic workloads. The level of improvement shown here, up to 2,846%, reflects the hardware difference between the ECC box and the HANA box.
  • Once again, the actual scores are redacted. What is being hidden here?

Conclusion

The Lenovo study is redacted and rigged to make HANA look good. It is extremely odd to find the hardware spec for the comparison system entirely lacking. Lenovo is apparently hoping that no one reads the study in much detail.

This means that the study cannot be used to say much of anything. The fact that Lenovo had to redact information even after not publishing the ECC/AnyDB hardware specification is another cause for concern.

Lenovo cannot publish an objective study on HANA because it has a financial bias toward promoting HANA.

How likely is it that Lenovo will publish any information that is uncomplimentary toward HANA and would negatively impact its hardware offering for HANA?

References

https://en.resources.lenovo.com/whitepapers/lenovo-sap-s-4hana-scale-out

TCO Book

Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

Why is There No Independent Database Research Entity?

Executive Summary

  • In our analysis, we could find no independent database research entity.
  • We cover what this means for decision making in the database area.

Introduction

We have been one of the most prominent entities researching and publishing on HANA. We have come to question virtually all of SAP’s claims about HANA. Part of this has meant interpreting database benchmark studies, which has led to investigating who performs this type of testing. In this article, we will cover the topic of entities that verify the claims of database vendors.

Checking with Experienced Database Resources

We have had conversations with multiple database resources, that is, people who have focused 100% of their careers on databases for three decades, and the consensus is that there is not a single entity that verifies the claims of database vendors, performs benchmarking, and so on. All benchmarking is performed by the vendors themselves. SAP has a single benchmark that it performed itself, covering only one type of database processing. We analyzed this benchmark in the article What is the Actual Performance of HANA?

The Typical Coverage Available

Examples of entities that provide database coverage include DB-Engines, which tracks the popularity of databases.

Gartner has a Magic Quadrant for databases but does not differentiate the database types in any way, as the following graphic indicates.

Gartner creates a fictitious category called ODMS (operational database management systems), as it is too lazy to analyze the different categories of databases.

It places Hadoop, which is a Big Data database, in the same category as relational databases and every other database type.

Gartner has no lab, does no testing, and has very few people who even understand databases, much less have touched one, as we covered in the article How Gartner Got HANA So Wrong.

Gartner places non-relational databases into a relational database Magic Quadrant and does not even differentiate the database in question from the vendor. Instead, it simply notes the vendor on the Magic Quadrant; the database goes unmentioned.

Conclusion

The database category of software is filled with vendors making all manner of claims, but no entity verifies any of these claims. This is a problem because it means that buyers in the database market have to perform their own testing.

This would mean gaining access to the databases in question and creating a laboratory environment including all the skill sets to do so. Very few companies do this.

Therefore, the ability to verify the claims made by the various database vendors is quite limited.

References

https://www.databasejournal.com/features/oracle/article.php/3462091/Database-Benchmarking.htm

The following is an interesting quote from Database Journal.

“One important concept to take away from this discussion is that there is no singular, all encompassing, definitive test that allows a vendor to claim their system is the best one out there, no ands, ifs or buts. For Oracle, Microsoft, IBM, or Sybase to claim they are the best overall, well, it’s simply not true. A particular system can be the best on a particular platform under certain conditions, but to say a particular system is the best overall is going to make that vendor suspect with respect to credibility.”

https://www.tpc.org/

https://www.quest.com/products/benchmark-factory/

The Risk Estimation Book

Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is a risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project’s risk is a poor way to make decisions about either projects or software. But that’s the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies most companies commonly use to mitigate software project risk, learn why these plans don’t work, and then acquire practical and realistic strategies that will help you maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model