How to Interpret the Performance Problems with MRP on HANA

Executive Summary

  • SAP makes outrageous statements about MRP on HANA.
  • SAP proposes that there will be no batch processing in S/4HANA.
  • Yet results from global S/4HANA implementations show that S/4HANA has problems with both MRP and transaction processing.
  • Gartner and Forrester, who have both been paid by SAP to repeat SAP’s proposals, are silent on this issue.
  • There is no coverage of this in the IT media.

Introduction to MRP on HANA

SAP has proposed that HANA is important for improving MRP performance.

SAP made these proposals some time ago. However, between making those proposals and now, Brightwork has performed extensive research into HANA, so much more is now known.

In this article, we will review SAP’s proposals on MRP performance on HANA with this knowledge, as well as with information from the field on actual MRP performance.

What SAP Says About HANA and MRP

SAP has proposed that with HANA there will be no more batch processing. The following is how they phrased this proposal.

“No More Batch — All Processes in Real-Time”

“…enterprises can dramatically simplify their processes, drive them in real time and change them as needed to gain new efficiencies – no more batch processing is required.”

No Batch Processing in S/4HANA?

SAP proposed that, because of HANA, there would be no latency for any process in S/4HANA. This means that no matter what process the company sets S/4HANA to perform, that process will be completed in real time. If we look at MRP, it is common for MRP to be performed on what is called a “net change” basis. This means that only product location combinations that have had a change since the last run are processed.

The reason for this is to limit time spent processing. It is common for MRP to take several hours, and this job is typically scheduled every weekday evening so as not to disrupt business operations. Then, on the weekend, a second MRP planning run is usually scheduled that performs a complete recalculation of all relevant product location combinations.
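
To make the net change versus regenerative distinction concrete, below is a minimal sketch in Python. The data structure and field names are hypothetical illustrations of the concept, not SAP’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class ProductLocation:
    product: str
    location: str
    changed_since_last_run: bool = False  # set by goods movements, new orders, etc.

def select_for_mrp(product_locations, net_change=True):
    """Return the product location combinations this MRP run will process.

    A net change run (the typical nightly job) plans only the combinations
    flagged as changed; a regenerative run (the typical weekend job) replans
    everything.
    """
    if net_change:
        return [pl for pl in product_locations if pl.changed_since_last_run]
    return list(product_locations)

# Hypothetical example: only 3 of 100,000 combinations changed today, so the
# nightly net change run plans 3 records instead of all 100,000.
combinations = [ProductLocation(f"MAT{i}", "PLANT1") for i in range(100_000)]
for pl in combinations[:3]:
    pl.changed_since_last_run = True

print(len(select_for_mrp(combinations, net_change=True)))   # 3
print(len(select_for_mrp(combinations, net_change=False)))  # 100000
```

This is why the net change run can finish within an evening window while the full regenerative run is pushed to the weekend.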

The Actual Problem with MRP Performance in SAP ECC

After seeing many MRP instances, the only situations in which we have seen MRP runtimes become a problem were those with system setup issues. One issue, for example, is when there are a large number of invalid location product combinations that must be processed.

With HANA, SAP claimed, there would be no latency. We should make sure we realize how large a claim this is. It means that it should be possible to schedule a complete recalculation of every product location combination in the middle of the day, as the calculation would be instantaneous. Two minutes later, an entire MRP run of every product location should again be able to be performed, and once again, the time taken to run MRP should not be measurable. That is the definition of “no latency.”

This is a peculiar claim, and there are several reasons for this.

The first reason is that, given today’s hardware and software capabilities, it is simply not possible unless the model is very small.

The second reason gets into what MRP does versus what HANA is. For most of the batch processes in ERP, with MRP being an excellent example, the bottleneck is the calculation, that is, the processor load, not the database load. So it is not feasible for HANA, which only speeds analytical processing, to do very much to speed MRP or other processor-intensive activities.
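
As a rough illustration of why MRP is processor-bound, here is a sketch of a highly simplified single-level BOM explosion in Python; the function and quantities are hypothetical, not SAP code. Once the planning inputs have been read, what remains is netting and explosion arithmetic per component, which is work for the processor rather than the database.

```python
def explode_and_net(requirement_qty, on_hand, bom_components):
    """Highly simplified MRP step for one item: net the requirement against
    stock, then cascade the shortage to the components in its bill of material.

    The database only supplies the inputs; the netting and explosion loop is
    processor work, which a faster analytical database does little to shorten.
    """
    shortage = max(requirement_qty - on_hand, 0)
    dependent_demand = {component: shortage * qty_per
                        for component, qty_per in bom_components.items()}
    return shortage, dependent_demand

# Hypothetical example: 500 units required, 120 on hand, each unit consuming
# 2 of COMP-A and 4 of COMP-B.
print(explode_and_net(500, 120, {"COMP-A": 2, "COMP-B": 4}))
# (380, {'COMP-A': 760, 'COMP-B': 1520})
```

A real MRP run repeats this kind of netting, lot sizing, and explosion across every BOM level and every product location combination, which is where the runtime goes.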

Hasso Plattner Weighs In

On November 30, 2015, Hasso Plattner, one of the co-founders of SAP, published “How to Understand the Business Benefits of SAP S/4HANA Better.”

“I just heard of a Chinese fashion manufacturer who installed SAP S/4HANA and reports a game changing improvement of the MRP2 run from more than 24 hours to under two hours. My advice for the smaller and midsize SAP Business Suite customers is to carefully watch the early adopters, soon approaching 2000, especially in related industries.” – Hasso Plattner

There are several curious things about this statement. First, this occurred before November 30, 2015. However, Hasso implies that this company moved from the pre-HANA version of SAP enterprise software, known as ECC, to HANA. (Yet this earlier software also runs on HANA.) He states that the Chinese company purchased and implemented S4 EM. However, as of June 2016, S4 EM was still some way away from full release; only S4 Finance, previously called SAP Simple Finance, was available for purchase.

So the first question is how it is possible that this Chinese fashion manufacturer was even using S/4HANA for MRP.

But if we leave this issue to the side for a moment (it is an important question, but let us take this in stages), the question of how placing MRP on an analytics database versus a transaction processing database made such a difference in the MRP runtime is an important one.

  1. It is important because, technically, the claim does not make sense.
  2. Second, information from the field is that MRP does not run faster on HANA; instead, it runs slower.

Feedback from Customers on MRP on HANA

The following are descriptive quotes from a person at a customer that runs MRP on HANA. They are consistent with, but more detailed than, similar information we have received from multiple sources.

“Running MRP on HANA and the performance of the MRP job is horrendous. It runs in 8-12 hours, while the same amount of data on MSFT SQL (Server) ran in 1 hour. So far I’m quite unimpressed. Especially considering how much $$$ we spent!”

Unsurprisingly, the results of MRP are not used, as he describes in the following quotation.

“We don’t use the results of MRP, but we use the output to feed downstream systems. Across the board transactional performance is horrendous, the stats speak for themselves.”

This brings up a question of why MRP is being run at this customer. Is this simply to justify having gone through the expense of buying and implementing SAP?

MRP engines are easy to find and can be integrated with SAP. This customer could simply have an external MRP engine perform the processing and then feed the results back into SAP.

And of course, this would allow the customer to use the results of MRP in the system — another benefit.

“Our ATP process is a heroic effort that takes an entire weekend with not a moment to spare. Jobs that we would usually run once a day in now take 8+ hours. Once we have all of our global companies in the same instance, I have no idea how our jobs are going to run with limited windows for downtime.”

The consensus of what we hear from multiple sources is certainly not “no batch processes and everything in real time” but rather significant performance problems with MRP on HANA. These are problems that did not exist when MRP was run on Oracle or DB2.

A Lack of SAP Claim Verification

SAP often makes statements indicating that improved performance is driving the migration to S/4HANA. These statements are typically accompanied by other amorphous examples (e.g., S/4HANA performance improvement during the end-of-period close).

After extensive analysis, these statements do not check out against the feedback that has come in from multiple SAP customers.

Furthermore, Brightwork Research & Analysis has been the only entity specializing in SAP that has called into question whether the claims made by SAP in this area ever made any sense. Our view is that they never did, and that they have been targeted at executives who think they can guess whether SAP is telling the truth without relying on people with the right background to verify these claims.

Nothing to Say, Gartner and Forrester?

Both Gartner and Forrester have been notably silent on whether any of SAP’s claims for HANA on transaction processing make any sense. This brings up the question of whether they have any interest in verifying information, or whether they see themselves simply as repeating mechanisms for the vendors with the largest analyst budgets. Critiquing a vendor that pays you so much money would be biting the hand that feeds you. Gartner and Forrester can simply “look the other way” as one proposed capability of HANA after another is proven false.

Put another way, if SAP pays just about every media source, what media source has any interest in contradicting SAP? Isn’t it necessary to contradict vendors if the information provided by vendors is inaccurate? What is the definition of independence? In my last article, I quoted Gartner saying that “bias at Gartner is a non-issue.”

For not only Gartner but many other IT media entities, bias seems to be not only a massive issue but an issue that is not going away.

Conclusion

The original claim made by SAP that an analytics database would eliminate all batches in the system was always highly unlikely to be true, even before receiving the feedback that we have now obtained.

It is now entirely obvious that running MRP on HANA not only does not lead to the elimination of batches, but also does not lead to performance improvement, and in fact greatly lengthens the processing window for MRP. That is, MRP on HANA so far seems like a giant step backward.

Brightwork Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.

References

https://investor.underarmour.com/directors.cfm

https://news.sap.com/how-to-understand-the-business-benefits-of-s4hana-better/

https://blogs.sap.com/2015/11/11/sap-s4hana-frequently-asked-questions-part-8-1511-update/

MRP Repair Book

Repairing your MRP System

What is the State of MRP?

MRP is in a sorry state in many companies. The author routinely goes into companies where many of the important master data parameters are simply not populated. This was not supposed to be the state of affairs more than 40 years after the introduction of MRP systems.

Getting Serious About MRP Improvement

Improving MRP means looking to systematic ways to manage the values that MRP needs, regardless of the MRP system used. It can also mean evaluating which system is being used for MRP and how much it is or is not enabling MRP to be used efficiently. Most consulting companies are interested in implementing MRP systems but have shown little interest in tuning MRP systems to work to meet their potential.

The Most Common Procedure for Supply and Production Planning?

While there are many alternatives to MRP, MRP, along with its outbound sister method DRP, is still the most popular method of performing supply planning, production planning, and deployment planning. In the experience of the author, almost every company can benefit from an MRP “tune-up.” Many of the techniques that the author uses on real projects are explained in this book.

Chapters

  • Chapter 1: Introduction
  • Chapter 2: The Opportunities to Improve MRP
  • Chapter 3: Where Supply Planning Fits Within the Supply Chain
  • Chapter 4: MRP Versus MRP II
  • Chapter 5: MRP Explained
  • Chapter 6: Net Requirements and Pegging in MRP
  • Chapter 7: Where MRP is Applicable
  • Chapter 8: Specific Steps for Improving MRP
  • Chapter 9: Conclusion
  • Appendix A: Calculating MRP

What is the Actual Performance of HANA?

Executive Summary

  • John Appleby provides oodles of false information around HANA’s performance, without declaring his financial bias.
  • Hasso Plattner has been overstating the importance of removing aggregates.
  • The logic for the improved analytical performance of column-oriented databases is explained.
  • The impact of SAP marketing on HANA performance benchmarking is covered.

Introduction

I covered the topic of the actual performance of HANA versus competitive databases in the article Which is Faster, HANA or Oracle 12c? In this article, I will cover the various database benchmarks on HANA and its competitors in more detail.

Who Performs Benchmark Testing in Databases?

The first thing to establish is that there is no independent body, such as a Consumer Reports, for database benchmarking. This means that the benchmarks I reviewed were performed by the vendors themselves.

This is obviously a major issue.

Let us enumerate the problems with having no independent source for benchmarking as it relates to databases.

Selective Release

A vendor would never release a benchmark that showed it losing to a competing vendor across the board. The results of a benchmark would have to be positive for the vendor in some dimension, and more positive than negative, for the results to be released.

This parallels pharmaceutical drug testing, where negative studies tend to go unpublished.

“…studies about antidepressants made the drugs appear to work much better than they did. Of 74 antidepressant studies registered with the FDA, 37 studies that showed positive results ended up being published. By contrast, studies that showed iffy or negative results mostly ended up going unpublished or had their data distorted to appear positive, Turner found. The missing or skewed studies helped create the impression that 94 percent of antidepressant trials had produced positive results, according to Turner’s analysis, published in the New England Journal of Medicine. In reality, all the studies together showed 51-percent positive results.”

“For instance, a past analysis of clinical trials supporting new drugs approved by the FDA showed that just 43 percent of more than 900 trials on 90 new drugs ended up being published. In other words, about 60 percent of the related studies remained unpublished even five years after the FDA had approved the drugs for market. That meant physicians were prescribing the drugs and patients were taking them without full knowledge of how well the treatments worked.” – LiveScience

We will address this topic directly as it appears that SAP is doing the same thing with its OLTP benchmarks for HANA.

Skill Familiarity Bias

A vendor will always have more skills in its own solutions than in a competitor’s solution. Because databases can be “tuned up,” and because of differences in the hardware selected as well as other differences, even if a vendor were 100% above board, it would still tend to observe better performance in its own solution than in a competing solution.

Hardware Bias

The vendors spare no expense on hardware for these tests. Customers will often purchase equipment with a lower specification than that used by the vendor.

The Laboratory Environment Bias

The hardware and database are run in a “lab” environment, with no other batch jobs pulling at their resources, which is of course unrealistic. Therefore the performance in the benchmark would typically not be attainable in a production setting. I see benchmark results as more comparable between different benchmarks than between a benchmark and a production environment.

Sales Bias

Every benchmark paper I looked at had one clear purpose: to improve sales of the benchmarked product for the vendor that wrote the paper.

Interpretational Bias

The benchmarks that are released are then viewed through the prism of bias, that is, by people who have an incentive to prefer a particular software vendor. One entity that has published inaccurate information about benchmarks, information right in line with its financial bias, is the consulting firm Bluefin, which is overall one of the least reliable providers of information on HANA.

The Benchmark Tests

The following benchmarks were reviewed that were performed for these databases.

  1. SAP OLTP Benchmark: This is a benchmark for transaction processing, that is, the things that ERP systems do most, such as recording journal entries and decrementing inventory when performing a goods issue.
  2. SAP BW-EML (Business Warehouse Enhanced Mixed Load) Benchmark: This is an analytics benchmark.

SAP’s Missing Benchmarks

For years, SAP released an OLTP benchmark for databases. However, with HANA, SAP stopped publishing this benchmark. Database design would predict that HANA would perform poorly in this benchmark, and this is the most likely reason why SAP never produced it. However, the consulting firm Bluefin Solutions has the following way of covering this up:

“The SAP HANA platform was designed to be a data platform on which to build the business applications of the future. One of the interesting impacts of this is that the benchmarks of the past (e.g. Sales/Distribution) were not the right metric by which to measure SAP HANA.” – Behind the SAP BW EML Benchmark

This is reinforced by the following quotation from the book SAP Nation 2.0.

“It has not helped matters that SAP has been opaque about HANA benchmarks. For two decades, its SD benchmark, which measures SAP customer order lines processed in its Sales and Distribution SD module, has been the gold standard for measuring new hardware and software infrastructure. It has not released those metrics using a HANA database. One of the (unsatisfactory) excuses offered is that the expensive hardware needed to support such a test in a lab is better shipped to paying customers.”

John Appleby’s Lack of Transparency on Financial Bias

At no point in this article by John Appleby does he declare that he has a quota, or leads a group with a quota, to sell HANA. John Appleby presents himself as if he were some disinterested third party. So that is problem number one. The second issue is that Appleby is speaking what amounts to gibberish in this quotation.

  • S4 has a Sales module.
  • This Sales module will perform the same functions as the current ECC SD module. Will there be analytics involved in the Sales module? Of course. However, there will also be transactions, or OLTP, performed.
  • S4 Sales will record sales orders, update sales orders, etc.
  • Therefore it is demonstrably untrue that an OLTP benchmark is now irrelevant because “the platform was designed to be a data platform on which to build business applications of the future.” That sentence is false, and one would have to twist oneself into a pretzel to try to defend it. The person seems to be preparing to run for political office.

Appleby’s interpretation of the BW-EML benchmark contains other nonsense, such as the following:

“the configuration used by published results is the stock installation…there are not performance constructs like additional indexes or aggregates in use.”

Appleby’s Trademark Nonsense

The reason this is nonsensical is that column-oriented databases don’t use indexes; they don’t need them. Why Appleby is impressed by this is a head-scratcher. How many times has it been established that the primary reason for the reduction in the size of the database footprint is the removal of indexes? If this is widely accepted, why is it surprising to Appleby that the BW-EML benchmark for a column-oriented database does not have indexes?

On the topic of aggregates, HANA does use aggregates but does not call them aggregates. So what Appleby is saying is incorrect, although there are fewer aggregates.

It is important to remember that John Appleby is joined at the hip with SAP. Like other SAP consulting companies, John Appleby and his company Bluefin Solutions essentially serve to repeat whatever SAP says. For whatever reason, Bluefin Solutions, and John Appleby in particular, were chosen, or decided, to release information about HANA and S/4HANA that would have had to have been approved by SAP. Please see our analysis of the consulting partnership agreement that controls the media output of SAP consulting partners in this article.

Hasso Plattner’s Overstatement of the Importance of Removing Aggregates

Hasso Plattner has had an obsession with eliminating aggregates for some time, and he rails against aggregates in his articles and his books, but in many cases aggregates are beneficial. Contrary to what Hasso Plattner states, not everything needs to be continuously recalculated, and not everything needs to be recalculated every time it is accessed.

In fact, many reference tables change only occasionally. Why instantaneously recalculate something that rarely changes? This is just a waste of processing cycles.

Let us take an example.

Let us say we want to see a report of all the sales orders that a company has processed for the past three months. This report was processed and aggregated along different dimensional attributes yesterday. Under Hasso Plattner’s logic, this aggregate is worthless because it is pre-calculated.

Let us look at that statement in detail.

Example

Let us say that the aggregate was calculated yesterday, exactly 24 hours ago.

  • One day is roughly 1/90th of a three-month period.
  • If we look back 90 days in the report, it will show, say, 100,000 sales orders. That is an average of roughly 1,111 sales orders per day (yes, weekends would have fewer than workdays, but 1,111 as an average).
  • Now let us say that the day that drops off when we run the report anew had 1,500 sales orders created (a high day), and that the day that was added, which is yesterday plus the hours up to the present hour, had 700 sales orders (a low day).
  • So instead of looking at 100,000 sales orders, we are now looking at 99,200 sales orders, as the quick calculation after this list shows. Is that a real problem? Is the last 24 hours more representative than the 24-hour period from three months ago? Probably not. But if it is, how much more should the company be willing to spend to get rid of all aggregates? And are there other investments that might be a better use of that money?
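
A quick check of the arithmetic in this example, as a minimal Python sketch using the illustrative numbers from the list above:

```python
# Illustrative numbers from the example above.
orders_in_aggregate = 100_000              # yesterday's pre-calculated 90-day total
average_per_day = orders_in_aggregate / 90 # roughly 1,111 orders per day

day_dropped = 1_500   # the high day that falls out of the 90-day window
day_added = 700       # the low day (yesterday) that enters the window

recalculated_total = orders_in_aggregate - day_dropped + day_added
difference_pct = (orders_in_aggregate - recalculated_total) / orders_in_aggregate * 100

print(recalculated_total)        # 99200
print(round(difference_pct, 1))  # 0.8 (percent change from recalculating on the fly)
```

In other words, recalculating the aggregate on the fly changes the answer by less than one percent in this scenario.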

SAP’s Overestimation of the Importance of HANA’s Changes as it Relates to SAP HANA Performance

There is an unlimited number of scenarios that could be imagined to determine the importance of the removal of aggregates. For instance, if just two days of sales orders were reviewed, the company would see much more significant variation. However, the need for instantly recalculated information is significantly overestimated in vendor marketing documentation, and in analytics vendor documentation in particular.

The articles Monthly Versus Weekly Forecasting Buckets and Quarterly Versus Monthly Forecasting Buckets challenge a long-held belief that forecasting information must be frequently updated with the most recent sales history to obtain the highest forecast accuracy. This was tested with actual client data, from a client with difficult-to-forecast sales history, and it shows that, as with the tests I performed at previous customers, frequent updating is not all that important and contributes little to forecast accuracy.

This client was convinced that daily forecasting produced the highest accuracy.

Generalizing From Misrepresented Scenarios

So, while there can be scenarios where getting the most up-to-date information is critical, SAP tends to take these few scenarios and generalize them to be “normal,” when in fact they tend to be the exceptions. Hasso Plattner has a way of presenting things that are often quite gray as black or white. And of course, all of Hasso Plattner’s examples have the peculiar and consistent outcome of handing over more money to SAP. I don’t make more money by exaggerating the way that Hasso Plattner does, and therefore his proposals tend to come off as sales fluff, at least to me. After years of reading Hasso Plattner’s statements, I do not consider him to be credible or truthful.

The Logic for the Improved Analytical Performance (SAP HANA Performance) of Column-Oriented Databases

I found this quotation from IDC to be an excellent explanation of why column-oriented databases are so efficient for analytics.

“The established approach to setting up a query/reporting database (ODS, data mart, data warehouse) has involved establishing indexes for all columns that might have value lookup operations in the queries. Many organizations now use columnar databases, which have the same relational characteristics as row-oriented databases but store the data in blocks of column rather than row data for speed of retrieval. This obviates the need for indexes and, in some cases, for cubes and materialized views.” – IDC

“If live data is to be queried and updated at the same time, the queries must be very fast in order to avoid consuming resources on the database server and slowing down transactions. A number of vendors have created database technologies that optimize query performance by combining two key elements: query-optimized columnar organization for the data and memory-optimized database operations. In the case discussed here, however, there is an additional challenge, which is to maintain that data in a form that also supports a high-performance transactional database.” – IDC

“Database In-Memory leverages a unique “dual-format” architecture that enables tables to be in memory simultaneously in a traditional row format and a new in-memory column format. The Oracle SQL Optimizer automatically routes analytic queries to the column format and OLTP queries to the row format, transparently delivering best-of-both-worlds performance. Oracle Database 12c automatically maintains full transactional consistency between the row and the column formats, just as it maintains consistency between tables and indexes today.

  • Access only the columns that are needed.
  • Scan and filter data in a compressed format.
  • Prune out any unnecessary data within each column.
  • Use SIMD to apply filter predicates.” – Oracle

Interpretation

However, this does not mean, and Oracle is not implying, that a column-oriented database is better for applications outside of analytics. And as far as I can determine from reading the perspectives of different database vendors on this topic, SAP is the only database vendor that proposes that a column-oriented design is better for all types of applications.
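
To illustrate the point made in the IDC and Oracle passages above, here is a toy sketch of row versus column storage in Python; the data and layout are hypothetical and do not represent any vendor’s actual engine. An analytical aggregate only has to scan one column in the columnar layout, while writing a complete transaction record has to touch every column, which is one reason a columnar design is not automatically better outside of analytics.

```python
# Row layout: each record stored together, which suits OLTP
# (read or write one complete order at a time).
rows = [
    {"order_id": 1, "customer": "A", "amount": 120.0},
    {"order_id": 2, "customer": "B", "amount": 75.5},
    {"order_id": 3, "customer": "A", "amount": 300.0},
]

# Column layout: each attribute stored contiguously, which suits analytics
# (scan one column, skip the rest, compress well).
columns = {
    "order_id": [1, 2, 3],
    "customer": ["A", "B", "A"],
    "amount":   [120.0, 75.5, 300.0],
}

# Analytical query: total order amount. The columnar version touches only the
# "amount" column; the row version steps through every full record.
total_from_rows = sum(r["amount"] for r in rows)
total_from_columns = sum(columns["amount"])
print(total_from_rows, total_from_columns)  # 495.5 495.5

# OLTP operation: insert one new order. Trivial in the row layout, but the
# columnar layout must append to every column separately.
new_order = {"order_id": 4, "customer": "C", "amount": 42.0}
rows.append(new_order)
for key, value in new_order.items():
    columns[key].append(value)
```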

Some of the Results On Oracle and SAP HANA Performance

For instance, the Oracle benchmark paper released in 2015 tested on hardware similar to what SAP used in its BW-EML benchmark, but it leaves out the topic of how many customers would use this hardware configuration. I don’t know myself, as I have not recorded the hardware specifications of many clients, but the hardware employed by Oracle appeared quite advanced. At one point, SAP’s benchmark used a machine with 1,536 GB of RAM.

I have personally never heard of this much RAM being used on a server at any account that I have worked on. It probably exists as there are very advanced companies out there doing scientific computing. But it is a small number.

At one point, Oracle points out that the monster machine used by SAP beat Oracle’s BW-EML benchmark result, but it needed three times the amount of memory to do so. This brings up the question of whether SAP’s hardware was simply re-engineered to beat the Oracle benchmark. Did SAP first try the machine with 1,000 GB of RAM, then add 200 GB of RAM and test again, and then add another 200 GB and test again, and so on, until it finally beat the Oracle score?

In another benchmark, SAP installed 100 IBM servers in an SAP HANA cluster. Furthermore, if no one outside of the NSA, Amazon AWS (which resells portions of its hardware over the cloud), or a scientific computing center would be willing to buy this much hardware, how relevant are these benchmarks to the majority of HANA customers?

The Impact of Marketing on SAP’s Benchmarking of SAP HANA Performance

SAP needs to get marketing out of the process of releasing benchmarking information. In the benchmark publication SAP HANA Performance: Efficient Speed and Scale-Out for Real-Time Business Intelligence, the marketing influence is clear. One should not need to see a cover plastered with stock photography of a man pulling a “fly” snowboarding maneuver, followed by an image of a group of people rowing together, along with a marketing-written introduction that uses a word salad of terms like NetWeaver components. How is any of this related to SAP HANA performance? This should be a scientific paper that is not word-smithed and couched in deceptive marketing language.

SAP marketing must accept that not every paper produced by SAP needs to have its fingerprints on it. Here is an example of the type of nonsensical writing that I am referring to.

“The drill-down queries (276 to 483 milliseconds) demonstrate SAP HANA’s aggressive support for ad hoc joins and, therefore, to provide unrestricted ability for users to “slice and dice” without having to first involve the technical staff to provide indexes to support it (as would be the case with a conventional database).”

Please do not use the term “slice and dice” in a technical paper, or the term “unrestricted,” or the colorful “HANA’s aggressive support.” This is not scientific terminology. SAP’s benchmarking paper needs to be completely rewritten just using the original data. Then at the end SAP has quotations like the following:

“We have seen massive system speed improvements and increased ability to analyze the most detailed levels of customers and products.” – Colgate Palmolive

This is an anecdote, and it sounds like it was written by Donald Trump (except it uses the word massive instead of tremendous). If your benchmark study could have been written by Donald Trump, then you have a credibility problem with your benchmarking study.

Conclusion

When one compares what Bill McDermott, Hasso Plattner, SAP marketing, Bluefin Solutions, Deloitte, and others say about the game-changing aspects of HANA to the technical benchmarks there is absolutely no correspondence.

SAP invests comparatively little in benchmarking, but its marketing spending on HANA is enormous. This is similar to the major pharmaceutical companies, which spend far more on marketing than on research; their research is mostly running clinical trials, which builds on research that is performed by universities and is publicly funded.

Evidence of Oracle Outperforming SAP HANA

Oracle has provided compelling evidence that its 12c database outperforms SAP HANA. I say this while acknowledging the fact that there is no independent body that performs database benchmarking. Oracle invests much more in database benchmarking, and its benchmarking studies are more transparent and make the case far better than SAP’s.

For all of the talk of SAP HANA performance, SAP produces a single benchmark to support its supposed claims of superiority over Oracle 12c and others. While we do not have independent verification, after sifting through the results it seems more likely than not that Oracle 12c is not just slightly faster but far faster than HANA. Second, while SAP has made speed the priority in the design of its database, Oracle’s orientation is far more holistic, placing reliability first. Third, given 12c’s design, it will almost certainly beat SAP HANA’s performance for OLTP processing easily.

How Many Databases Can Outperform HANA?

This article did not review the benchmarking of other database vendors. However, I find it more likely than not that vendors like IBM, given the database talent that they have, also have solutions that are superior to SAP HANA in performance. And the list of other database vendors that can beat HANA is likely longer than just Oracle and IBM. The bottom line is that in at least one type of database processing, called short query SQL, SAP HANA performance is poor.

Brightwork Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.

The Risk Estimation Book

Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is a risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project’s risk is a poor way to make decisions about either projects or software. But that’s the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies commonly used by most companies for mitigating software project risk, learn why these plans don’t work, and then acquire practical and realistic strategies that will help you maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model

Risk Estimation and Calculation

See our free project risk estimators, which are available per application. They provide a method of risk analysis that is not available from other sources.

References

Memory-Optimized Transactions and Analytics in One Platform: Achieving Business Agility with Oracle Database (IDC Sponsored by Oracle)

OracleVoice: Oracle Challenges SAP On In-Memory Database Claims, John Soat, Oracle.

Benchmark Results Reveal the Benefits of Oracle Database In-Memory for SAP Applications, Oracle White Paper, September 2015

https://www.livescience.com/8365-dark-side-medical-research-widespread-bias-omissions.html

https://www.livescience.com/5815-fraud-errors-misconceptions-medical-research.html

Behind the SAP BW EML Benchmark, John Appleby, Bluefin, March 19, 2015.

https://www.amazon.com/SAP-Nation-2-0-empire-disarray-ebook/dp/B013F5BKJQ