Why Did SAP Pivot Its Explanation of HANA In Memory?

Executive Summary

  • SAP now explains HANA’s memory management very differently than it did when HANA was introduced.
  • We address why this pivot occurred.

Introduction

SAP first introduced the idea that all data would be loaded into memory. We called this out as a myth in the article How to Understand the In-Memory Myth. This claim persisted for years, beginning in 2011 and extending to 2018. Then SAP began to change the description of how its HANA database worked, to the point where it sounded like how other databases optimize their memory. We covered this in the article Is SAP’s Warm Data Tiering for HANA New?

But one question remains: why did this pivot occur?

A Hypothesis Put Forward By Ahmed Azmi

Give me any database system and I can boost its performance by an order of magnitude simply by upgrading the hardware. If you spend a million dollars on memory and processor acceleration, you can make any database faster. It’s just a matter of throwing more hardware (money) at it.

That’s why SAP had to claim that HANA stores everything in memory. They had to justify the nefarious cost of the hardware. Now, they are pivoting away from that because they realized that hardware-driven performance doesn’t work when the product is hosted on a multi-tenant, scale-out platform like Azure & GCP.

This could explain the change. It also simply brings HANA’s explanation back into line with reality. The fact that HANA used memory optimization and did not load all data into memory was clearly explained in HANA’s technical documentation.

Oracle’s View on HANA In Memory

While the article you are reading was originally published in June of 2019, in August of 2019, Oracle published a document called Oracle for SAP Database Update.

In this document, Oracle made the following statement about HANA versus Oracle.

Oracle Database 12c comes with a Database In-Memory option, however it is not an in-memory database. Supporters of the in-memory database approach believe that a database should not be stored on disk, but (completely) in memory, and that all data should be stored in columnar format. It is easy to see that for several reasons (among them data persistency and data manipulation via OLTP applications) a pure in-memory database in this sense is not possible. Therefore, components and features not compatible with the original concept have silently been added to in-memory databases such as HANA.

Here Oracle is calling out SAP for lying. Furthermore, we agree with this. SAP’s proposal about placing all data into memory was always based upon ignorance, primarily on the part of Hasso Plattner.

If SAP had followed Oracle’s design approach, companies would not have to perform extensive code remediation — as we covered in the article SAP’s Advice on S/4HANA Code Remediation.

Conclusion

SAP has two explanations for its products. One is the sales and marketing explanation. This explanation not only exaggerates what their products can do, but in many cases misrepresents the fundamental features of how their products work. HANA’s supposed “in memory” explanation, which persisted from 2011 to 2018, is just one example of this.

The Necessity of Fact Checking

We ask a question that anyone working in enterprise software should ask.

Should companies that do not specialize in fact-checking make decisions based on sales information from 100% financially biased parties like consulting firms, IT analysts, and vendors?

If the answer is “No,” then perhaps there should be a change to the present approach to IT decision making.

In a market where inaccurate information is commonplace, our conclusion from our research is that software project problems and failures correlate to a lack of fact checking of the claims made by vendors and consulting firms. If you are worried that you don’t have the real story from your current sources, we offer the solution.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.


References

https://www.oracle.com/a/ocom/docs/ora4sap-dbupdate-5093030.pdf

How Accurate Was IFS on the Potential of In Memory Computing?

Executive Summary

  • Dan Matthews, the CTO of IFS, wrote an article on in-memory computing.
  • We review how accurate he was in his article.

Introduction

Dan Matthews’s paper, 3 Things Business Decision Makers Need to Know About In Memory Enterprise Software, was published in May 2017.

The Quotations

How Does Gartner Define In Memory?

Gartner says that in order for a technology to be classified as in-memory, it requires “the database structure to be in-memory, specifically the main memory of the server.” This, according to Gartner, is in contrast with databases that would commonly rely on a disc-based Database Management System (DBMS) that feeds data in and out of a database stored on a disc or server, and may perhaps keep some data in cache to speed up performance. Gartner’s definition of an in-memory application requires an In-Memory DBMS, or IMDBMS.

We have previously critiqued Gartner for not understanding databases and for being paid by SAP to promote HANA, which we covered in the article How Gartner Got HANA So Wrong. We estimate Gartner is paid over $120 million per year to promote SAP products and move them up in the rankings. Gartner makes the rather absurd proposal that having a single employee who works as an “ombudsman,” as we covered in the article How to Best Understand Gartner’s Ombudsman, makes that $120 million per year irrelevant.

Therefore, it is difficult for us to take what they say seriously on these topics. The statement…

“the database structure to be in-memory, specifically the main memory of the server”

is a meaningless statement. It sounds like it means something, but it doesn’t. What is “the database structure”? Is that a table? What does Gartner mean here? They don’t know. Again, we have yet to see a single instance where Gartner has displayed any knowledge of databases. Our analysis of Gartner’s ODMS MQ, in the article Can Anyone Make Sense of the ODMS Magic Quadrant?, should make this quite clear.

The following sentence…

“This, according to Gartner, is in contrast with databases that would commonly rely on a disc-based Database Management System (DBMS) that feeds data in and out of a database stored on a disc or server”

is also meaningless.

This is because no database works this way; even SAP HANA, which SAP has stated keeps the entire database in memory, does not. All databases move data from storage into memory as needed, through something called memory optimization.

However, we don’t mark down Dan or IFS for quoting Gartner. Even though Gartner adds no value and is a considerable value-subtract in discussions around databases, they are still widely respected. It should also be mentioned that no vendor can call out Gartner for either being corrupt or not knowing their subject matter. This is because Gartner can retaliate against any vendor that does not show them the “proper deference,” as Gartner has a near monopoly on vendor ratings.

Now let us see what Dan does with this quote.

“Under this definition, the in-memory column store capabilities of the Oracle 12C Enterprise Edition, which IFS leverages to deliver its in-memory offering, qualifies as a true in-memory solution, but one that recognizes real-life challenges faced in enterprise computing. It contains both a traditional DBMS and an IMDBMS working in parallel and always in sync. It enables an application user to keep all or part of the database in memory, so that columns and tables that are frequently queried by business analytics tools or referenced in ad hoc queries can be kept in memory while other data is stored in a physical disc.”

Well, if we can now kick Gartner to the curb: Oracle does have an in-memory capability, and it was added to the Oracle database back in 2013, as the graphic below illustrates.

Therefore yes, Oracle provides “in memory” functionality with a column-oriented store.
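To make this concrete, below is a minimal sketch of how the Oracle in-memory column store is enabled for individual tables. The connection details and table names are hypothetical illustrations; the INMEMORY clause itself is Oracle’s documented DDL for this option.

```python
# A minimal sketch of enabling Oracle Database In-Memory per table.
# The connection details and table names are hypothetical; the
# INMEMORY clause is Oracle's documented DDL for the column store.
import oracledb  # the python-oracledb driver

conn = oracledb.connect(user="app", password="secret", dsn="dbhost/pdb1")
cur = conn.cursor()

# Keep a heavily analyzed table in the in-memory column store.
cur.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

# Leave a rarely queried table on disk only.
cur.execute("ALTER TABLE order_attachments NO INMEMORY")
```

Note that this is exactly the hybrid model Dan describes: the administrator chooses which tables justify the memory, rather than the entire database being forced into it.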

100% In Memory?

“A real-life ERP in-memory application should always be, in a manner of speaking, at least some type of hybrid solution between RAM-based and disc-based data storage. In theory, a pure in-memory computing system will require no disc space or file I/O. This is impractical in the world of ERP since a modern enterprise application may store not only structured data, but unstructured and unwieldy information like photos, technical drawings, video and other materials that are not used for analytical purposes and would consume a great deal of memory. This is one drawback of ERP applications, which by default run the entirety of a transactional database in memory. Meanwhile, the in-memory feature set of IFS Applications, for instance, will give end-users a choice of which data to house on a physical drive and which to store in-memory. Or of course, if they really want to, run the entire database and application in-memory.”

This is all quite true. And here Dan is directly contradicting Hasso Plattner of SAP. We have been contradicting Hasso Plattner on this topic since 2016. Hasso Plattner is wrong: only a small portion of the overall database needs to be loaded into memory, and that portion changes depending upon what is being processed at the time.

The Need for ERP Speed?

“The chief benefit of in-memory computing in ERP is obvious—enhanced processing speed, particularly when dealing with larger data sets and queries of non-indexed tables. Data stored in memory can be accessed hundreds of times faster than would be the case on a hard disc or even flash drives. But also the columnar orientation of the in-memory storage means that it becomes very fast to find a smaller subset of data inside a very large set. In-memory is optimal for what is called “narrow queries”, where a smaller number of columns for a subset of rows is extracted from a very large data set.

This speed is particularly useful when companies are running ad hoc queries of the database underlying their ERP software product, for instance to identify customer orders that conform to specific criteria or determine which customer projects consume a common part.”

This is true.
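To make the “narrow query” point concrete, here is a small, self-contained Python sketch (our illustration, not taken from the IFS paper) of why a columnar layout touches less data when only one column of a wide table is needed.

```python
# Why columnar storage favors "narrow queries": summing one column
# scans one contiguous array instead of every field of every row.
# The data here is synthetic.
import random

NUM_ROWS = 100_000

# Row-oriented layout: one tuple per row, all columns stored together.
rows = [(i, f"customer{i % 500}", random.random(), "ACME") for i in range(NUM_ROWS)]

# Column-oriented layout: one list per column.
amounts = [r[2] for r in rows]

# Narrow query: total of the "amount" column only.
total_from_rows = sum(r[2] for r in rows)   # touches every row, every column
total_from_column = sum(amounts)            # touches only the one column

assert abs(total_from_rows - total_from_column) < 1e-6
```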

But Dan’s point leaves out the fact that if you spend time on ERP system accounts, the performance of the ERP system is rarely the issue. We live in a time of great hardware capacity. The processing requirements of ERP systems have not increased very much over the past few decades, but hardware capabilities very much have.

Secondly, “in memory”/column-oriented designs primarily speed analytical workloads, and as we covered in the article HANA as a Mismatch for S/4HANA and ERP, ERP systems are primarily transaction processing applications with a few CPU-intensive operations like MRP and DRP. Therefore, they do not benefit much from the analytical processing capabilities of in-memory databases. The vast majority of companies still perform reporting on a specialized data warehouse, where it does make sense to use some “in memory” capabilities, although these do not need to reside within a “Swiss Army Knife” database like Oracle 12c. For example, one could use Redis combined with a row-oriented database, as sketched below.
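Here is a minimal sketch of that combination: Python’s built-in sqlite3 stands in for any row-oriented RDBMS, and the redis-py client provides the memory cache. The table, keys, and TTL are hypothetical.

```python
# A minimal sketch of pairing a row-oriented database with Redis for
# reporting: an expensive aggregate is computed once, then served from
# memory. Table names, keys, and the TTL are hypothetical.
import sqlite3
import redis  # the redis-py client

db = sqlite3.connect("erp.db")      # stand-in for any row-oriented RDBMS
cache = redis.Redis(host="localhost", port=6379)

def monthly_revenue(month: str) -> float:
    key = f"report:revenue:{month}"
    cached = cache.get(key)
    if cached is not None:
        return float(cached)        # report served from memory
    (total,) = db.execute(
        "SELECT COALESCE(SUM(amount), 0) FROM orders WHERE month = ?",
        (month,),
    ).fetchone()
    cache.setex(key, 3600, total)   # cache the aggregate for an hour
    return total
```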

Dan addresses this issue of data warehousing in the following quote.

Real-Time Visibility?

“In order to eliminate the database as a constraint, most business intelligence tools or analytics instead query a copy of the transactional data that is kept separately in a data warehouse. This data is updated periodically, so it does not truly offer real-time visibility. In-memory technology can provide that real-time view of the business, at least when the data is coming from a single source system or application. In-memory technology in itself does not replace the need for transformation and mapping that typically has to happen when performing analysis across data from multiple source systems.”

This is true, but real-time visibility is not particularly important. A real-time pull normally will not tell you much more than a report based upon data that is a day, 12 hours, or 6 hours old. The biggest problem in companies is not data currency; it’s subject matter expertise. I work in forecasting improvement projects. The problem facing these projects is knowledge of things like forecast error measurement, data storage of different inputs, testing, documentation, and how to follow a scientific approach. The lack of real-time visibility is just not a high-priority issue. Secondly, any specific item can be found in real time from the ERP system. And ERP systems also have more rudimentary reports that are real time.

The Incentives to Add In-Memory to ERP

“The incentives that may drive a company running ERP to adopt in-memory computing are straightforward.

For the enterprise software vendor, though, in-memory computing may be a way to address underlying issues in their application architecture. If an enterprise software product was originally designed in too complex a fashion, the application may have to look in more than a dozen locations in a relational database to satisfy a single query. They may be able to simplify this convoluted model and speed up queries by moving from disc-based to in-memory data storage.”

This is an interesting observation. Our interpretation (although we can’t prove that Dan means this) is that in-memory can be used to counteract poor application design. Analyzing this article is timely for us because we just finished the article The Four Hidden Issues with SAP’s BW-EML Benchmark. In that article, we pointed out that the BW-EML benchmark entirely leaves out the quality of the SAP BW application, which is atrocious, and which we have previously easily beaten with different software running on a laptop. That is, an intelligent application design can be effective with far fewer resources.

Increasing Sales with In-Memory?

“Promoting an enterprise application that relies entirely on an in-memory database may also be a way for an ERP vendor to derive more revenue from the software sale by pushing customers to purchase a new database rather than the Oracle, Microsoft or IBM databases they would typically otherwise use. For the customer, however, this could mean re-learning and re-training of IT staff to manage a new, and proprietary, in-memory database in addition to the additional license investment for this technology.”

Yeeeeeees! Vendors try to maximize revenues. And certainly, SAP does this. In fact, this quote is directly aimed at SAP. SAP has been selling HANA on false claims since HANA was first introduced, and Brightwork Research & Analysis has been the most vocal entity calling out SAP on this, while virtually the entirety of the IT media, Gartner, Forrester, and SAP’s massive consulting ecosystem has simply parroted SAP’s false claims, as we covered in What is the Difference Between an SAP Consulting Company and a Parrot on HANA?

And Dan is also correct that the costs of transitioning to HANA are very large. Although we would also add that HANA is far less stable than more mature databases like Oracle or DB2. Brightwork receives no income from any vendor, so we have no reason to take any vendor’s side; we are simply reporting what our research has concluded.

Valid Uses for In-Memory: Big Data?

“Analyzing enormous quantities of data while it is in movement requires tremendous computing resources and real-time access to data. Information in a traditional data warehouse will be old and therefore less useful, but continuous queries on the transactional database could lead to performance issues.”

True. Although we would be remiss if we did not mention that companies are often challenged in performing analysis on univariate data. And many benefits of Big Data are conjecture. They presume that looking at many data factors will lead to great insights. The early Big Data bubble was mostly about throwing large amounts of unstructured data into data lakes and saying “we will look at it later.” Data scientists are having great difficulty showing the forecasted benefits of this combination of Big Data and data science. We have run many of the ML algorithms ourselves and are often unimpressed with the outcomes.

Therefore we see a need for more understanding applied to data analysis rather than a focus on in memory.

Valid Uses for In-Memory: In Memory Queries?

“If there is data in an application that is subject to frequent queries for decision support or ad-hoc reporting, it may make sense to move those tables in-memory. Otherwise, these queries could take a while to complete—long enough to affect the user experience. The load on the transactional database could also affect the experience of other users. If you want to summarize a thousand rows out of a million or a billion, or to retrieve a handful of columns in a table for one thousand of a percent of the total data volume, this is one area where a targeted approach to in-memory computing shines.”

Sure. Nothing wrong with that.

In Memory and Transaction Processing?

“Running an entire transactional database in-memory will probably never be optimal, but it is possible. Databases may run faster in-memory by the time there are hundreds of millions of rows in a table. For a very large database with tens or hundreds of thousands of transactions per second, in-memory across the board may be the best way to ensure performance without event loss.

High-volume transactional environments on this scale are rare, however. In most cases, it will still make sense to move only carefully-chosen subsets of a transactional database in-memory. If these critical subsets of the database, cumulatively, are numerous or extensive enough to constitute the majority of the database, it may be easier and make more sense to load the entire database in-memory. But again, these situations will be vanishingly rare.”

Yes, exactly. Basically, this is simply a return to memory optimization. Perhaps more memory is used — more memory will generally be used as hardware specifications continually increase.

What Data Gets Moved into Memory?

“A hybrid approach to in-memory, with some data stored in a spinning disc or flash memory environment, makes even more sense when we remember that in a fully functional enterprise application, we are not just talking about tabular data but, often, attached files. The benefit of moving imagery—like the photos an electric utility may take of meters—into memory would be minimal whereas the cost could be high. These data are not queried, do not drive visualizations or business intelligence, and would consume substantial memory resources.”

This is actually an excellent point that I have never heard brought up before. But many data types really make no sense to move into memory. Good for Dan to point this out, and the specific reason why it makes no sense to do so.

How About a Reasonable Approach to What is Loaded into Memory?

“IFS Applications customers can choose to keep some, all or none of their database in memory. Although our technology supports running the entirety of IFS Applications in memory, we believe that a more focused in-memory approach may be desirable. To help our customers choose the right things to put in memory, we provide an In-Memory Advisor as well as pre-configured In-Memory Acceleration packages for common scenarios in manufacturing, asset and service management.

In essence, at IFS, we have worked hard to package this technology in a way that is accessible enough for middle-market companies, robust enough for the largest global organization, and agile enough to adapt to changing data usage patterns over time.”

This is in great opposition to SAP’s approach — which is to hype customers upon in-memory to get them to buy the exorbitantly priced HANA database, the pricing of which we covered in the article How to Understand S/4HANA and HANA Pricing.

Conclusion

This article receives a 10 out of 10 for accuracy.

The enterprise software market is so filled with promotional information, it is extremely rare for any article to receive a high score from us, much less a perfect score. There is nothing communicated which is inaccurate, and the article is brave for going against the conventional wisdom on in memory. It is easy to simply write an article telling customers and prospects that whatever new thing is necessary, but this article has a genuine interest in educating the reader.


References

https://www.ifsworld.com/corp/sitecore/media-library/assets/2017/05/02/in-memory-enterprise-software/


Why is the SAP Fiori Cloud So Slow?

Executive Summary

  • In extensive Fiori testing, the first thing we observed is how slow the interface is.
  • This caused us to perform a speed test which we published here.

Introduction

The Fiori Cloud is a strange introduction by SAP. You will learn what the Fiori Cloud is and how accurate the claims for it are.

What is the Fiori Cloud?

The Fiori Cloud is one of those strange artifacts that SAP brought out a while ago. The Fiori Cloud is a bit confusing because Fiori is just a UI, so you can’t have just a UI that is in the cloud. It has to be connected to an application layer and a database.

However, what the Fiori Cloud really is, is an online demonstration of Fiori with S/4HANA. Upon investigating this, we found something peculiar related to the Fiori Cloud’s speed, which is the topic of this article.

Poking Around The Fiori Cloud

The Fiori Cloud is easy to access.

Once you get into it, it brings up the well-recognized Fiori tiles or squares.

The Fiori “tiles” are sort of the opposite of the SAPGUI, which is driven by transactions, or by navigating a very large tree structure.

With Fiori, the squares are selected to get into each transaction or screen. This demos nicely, but there are questions related to how well this design scales.

But Fiori has a nice search feature. This takes you directly to the item or the right square.

Once you select the item you want, often from a number of options that all meet the search criteria, you can be taken into the item or square. It has a very nice feel. But it is unclear to us if it is an efficient method. It greatly depends upon the search function working, which we are about to dive into.

The item you highlight points to the right square which you can then select. 

Fiori’s Hit and Miss Search

The search sometimes works great and is quite fast when it does work. But the search does not always work. 

But once you select the item, this is a common response.

There is a square called Working Capital Analysis. But where is it in this search? It should have come up on the right as an option once any of the keywords were typed in. It was there one time we logged in (we know because we wrote down the time it took to open), but it disappeared the next time we logged in. That is a first.

This repeatedly occurred when we tested different searches. Some words worked, but others didn’t. But while the search worked intermittently, Working Capital Analysis was the only square to simply disappear from the UI. We checked by scrolling rather than using the search.

How does that happen?

The Best UI in Enterprise Software?

SAP has been carrying on about Fiori as the future. Hasso Plattner called it the best UI in enterprise software. But then why isn’t something basic like this fixed?

If there are many squares (not just the 20 or so in the demo), how is the user supposed to find the right one? Scroll the entire list of thousands of squares? That is not a feasible option.

SAP proposes that Fiori will eventually make the SAPGUI obsolete. That is not going to happen with the search still not working, squares deciding to disappear, and such a small amount of coverage of SAP’s functionality. We covered that last topic in the article The Strange Changes with the Count of Fiori Apps.

Why are we the only ones to publish on this topic? The Fiori Cloud is available for anyone to go and check and test. But as we have pointed out in previous articles, all the money in consulting and IT media is in agreeing with whatever SAP says. Fact-checking is simply not a focus. Even if Deloitte or CIO looked into this, they would never publish their findings.

The entirety of the information apparatus that covers SAP is there to promote SAP, not to fact check SAP or to tell their clients and readers the real story on SAP.

It is exceptionally difficult to find any SAP consultant who will tell companies the truth about SAP. Most SAP consultants value their relationship with SAP and with other SAP consultants more than they do their relationship with their clients. Lying is rampant in SAP consulting. The objective is to make SAP look as good as possible; the truth is considered only within the context of a massaged narrative.

Speed Tests for Fiori

After we got through the search problem, we were struck by how often we kept seeing the wait page for Fiori that looks like this.

We found this latency issue at several different locations, and therefore at different Internet speeds.

We did not notice any other latency issues using any other website that we accessed at these same locations. We checked the speed at one location with Speed Test, and here are the results.

So this was not a perfect Internet connection, but it was better than average, scoring four out of a possible five stars. 

When we found an even faster connection, one with five stars, we found that the Adjust Stock square/transaction took 4.49 seconds to open, 2.23 seconds longer than when tested at the slower Internet location (with four stars rather than five).

This slowness of Fiori is not a function of the Internet connection; it’s a function of the Fiori server, database, etc.

The following is how long it took to simply get into the transaction screen after selecting the square on the initial screen.

Fiori Transaction | Load Speed in Seconds (Test 1) | Load Speed in Seconds (Test 2)
Adjust Stock | 1.835 | 2.68
Team Calendar | 3.635 | 3.40
Track Sales Orders | 2.793 | 4.25
Liquidity Forecast | 3.12 | 4.30
Global Cash Position | 3.85 | 4.15
Working Capital Analysis | 12.61 | N/A (square disappeared, so we could not retest)
My Spend | 4.24 | 3.65
My Accounts | 5.34 | 5.22
Order Products | 7.06 | 3.40
Availability Check | 3.60 | 2.97

*All timings were taken using an Android stopwatch app. 
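For readers who want to reproduce this kind of measurement more repeatably than with a stopwatch, below is a simple sketch using Python’s requests library. The URL is a placeholder, and it times only the initial HTTP response, not the client-side rendering of a Fiori tile, so it understates what a user actually waits for.

```python
# A sketch for timing page responses more repeatably than a stopwatch.
# The URL is a placeholder. This measures only the initial HTTP
# response; client-side rendering adds further perceived delay.
import time
import requests

URL = "https://example.com/fiori-demo"  # hypothetical endpoint

def average_response_time(url: str, runs: int = 5) -> float:
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        requests.get(url, timeout=30)
        timings.append(time.perf_counter() - start)
    return sum(timings) / len(timings)

print(f"Average response time: {average_response_time(URL):.2f} seconds")
```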

What About The Effect of HANA?

The presentation of HANA has been that it would enormously speed both analytics and transaction processing. Hasso Plattner has stated that HANA will deliver zero latency to all applications. If we take Hasso Plattner at his word, this means that the Fiori Cloud squares/transactions should have been limited only by Internet connection latency, as the web, database, and application servers should have performed an instantaneous return. The total number of seconds should have been 0.00, exclusive of the Internet time.

We tested the fastest web page we know of, which is Google, at .486 seconds. But Google only returns text (we tested it searching for a word). Still, this would seem to be the rough latency of the Internet itself: .486, or roughly 1/2 of a second. So while Google is very close to zero latency, SAP is far off the reservation.

*At 1/2 a second, as one has to hit the return button, move one’s finger to hit the timer, verify the data populating the web page, and then re-hit the timer, a very accurate measurement is not possible.

However, the Fiori Cloud undoubtedly runs on HANA, yet the application transactions take an average of 4.33 seconds to load.

Fiori Versus Our Website?

As a means of comparison, we checked the download time of one of our own web pages at Brightwork Research & Analysis. We ended up with a time of 3.09 seconds. However, our pages have images on them, which means the page is larger than the Fiori pages being rendered. The speed depends (primarily) upon how many images the web page has. We do have pages that render more slowly than 4.33 seconds (the Fiori average), but this is a function of having quite a few images. Furthermore, we have far more text, as well as formatting, in a single page than in any Fiori screen that we tested.

SAP Fiori + HANA Losing to Open Source Products?

However, why are our larger web pages loading faster than the smaller Fiori pages that are only rendering numbers and text? Are we using some super fast backend? Hardly. We like our web host, but it is no top-end setup. If we wanted to invest more money per month, we could make the site faster at quite a small cost. We could, for instance, move to a dedicated server at our current host, which would increase the hardware available to fulfill requests.

What about the database? Is an advanced top-end database the secret to our performance? Nope. Our web host uses MySQL. MySQL is owned by Oracle, but it is an open source database. MySQL is free. Does MySQL have a column data store and an “in memory architecture” like HANA? Nope. In fact, HANA does not even compete with MySQL.

SAP has stated that HANA is faster than any of the top end databases offered by Oracle, IBM or Microsoft. But they are certainly not referring to open source database projects. Open source databases like MySQL, MariaDB, and PostgreSQL are not even part of the HANA conversation.

How about the application server? We must be using a space-age application server, right?

No again.

Our host uses Apache. Once again, Apache is an open-source project and is free. Fiori uses the SAP Fiori Front-end Server, which is based on the NetWeaver Application Server ABAP.

Conclusion

We did not start out trying to illustrate that the Fiori Cloud is slow, or that the Fiori transaction search only works inconsistently. We discovered these while just taking the Fiori Cloud for a demo. We have spent a lot of time analyzing HANA and Fiori, which we have covered in articles like What is the Actual Performance of HANA? and What is in the Fiori Box. Even in a basic analysis, like this one, we find that SAP’s claims regarding Fiori and HANA do not check out. If SAP’s “in memory architecture” is so great, why are Fiori and HANA outperformed by our combination of WordPress, Apache, and MySQL — all of which are open source and free products?

The issue that we see is that no one is fact-checking SAP and publishing the results. Therefore, SAP marketing is sitting there proposing a virtually unlimited number of claims which go unchallenged. If SAP’s claims were true, then it would be annihilating an open source configuration that is common for the vast majority of websites. But it doesn’t; in fact, it loses to it.

Furthermore, the Fiori Cloud is supposed to be a showcase to demonstrate how superior Fiori is, and with SAP’s virtually unlimited resources it should be configured for speed. Oh, and the search box should work, and it should work 100% of the time and without disappearing transactions.

Overall, SAP is presenting customers with a risky product in Fiori. I cover the topic of enterprise software risk in great detail in the book Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects, and the fact that Fiori is offered by a large software vendor like SAP does not change these risks.

Fiori is much more involved than is commonly presented. SAP and their surrogates want to make the use of Fiori sound as painless as possible, but because Fiori is not technically baked and because it is used to drive customers to HANA, it is often presented under pretenses.

The Problem: SAP’s Changed Strategy on Fiori

SAP has changed its strategy on Fiori, from charging for the SAP Fiori apps to making Fiori part of a packaged deal — that is, packaged with HANA.

This is what is known as a Faustian bargain. It does not allow the SAP Fiori apps to succeed on their own merits, but instead unnecessarily ties Fiori to HANA. However, there is no technical reason for this to be the case. SAP has put a significant amount of effort into Fiori, but Fiori has a very poor future if SAP continues to limit the use of Fiori apps to customers that are running HANA.


Being Part of the Solution: Our Predictions on Fiori

The information coming out about Fiori from SAP and from the SAP consulting companies has been largely false. Fundamental inaccuracies have been provided to customers, such as overestimating Fiori’s uptake, as well as how much functionality Fiori covers in S/4HANA.

More than 4.5 years after Fiori was introduced, there are extremely few customers that use Fiori. Brightwork Research & Analysis called out the problems with Fiori repeatedly while SAP consulting firms were providing bad information. The amazing thing is that companies continue to rely on SAP consulting firms for what is true about SAP. We beat every single entity that covered Fiori. All SAP customers had to do to not waste money on Fiori was contact us. The Fiori story never made any sense to us.



References

https://blogs.sap.com/2016/12/07/sap-fiori-front-end-server-installation-guide/

https://httpd.apache.org/


How Accurate is the Hasso Plattner Institute’s Course Explanation?

Executive Summary

  • The Hasso Plattner Institute has curious course explanations.
  • We review the accuracy of these descriptions.

Introduction

The In-Memory course offered by the Hasso Plattner Institute was recommended to us as a way to understand in-memory computing, even though we wrote the article How to Understand Why In-Memory Computing is a Myth and have observed and proven that it is a deliberately misleading term.

In this article, we will analyze the accuracy of the description of the in-memory course offered by the Hasso Plattner Institute.

The Quotations

Course Description

“Week 1: The first week will give you an understanding of origins of enterprise computing. It is vital to know the historic development which lead to the emergence of current hardware as we know it now in order to understand the decisions made in the past. Many characteristics of current applications, like materialized aggregates and a reduction of detail in the stored information, have their roots in the past. While these measures were helpful in former systems, they form an obstacle which has to be overcome now in order to allow for new, dynamic applications.”

It is only the Hasso Plattner Institute that thinks this is true. An aggregate is a table of precalculated values. Hasso Plattner is very much opposed to aggregates. However, the reason he gives seems designed to make HANA appear less expensive than it is. This is because HANA is alone among databases in being priced per GB. Aggregates take up space in the database, but they serve a valuable purpose. Without aggregates, constant recalculation is required. Hasso Plattner has stated that this is highly advantageous, but is it? What if those values very rarely change, such as a table of weight conversions? Must this table constantly be recalculated on the fly, lest some rule of excellence be violated?
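To make the trade-off concrete, here is a small sketch using Python’s built-in sqlite3 (our own illustration, not anything from the HPI course) of the two approaches: a materialized aggregate that is read directly, versus recomputing the same total on the fly for every request.

```python
# The aggregate trade-off: a materialized aggregate is a precomputed,
# cheap read; on-the-fly aggregation rescans the detail rows on every
# query. Table names and data are hypothetical.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE line_items (region TEXT, amount REAL)")
db.executemany(
    "INSERT INTO line_items VALUES (?, ?)",
    [("EMEA", 10.0), ("EMEA", 5.0), ("APAC", 7.5)],
)

# Materialized aggregate: computed once, then a simple key lookup.
db.execute(
    "CREATE TABLE region_totals AS "
    "SELECT region, SUM(amount) AS total FROM line_items GROUP BY region"
)
materialized = db.execute(
    "SELECT total FROM region_totals WHERE region = 'EMEA'"
).fetchone()[0]

# On-the-fly aggregation: the detail rows are rescanned per query.
on_the_fly = db.execute(
    "SELECT SUM(amount) FROM line_items WHERE region = 'EMEA'"
).fetchone()[0]

assert materialized == on_the_fly == 15.0
```

The on-the-fly result is always current, but that currency is repurchased with CPU time on every query; the materialized table trades storage (and maintenance on update) for cheap reads. Which is preferable depends on how often the underlying values change, which is exactly the question this framing skips.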

Hasso Plattner has stated compression values that have not been borne out. This is covered in the article Articles that Exaggerate HANA’s Benefits.

Most HANA accounts can expect footprint reductions in the area of 30%. However, this is immaterial to companies as storage is extremely inexpensive, particularly disk storage. John Appleby, SAP proponent and head of an SAP consulting company, has made the statement that disks are a problem because they “take up a lot of space,” which is a claim we analyzed in the article What Was John Appleby’s Accuracy on Moving BW to HANA?

  • The long and short of it is that nothing Hasso Plattner has ever said about database aggregates has made any sense.
  • The focus on aggregates is a gimmick, designed to confuse the message receivers as to what is important in database management.
  • The entire aggregate discussion is a distraction.

“Week 2: Within the second week, the differences between a horizontal, row-oriented layout and a columnar layout are discussed. Concepts like compression and partitioning are introduced. Based on that, you will get an explanation of the internal steps performed inside the database to carry out the fundamental relational operations insert, update and delete. The week concludes with a fundamental difference of SanssouciDB to most other databases: the insert only approach. Following this concept, we circumvent several pitfalls concerning referential integrity and additionally gain the foundation for a gap-less time travel feature.”

This seems like good training.

“Week 3: The content of week 3 focuses on more advanced structures and operations within the database. The differential buffer, a means to prevent frequent resorting of the dictionaries and rewriting of the attribute vectors, is explained in further detail. Subsequently, also the merge process, which incorporates the changes from the differential buffer into the main store, is illustrated. The retrieval of information via the select statement, as well as related concepts like tuple reconstruction, early and late materialization, or a closer examination of the achieved scan speed, are also part of this week’s schedule. The description of the join operation, which is used to connect information from different tables, concludes this week.”

I have no comment on this section.

“Week 4: Week 4 is all about aggregation. Aggregations are the centerpiece of every business analytics application. Given that huge impact of aggregates on all parts of a business, it is of great importance to understand what aggregate functions are, why we remove all materialized aggregates and go for aggregation on the fly. You will further learn how to greatly reduce the costs of this on demand approach by using the aggregate cache and understand its connection to the differential buffer and the merge process. In the units concluding this week, you will see new prototype applications using the aggregate cache to deliver complex simulations in real time.”

With another week spent on aggregates, without any debate as to whether aggregate removal is a value-add, or worth the effort, this seems like a waste of time. Week 4 could be better spent on hiring a psychologist to analyze why Hasso Plattner is so obsessed with aggregate removal (or should we say reduction, as HANA does have aggregates, but does not call them aggregates).

Hasso Plattner either needs to come in and speak to our Brightwork psychiatrist to get to the bottom of his aggregate obsession, or he needs to admit that the only reason he keeps talking about aggregates is that it is a gimmick to confuse and sell to customers who lack an understanding of databases.

“Week 5: Week 5 sheds light on some more inner mechanisms of the database. What happens in emergency situations, when for example the power is turned off? Logging and recovery are vital parts to know in order to understand why an in-memory database is as secure as a traditional disk based one. Further, the benefits of replicas are explained. We conclude the week with an outlook onto the implications that arise with the tremendously increased speed at hands.”

SAP promotes its HANA database as “in memory” when it is not; it simply has more of the database’s tables loaded into memory. SAP promoting the idea that a database with more tables loaded into memory is as secure as a traditional one is a marketing point. Can anything that SAP says in this area be trusted?

HANA is far less stable than competing databases, although this is for different reasons. Are the reasons for HANA’s relative instability going to be explained in this course? Probably not.

“Week 6: Week 6 is centered on applications. The last conceptual unit is about data separation into active and passive. After that, we showcase several prototypes and sketch out potential fields to apply the technology, thereby also leaving the domain of pure enterprise solutions, by using main memory databases in weather simulations and medicine.”

Nothing objectionable there.

Conclusion

The purpose of the Hasso Plattner Institute is to educate and advocate as to why everyone should buy HANA. Its courses are designed to create devotees to SAP’s particular approach to databases. However, there is a problem. HANA is not what Hasso Plattner or SAP says it is. SAP has presented no evidence that HANA can outperform competing databases. This means that people taking this course are taking it from an institution started by a man who has made exaggerated and unsubstantiated claims about HANA.

  • HANA can’t meet the claims made for it by SAP. Will the course talk about that?
  • We receive emails from around the world which show SAP consulting companies providing false information about HANA to customers. Will the course cover that?
  • SAP has made so many false claims about databases that they have undermined rather than enhanced the understanding of databases generally.

So the question is why should the Hasso Plattner Institute be considered anything more than a propaganda apparatus for SAP?

SAP should spend more time trying to get HANA to meet its exaggerated claims, versus trying to institutionalize a completely biased “HANA university” where receiving certifications or PhDs means agreeing with Hasso Plattner. It has been over six years since HANA was introduced, and the only thing HANA does well is speed data warehousing query performance, and only over previous versions of competitors’ databases.


References

https://open.hpi.de/courses/imdb2015

How to Understand Why In Memory Computing is a Myth

Executive Summary

  • AWS covers HANA’s in-memory nature. Placing a database 100% into memory is simply not a good thing.
  • We cover the long history of database memory optimization.

Introduction to In Memory Computing

SAP has been one of the major proponents of something called “in memory computing.” Hasso Plattner has written four books on the topic. You will learn how in memory for databases works.

Hasso Plattner has been pushing the importance of in-memory computing for a number of years. Hasso Plattner’s books aren’t books in the traditional sense. They are sales material for SAP. The books we have read by Hasso Plattner uniformly contain exaggerations as to the benefits one can expect from “in memory computing.”

If you read any of Hasso’s books or interviews, he is continually jumping from one topic to the next. Programmers are familiar with the idea of a series of sequential and unending goto statements. After two days of running an infinite loop, people will eventually figure out that these goto statements are not doing anything. Programmatic goto statements run too quickly to be useful in tricking people, but it appears that evidence-free assertions can last a very long time.

There have been some inaccuracies concerning the specific topic of memory management with HANA.

In an article titled SAP’s HANA Deployment Leapfrogs Oracle, IBM and Microsoft, published in ReadWrite, the following quote reiterates this popularity.

In-Memory Databases

Many companies today offer in-memory databases for a variety of tasks. The databases are much faster than traditional technology because all data is stored in system memory where it can be accessed quickly. Standard relational databases write and read to disks, which is a much slower process.

This may rock ReadWrite’s world, but other databases use memory as well. They load the tables into memory that are needed by the application. There is a debate as to whether one should load all tables into memory, which is how SAP does it. However, the benefits of doing this are not demonstrated in any benchmark, which is covered in the article What is the Actual Performance of SAP HANA? SAP has made many statements about the benefits of HANA, but in their entirety, they are nothing more than unverifiable anecdotes about mostly anonymous customers.

In the absence of evidence, SAP’s proposal that they have the best way of dealing with memory and databases should be considered conjecture. But ReadWrite seems to treat it as if it were a natural law, as established as gravity.

What is Non In Memory Computing?

All computing occurs in memory; there is no form of computing performed without memory, because the results would be unacceptable. Computing has been using more and more memory, as anyone who purchases a computer can see for themselves. While at one time a personal computer might sell with 4 GB of memory (or RAM), 16 GB is now quite common on new computers.

The Problem with the Term In Memory Computing

SAP took a shortcut when they used the phrase “in memory” computing. The computer I am typing on has loaded the program into memory. So the term “in-memory computing” is a meaningless term.

Instead, what makes HANA different is that it requires more of the database to be loaded into memory. And HANA is the only database I cover that works that way. With this in mind, the term should have been

“more database in memory computing.”

**There is a debate as to how many tables are loaded into memory: not the large tables and not the column-oriented tables, which is the opposite of what SAP has said about HANA. The reason for this debate is that SAP has provided contradictory information on this topic.

That is accurate. SAP’s term may roll off the tongue better, but it has the unfortunate consequence of being inaccurate.

And it cannot be argued that SAP’s term is correct.

Here is a quote from AWS’s guide on SAP HANA, which is going to tend to be more accurate than anything SAP says about HANA.

“Storage Configuration for SAP HANA: SAP HANA stores and processes all or most of its data in memory, and provides protection against data loss by saving the data in persistent storage locations. To achieve optimal performance, the storage solution used for SAP HANA data and log volumes should meet SAP’s storage KPI.”

However, interestingly, the following statement by AWS on HANA’s sizing is incorrect.

“Before you begin deployment, please consult the SAP documentation listed in this section to determine memory sizing for your needs. This evaluation will help you choose Amazon EC2 instances during deployment. (Note that the links in this section require SAP support portal credentials.)”

Yet it is likely not feasible for AWS to observe that SAP’s sizing documentation will cause the customer to undersize the database, so that the customer purchases HANA licenses on false pretenses and then has to go back and purchase more HANA licenses after the decision to go with HANA has already been made.

Bullet Based Guns?

Calling HANA “in-memory computing” is the same as saying “bullet based shooting” when discussing firearms.

Let us ask the question: How would one shoot a firearm without using a bullet?

If someone were to say their gun was better than your gun (which in essence SAP does regarding its in-memory computing) and the reason they give is that they used “with bullet shooting technology,” you would be justified in asking what they are smoking. A gun is a bullet based technology.

How to Use a Term to Create Confusion Automatically

This has also led to a great deal of confusion about how memory is used by computers among those that don’t spend their days focusing on these types of issues. And this is not exclusive to SAP. Oracle now uses the term in-memory computing, as do many IT entities. Oracle references the term as can be seen in the following screenshot taken from their website.

Is 100% of the Database Placed into Memory a Good Thing?

However, the question is whether it is a good or necessary thing. And it is difficult to see how it is.

It means that with S/4HANA, even though only a small fraction of the tables are part of a query or transaction, the entire database of tables is in memory at all times.

Now, let us consider the implications of what this means for a moment. Just think for a moment how many tables SAP’s applications have, and how many are in use at any one time.

Why do tables not involved in the present activity, even tables that are very rarely accessed, need to be in memory at all times?

Oracle’s Explanation on This

In August 2019 Oracle published the Oracle for SAP Database Update document. In this document, Oracle made the following statement about HANA versus Oracle.

Oracle Database 12c comes with a Database In-Memory option, however it is not an in-memory database. Supporters of the in-memory database approach believe that a database should not be stored on disk, but (completely) in memory, and that all data should be stored in columnar format. It is easy to see that for several reasons (among them data persistency and data manipulation via OLTP applications) a pure in-memory database in this sense is not possible. Therefore, components and features not compatible with the original concept have silently been added to in-memory databases such as HANA.

Here Oracle is calling out SAP for lying. Furthermore, we agree with this. SAP’s proposal about placing all data into memory was always based upon ignorance, primarily on the part of Hasso Plattner.

If SAP had followed Oracle’s design approach, companies would not have to perform extensive code remediation — as we covered in the article SAP’s Advice on S/4HANA Code Remediation.

The Long History of Database Memory Optimization

People should be aware that IBM and Oracle and Microsoft all have specialists that focus on something called memory optimization.

Microsoft has documents on this topic at this link.

Outsystems, a PaaS development environment that connects exclusively to SQL Server, has its own page on memory optimization for the database, which you can see at this link.

The specialists who work in this area figure out how to keep the right tables in memory to meet the demands of the system, and there has been a great deal of work in this area for a long time. Outside of SAP, there is little dispute that this is the logical way to design the relationship between the database and the hardware’s memory.
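For readers who have not encountered memory optimization, the core idea is decades old: keep a bounded budget of the hottest data in memory and evict what is cold. The sketch below is a toy LRU buffer pool in Python, illustrating the principle only; commercial databases implement far more sophisticated variants of this idea.

```python
# Illustrative sketch of buffer-pool style memory optimization: keep the
# hottest pages in a fixed memory budget and evict the least recently used.
from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity_pages: int):
        self.capacity = capacity_pages
        self.pages = OrderedDict()  # page_id -> page contents

    def read(self, page_id, load_from_disk):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)   # mark as most recently used
            return self.pages[page_id]        # memory hit: no disk I/O
        page = load_from_disk(page_id)        # miss: fetch from disk
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)    # evict least recently used
        return page

pool = BufferPool(capacity_pages=3)
fake_disk = lambda pid: f"data-for-{pid}"
for pid in [1, 2, 3, 1, 4, 1]:  # page 1 stays hot; page 2 is eventually evicted
    pool.read(pid, fake_disk)
print(list(pool.pages))  # the most recently used pages remain in memory
```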

Conclusion

In summary, if a person says “in-memory computing,” the response should be “can we be more specific?” Clear thinking requires accurate terms as a logical beginning point.

SAP’s assertion that the entire database must be loaded into memory is unproven. A statement cannot be accepted when the term itself is close to meaningless and when what it actually claims (the entire database in memory) is unproven.

The Necessity of Fact Checking

We ask a question that anyone working in enterprise software should ask.

Should decisions be made based on sales information from 100% financially biased parties like consulting firms, IT analysts, and vendors to companies that do not specialize in fact-checking?

If the answer is “No,” then perhaps there should be a change to the present approach to IT decision making.

In a market where inaccurate information is commonplace, our research concludes that software project problems and failures correlate with a lack of fact-checking of the claims made by vendors and consulting firms. If you are worried that you don’t have the real story from your current sources, we offer the solution.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions or incentives of any nature are allowed.


References

https://www.oracle.com/a/ocom/docs/ora4sap-dbupdate-5093030.pdf


*https://readwrite.com/2013/01/10/saps-hana-deployment-leapfrogs-oracle-ibm-and-microsoft/

https://www.ibm.com/blogs/research/2017/10/ibm-scientists-demonstrate-memory-computing-1-million-devices-applications-ai/

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-in-memory

https://s3.amazonaws.com/quickstart-reference/sap/hana/latest/doc/SAP+HANA+Quick+Start.pdf


Which is Faster HANA or Oracle 12C?

Executive Summary

  • SAP proposes that HANA has performance advantages over all other databases, at one point claiming it runs 100,000 times faster than any other.
  • The confusion on HANA vs Oracle performance is due to the commingling of hardware speed and database design.
  • SAP’s strategy is to use HANA to lock out other database vendors (as claimed by Teradata).
  • SAP has a conflict of interest in not certifying Oracle 12c for S/4HANA.

Introduction: How HANA Compares to Oracle

HANA is a constant source of discussion on SAP projects. The claims by SAP are enormous, but how many are true? You will learn about the debate and the truth on HANA vs Oracle from an independent source, across the multiple dimensions claimed by SAP.

*Note: This article was originally written in April of 2016 and refers to some SAP articles earlier than this. However, it was updated in August of 2019 and remains applicable several years later.

The History of HANA

SAP has promoted HANA as running far faster than any alternative database, which principally means HANA vs Oracle.

This has been the logic SAP has used for why it would not port new applications, like S/4 (SAP’s new ERP system), to Oracle. (Oracle has the largest market share in supporting SAP applications, although SAP is also targeting IBM and SQL Server.)

Few independent parties have even investigated this contention.

A Recipe for Confusion on HANA vs Oracle: The Commingling of Hardware Speed and Database Design

One of the confusing aspects of HANA vs Oracle is that two different topics are commingled and communicated as if they are one topic.

  • One is the hardware issue, as SAP HANA requires moving the active database into memory.
  • A second aspect is the database design, which is the column-based database.

SAP discusses these two topics as if they are the same subject.
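The two topics can be pulled apart with a toy experiment. In the Python sketch below, both layouts sit 100% in memory, yet the columnar layout scans an aggregate faster; whatever speedup appears comes from the table design, not from being in memory. (The data and timings are illustrative, not a database benchmark.)

```python
# Sketch separating the two commingled claims: both layouts below are fully
# in memory, yet the columnar layout scans an aggregate faster. The speedup
# comes from the design, not from "in-memory" hardware.
import random, time

N = 1_000_000
# Row orientation: one record per row, all columns kept together.
rows = [{"id": i, "region": "EU", "amount": random.random()} for i in range(N)]
# Column orientation: one array per column.
amounts = [r["amount"] for r in rows]

t0 = time.perf_counter()
total_row = sum(r["amount"] for r in rows)   # touches every whole record
t1 = time.perf_counter()
total_col = sum(amounts)                     # touches only one column
t2 = time.perf_counter()

print(f"row-store scan:    {t1 - t0:.3f}s")
print(f"column-store scan: {t2 - t1:.3f}s")  # typically several times faster
```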

One could say that SAP has done a poor job of explaining the distinction, but I don’t think SAP is trying to be clear in this area; it is primarily hoping customers are confused.

  • The less clear SAP’s customers are on where the potential benefits are coming from, the more the advantage swings to SAP when it comes to negotiating.
  • The more ability it has to market SAP HANA vs Oracle as a differentiated offering.
  • The more it can position SAP HANA as worthy of a serious price premium.

Something which goes undiscussed by SAP is how SAP HANA is both a technology strategy and a targeted strategy to push Oracle out of SAP accounts.

Take the Queen: SAP’s Strategy for Locking Out Other Vendors

This is an extension of a strategy that SAP has used to great effect for decades, but with a slight twist. SAP kept other vendors’ applications out of its customer accounts by using the ERP system as a queen on the chessboard.

We refer to this as the “take the queen” software strategy.

By declaring that all other SAP applications would integrate better with the queen, SAP promised its customers lower-risk implementations. This ended up being false, and a primary reason is that SAP’s applications have been far higher risk than the applications they competed against.

The Result of the “Take the Queen” Sales Strategy

This strategy has been enormously successful, even after most vendors came very close to matching SAP’s integration with their adapters. (Only the ERP system is “fully integrated,” in that all of its modules run off the same database; all other SAP applications are connected through adapters, and many SAP-acquired applications have worse adapters than non-SAP applications.)

The account control features of ERP systems are covered in the articles ERP Systems as a Trojan Horse and ERP Became an Out of Control Octopus.

SAP does not just sell a company an ERP system. The ERP system is just the wedge that breaks into the account. Like an octopus, SAP keeps pushing into different areas that must be made “SAP standard”: first the development language used, then the other applications, and now the database layer. Oracle, the primary vendor that SAP is trying to push out of SAP accounts with HANA, functions in exactly the same way.

Using Control of the Application Layer to Push into the Database Layer

At one time, SAP faced only horizontal competition, i.e., competition at the application layer. HANA is a twist on this blockout strategy that takes it to the database layer. This is why SAP is so strongly positioning HANA vs Oracle: it is preventing Oracle (and other databases) from competing with S/4 by not certifying Oracle’s database, even though there is no technical reason Oracle, IBM, and SQL Server cannot fully support S/4HANA. Further, recall that none of the databases in the mix is open source, even though open source databases like PostgreSQL or MariaDB could easily support SAP. This is a battle between monopolistic, high-overhead, controlling software vendors.

Code Pushdown and Stored Procedures

SAP’s main argument for why S/4HANA is restricted to HANA is that SAP has pushed some S/4HANA code down into HANA and is not doing so for Oracle or other databases. The logic of the stored procedures placed into HANA is a cover for the fact that SAP is using S/4HANA’s exclusive certification to drive HANA sales, which we cover in the article SAP’s Arguments on Code Pushdown.
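For readers unfamiliar with the term, code pushdown simply means having the database perform computation instead of shipping rows to the application server. The sketch below illustrates the general principle using Python’s built-in sqlite3 module so that it is self-contained; it is not SAP’s implementation (which uses SQLScript stored procedures), and nothing about the principle is HANA-specific, which is exactly the point.

```python
# Toy illustration of "code pushdown": let the database do the computation
# instead of dragging every row to the application layer.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EU", 10.0), ("EU", 5.0), ("US", 7.5)])

# Without pushdown: fetch every row and aggregate in application code.
totals = {}
for region, amount in conn.execute("SELECT region, amount FROM sales"):
    totals[region] = totals.get(region, 0.0) + amount

# With pushdown: the database aggregates; only the results move.
pushed = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"))

assert totals == pushed
print(pushed)  # {'EU': 15.0, 'US': 7.5}
```

Any mature relational database can execute the pushed-down form, which is why pushdown does not justify an exclusive certification.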

Let us be clear on this topic.

SAP does not care if performance increases for customers. SAP introduced HANA and says what it does about HANA for one reason: to increase its sales and to push Oracle out of accounts. Consultants at SAP consulting firms who repeat HANA talking points don’t themselves know if the points are true, and normally don’t care. They make statements in order to increase billing hours.

Finally, the general database knowledge inside of SAP and inside of SAP consulting firms is quite poor. At Brightwork, we don’t pay attention to what SAP resources say about HANA because it never matches the data points we receive from SAP customers, the private benchmarking information we have access to, or the evidence and history around HANA performance that we have extensively researched.

The Real Opportunity with HANA vs Oracle?

It has been proposed to me that the real opportunity with HANA is for a company to place its ERP and all other SAP applications on HANA. Then the analytics engine can sit on the same hardware, no integration or transformation is necessary, and analytics reports run right off the application tables.

Cognitive

However, wait one second. I know that analytics is feeling mighty big in its britches after over five years of breathless conferences about the brave new world of analytics, the new Big Data, and the overall analytics obsession (which has led to far fewer benefits than originally proposed). But are we now going to optimize all of the hardware for analytics?

  • Also, what about non-SAP applications? They won’t sit on HANA, so they do have to be integrated and transformed.
  • Will SAP now make the argument that those applications are legacy because they don’t sit on the “strategic platform” for the company?

Secondly, using HANA is expensive. It is even more expensive than Oracle, which is already very expensive, and exorbitant if one follows Oracle’s “advice” and activates the higher-end functionality.

HANA is Not Expensive…Hasso?

Hasso Plattner has routinely argued that SAP HANA is not expensive.

Typically, Hasso Plattner will use the example of the compression available in column-based databases to argue that the footprint is reduced. Hasso talks about compression so frequently because HANA is priced per GB, which is strange for a database, as most commercial databases are priced per CPU. If Hasso or the SAP account rep can make the database sound smaller than it will be, SAP can get more sales.
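The arithmetic shows why the compression story matters so much under per-GB pricing. The sketch below uses entirely hypothetical numbers: the source database size, both compression ratios, the 2x working-space sizing factor, the 64 GB licensing unit, and the price per unit are all assumptions for illustration, not SAP’s actual figures.

```python
# Hypothetical arithmetic: how an optimistic compression claim understates
# cost under per-GB licensing. Every number below is an assumption.
source_db_gb = 2_000            # uncompressed source database (assumed)
claimed_compression = 5.0       # ratio a salesperson might quote (assumed)
realistic_compression = 2.5     # ratio that might be seen in practice (assumed)
working_space_factor = 2.0      # assumed sizing rule: RAM ~ 2x compressed data
unit_gb = 64                    # assumed 64 GB licensing unit
price_per_unit = 40_000         # purely hypothetical list price per unit

def licensed_units(compression):
    ram_gb = source_db_gb / compression * working_space_factor
    return -(-ram_gb // unit_gb)  # ceiling division: whole units only

for label, ratio in [("claimed", claimed_compression),
                     ("realistic", realistic_compression)]:
    units = licensed_units(ratio)
    print(f"{label}: {units:.0f} units -> ${units * price_per_unit:,.0f}")
# The gap between the two lines is the "go back and buy more" problem.
```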

However, if you talk to SAP account executives, they will tell you that SAP HANA is expensive. Furthermore, they will tell you that HANA is very hard to position for this reason: once the price tag comes back, the customer balks. We have routinely priced HANA for customers, and there is simply no getting around the fact that HANA is the most expensive database among the competing options.

We cover HANA pricing in the article How to Understand S/4HANA and HANA Pricing.

It is a simple thing for Hasso Plattner to propose in interviews how SAP HANA could, in some hypothetical sense, not be too expensive. But all other sources point to HANA being quite expensive. And you will not be buying HANA from Hasso but from an SAP account executive.

Hasso Plattner’s Constant Inaccuracy

Something important to consider is that Hasso’s historical accuracy is quite poor, and I don’t see analysts or the traditional IT media outlets recording this inaccuracy or commenting upon it.

I have performed a detailed analysis of Hasso’s statements on HANA and when Hasso says something, he is normally wrong.

Hasso considers himself a professor and a highly technical visionary. However, his accuracy puts him closer to a salesperson, as we cover in the articles Thomas Edison, Elizabeth Holmes and Hasso Plattner and How Much Should Hasso Plattner Be Cut Slack for Lying?

As we cover in a few paragraphs, SAP will never allow a fair competition against Oracle or other databases, because SAP knows it will lose. SAP is both the gunfighter in this scenario and the entity supervising the gunfight, as it is the certifying entity. SAP also has the power, under its partnership agreements, to block any database provider from publishing any benchmark.

The Fastest Database in the West (HANA vs Oracle)?

There is mounting evidence that HANA is not the speed champion SAP says it is. One of the primary performance weaknesses of HANA is very rarely addressed: as a column-based database, HANA is not the correct database design for non-analytic applications. SAP has said that it is, but from a computer science perspective, this is not true.

SAP also obscures the fact that HANA cannot be 100% column-based or column-oriented in design.

As I pointed out in the article Where HANA Gets Its Speed, for inserts, deletes, or updates (which is what a transaction processing system does all the time), a column-based table is slower than a row-based table.

Row orientation is the traditional design of the relational database.
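A toy sketch of why this is so: in a row store, an insert is one append to one structure, while in a column store the same insert must touch every column’s array, and a real column store must additionally maintain compression and sort order on top of that. (This is a conceptual illustration, not HANA’s internals.)

```python
# Sketch of why inserts favor row orientation: a row-store append touches
# one location, while a column store touches every column's array per insert.
row_store = []                                      # list of complete records
col_store = {"id": [], "region": [], "amount": []}  # one array per column

def insert_row_store(record):
    row_store.append(record)              # one append per insert

def insert_col_store(record):
    for column, value in record.items():  # one write per column per insert
        col_store[column].append(value)

insert_row_store({"id": 1, "region": "EU", "amount": 10.0})
insert_col_store({"id": 1, "region": "EU", "amount": 10.0})
print(len(row_store), {k: len(v) for k, v in col_store.items()})
```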

The Great Database Speed Debate

John Soat is a writer who works for Oracle and, like Hasso, is not an independent source on this topic. However, his article in Forbes makes some good points on HANA vs Oracle. One that stuck out was SAP’s demurral on releasing HANA performance benchmarks for transaction processing.

“…SAP has not published a single benchmark result for any of its transaction processing applications running on HANA. Why Not?”

And I would say it is quite obvious why not: for transaction processing systems like ERP systems, these benchmarks won’t be particularly fast.

Vinnie Mirchandani, who was an independent source on the SAP HANA/Oracle 12c debate at the time, reinforces John Soat’s point on benchmarks in his book SAP Nation 2.0.

“It has not helped matters that SAP has been opaque about HANA benchmarks. For two decades, its SD benchmark, which measures SAP customer order lines processed in its Sales and Distribution (SD) module, has been the gold standard for measuring new hardware and software infrastructure. It has not released those metrics using a HANA database.”

Misdirection from John Appleby

Is it possible that SAP performed the benchmarking, but the results were poor, so it simply stopped reporting them?

John Appleby, the Global Head of HANA at Bluefin Consulting, a well-known HANA advocate, and someone who has provided an enormous amount of false information about HANA, had this to say about the topic (also documented in SAP Nation 2.0).

“The answer for the SAP Business Suite is simple right now: you have to scale-up. This advice might change in the future, but even an 8-socket 6TB system will fit 95% of SAP customers, and the biggest Business Suite installations in the world can fit in a SGI 32-socket with 24TB — and that is before considering Simple Finance or Data Aging, both of which decrease memory footprint dramatically.”

I can’t tell if this is in direct response to the lack of transparency on transaction benchmarks, but if it is, it is an inadequate response. In fact, it looks to me as if John Appleby is changing the topic in his answer.

We tracked John Appleby’s accuracy in the article The Appleby Accuracy Checker: A Study into John Appleby’s Accuracy on HANA. Back in 2013, Appleby was aggressively and falsely promoting HANA in order to get his company ready to sell to Mindtree, as we covered in the article Appleby’s False HANA Statements and the Mindtree Acquisitions.

The Appleby (Formerly Known as the Hasso) Pivot

The question related to the performance of a transaction processing system on HANA vs Oracle, and John Appleby quickly moves to a discussion of how companies should simply buy more hardware and not worry about it. What is John Appleby talking about here?

He states “for the SAP Business Suite” and then goes on to declare the answer for this suite.

Well, the only part of the SAP Business Suite that was ready for HANA (at the time of this quotation) was S/4 Finance, and there was much debate as to how implementable S/4 Finance was at the time.

Secondly, the rest of the suite, later called S/4HANA Enterprise Management, was, as I stated, not yet available for purchase. John Appleby is phrasing in the present tense what should be the future tense.

Is it, in fact, critical to scale up for something that does not yet exist?

Is Oracle Monkeying with the S/4 Certification?

John Soat also points out that Oracle performed very well on one particular benchmark, but SAP will not certify the result, as SAP states that Oracle manipulated the test.

Now, I was not present at the benchmark, so I am in no position to say what Oracle did or did not do. Oracle has its story, and SAP has theirs. John Soat gives a good explanation of each side’s position in his article.

Also, Stephan Kohler, an Oracle database performance consultant, had the following to say on this topic.

“SAP already answered why they do not accept the benchmark results (you also find this in the mentioned article – Copy & Paste: “Oracle manipulated its BW-EML benchmark by using a custom setup involving database functions known as triggers and materialized views that can lead to hard-to-spot data inconsistencies and aren’t supported in real-world production environments.”). The reason was the use of triggers and materialized views, which are supposed to be not supported. However if SAP would have checked their own SAPnotes – you can see that it is clearly supported and also used in SAP ECO Space. SAPnote #105047: “Materialized Views – Use permitted.

For more information, see SAP Note 741478.” SAPnote #105047: “Trigger – Use permitted as part of the SAP standard system (for example, BW trigger /BI0/05* in accordance with SAP Note 449891, incremental conversion ICNV). Use of Logon Trigger permitted in accordance with SAP Note 712777. Implicit use as part of Oracle features permitted (for example, online reorganisation, materialized views, GridControl/Enterprise Manager). Use in connection with materialized views in an SAP BW system is permitted as long as no flat cubes are available as an alternative. There is no SAP Integration and SAP does not offer support for this.” Flat cubes are available in Beta since Q1/2016 – so nothing relevant to the Oracle benchmark from 2015.”

And this leads to the next topic, and it is a big one.

SAP’s Conflict of Interest in Not Certifying Oracle 12c

The issue is that SAP now competes with all of the database vendors, which places SAP in a conflict of interest when certifying databases, a conflict it did not have before its investment in HANA. What was once a straightforward process is now rife with political intrigue, where one has to parse the statements of SAP and Oracle to see who is telling the truth.

How can SAP certify Oracle, that is, give Oracle a fair hearing, if by certifying Oracle, SAP cuts into its own market share for HANA?

The Mode Switching of Oracle 12c, a New Wrinkle in HANA vs Oracle

Oracle 12c can switch between “modes,” maintaining data in memory as either rows or columns. That is a serious advantage, and IBM BLU has a similar ability. That said, there is not much evidence of a major need for a database that does both OLTP and OLAP, and it may not be feasible to design one that does each type of processing equally well. In fact, the trend in databases is the opposite, with specialized designs such as NoSQL and indexing databases flourishing.
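In practical terms, the dual-format choice is made per table. The sketch below shows roughly how this looks, assuming the python-oracledb driver, placeholder connection details, and an instance with the In-Memory option licensed and configured; verify the exact clause options against Oracle’s documentation for your release.

```python
# Sketch: per-table control of Oracle's in-memory column store, which
# coexists with the normal row format rather than replacing it.
import oracledb

# Hypothetical connection details.
conn = oracledb.connect(user="app_owner", password="...", dsn="dbhost/pdb1")
cur = conn.cursor()

# Populate a columnar in-memory copy of an analytic fact table...
cur.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")
# ...while an OLTP table keeps only the default row format.
cur.execute("ALTER TABLE order_lines NO INMEMORY")

cur.close()
conn.close()
```

The design point is that the row format continues to serve transactions while the columnar copy serves analytics, which is exactly the flexibility discussed in the rest of this section.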

However, getting back to the HANA versus Oracle 12c discussion:

  • Oracle’s flexible design should beat SAP HANA in performance for all but pure analytic applications.
  • The logic presented by SAP that the entire database should be columnar never made sense, because few tables are used in analytics. Does it make sense to use analytics-optimized (columnar) tables for every single application table?
  • There is a debate as to how mature Oracle’s in-memory option is. SAP, for its part, lists 7,000 SAP HANA customers, but most of these customers are known either not to use the software at all (i.e., it is shelfware) or to run test systems, not live systems. As a consequence, SAP HANA skills are still quite hard to find.

Furthermore, Oracle’s in-memory option adds to the price of Oracle 12c, both in license and in maintenance.

SAP’s Rigged Benchmarks

SAP has had ample opportunity to prove its claims of superiority around HANA but has never done so. SAP has never allowed a comparative benchmark in transaction processing, as covered in the article The Hidden Issue with the SD HANA Benchmark. And because HANA performed so poorly in the transaction processing benchmark, SAP created a new benchmark designed around analytics to cover up this fact, which is covered in the article The Four Hidden Issues with SAP’s BW-EML Benchmark. This overall topic has come to a head, as numerous companies have complained about HANA’s transaction processing performance, which raises the question of how well HANA can support an ERP system, as covered in the article HANA as a Mismatch for S/4HANA and ERP.

Conclusion

Oracle 12c can switch between row-based and column-based representations, even for the same table, which is a new capability.

As far as I can tell, just about all of SAP’s marketing documentation on HANA preceded this development. If I were heading up HANA marketing at SAP, I would not want to address Oracle 12c, because I would not have a good answer for it. Oracle 12c undermines the enormous effort SAP has expended to get customers to think that HANA’s technology is unique to SAP, and therefore better.

The new capabilities of Oracle 12c undermine contentions that SAP has proposed over the years. SAP has not addressed Oracle 12c, and most of the material created on SAP HANA was developed before Oracle 12c was released. IBM and MS SQL Server have similar column store capabilities, developed not because there was a big reason to, but because SAP, through its enormous marketing, placed the focus on this type of database functionality.

First, SAP no longer has a good reason (or, if it put its customers’ interests first, a good argument) for porting new SAP applications like S/4 only to SAP HANA.

Dictating the Database to the Customers?

The previous argument, that only SAP could provide a fast enough database, is most likely untrue. It was always a poor argument because, regardless of the reason, no application vendor should be dictating the data layer to its customers. Yet that is repeatedly what SAP has said it wants to do.

“SAP still believes in running the new system in the cloud and on premise, but it will be only SAP S/4HANA with which we can achieve this in one software version going forward. This will reduce the TCO and speed up the so much needed step into the future. Every single application area like data entry, standard reporting, analytics and predictions, the digital boardroom or the multi-channel customer interaction, to name a few, becomes a world class component  in its own right. This alone is a reason to consider an earlier migration to SAP S/4HANA.” – Hasso Plattner

SAP’s Interest in Sending the IT Industry Back in Time

At the Computer History Museum in Mountain View, there is an exhibit explaining that at one time software was proprietary to the hardware vendor. At that point, software was not an “industry”: a program released by IBM could only run on IBM hardware. Software was not charged for separately, so there was no competition at the software level.

The software industry as we know it today only came into its own after software was decoupled from hardware, and this was only done under the threat of the US enforcing antitrust legislation against hardware vendors and the proprietary software model. HANA, a coupling between the application and database layers controlled by a single vendor, takes us back in time.

This creates what amounts to a proprietary application/database combination.

  • SAP’s argument that only column-based databases have a future is also untrue.
  • Finally, unsurprisingly to those who know the database vendors and their history, the idea that only SAP can develop a high-performance database that meets the speed capabilities of HANA is untrue.

SAP’s argument has not been that it is simply the equal of every other database vendor, but that with HANA it is superior to every other database vendor. That, of course, includes HANA vs Oracle or anyone else for that matter.

SAP certainly can and will keep selling HANA against Oracle. But the exclusivity argument that SAP has been proposing is no longer a credible position.

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

The SAP video reviewed here is filled with extensive falsehoods. We address them in the sequence they are stated in the video.

SAP Video Accuracy Measurement

(Each entry gives SAP’s statement, the accuracy rating from our fact check, the Brightwork fact check, and the related analysis article.)

  • “HANA is a platform.” Accuracy: 0%. Fact check: HANA is not a platform; it is a database. (Analysis: How to Deflect You Were Wrong About HANA)
  • “HANA runs more ‘in-memory’ than other databases.” Accuracy: 10%. Fact check: HANA uses a lot of memory, but the entire database is not loaded into memory. (Analysis: How to Understand the In-Memory Myth)
  • “S/4HANA simplifies the data model.” Accuracy: 0%. Fact check: HANA does not simplify the data model from ECC; there are significant questions as to the benefit of the S/4HANA data model over ECC. (Analysis: Does HANA Have a Simplified Data Model?)
  • “Databases that are not HANA are legacy.” Accuracy: 0%. Fact check: There is zero basis for SAP to call all databases that are not HANA legacy. (Analysis: SAP Calling All Non-HANA DBs Legacy)
  • “Aggregates should be removed and replaced with real-time recalculation.” Accuracy: 0%. Fact check: Aggregates are very valuable, all RDBMSs (including HANA) have them, and they should not be removed or minimized in importance. (Analysis: Is Hasso Plattner Correct on Database Aggregates?)
  • “Reducing the number of tables reduces database complexity.” Accuracy: 0%. Fact check: Reducing the number of tables does not necessarily decrease the complexity of a database; the fewer tables in HANA are more complicated than the larger number of tables pre-HANA. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)
  • “HANA is 100% columnar tables.” Accuracy: 0%. Fact check: HANA does not run entirely with columnar tables; it has many row-oriented tables, as much as 1/3 of the database. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)
  • “S/4HANA eliminates reconciliation.” Accuracy: 0%. Fact check: S/4HANA does not eliminate reconciliation or reduce the time to perform it to any significant degree. (Analysis: Does HANA Have a Simplified Data Model and Faster Reconciliation?)
  • “HANA outperforms all other databases.” Accuracy: 0%. Fact check: Our research shows that not only can competing databases do more than HANA, they are also a better fit for ERP systems. (Analysis: How to Understand the Mismatch Between HANA and S/4HANA and ECC)

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts that provides realistic information around HANA, which reduces the dependence on biased entities like SAP and the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict them. This is how companies end up paying for a database that is exorbitantly priced, exorbitantly expensive to implement, and exorbitantly expensive to maintain. When SAP or their consulting firm is asked to explain these discrepancies, we have found that they lie further to the customer and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

The major problem for companies that bought HANA is that they made the investment without consulting any entity independent of SAP. And SAP does not pay Gartner and Forrester the amounts of money that it does so that these entities can be independent, as we covered in the article How Accurate Was The Forrester HANA TCO Study?

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

The Necessity of Fact Checking

We ask a question that anyone working in enterprise software should ask.

Should decisions be made based on sales information from 100% financially biased parties like consulting firms, IT analysts, and vendors to companies that do not specialize in fact-checking?

If the answer is “No,” then perhaps there should be a change to the present approach to IT decision making.

In a market where inaccurate information is commonplace, our research concludes that software project problems and failures correlate with a lack of fact-checking of the claims made by vendors and consulting firms. If you are worried that you don’t have the real story from your current sources, we offer the solution.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions or incentives of any nature are allowed.


References

https://www.forbes.com/sites/oracle/2015/12/18/oracle-challenges-sap-on-in-memory-database-claims/

*https://www.amazon.com/SAP-Nation-2-0-empire-disarray-ebook/dp/B013F5BKJQ

https://en.wikipedia.org/wiki/Proprietary_software