- HANA has performance problems when supporting transaction processing applications like S/4HANA.
- We cover one possible reason for this.
SAP proposed that HANA is optimal for processing both OLAP and OLTP, and SAP claimed it had a specific design which allowed for this to happen.
This is explained by the following quotation from Rolf Paulsen.
HANA’s Design with Respect to Column-Oriented Tables
Consider this common quotation:
“HANA combines OLTP and OLAP capabilities, with row store and columnar store in the same box” –
..is misleading at least because it suggests row store belongs to OLTP and columnar store to OLAP in HANA. Unlike products of other vendors, HANA does not provide “hybrid” tables that combine row store and columnar store in parallel. A HANA table is either columnar or row stored but row store tables are the exception for very volatile and fast-changing data. E.g. the size of all row store tables together has a hard limit of 1,945 GB per instance. The sophisticated issue are the transactional operations on the many columnar tables. The price for having data in the columnar form convenient for fast analysis has to be paid on data manipulation. No “row” of a columnar table gets updated, there are only inserts and complex “delta merges” on a short-lived row representation of the columnar data.“
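Rolf’s point about inserts and delta merges can be illustrated with a minimal sketch. This is a toy model only, not HANA’s actual implementation: a read-optimized columnar “main” store paired with a short-lived row-format “delta” buffer that absorbs all writes and is periodically folded into the main store.

```python
# Toy model of a column store with a write-optimized delta buffer.
# Conceptual sketch only; not HANA's actual implementation.

class ToyColumnTable:
    def __init__(self, columns):
        self.columns = columns                  # column names
        self.main = {c: [] for c in columns}    # read-optimized columnar store
        self.delta = []                         # short-lived row-format buffer

    def insert(self, row):
        # Writes never touch the columnar main store directly;
        # they land in the row-format delta buffer.
        self.delta.append(row)

    def update(self, new_row):
        # An "update" is really an insert of a new version of the row;
        # no row in the columnar main store is modified in place.
        self.insert(new_row)

    def delta_merge(self):
        # The expensive step: fold the buffered rows into the
        # columnar main store, column by column.
        for row in self.delta:
            for c in self.columns:
                self.main[c].append(row[c])
        self.delta = []

    def scan(self, col):
        # Analytic reads must see the main store plus the unmerged delta.
        return self.main[col] + [r[col] for r in self.delta]


# Usage: two inserts, a merge, then another insert left in the delta.
t = ToyColumnTable(["id", "qty"])
t.insert({"id": 1, "qty": 5})
t.insert({"id": 2, "qty": 7})
t.delta_merge()
t.insert({"id": 3, "qty": 2})
total_qty = sum(t.scan("qty"))   # reads main and delta together
```

The sketch shows why a write-heavy transactional workload is awkward here: every change funnels through the delta buffer, and the system must keep paying for delta merges to retain its analytic read speed.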
The original design of SAP was a nearly 100% column-oriented design. Obviously, with this design, HANA ran into major problems in transaction processing. Later versions of HANA decreased the percentage of column-oriented tables to roughly 1/3 of the total tables in the database for S/4HANA.
This is evidence that SAP did not know what it was doing when it designed HANA.
Oracle’s design is different, as is covered in part by this quote from Oracle:
“The IM column store encodes data in a columnar format: each column is a separate structure. The columns are stored contiguously, which optimizes them for analytic queries. The database buffer cache can modify objects that are also populated in the IM column store. However, the buffer cache stores data in the traditional row format. Data blocks store the rows contiguously, optimizing them for transactions.”
This part of Rolf’s quote is compelling:
“The sophisticated issue is the transactional operations on the many columnar tables.
The price for having data in the columnar form convenient for fast analysis has to be paid on data manipulation.
No “row” of a columnar table gets updated, there are only inserts and complex “delta merges” on a short-lived row representation of the columnar data.”
The CPU Consumption Issues of HANA
If we can restate this:
- Rolf proposes that the columnar tables are updated by a transaction (let us say HANA is supporting a transaction processing system rather than BW).
- Therefore, these updates are problematic for the transactions being processed against the database.
Results from the field indicate HANA still has problems both with transaction processing and with running CPU-intensive processes like MRP/DRP, as we covered in the article HANA as a Mismatch for S/4HANA and ERP.
The CPU issue is clearly caused by the overload of data into memory, which drives major CPU consumption and often requires a HANA reboot, as we covered in the article How to Understand HANA’s High CPU Consumption. However, the continued transaction processing performance issues could be in part related to the exact problem raised in Rolf’s quote above.
Removing Duplicate Data by Porting an Analytics Database Design to a Non-Analytics Application?
John Appleby and others repeatedly proposed that HANA would eliminate duplicate data (that is, the data that is redundant between the ERP system and the data warehouse). We analyzed this claim in the article How Accurate Was John Appleby on HANA Replacing BW? But the “solution” meant turning the ERP system database into something like the database for a data warehouse (if one does not want to use star schemas). Row-oriented databases supported data warehouses quite well but used an intermediary structure called the star schema. We use an application that creates star schemas in the background, without the user seeing anything, and it easily beats the performance of SAP BW, which requires the configuration of star schemas when BW sits on top of a row-oriented database. That is, creating star schemas does not need to be as onerous as it is in SAP’s BW. The schemas can be created in the background from just loading a flat file which contains the relationships (as one example).
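The idea of deriving a star schema from a flat file can be sketched as follows. The field names and records here are hypothetical, and this illustrates only the general technique, not any specific vendor’s implementation: each descriptive attribute becomes a dimension table with surrogate keys, and the fact table keeps the keys plus the measures.

```python
# Sketch: deriving a simple star schema (one fact table, two dimension
# tables) from flat records. Field names are hypothetical; this is an
# illustration of the technique, not any vendor's implementation.

flat = [
    {"date": "2019-01-05", "product": "Widget", "region": "EMEA", "sales": 100},
    {"date": "2019-01-05", "product": "Gadget", "region": "APAC", "sales": 250},
    {"date": "2019-01-06", "product": "Widget", "region": "EMEA", "sales": 175},
]

def build_dimension(records, attr):
    """Assign a surrogate key to each distinct value of one attribute."""
    values = sorted({r[attr] for r in records})
    return {v: key for key, v in enumerate(values, start=1)}

product_dim = build_dimension(flat, "product")   # {"Gadget": 1, "Widget": 2}
region_dim = build_dimension(flat, "region")     # {"APAC": 1, "EMEA": 2}

# The fact table keeps only surrogate keys plus the measure.
fact = [
    {"product_key": product_dim[r["product"]],
     "region_key": region_dim[r["region"]],
     "date": r["date"],
     "sales": r["sales"]}
    for r in flat
]
```

Because every step is mechanical, nothing stops a tool from doing this silently at load time, which is the point made above about star schema creation not needing to be onerous.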
Hiding HANA’s Transaction Processing Performance
John Appleby stated that SAP would no longer use the SD benchmark, as we covered in the article The Hidden Issue with the SD HANA Benchmark, because it no longer fit with how companies used SD (that is, the sales and distribution module of ECC, a transaction processing system). And in fact, even in 2019, there is still no SD benchmark for HANA published by SAP. Instead, SAP created an analytics benchmark called the BW-EML, which we covered in The Four Hidden Issues with SAP’s BW-EML Benchmark. All of this is strongly indicative that SAP has run the SD benchmark internally but chose not to publish it, because the performance would match what has been reported to us from the field: that HANA simply performs poorly for transaction processing.
As for adding columnar capabilities to a row-oriented database, all of the major database vendors were able to do the same thing. Sybase IQ was around for at least 15 years before HANA and had a similar design, but it was just never very popular.
Oracle, IBM, and Microsoft (with SQL Server) all added column stores to their databases after SAP did (and all of these vendors had superior knowledge of memory optimization versus SAP).
But it is not a question of whether one can; it is a question of “why would you?” SAP essentially proposed that an analytics database is a fantastic database for a transaction processing system, and furthermore that they were the only ones to figure this out. This was incorrect on multiple dimensions.
Financial Bias Disclosure
Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.