- HANA's high CPU consumption is a consequence of its design.
- SAP has offered an explanation for this CPU consumption; we review whether that explanation holds up.
Introduction to HANA CPU Consumption
At Brightwork, we have covered the real issues with HANA that are censored by SAP, SAP consulting firms, and IT media. Because this information is censored, many SAP customers have been hit with surprises when implementing HANA.
See our references for this article and related articles at this link.
In this article, we will address HANA’s CPU consumption or overconsumption.
HANA CPU Overconsumption
In addition to memory overconsumption, a second major issue with HANA is CPU overconsumption.
HANA's design loads data into memory even when that data is not planned for use in the immediate future. SAP has softened its somewhat unrealistic position of "loading everything into memory," but HANA still loads quite a lot.
The CPU overconsumption follows from how a server reacts when so much data is loaded into memory: the CPU spikes. This is also why CPU monitoring, along with memory monitoring, is considered so necessary for effectively using HANA. By contrast, this is generally not an issue with competing databases from Oracle, IBM, or others.
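To illustrate the kind of host-level CPU watching this makes necessary, here is a minimal, database-agnostic sketch in Python. The threshold and sampling values are our own assumptions for illustration, not SAP recommendations, and nothing here is HANA-specific:

```python
import os
import time

def watch_cpu_load(threshold: float, samples: int = 3, interval: float = 1.0):
    """Collect 1-minute load-average readings that exceed a threshold.

    A minimal sketch of the host-level CPU monitoring an administrator
    might run alongside a database; os.getloadavg() is Unix-only.
    The threshold is an illustrative assumption, not an SAP figure.
    """
    alerts = []
    for _ in range(samples):
        load_1min, _, _ = os.getloadavg()
        if load_1min > threshold:
            alerts.append(load_1min)
        time.sleep(interval)
    return alerts

# Example: sample once; with an unreachable threshold, no alerts fire.
print(watch_cpu_load(float("inf"), samples=1, interval=0.0))
```

In practice such a check would feed an alerting system rather than print, but the point stands: the burden of watching for spikes falls on the customer.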
SAP’s Explanation for Excessive CPU Utilization by HANA
SAP offers a peculiar explanation for CPU utilization.
“Note that a proper CPU utilization is actually desired behavior for SAP HANA, so this should be nothing to worry about unless the CPU becomes the bottleneck. SAP HANA is optimized to consume all memory and CPU available. More concretely, the software will parallelize queries as much as possible to provide optimal performance. So if the CPU usage is near 100% for query execution, it does not always mean there is an issue. It also does not automatically indicate a performance issue.”
Why Does SAP’s Statement Not Make Sense?
This entire statement is unusual, and it does not explain several issues.
Why Does HANA Time Out if the CPU Is Actually Being Optimized?
Timing out is an example of HANA being unable to manage and throttle its own resource usage. A timeout requires manual intervention to reset the server. If a human must intervene, or the database ceases to function, then resources are not being optimized. We have received frequent reports from companies whose HANA server needs to be reset multiple times a week.
Resource throttling is supposed to be a standard capability of any database one purchases. Even free open-source databases do not have the problem that HANA has.
If an application or database continually consumes all available resources, the likelihood of timeouts increases. SAP's statement also presents the false construct that HANA has optimized its CPU usage, which is not correct. It seeks to portray what is a bug in HANA as a design feature.
Why HANA Leaves a High Percentage of Hardware Unaddressed
Benchmarking investigations into HANA's utilization of hardware indicate clearly that HANA does not address all of the hardware available to it. While we have not had both related items evaluated on the same benchmark, it seems very probable that HANA is timing out while leaving much of the hardware unaddressed. Customers who purchase the hardware specification recommended by SAP often do not know that HANA leaves so much of it unused.
The Overall Misdirection of SAP’s Explanation
This paragraph attempts to explain away HANA's consumption of hardware resources, which should be a concern to administrators. The statement is also inconsistent with other SAP explanations of HANA's use of memory, as can be seen from the SAP graphic below.
Notice the pool of free memory.
Once again, notice the free memory in the graphic.
This is contradicted by the following statement as well.
“As mentioned, SAP HANA pre-allocates and manages its own memory pool, used for storing in-memory tables, for thread stacks, and for temporary results and other system data structures. When more memory is required for table growth or temporary computations, the SAP HANA memory manager obtains it from the pool. When the pool cannot satisfy the request, the memory manager will increase the pool size by requesting more memory from the operating system, up to a predefined Allocation Limit. By default, the allocation limit is set to 90% of the first 64 GB of physical memory on the host plus 97% of each further GB. You can see the allocation limit on the Overview tab of the Administration perspective of the SAP HANA studio, or view it with SQL. This can be reviewed by the following SQL:”
select HOST, round(ALLOCATION_LIMIT/(1024*1024*1024), 2)
as "Allocation Limit GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION;
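The 90%/97% rule quoted above can be made concrete with a short sketch. The function name and the 512 GB example host are our own illustration; the percentages come directly from the SAP statement above:

```python
def hana_allocation_limit_gb(physical_memory_gb: float) -> float:
    """Approximate HANA's default global allocation limit.

    Per the SAP documentation quoted above: 90% of the first 64 GB
    of physical memory on the host, plus 97% of each further GB.
    (Illustrative helper; not part of any SAP tool or API.)
    """
    if physical_memory_gb <= 64:
        return 0.90 * physical_memory_gb
    return 0.90 * 64 + 0.97 * (physical_memory_gb - 64)

# A hypothetical 512 GB host: 0.90*64 + 0.97*448 = 57.6 + 434.56
print(round(hana_allocation_limit_gb(512), 2))  # 492.16
```

In other words, by default HANA will claim roughly 96% of the memory on a large host, which is precisely the behavior administrators are told "should be nothing to worry about."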
Introduction to HANA Development and Test Environments
Hasso Plattner has routinely discussed how HANA simplifies environments. However, the hardware complexity imposed by HANA overwhelms many IT departments. To get the same performance as other databases, HANA requires not only far more hardware but also far more hardware maintenance.
Development and testing environments are always critical, but with HANA the issue is of particular importance. The following quotation explains the frequency of change within HANA.
“Taking into account, that SAP HANA is a central infrastructure component, which faced dozens of Support Packages, Patches & Builds within the last two years, and is expected to receive more updates with the same frequency on a mid-term perspective, there is a strong need to ensure testing of SAP HANA after each change.”
The Complications of So Many Clients and Servers
HANA uses a combination of clients and servers that must work in unison for HANA to function. This introduces complications, namely a higher testing overhead whenever any change is made to any one component. SAP proposes a test-case process to account for these servers.
“A key challenge with SAP HANA is the fact, which the SAP HANA Server, user clients and application servers are a complex construct of different engines that work in concert. Therefore, SAP HANA Server, corresponding user clients and applications servers need to be tested after each technical upgrade of any of those entities. To do so in an efficient manner, we propose the steps outlined in the subsequent chapters.”
HANA development environments can be acquired through AWS at reasonable rates. This has the added advantage that, because of AWS's elastic offering, volume testing can be performed on AWS hardware without having to purchase the hardware. Moreover, the HANA AWS instance can be downscaled as soon as testing is complete. The HANA One offering allows HANA licenses to be used on demand, a significant value-add given that sizing HANA has proven extremely tricky.
HANA's design problems have continued for years because the database originally had its design goals and parameters set by Hasso Plattner, who was not qualified to design a database. And since HANA's beginnings, people lower on the totem pole have been required to keep trying to support Hasso's original "vision."