Is SAP’s Warm Data Tiering for HANA New?

Executive Summary

  • SAP is telling customers that data tiering in HANA is a new innovation in database design.
  • How accurate is SAP’s claim that its data tiering is new?

Introduction

SAP has announced a new feature for SAP HANA. It is an amazing technological breakthrough where, apparently, the data is processed in the computer’s memory but then written to the disk for something called “persistence.”

We cover this amazing development in this article.

Our Analysis

Let us begin by reviewing the quotes from SAP.

“SAP HANA native storage extension is a general-purpose, built-in warm data store in SAP HANA that lets you manage less-frequently accessed data without fully loading it into memory. It integrates disk-based or flash-drive based database technology with the SAP HANA in-memory database for an improved price-performance ratio.”

This is amazing.

Translated for the layman, it means that SAP’s super-advanced technology enables companies to use both memory and disks, and even flash drives, in something called a “server.”

We have a top-secret image shown below. This photo of a prototype of the device used by HANA was smuggled out of SAP at great risk to our agents.

Apparently, the server is a piece of hardware that has the memory, the disk drives, and the flash drives inside a metal box.

But the story gets even better. Apparently, some of the data does not need to be accessed all of the time. This is what SAP calls “warm data.” Let us review SAP’s explanation of this new data classification.

“Between hot and cold is “warm” data — which is less frequently accessed than hot data and has relaxed performance constraints. Warm data need not reside continuously in SAP HANA memory, but is still managed as a unified part of the SAP HANA database — transactionally consistent with hot data, and participating in SAP HANA backup and system replication operations, and is stored in lower cost disk-backed columnar stores within SAP HANA. Warm data is primarily used to store mostly read-only data that need not be accessed frequently.”
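For readers curious about what this “data temperature” classification actually amounts to, the sketch below shows the whole idea in a few lines of Python. The thresholds and names are our own illustrative assumptions, not anything taken from SAP’s implementation; the point is simply that sorting data into hot, warm, and cold by access recency is not a new database concept.

```python
from datetime import datetime, timedelta
from typing import Optional

# Illustrative thresholds -- assumptions for this sketch only,
# not values taken from any SAP documentation.
HOT_WINDOW = timedelta(days=30)    # accessed within the last month
WARM_WINDOW = timedelta(days=365)  # accessed within the last year

def classify(last_accessed: datetime, now: Optional[datetime] = None) -> str:
    """Classify a record as hot, warm, or cold by how recently it was read."""
    now = now or datetime.utcnow()
    age = now - last_accessed
    if age <= HOT_WINDOW:
        return "hot"    # keep fully loaded in memory
    if age <= WARM_WINDOW:
        return "warm"   # keep on disk, page into memory on demand
    return "cold"       # move to cheaper external storage

# Example: a partition last read eight months ago counts as "warm".
print(classify(datetime.utcnow() - timedelta(days=240)))  # -> warm
```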

While this warm data classification is certainly impressive, it is different from SAP’s earlier statements about HANA, in which HANA was to be better than all other databases because all of the data was to be loaded into memory. And through shrinking the database size, you could…

“Run an ERP system off of a smartphone.”

In an article published in 2013, John Appleby stated…

“HANA was the only in-memory database with full HA.”

The SAP HANA Native Storage Extension?

SAP calls this the SAP HANA Native Storage Extension. However, it is actually nothing new. It is merely a restatement of what competing databases have been doing for decades.

It’s called memory optimization.
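To show what “memory optimization” has looked like in disk-based databases for decades, here is a minimal sketch of a bounded buffer cache in Python. It is a toy illustration under our own assumptions, not HANA’s or any other vendor’s actual buffer manager: frequently used pages stay in memory, everything else is read from disk on demand, and the least recently used page is evicted when the cache is full.

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU buffer cache: keeps a bounded number of pages in memory
    and reads everything else from disk on demand."""

    def __init__(self, capacity: int, read_page_from_disk):
        self.capacity = capacity
        self.read_page_from_disk = read_page_from_disk  # callable: page_id -> bytes
        self.pages = OrderedDict()  # page_id -> page bytes, kept in LRU order

    def get(self, page_id: int) -> bytes:
        if page_id in self.pages:                  # cache hit: serve from memory
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        page = self.read_page_from_disk(page_id)   # cache miss: go to disk
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:        # evict least recently used page
            self.pages.popitem(last=False)
        return page

# Usage: a fake "disk" of 1,000 pages and room for only 10 of them in memory.
disk = {i: f"page-{i}".encode() for i in range(1000)}
cache = BufferCache(capacity=10, read_page_from_disk=lambda pid: disk[pid])
print(cache.get(42))  # read from "disk", now cached in memory
print(cache.get(42))  # served from memory
```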

Let us look at SAP’s graphic where they describe their contraption.

We congratulate SAP on this observation.

What is described above is a standard memory-optimized database: the same design used by other database vendors decades ago and still in use to this day. Without being able to load data into memory, it would be impossible to perform a calculation, or indeed any processing at all. This can be tested by taking any computer or server, removing the memory modules, and then rebooting it. But according to SAP, this design is perfect for replacing “legacy databases,” as the following quotation attests.

The Perfect Architecture to Replace Legacy DBMS Technologies

“SAP has a general sizing guidance for HOT vs. WARM data ratio and Buffer Cache size. However, it is ultimately the application performance SLA that drives the decisions on HOT vs. WARM data ratio and Native Storage Extension Buffer Cache size.

For the hardware to implement Native Storage Extension, simply add necessary disk capacity to accommodate WARM data and memory to accommodate Buffer Cache requirements as per the SAP sizing guidance.

The combination of full in-memory HOT data for mission-critical operations complemented by less frequently accessed WARM data is the perfect and simple architecture to replace legacy DBMS technologies. This also eliminates the need for legacy DBMS add-on in-memory buffer accelerators for OLAP read-only operations requiring painful configuration and management.”
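To see how unremarkable the sizing exercise described above is, here is a back-of-the-envelope calculation in Python. The hot-to-warm split and the buffer-cache fraction below are illustrative placeholders rather than SAP’s official guidance; the “architecture” reduces to deciding how much data stays resident in RAM and how much is paged in from disk.

```python
def size_nse_deployment(total_data_gb: float,
                        hot_fraction: float = 0.5,
                        buffer_cache_fraction_of_warm: float = 0.125):
    """Back-of-the-envelope sizing for a hot/warm data split.

    hot_fraction and buffer_cache_fraction_of_warm are illustrative
    assumptions for this sketch, not SAP's published figures.
    """
    hot_gb = total_data_gb * hot_fraction                     # kept fully in memory
    warm_gb = total_data_gb - hot_gb                          # kept on disk, paged in
    buffer_cache_gb = warm_gb * buffer_cache_fraction_of_warm # RAM reserved for paging
    return {
        "memory_for_hot_gb": hot_gb,
        "memory_for_buffer_cache_gb": buffer_cache_gb,
        "disk_for_warm_gb": warm_gb,
    }

# Example: 4 TB of data, half of it declared "warm".
print(size_nse_deployment(4096))
# -> {'memory_for_hot_gb': 2048.0, 'memory_for_buffer_cache_gb': 256.0,
#     'disk_for_warm_gb': 2048.0}
```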

But now the question arises,

“If SAP has such a similar design to other databases, why are those databases legacy?”

To this, we have the answer. But before we get to that, we would like to take this opportunity to introduce our own innovation.

Introducing the Brightwork Piston Gas/Diesel System

This diagram describes something we have named a “piston.” It comes in two types: one that uses an energy source we call “gas” and a second that uses something called “diesel.” Our proprietary design deploys a four-stroke cycle, which is as follows: Intake, Compression, Combustion, and Exhaust. (Patent pending, so please respect our IP.)

We are thinking of deploying this technology in some type of propulsion system.

Stay tuned for more details!

Conclusion

SAP’s warm data tiering is nothing new. Furthermore, it contradicts everything SAP said about HANA years ago. SAP is now increasingly backtracking on earlier pronouncements that were designed to make HANA appear different from other databases.

From the beginning of HANA’s introduction as a product, SAP was told that you cannot place everything in memory. SAP told everyone that they did not “understand,” and that with its proprietary technology it had achieved zero latency because of its special ability to manage memory in a way that Oracle, Teradata, and others could not. This meant loading all data into memory with zero swapping. We had to investigate the HANA documentation to find that HANA was actually memory optimized, and that SAP had lied about this.

HANA is not an all-in-memory database, nor has it ever been.

SAP receives a Golden Pinocchio Award for making something that all databases already have sound innovative.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.

References

https://blogs.saphana.com/2019/04/17/sap-hana-native-storage-extension-a-cost-effective-simplified-architecture-for-enhanced-scalability/

How Accurate Was John Appleby on What In-Memory Database for SAP BW?

https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.04/en-US/c71469e026c94cb59003b20ef3e93f03.html
