How to Understand Why In Memory Computing is a Myth

Executive Summary

  • There is no such thing as “non in-memory computing”; therefore the term “in-memory computing” is meaningless.
  • We review AWS’s coverage of HANA’s in-memory nature and question whether placing 100% of a database into memory is a good thing.
  • We cover the long history of database memory optimization.

Introduction to In Memory Computing

SAP has been one of the major proponents of something called “in-memory computing.” Hasso Plattner has written four books on the topic. In this article, you will learn how in-memory technology for databases actually works.

Hasso Plattner has been pushing the importance of in-memory computing for a number of years. Hasso Plattner’s books aren’t books in the traditional sense. They are sales material for SAP. The books we have read by Hasso Plattner uniformly contain exaggerations as to the benefits one can expect from “in memory computing.”

However, there have been some inaccuracies concerning the specific topic of memory management with HANA, which this article addresses.

What is Non In Memory Computing?

All computing occurs in memory; there is no form of computing performed without memory, because the performance would be unacceptable. Computing has been using more and more memory, as anyone who purchases a computer can see for themselves. While at one time a personal computer might have been sold with 4 GB of memory (RAM), 16 GB is now quite common on new computers.

The Problem with the Term In Memory Computing

SAP took a shortcut when it coined the phrase “in-memory computing.” The computer I am typing on has loaded this program into memory, as every running program must be. So “in-memory computing” is a meaningless term, because it describes all computing.

Instead, what makes HANA different is that it requires more of the database to be loaded into memory. And HANA is the only database I cover that works that way. With this in mind, the term should have been

“more database in memory computing.”

There is a debate as to how many tables are actually loaded into memory. It appears that the large tables and the column-oriented tables are not necessarily kept fully in memory, which is the opposite of what SAP has said about HANA. The reason for this debate is that SAP has provided contradictory information on the topic. One way to settle it for a given system is to query the database itself, as sketched below.
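
The following is a minimal sketch of such a check, assuming SAP’s hdbcli Python driver and read access to the M_CS_TABLES monitoring view for column-store tables; the connection details are placeholders, not a real system.

```python
# Sketch: list column-store tables and whether HANA currently holds them in memory.
# Assumes the hdbcli driver (pip install hdbcli); connection details are placeholders.
from hdbcli import dbapi

conn = dbapi.connect(
    address="hana-host.example.com",  # placeholder hostname
    port=30015,                       # placeholder SQL port
    user="MONITOR_USER",              # placeholder read-only user
    password="********",
)

cursor = conn.cursor()
# M_CS_TABLES reports, per column table, whether it is loaded
# (TRUE / FALSE / PARTIALLY) and how much memory it currently occupies.
cursor.execute(
    """
    SELECT schema_name, table_name, loaded,
           memory_size_in_total / 1024 / 1024 AS memory_mb
    FROM m_cs_tables
    ORDER BY memory_size_in_total DESC
    """
)
for schema, table, loaded, memory_mb in cursor.fetchall():
    print(f"{schema}.{table}: loaded={loaded}, ~{memory_mb} MB")

cursor.close()
conn.close()
```

If large tables show LOADED as FALSE or PARTIALLY on a running system, that by itself undercuts the claim that the entire database sits in memory at all times.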

That proposed term would be accurate. SAP’s term may roll off the tongue better, but it has the unfortunate consequence of being inaccurate.

And it cannot be argued that SAP’s term is correct.

Here is a quote from AWS’s guide on SAP HANA, which tends to be more accurate than anything SAP itself says about HANA.

“Storage Configuration for SAP HANA: SAP HANA stores and processes all or most of its data in memory, and provides protection against data loss by saving the data in persistent storage locations. To achieve optimal performance, the storage solution used for SAP HANA data and log volumes should meet SAP’s storage KPI.”

However, interestingly, the following statement by AWS on HANA’s sizing is incorrect.

“Before you begin deployment, please consult the SAP documentation listed in this section to determine memory sizing for your needs. This evaluation will help you choose Amazon EC2 instances during deployment. (Note that the links in this section require SAP support portal credentials.)”

Yet it is probably not feasible for AWS to point out that SAP’s sizing documentation will lead the customer to undersize the database, so that the customer purchases HANA licenses under false pretenses and then has to go back and buy more HANA licenses after the decision to go with HANA has already been made.

Bullet Based Guns?

Calling HANA “in-memory computing” is the same as saying “bullet-based shooting” when discussing firearms.

Let us ask the question: How would one shoot a firearm without using a bullet?

If someone were to say their gun was better than your gun (which is in essence what SAP does with its in-memory computing claim), and the reason they gave was that it uses “bullet-based shooting technology,” you would be justified in asking what they are smoking. Every gun is a bullet-based technology.

How to Use a Term to Create Confusion Automatically

This has also led to a great deal of confusion about how memory is used by computers among those who do not spend their days focusing on these types of issues. And this is not exclusive to SAP: Oracle now uses the term in-memory computing, as do many IT entities, as can be seen in the following screenshot taken from Oracle’s website.

Is 100% of the Database Placed into Memory a Good Thing?

SAP presents loading 100% of the database into memory as an advantage. However, the question is whether it is a good or even necessary thing, and it is difficult to see how it is.

It means that with S/4HANA, even though only a small fraction of the tables are part of any given query or transaction, the entire database of tables is in memory at all times.

Now, let us consider the implications of this. Think for a moment about how many tables SAP’s applications have, and how many are in use at any one time.

Why do tables that are not involved in the present activity, even tables that are very rarely accessed, need to be in memory at all times? The rough arithmetic sketched below illustrates what this costs.
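
The following is a back-of-the-envelope sketch. The table names, sizes, and access shares are entirely hypothetical and exist only to illustrate the arithmetic; any real system would need to be measured.

```python
# Back-of-the-envelope sketch with hypothetical numbers: how much memory is
# needed if only the frequently accessed ("hot") tables must be resident,
# versus keeping the entire database in memory at all times.

# (table_name, size_in_GB, share_of_all_accesses) -- illustrative values only
tables = [
    ("SALES_ORDERS",       400, 0.55),
    ("DELIVERIES",         250, 0.25),
    ("MATERIAL_MASTER",     50, 0.15),
    ("ARCHIVED_HISTORY",  1800, 0.04),
    ("RARELY_USED_CONFIG", 500, 0.01),
]

total_gb = sum(size for _, size, _ in tables)

# Keep adding tables (hottest first) until 95% of accesses are covered.
hot_gb, covered = 0, 0.0
for name, size, share in sorted(tables, key=lambda t: t[2], reverse=True):
    if covered >= 0.95:
        break
    hot_gb += size
    covered += share

print(f"Entire database resident: {total_gb} GB")
print(f"Working set covering {covered:.0%} of accesses: {hot_gb} GB")
```

Under these made-up numbers, roughly a quarter of the memory covers 95 percent of the activity; the rest is memory, and therefore hardware and license cost, spent holding tables that are almost never touched.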

The Long History of Database Memory Optimization

People should be aware that IBM, Oracle, and Microsoft all have specialists who focus on something called memory optimization.

Microsoft has documents on this topic at this link.

OutSystems, a PaaS development environment that connects exclusively to SQL Server, has its own page on memory optimization for the database, which you can see at this link.

The specialists who work in this area figure out how to configure the database so that the right tables are in memory to meet the demands of the system, and there has been a great deal of work in this area for a long time. Outside of SAP, there is little dispute that this is the logical way to design the relationship between the database and the hardware’s memory. The sketch below shows the basic idea behind this approach.
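
The conventional mechanism behind this is the buffer pool (or buffer cache): the database keeps the most recently and frequently used pages in memory and evicts the cold ones. What follows is a minimal, simplified sketch of that idea; real buffer pools in Oracle, SQL Server, or Db2 are far more sophisticated, but the principle is the same.

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU buffer pool: keeps the most recently used pages in memory
    and evicts the least recently used page when the pool is full."""

    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.pages = OrderedDict()   # page_id -> page contents

    def read_page(self, page_id):
        if page_id in self.pages:              # cache hit: already in memory
            self.pages.move_to_end(page_id)
            return self.pages[page_id]
        page = self._read_from_disk(page_id)   # cache miss: fetch from storage
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:    # evict the coldest page
            self.pages.popitem(last=False)
        return page

    def _read_from_disk(self, page_id):
        return f"<contents of page {page_id}>"  # stand-in for real I/O

pool = BufferPool(capacity_pages=3)
for page_id in [1, 2, 1, 3, 1, 4]:   # page 1 is "hot", the others are colder
    pool.read_page(page_id)
print(list(pool.pages))  # the hot page stays resident; the coldest page was evicted
```

This is why the rest of the database industry treats memory as a cache to be managed rather than a container that must hold everything: the hot data ends up in memory anyway, without requiring the entire database to fit.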

Conclusion

In summary, if a person says “in-memory computing,” the response should be “can we be more specific?” Clear thinking requires the use of accurate terms as a logical starting point.

SAP’s assertion that the entire database must be loaded into memory is unproven. A statement cannot be accepted if the term itself has no meaning and if what it actually claims (that the entire database belongs in memory) is unproven.



References


https://www.ibm.com/blogs/research/2017/10/ibm-scientists-demonstrate-memory-computing-1-million-devices-applications-ai/

https://docs.microsoft.com/en-us/azure/sql-database/sql-database-in-memory

https://s3.amazonaws.com/quickstart-reference/sap/hana/latest/doc/SAP+HANA+Quick+Start.pdf
