How Accurate Was IFS on the Potential of In Memory Computing?

Executive Summary

  • Dan Matthews, the CTO of IFS, wrote an article on in-memory computing.
  • We review how accurate his article was.

Introduction

Dan Matthews’s paper 3 Things Business Decision Makers Need to Know About In Memory Enterprise Software was published in May 2017.

The Quotations

How Does Gartner Define In-Memory?

Gartner says that in order for a technology to be classified as in-memory, it requires “the database structure to be in-memory, specifically the main memory of the server.” This, according to Gartner, is in contrast with databases that would commonly rely on a disc-based Database Management System (DBMS) that feeds data in and out of a database stored on a disc or server, and may perhaps keep some data in cache to speed up performance. Gartner’s definition of an in-memory application requires an In-Memory DBMS, or IMDBMS.

We have previously critiqued Gartner for not understanding databases and for being paid by SAP to promote HANA, which we covered in the article How Gartner Got HANA So Wrong. We estimate Gartner is paid over $120 million per year to promote SAP products and move them up in the rankings. Gartner makes the rather absurd proposal that having a single employee who works as an “ombudsman,” which we covered in the article How to Best Understand Gartner’s Ombudsman, makes that $120 million per year irrelevant.

Therefore, it is difficult for us to take what they say seriously on these topics. The statement…

“the database structure to be in-memory, specifically the main memory of the server”

is a meaningless statement. It sounds like it means something, but it doesn’t. What is “the database structure”? Is that a table? What does Gartner mean here? They don’t know. Again, we have yet to see a single instance of Gartner displaying any knowledge of databases. Our analysis of Gartner’s ODMS MQ, in Can Anyone Make Sense of the ODMS Magic Quadrant?, should make this quite clear.

The following sentence…

“This, according to Gartner, is in contrast with databases that would commonly rely on a disc-based Database Management System (DBMS) that feeds data in and out of a database stored on a disc or server”

is also meaningless.

This is because all databases keep some data in memory. Even SAP HANA, which SAP has stated keeps the entire database in memory, does not. All databases move data from storage into memory as needed, in something called memory optimization.
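To make memory optimization concrete, here is a minimal sketch of the page-caching idea behind it: a disk-based DBMS serves hot pages from RAM and falls back to storage on a miss, evicting the least recently used page when memory is full. The class and its names are ours, purely for illustration; no vendor implements it this way literally.

```python
from collections import OrderedDict

class BufferPool:
    """Toy LRU page cache illustrating how a disk-based DBMS keeps
    frequently accessed pages in memory (a sketch, not a real engine)."""

    def __init__(self, capacity_pages, read_page_from_disk):
        self.capacity = capacity_pages
        self.read_page_from_disk = read_page_from_disk  # fallback loader
        self.pages = OrderedDict()  # page_id -> page data, in LRU order

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)       # mark as recently used
            return self.pages[page_id]            # served from memory
        page = self.read_page_from_disk(page_id)  # cache miss: go to disk
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)        # evict least recently used
        return page

# Usage: hot pages stay in RAM; cold ones are fetched on demand.
pool = BufferPool(capacity_pages=2, read_page_from_disk=lambda pid: f"page-{pid}")
for pid in [1, 2, 1, 3, 1]:
    pool.get(pid)   # page 1 stays cached; page 2 is evicted when 3 arrives
```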

However, we don’t mark down Dan or IFS for quoting Gartner. Even though Gartner adds no value and is a considerable value subtract in discussions around databases, they are still widely respected. It should also be mentioned that no vendor can call out Gartner for either being corrupt or not knowing their subject matter. This is because Gartner can retaliate against any vendor that does not show them the “proper deference,” as Gartner has a near monopoly on vendor ratings.

Now let us see what Dan does with this quote.

“Under this definition, the in-memory column store capabilities of the Oracle 12C Enterprise Edition, which IFS leverages to deliver its in-memory offering, qualifies as a true in-memory solution, but one that recognizes real-life challenges faced in enterprise computing. It contains both a traditional DBMS and an IMDBMS working in parallel and always in sync. It enables an application user to keep all or part of the database in memory, so that columns and tables that are frequently queried by business analytics tools or referenced in ad hoc queries can be kept in memory while other data is stored in a physical disc.”

Well, if we can now kick Gartner to the curb: Oracle does have an in-memory capability, and it was added to the Oracle database back in 2013.

Therefore, yes, Oracle provides “in-memory” functionality with a column-oriented store.
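For readers who want to see what this looks like in practice, enabling Oracle’s in-memory column store is a matter of DDL. Below is a minimal sketch using the python-oracledb driver; the connection details and table names are placeholders, and it assumes an Oracle 12c or later instance with the Database In-Memory option licensed and INMEMORY_SIZE configured.

```python
import oracledb  # python-oracledb driver; all names below are placeholders

conn = oracledb.connect(user="demo", password="demo", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Populate the in-memory column store for a frequently queried table.
cur.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

# Leave a rarely queried table on disk only.
cur.execute("ALTER TABLE sales_archive NO INMEMORY")
```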

100% In Memory?

“A real-life ERP in-memory application should always be, in a manner of speaking, at least some type of hybrid solution between RAM-based and disc-based data storage. In theory, a pure in-memory computing system will require no disc space or file I/O. This is impractical in the world of ERP since a modern enterprise application may store not only structured data, but unstructured and unwieldy information like photos, technical drawings, video and other materials that are not used for analytical purposes and would consume a great deal of memory. This is one drawback of ERP applications, which by default run the entirety of a transactional database in memory. Meanwhile, the in-memory feature set of IFS Applications, for instance, will give end-users a choice of which data to house on a physical drive and which to store in-memory. Or of course, if they really want to, run the entire database and application in-memory.”

This is all quite true. And here Dan is directly contradicting Hasso Plattner of SAP. We have been contradicting Hasso Plattner on this topic since 2016. Hasso Plattner is wrong: only a small portion of the overall database needs to be loaded into memory, and that data changes depending upon what is being processed at the time.

The Need for ERP Speed?

“The chief benefit of in-memory computing in ERP is obvious—enhanced processing speed, particularly when dealing with larger data sets and queries of non-indexed tables. Data stored in memory can be accessed hundreds of times faster than would be the case on a hard disc or even flash drives. But also the columnar orientation of the in-memory storage means that it becomes very fast to find a smaller subset of data inside a very large set. In-memory is optimal for what is called “narrow queries”, where a smaller number of columns for a subset of rows is extracted from a very large data set.

This speed is particularly useful when companies are running ad hoc queries of the database underlying their ERP software product, for instance to identify customer orders that conform to specific criteria or determine which customer projects consume a common part.”

This is true.

But it leaves out the fact that if you spend time on ERP accounts, the performance of the ERP system is really rarely the issue. We live in a time of great hardware capacity. The processing requirements of ERP systems have not increased very much over the past few decades, while hardware capabilities very much have.

Secondly, “in memory”/column-oriented solutions primarily speed analytical workloads, and as we covered in the article HANA as a Mismatch for S/4HANA and ERP, ERP systems are primarily transaction-processing applications with a few CPU-intensive operations like MRP and DRP. Therefore, they do not benefit much from the analytical processing capabilities of in-memory databases. The vast majority of companies still perform reporting on a specialized data warehouse, where it does make sense to use some “in memory” capabilities, although they do not need to reside within a “Swiss Army Knife” database like Oracle 12c. For example, one could use Redis combined with a row-oriented database, as in the sketch below.
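As a sketch of that alternative, the snippet below fronts a row-oriented store with Redis as an in-memory cache for a reporting aggregate. It assumes a local Redis server and the redis-py client, and uses SQLite as a stand-in for the row store; the table, key name, and TTL are illustrative only.

```python
import json
import sqlite3   # stands in for any row-oriented transactional database
import redis     # assumes a Redis server on localhost:6379 (redis-py client)

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (region TEXT, amount REAL)")
db.executemany("INSERT INTO orders VALUES (?, ?)",
               [("EU", 120.0), ("EU", 80.0), ("US", 200.0)])

cache = redis.Redis(host="localhost", port=6379)

def revenue_by_region():
    cached = cache.get("revenue_by_region")
    if cached is not None:
        return json.loads(cached)   # served from memory (tuples become lists)
    rows = db.execute("SELECT region, SUM(amount) FROM orders "
                      "GROUP BY region").fetchall()
    cache.setex("revenue_by_region", 300, json.dumps(rows))  # 5-minute TTL
    return rows
```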

Dan addresses this issue of data warehousing in the following quote.

Real-Time Visibility?

“In order to eliminate the database as a constraint, most business intelligence tools or analytics instead query a copy of the transactional data that is kept separately in a data warehouse. This data is updated periodically, so it does not truly offer real-time visibility. In-memory technology can provide that real-time view of the business, at least when the data is coming from a single source system or application. In-memory technology in itself does not replace the need for transformation and mapping that typically has to happen when performing analysis across data from multiple source systems.”

This is true, but real-time visibility is not particularly important. A report based upon data that is a day old, 12 hours old, or 6 hours old will normally not tell you much more than a real-time pull. The biggest problem in companies is not data currency; it is subject matter expertise. I work in forecasting improvement projects. The problems that face these projects are knowledge of things like forecast error measurement, data storage of different inputs, testing knowledge, how to document, and how to follow a scientific approach. The lack of real-time visibility is just not a high-priority issue. Secondly, any specific item can be found in real time in the ERP system. And ERP systems also have more rudimentary reports that are also real time.

The Incentives to Add In-Memory to ERP

“The incentives that may drive a company running ERP to adopt in-memory computing are straightforward.

For the enterprise software vendor, though, in-memory computing may be a way to address underlying issues in their application architecture. If an enterprise software product was originally designed in too complex a fashion, the application may have to look in more than a dozen locations in a relational database to satisfy a single query. They may be able to simplify this convoluted model and speed up queries by moving from disc-based to in-memory data storage.”

This is an interesting observation. Our interpretation (although we can’t prove that Dan means this) is that in-memory can be used to counteract poor application design. Analyzing this article is timely for us, because we just finished the article The Four Hidden Issues with SAP’s BW-EML Benchmark. In that article, we pointed out that the BW-EML benchmark entirely leaves out the quality of the SAP BW application, which is atrocious and which we have previously easily beaten with different software running on a laptop. That is, an intelligent application design can be effective with far fewer resources.

Increasing Sales with In-Memory?

“Promoting an enterprise application that relies entirely on an in-memory database may also be a way for an ERP vendor to derive more revenue from the software sale by pushing customers to purchase a new database rather than the Oracle, Microsoft or IBM databases they would typically otherwise use. For the customer, however, this could mean re-learning and re-training of IT staff to manage a new, and proprietary, in-memory database in addition to the additional license investment for this technology.”

Yeeeeeees! Vendors try to maximize revenues. And certainly, SAP does this. In fact, this quote is directly aimed at SAP. SAP has been selling HANA on false claims since HANA was first introduced, and Brightwork Research & Analysis has been the most vocal entity calling out SAP on this, while virtually the entirety of the IT media, Gartner, Forrester, and SAP’s massive consulting ecosystem has simply parroted SAP’s false claims, as we covered in What is the Difference Between an SAP Consulting Company and a Parrot on HANA?

And Dan is also correct that the costs of transitioning to HANA are very large. We would also add that HANA is far less stable than more mature databases like Oracle or DB2. Brightwork receives no income from any vendor, so we have no reason to take any vendor’s side; we are reporting what our research has concluded.

Valid Uses for In-Memory: Big Data?

“Analyzing enormous quantities of data while it is in movement requires tremendous computing resources and real-time access to data. Information in a traditional data warehouse will be old and therefore less useful, but continuous queries on the transactional database could lead to performance issues.”

True. Although we would be remiss if we did not mention that companies are often challenged in performing analysis on univariate data. And many benefits of Big Data are conjecture. They presume that looking at many data factors will lead to great insights. The early Big Data bubble was mostly about throwing large amounts of unstructured data into data lakes and saying “we will look at it later.” Data scientists are having great difficulty showing the forecasted benefits of this combination of Big Data and data science. We have run many of the ML algorithms ourselves and are often unimpressed with the outcomes.

Therefore, we see a need for more understanding applied to data analysis, rather than a focus on in-memory.

Valid Uses for In-Memory: In Memory Queries?

“If there is data in an application that is subject to frequent queries for decision support or ad-hoc reporting, it may make sense to move those tables in-memory. Otherwise, these queries could take a while to complete—long enough to affect the user experience. The load on the transactional database could also affect the experience of other users. If you want to summarize a thousand rows out of a million or a billion, or to retrieve a handful of columns in a table for one thousand of a percent of the total data volume, this is one area where a targeted approach to in-memory computing shines.”

Sure. Nothing wrong with that.
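As a toy illustration of why such narrow queries favor a columnar layout: summarizing one column out of many only has to scan that column’s contiguous array, rather than striding past every record’s unused fields. This is a NumPy sketch of the two layouts, not any database’s internals; all names are ours.

```python
import numpy as np

n = 1_000_000
# Row-oriented layout: all of a record's fields are stored together.
rows = np.zeros(n, dtype=[("customer", "i4"), ("amount", "f8"), ("qty", "i4")])
rows["customer"] = np.random.randint(0, 50_000, n)
rows["amount"] = np.random.rand(n) * 100

# Column-oriented layout: each field is one contiguous array.
cols = {name: np.ascontiguousarray(rows[name]) for name in rows.dtype.names}

# A "narrow query": total amount for one customer touches 2 of 3 columns.
mask = cols["customer"] == 42          # scan one contiguous column
total = cols["amount"][mask].sum()     # then fetch matches from another

# The row layout must stride past the unused qty field on every record.
total_rowwise = rows["amount"][rows["customer"] == 42].sum()
assert np.isclose(total, total_rowwise)  # same answer, different memory traffic
```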

In Memory and Transaction Processing?

“Running an entire transactional database in-memory will probably never be optimal, but it is possible. Databases may run faster in-memory by the time there are hundreds of millions of rows in a table. For a very large database with tens or hundreds of thousands of transactions per second, in-memory across the board may be the best way to ensure performance without event loss.

High-volume transactional environments on this scale are rare, however. In most cases, it will still make sense to move only carefully-chosen subsets of a transactional database in-memory. If these critical subsets of the database, cumulatively, are numerous or extensive enough to constitute the majority of the database, it may be easier and make more sense to load the entire database in-memory. But again, these situations will be vanishingly rare.”

Yes, exactly. Basically, this is simply back to memory optimization. Perhaps more memory is used; more memory will generally always be used as hardware specifications continually increase.

What Data Gets Moved into Memory?

“A hybrid approach to in-memory, with some data stored in a spinning disc or flash memory environment, makes even more sense when we remember that in a fully functional enterprise application, we are not just talking about tabular data but, often, attached files. The benefit of moving imagery—like the photos an electric utility may take of meters—into memory would be minimal whereas the cost could be high. These data are not queried, do not drive visualizations or business intelligence, and would consume substantial memory resources.”

This is actually an excellent point that I have never heard brought up before. Many data types really make no sense to move into memory. Good for Dan for pointing this out, along with the specific reason why it makes no sense to do so. A sketch of how this choice can be expressed follows.
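As one hedged illustration, Oracle’s column store allows individual columns to be excluded from memory, which maps directly onto Dan’s meter-photo example. The table and column names here are hypothetical, continuing the driver setup sketched earlier.

```python
import oracledb  # as in the earlier sketch; all names below are hypothetical

conn = oracledb.connect(user="demo", password="demo", dsn="dbhost/orclpdb1")
cur = conn.cursor()

# Keep the meters table in the column store for analytics, but leave its
# photo column (large binary data that is never queried analytically)
# out of memory entirely.
cur.execute("ALTER TABLE meters INMEMORY NO INMEMORY (photo)")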

How About a Reasonable Approach to What is Loaded into Memory?

“IFS Applications customers can choose to keep some, all or none of their database in memory. Although our technology supports running the entirety of IFS Applications in memory, we believe that a more focused in-memory approach may be desirable. To help our customers choose the right things to put in memory, we provide an In-Memory Advisor as well as pre-configured In-Memory Acceleration packages for common scenarios in manufacturing, asset and service management.

In essence, at IFS, we have worked hard to package this technology in a way that is accessible enough for middle-market companies, robust enough for the largest global organization, and agile enough to adapt to changing data usage patterns over time.”

This is in great opposition to SAP’s approach, which is to hype customers on in-memory to get them to buy the exorbitantly priced HANA database, the pricing of which we covered in the article How to Understand S/4HANA and HANA Pricing.

Conclusion

This article receives a 10 out of 10 for accuracy.

The enterprise software market is so filled with promotional information that it is extremely rare for any article to receive a high score from us, much less a perfect score. Nothing communicated in the article is inaccurate, and the article is brave in going against the conventional wisdom on in-memory. It is easy to simply write an article telling customers and prospects that whatever new thing is necessary; this article instead shows a genuine interest in educating the reader.

Brightwork Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.

References

https://www.ifsworld.com/corp/sitecore/media-library/assets/2017/05/02/in-memory-enterprise-software/


Why is the SAP Fiori Cloud So Slow?

Executive Summary

  • In extensive Fiori testing, the first thing we observed was how slow the interface is.
  • This caused us to perform a speed test, which we published here.

Introduction

The Fiori Cloud is a strange introduction by SAP. You will learn about the Fiori Cloud and how accurate the claims for the Fiori Cloud are.

What is the Fiori Cloud?

The Fiori Cloud is one of those strange artifacts that SAP brought out a while ago. The Fiori Cloud is a bit confusing because Fiori is just a UI, so you can’t have just a UI that is in the cloud. It has to be connected to an application layer and a database.

However, what the Fiori Cloud really is, is an online demonstration of Fiori with S/4HANA. Upon investigating it, we found something peculiar related to the Fiori Cloud’s speed, which is the topic of this article.

Poking Around The Fiori Cloud

The Fiori Cloud is easy to access.

Once you get into it, it brings up the well-recognized Fiori tiles or squares.

The Fiori “tiles” are sort of the opposite of the SAPGUI, which is driven by transactions or by navigating a very large tree structure.

With Fiori, the squares are selected to get into each transaction or screen. This demos nicely, but there are questions related to how well this design scales.

But Fiori has a nice search feature. This takes you directly to the item or the right square.

Once you select the item you want, often from a number of options that all meet the search criteria, you can be taken into the item or square. It has a very nice feel. But it is unclear to us if it is an efficient method. It greatly depends upon the search function working, which we are about to dive into.

The item you highlight points to the right square which you can then select. 

Fiori’s Hit and Miss Search

The search sometimes works great and is quite fast when it does work. But the search does not always work. 

But once you select the item, this is a common response.

There is a square called Working Capital Analysis. But where is it in this search? It should have come up on the right as an option once any of the keywords were typed in. It was there one time we logged in (we know because we wrote down the time it took to open), but it disappeared the next time we logged in. That is a first.

This repeatedly occurred when we tested different searches. Some words worked, but others didn’t. And while the search worked intermittently, Working Capital Analysis was the only square to simply disappear from the UI. We checked by scrolling rather than using the search.

How does that happen?

The Best UI in Enterprise Software?

SAP has been carrying on about Fiori as the future. Hasso Plattner called it the best UI in enterprise software. But then why isn’t something basic like this fixed?

If there are many squares (not just the 20 or so in the demo), how is the user supposed to find the right one? Scroll through the entire list of thousands of squares? That is not a feasible option.

SAP proposes that Fiori will eventually make the SAPGUI obsolete. That is not going to happen with the search still not working and squares deciding to disappear, combined with such a small amount of coverage of SAP’s functionality. We covered that second topic in the article The Strange Changes with the Count of Fiori Apps.

Why are we the only ones to publish on this topic? The Fiori Cloud is available for anyone to go and check and test. But as we have pointed out in previous articles, all the money in consulting and IT media is in agreeing with whatever SAP says. Fact-checking is simply not a focus. Even if Deloitte or CIO looked into this, they would never publish their findings.

The entirety of the information apparatus that covers SAP is there to promote SAP, not to fact check SAP or to tell their clients and readers the real story on SAP.

It is exceptionally difficult to find an SAP consultant who will tell companies the truth about SAP. Most SAP consultants value their relationship with SAP and with other SAP consultants more than they do their relationship with their clients. Lying is rampant in SAP consulting. The objective is to make SAP look as good as possible; the truth is considered only within the context of a massaged narrative.

Speed Tests for Fiori

After we got through the search problem, we were struck by how often we kept seeing the Fiori wait page.

We found this latency issue at several different locations, and therefore different Internet speeds. 

We did not notice any other latency issues using any other website that we accessed at these same locations. We checked the speed at one location with Speed Test, and here are the results.

So this was not a perfect Internet connection, but it was better than average, scoring four out of a possible five stars. 

When we found an even faster connection, one with five stars, we found that the Adjust Stock square/transaction took 4.49 seconds to open, 2.23 seconds longer than when tested at the slower Internet location (with four stars rather than five).

This slowness of Fiori is not a function of the Internet connection; it is a function of the Fiori server, database, and so on.

The following is how long it took to simply get into each transaction screen by selecting it from the initial screen.

Fiori Transaction | Load Speed in Seconds (Test 1) | Load Speed in Seconds (Test 2)
Adjust Stock | 1.835 | 2.68
Team Calendar | 3.635 | 3.40
Track Sales Orders | 2.793 | 4.25
Liquidity Forecast | 3.12 | 4.30
Global Cash Position | 3.85 | 4.15
Working Capital Analysis | 12.61 | N/A (square disappeared, so we could not retest)
My Spend | 4.24 | 3.65
My Accounts | 5.34 | 5.22
Order Products | 7.06 | 3.40
Availability Check | 3.60 | 2.97

*All timings were taken using an Android stopwatch app. 
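A stopwatch is the honest low-tech method; for anyone wanting to reproduce the test in a scriptable way, a sketch like the following would time raw page fetches. Note that it measures only the HTTP round trip, not the client-side rendering a stopwatch captures, and the URL is a placeholder, not the actual Fiori Cloud endpoint.

```python
import time
import requests  # assumes the target URL is reachable without a login wall

def load_time(url, runs=3):
    """Median wall-clock time to fetch a page body, as a rough proxy for
    the stopwatch measurements above (ignores client-side rendering)."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        requests.get(url, timeout=30)
        timings.append(time.perf_counter() - start)
    return sorted(timings)[len(timings) // 2]

print(load_time("https://example.com/"))  # placeholder URL
```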

What About The Effect of HANA?

The presentation of HANA has been that it would enormously speed both analytics and transaction processing. Hasso Plattner has stated that HANA will deliver zero latency to all applications. If we take Hasso Plattner at his word, this means that the Fiori Cloud squares/transactions should have been limited only by Internet connection latency, as the web, database, and application servers should have returned instantaneously. The total number of seconds should have been 0.00, exclusive of the Internet time.

We tested the fastest web page we know of, which is Google, at .486 seconds. But Google only returns text (we tested it by searching for a word). Still, this would seem to be the rough latency of the Internet itself: .486, or roughly half a second. So while Google is very close to zero latency, SAP is far off the reservation.

*At half a second, a very accurate measurement is not possible: one has to hit the return button, move one’s finger to the timer, verify the data populating the web page, and then hit the timer again.

However, the Fiori Cloud undoubtedly runs on HANA, yet the application transactions take an average of 4.33 seconds to load.

Fiori Versus Our Website?

As a means of comparison, we checked the download time of one of our own web pages at Brightwork Research & Analysis. We ended up with a time of 3.09 seconds. However, our pages have images on them, which means the page is larger than the Fiori pages being rendered. The speed depends (primarily) on how many images the web page has. We do have pages that render more slowly than 4.33 seconds (the Fiori average), but this is a function of having quite a few images. Furthermore, we have far more text, as well as formatting, in a single page than in any Fiori screen that we tested.

SAP Fiori + HANA Losing to Open Source Products?

However, why are our larger web pages loading faster than the smaller Fiori pages that are only rendering numbers and text? Are we using some super-fast backend? Hardly. We like our web host, but it is no top-end setup. If we wanted to invest more money per month, we could make it faster at quite a small cost. We could, for instance, move to a dedicated server at our current host. That would increase the hardware available to fulfill requests.

What about the database? Is an advanced top-end database the secret to our performance? Nope. Our web host uses MySQL. MySQL is owned by Oracle, but it is an open source database. MySQL is free. Does MySQL have a column data store and an “in memory architecture” as HANA does? Nope. In fact, HANA does not even compete with MySQL.

SAP has stated that HANA is faster than any of the top end databases offered by Oracle, IBM or Microsoft. But they are certainly not referring to open source database projects. Open source databases like MySQL, MariaDB, and PostgreSQL are not even part of the HANA conversation.

How about the application server? We must be using a space-age application server, right?

No again.

Our host uses Apache. Once again, Apache is an open source project and is free. Fiori uses the SAP Fiori front-end server, which is based on a NetWeaver Application Server ABAP.

Conclusion

We did not start out trying to illustrate that the Fiori Cloud is slow, or that the Fiori transaction search works only inconsistently. We discovered these things while simply taking the Fiori Cloud for a demo. We have spent a lot of time analyzing HANA and Fiori, which we have covered in articles like What is the Actual Performance of HANA? and What is in the Fiori Box. Even in a basic analysis like this one, we find that SAP’s claims regarding Fiori and HANA do not check out. If SAP’s “in memory architecture” is so great, why are Fiori and HANA outperformed by our combination of WordPress, Apache, and MySQL, all of which are open source and free?

The issue that we see is that no one is fact-checking SAP and publishing the results. Therefore, SAP marketing sits there proposing a virtually unlimited number of claims that go unchallenged. If SAP’s claims were true, it would be annihilating an open source configuration that is common to the vast majority of websites. But it doesn’t; in fact, it loses to it.

Furthermore, the Fiori Cloud is supposed to be a showcase demonstrating how superior Fiori is, and with SAP’s virtually unlimited resources, it should be configured for speed. Oh, and the search box should work, it should work 100% of the time, and transactions should not disappear.


References

https://blogs.sap.com/2016/12/07/sap-fiori-front-end-server-installation-guide/

https://httpd.apache.org/


How Accurate is the Hasso Plattner Institute’s Course Explanation?

Executive Summary

  • The Hasso Plattner Institute has curious course explanations.
  • We review the accuracy of these descriptions.

Introduction

The in-memory course offered by the Hasso Plattner Institute was recommended to us as a way to understand in-memory computing. This was recommended even though we wrote the article How to Understand Why In-Memory Computing is a Myth and have observed and proven that it is a deliberately misleading term.

In this article, we will analyze the accuracy of the description of the in-memory course offered by the Hasso Plattner Institute.

The Quotations

Course Description

“Week 1: The first week will give you an understanding of origins of enterprise computing. It is vital to know the historic development which lead to the emergence of current hardware as we know it now in order to understand the decisions made in the past. Many characteristics of current applications, like materialized aggregates and a reduction of detail in the stored information, have their roots in the past. While these measures were helpful in former systems, they form an obstacle which has to be overcome now in order to allow for new, dynamic applications.”

It is only the Hasso Plattner Institute that thinks this is true. An aggregate is a table of precalculated values. Hasso Plattner is very much opposed to aggregates. However, the reason he gives seems to be directed at making HANA look less expensive than it is, because HANA is alone among databases in being priced per GB. Aggregates take up space in the database, but they serve a valuable purpose: without aggregates, constant recalculation is required. Hasso Plattner has stated that this is highly advantageous, but is it? What if those values very rarely change, such as a table of weight conversions? Must that table constantly be recalculated on the fly, lest some rule of excellence be violated? The sketch below contrasts the two approaches.
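The trade-off is easy to see in miniature. Below, a materialized aggregate is just a precalculated summary table: cheap to read, costing some storage, and needing a refresh when the base rows change; the on-the-fly version recalculates on every query. This uses SQLite, and the table names are illustrative.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (product TEXT, amount REAL)")
db.executemany("INSERT INTO sales VALUES (?, ?)",
               [("a", 10.0), ("a", 15.0), ("b", 7.0)])

# Materialized aggregate: precalculated once, then read cheaply,
# at the cost of storage and of refreshes when base rows change.
db.execute("CREATE TABLE sales_by_product AS "
           "SELECT product, SUM(amount) AS total FROM sales GROUP BY product")

# Aggregation on the fly: no extra storage, recalculated on every query.
on_the_fly = db.execute("SELECT product, SUM(amount) FROM sales "
                        "GROUP BY product ORDER BY product").fetchall()
stored = db.execute("SELECT product, total FROM sales_by_product "
                    "ORDER BY product").fetchall()
assert on_the_fly == stored  # same answer; the difference is when the work happens
```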

Hasso Plattner has stated compression values that have not been borne out. This is covered in the article Articles that Exaggerate HANA’s Benefits.

Most HANA accounts can expect footprint reductions in the area of 30%. However, this is immaterial to companies, as storage, particularly disk storage, is extremely inexpensive. John Appleby, SAP proponent and head of an SAP consulting company, has made the statement that disks are a problem because they “take up a lot of space,” a claim we analyzed in the article What Was John Appleby’s Accuracy on Moving BW to HANA?

  • The long and short of it is that nothing Hasso Plattner has ever said about database aggregates has made any sense.
  • The focus on aggregates is a gimmick, designed to confuse the message receivers as to what is important in database management.
  • The entire aggregate discussion is a distraction.

“Week 2: Within the second week, the differences between a horizontal, row-oriented layout and a columnar layout are discussed. Concepts like compression and partitioning are introduced. Based on that, you will get an explanation of the internal steps performed inside the database to carry out the fundamental relational operations insert, update and delete. The week concludes with a fundamental difference of SanssouciDB to most other databases: the insert only approach. Following this concept, we circumvent several pitfalls concerning referential integrity and additionally gain the foundation for a gap-less time travel feature.”

This seems like good training.

“Week 3: The content of week 3 focuses on more advanced structures and operations within the database. The differential buffer, a means to prevent frequent resorting of the dictionaries and rewriting of the attribute vectors, is explained in further detail. Subsequently, also the merge process, which incorporates the changes from the differential buffer into the main store, is illustrated. The retrieval of information via the select statement, as well as related concepts like tuple reconstruction, early and late materialization, or a closer examination of the achieved scan speed, are also part of this week’s schedule. The description of the join operation, which is used to connect information from different tables, concludes this week.”

I have no comment on this section.

“Week 4: Week 4 is all about aggregation. Aggregations are the centerpiece of every business analytics application. Given that huge impact of aggregates on all parts of a business, it is of great importance to understand what aggregate functions are, why we remove all materialized aggregates and go for aggregation on the fly. You will further learn how to greatly reduce the costs of this on demand approach by using the aggregate cache and understand its connection to the differential buffer and the merge process. In the units concluding this week, you will see new prototype applications using the aggregate cache to deliver complex simulations in real time.”

With another week spent on aggregates, without any debate as to whether aggregate removal is a value-add or worth the effort, this seems like a waste of time. Week 4 could be better spent hiring a psychologist to analyze why Hasso Plattner is so obsessed with aggregate removal (or should we say reduction, as HANA does have aggregates; it just does not call them aggregates).

Hasso Plattner either needs to come in and speak to our Brightwork psychiatrist to get to the bottom of his aggregate obsession, or he needs to admit that the only reason he keeps talking about aggregates is that it is a gimmick to confuse and sell to customers who lack an understanding of databases.

“Week 5: Week 5 sheds light on some more inner mechanisms of the database. What happens in emergency situations, when for example the power is turned off? Logging and recovery are vital parts to know in order to understand why an in-memory database is as secure as a traditional disk based one. Further, the benefits of replicas are explained. We conclude the week with an outlook onto the implications that arise with the tremendously increased speed at hands.”

SAP promotes its HANA database, which is not “in memory” but simply has more of the database’s tables loaded into memory. SAP promoting the idea that such a database is as secure as a traditional one is therefore a marketing point. Can anything that SAP says in this area be trusted?

HANA is far less stable than competing databases, although for different reasons. Are the reasons for HANA’s relative instability going to be explained in this course? Probably not.
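For readers new to the week 5 topic, the durability mechanism the course alludes to is standard write-ahead logging: every change is forced to a log on disk before the in-memory state is acknowledged, so the state can be replayed after a crash. A minimal sketch of the idea, ours rather than SAP’s or any vendor’s actual implementation:

```python
import json
import os

LOG = "db.log"  # hypothetical write-ahead log file

def apply(state, op):
    state[op["key"]] = op["value"]

def write(state, key, value):
    op = {"key": key, "value": value}
    with open(LOG, "a") as f:
        f.write(json.dumps(op) + "\n")
        f.flush()
        os.fsync(f.fileno())  # force to disk before acknowledging the write
    apply(state, op)

def recover():
    state = {}
    if os.path.exists(LOG):
        with open(LOG) as f:
            for line in f:
                apply(state, json.loads(line))  # replay the log after a crash
    return state

state = recover()          # rebuilds the in-memory state from the log
write(state, "balance", 100)
```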

“Week 6: Week 6 is centered on applications. The last conceptual unit is about data separation into active and passive. After that, we showcase several prototypes and sketch out potential fields to apply the technology, thereby also leaving the domain of pure enterprise solutions, by using main memory databases in weather simulations and medicine.”

Nothing objectionable there.

Conclusion

The purpose of the Hasso Plattner Institute is to educate and advocate as to why everyone should buy HANA. Its courses are meant to create devotees of SAP’s particular approach to databases. However, there is a problem: HANA is not what Hasso Plattner or SAP says it is. SAP has presented no evidence that HANA can outperform competing databases. This means that people taking this course are taking it from an institution started by a man who has made exaggerated and unsubstantiated claims about HANA.

  • HANA can’t meet the claims made for it by SAP. Will the course talk about that?
  • We receive emails from around the world which show SAP consulting companies providing false information about HANA to customers. Will the course cover that?
  • SAP has made so many false claims about databases that they have undermined rather than enhanced the understanding of databases generally.

So the question is: why should the Hasso Plattner Institute be considered anything more than a propaganda apparatus for SAP?

SAP should spend more time trying to get HANA to meet its exaggerated claims, rather than trying to institutionalize a completely biased “HANA university” where receiving certifications or PhDs means agreeing with Hasso Plattner. It has been over six years since HANA was introduced, and the only thing HANA does well is speed data warehousing query performance, and then only relative to previous versions of competitors’ databases.


References

https://open.hpi.de/courses/imdb2015