Analysis of Bloor’s Article on SAP HANA 2014 Update

Executive Summary

  • Bloor produced a research piece on HANA.
  • We review the accuracy of this research.


On March 7, 2014, Bloor published an article titled SAP HANA Update. In this article, we evaluate the accuracy of that Bloor piece.

Our References for This Article

If you want to see our references for this article and other related Brightwork articles, see this link.

Notice of Lack of Financial Bias: We have no financial ties to SAP or any other entity mentioned in this article.

  • This is published by a research entity, not some lowbrow entity that is part of the SAP ecosystem. 
  • Second, no one paid for this article to be written, and it is not pretending to inform you while being rigged to sell you software or consulting services. Unlike nearly every other article you will find from Google on this topic, it has had no input from any company's marketing or sales department. As you are reading this article, consider how rare this is. The vast majority of information on the Internet about SAP is provided either by SAP itself, which fills it with false claims, or by sleazy consulting companies and SAP consultants who will tell any lie for personal benefit. Furthermore, SAP pays off all IT analysts -- who have the same concern for accuracy as SAP does. Not one of these entities will disclose their pro-SAP financial bias to their readers.

Article Quotations

I have, in the past, written somewhat critically about SAP HANA. A part of the problem is that the product is something of a moveable feast with new versions appearing on a pretty regular basis—perhaps not surprising for a technology that is still relatively new—but, nevertheless, this means that it is difficult to keep up with. Also, historically at least, there has been a perception that SAP has been over-hyping SAP HANA.

Yes, HANA has been one of the most overhyped products I have ever analyzed.

HANA to Cure Cancer?

For example, Larry Dignan, reporting for ZDNet, recently wrote that “Vishal Sikka, SAP’s technology lead and member of the company’s executive board, stopped short of saying HANA would cure cancer, but not by much.” While Larry was no doubt writing this with some of his tongue in his cheek it is worth noting that, in SAP’s defence, the company (using SAP HANA) has been working with the Stanford University Medical Center and the National Center for Tumor Diseases in Heidelberg, Germany (NCT), to help the Human Genome Research Institute to put the therapeutic promise of the Human Genome within reach. And the White House recently honoured these three companies for the genomics advances they have achieved (Nov 2013).

Yes, that is true. Both Vishal Sikka and Hasso Plattner stated that SAP would release a health care research product. Vishal Sikka indicated that they were writing cancer-curing algorithms in HANA.

Upon evaluating SAP's 309-item product list, the terms health and medical still do not appear years later.


All that being said, I have had a recent update from SAP and it is worth reporting a number of salient points. The first thing is that SAP HANA is intended to support both OLTP and analytic processing in the same instance. By SAP’s surveys, around a quarter to a third of HANA customers use it this way, with another quarter using it purely in warehousing environments and the remainder being specialised developments created by partners.

It is not technically feasible to do this well. An analytics database has a different design from a transaction-processing database. IBM, Oracle, and Microsoft have improved on SAP's design in adding a column store to an RDBMS, but the result still will not match the performance of a dedicated analytics database. At the heart of this is the debate about whether analytics should be pushed to a specialized database.
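The design difference between the two workloads can be illustrated with a toy sketch of row versus column layouts. All the data and names below are hypothetical; this is not how any vendor implements its storage engine, only why the two layouts favour different access patterns.

```python
# Illustrative sketch (not SAP code): why row and column layouts
# favour different workloads. Data and names are hypothetical.

rows = [  # row store: each record stored contiguously
    {"id": 1, "region": "EU", "amount": 100.0},
    {"id": 2, "region": "US", "amount": 250.0},
    {"id": 3, "region": "EU", "amount": 75.0},
]

columns = {  # column store: each attribute stored contiguously
    "id": [1, 2, 3],
    "region": ["EU", "US", "EU"],
    "amount": [100.0, 250.0, 75.0],
}

# OLTP-style access: fetch one whole record -- natural in the row store.
record = rows[1]

# Analytic access: aggregate one attribute over all records.
# The column store scans one contiguous array; the row store must
# touch every record just to extract a single field.
total_row_store = sum(r["amount"] for r in rows)
total_col_store = sum(columns["amount"])

assert total_row_store == total_col_store == 425.0
```

The same tension applies at scale: a layout optimised for fetching whole records pays a price on column scans, and vice versa, which is why dedicated analytics databases have historically kept an edge.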

In addition, in data warehousing environments, HANA is not seen by SAP as the sum total of its solution. In particular, it sees SAP HANA integrating into existing environments with Oracle, Teradata, SAP IQ (formerly Sybase IQ), Hadoop, streaming analytics systems like SAP ESP and other data sources. In the case of SAP IQ, SAP HANA will manage perhaps the last 6 months’ or a year’s worth of data while seasonal or trending information can be stored in IQ.

That is SAP’s dream, but it looks decreasingly like this will ever happen.


Speaking of SAP IQ, HANA primarily stores data in columns (rows are an option) and SAP is now using the same compression techniques as employed in SAP IQ, except that in SAP HANA it is bit-level compression rather than byte-level, so it should be even more efficient. As an aside, bit-level compression has already been added to SAP IQ 16, released in 2013. Also, on the issue of compression, SAP has assured me that when conducting query processing the data is never decompressed except for result sets. Of course, this will also apply to intermediate result sets and that is likely to explain why I have criticised the product in the past for what I have described as spikes in memory usage.

SAP IQ is declining very rapidly in the market, and SAP is looking for basically any excuse to keep using it. The new idea is to use it for archiving.
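The efficiency gap between byte-level and bit-level packing that Bloor mentions can be shown with a small dictionary-encoding sketch. This is purely illustrative and not SAP's actual compression implementation; the column values are hypothetical.

```python
import math

# Hypothetical sketch of byte-level vs bit-level code packing for a
# dictionary-encoded column. Not SAP's actual implementation.

values = ["EU", "US", "EU", "APJ", "EU", "US"]
dictionary = sorted(set(values))                 # ['APJ', 'EU', 'US']
codes = [dictionary.index(v) for v in values]    # one code per value

# Byte-level packing: one full byte per code, regardless of cardinality.
byte_packed_size = len(codes) * 1                # 6 bytes

# Bit-level packing: only as many bits as the dictionary requires.
bits_per_code = max(1, math.ceil(math.log2(len(dictionary))))  # 2 bits
bit_packed_size = math.ceil(len(codes) * bits_per_code / 8)    # 2 bytes

assert byte_packed_size == 6
assert bit_packed_size == 2
```

With only three distinct values, two bits per code suffice, so the bit-packed column is a third the size of the byte-packed one; the gain shrinks as column cardinality grows.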

HANA Faster than the Competition?

But the truth is that all database products are going to be subject to the materialisation of intermediate results and there is no reason to suppose that SAP HANA is either any better or any worse when it comes to this than any other vendor. Indeed, intermediate results are dropped as soon as the data is joined, while the results are cached. What will be important is that queries are written in such a way that the materialisation of intermediate results is minimised. Of course, this also applies to business intelligence and analytics that may be used in conjunction with SAP HANA (or any other database).

The first part of this paragraph was true when Philip wrote it in 2014. But since then, information has come to light indicating that SAP HANA is slower than DB2 or Oracle 12c. This is covered in the article Which is Faster, HANA or Oracle 12c?
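Bloor's point that queries should be written to minimise the materialisation of intermediate results can be made concrete with a toy example. The tables and the filter-pushdown rewrite below are hypothetical; no vendor's optimiser is being modelled here.

```python
# Sketch of why query shape matters for intermediate results:
# pushing a filter below a join materialises far fewer intermediate
# rows than joining first. Tables and predicates are hypothetical.

orders = [{"id": i, "cust": i % 100, "amt": float(i)} for i in range(10_000)]
customers = [{"cust": c, "region": "EU" if c < 10 else "US"} for c in range(100)]

# Naive plan: join everything, then filter -- 10,000 intermediate rows.
joined = [
    {**o, **c}
    for o in orders
    for c in customers
    if o["cust"] == c["cust"]
]
naive = [r for r in joined if r["region"] == "EU"]

# Better plan: push the filter below the join -- only EU customers join.
eu_customers = {c["cust"]: c for c in customers if c["region"] == "EU"}
pushed = [
    {**o, **eu_customers[o["cust"]]}
    for o in orders
    if o["cust"] in eu_customers
]

assert len(joined) == 10_000           # large intermediate materialisation
assert len(naive) == len(pushed)       # same answer, smaller intermediate
```

Both plans return the same 1,000 rows, but the naive plan materialises ten times as many intermediate rows first, which is exactly the kind of spike in memory usage Bloor describes.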

While on the subject of memory, the standard maximum memory for SAP HANA is 1TB. However, the company tells me that this is not the technical limit—this is only what is certified as standard—and actually you can run up to 4TB per server. For example, Intel, in its recent Ivy Bridge launch demonstrated an Intel 4TB E7 v2 Ivy Bridge system using SAP HANA and SAS together to analyse a real-time oil and gas pipeline pump malfunction. The SAS/SAP HANA analysis came back in 5 seconds, 129x faster than the E7 v1 using SAS and a disk-based database.

In addition, you can scale out using multiple servers. Moreover, a feature I particularly like is that SAP HANA has data virtualisation built in, so that you can optimise (and the database optimiser knows about the data virtualisation) queries across distributed SAP HANA servers and also when using SAP HANA in conjunction with SAP IQ. This will also be relevant when you want to combine structured and unstructured elements (SAP HANA has a text pre-processor and also supports machine-generated data) in a single query. Thus for most environments there should be no issues about the availability of sufficient memory.

The problem with listening to SAP is that just about every single statement is an exaggeration. After a significant amount of time spent researching HANA, Brightwork concludes that SAP never had anything in HANA that the other database vendors did not already have and do better. SAP did succeed in motivating other vendors to bring out flexible column-oriented stores for their RDBMSs.

To summarise: I still think that SAP over-hypes SAP HANA and the fact is that it is not the answer to a maiden’s prayer. That said, for the right application/analytic environments it should certainly represent a credible solution.

All true. However, while HANA's read performance is good in absolute terms, the evidence is that it is still worse than that of the competitors HANA seeks to supplant at customers.


Bloor receives a Brightwork score of 9.5 out of 10 on the article. More information has come to light since Philip wrote this, but he has to go down as an early predictor of what we would eventually uncover about HANA.