Can You Take and Pass the HANA Challenge?

Executive Summary

  • We have devised one of the most difficult challenges in all of IT.
  • To pass it, one must write out the claims made by SAP on HANA.

Introduction

SAP made many claims about HANA, and among the most inaccurate was that HANA was not only a database but also many other unrelated things. This is why we decided to create this challenge. According to SAP, HANA is:

  1. A database with zero latency at infinite load
  2. A platform
  3. A development environment
  4. A cloud offering
  5. A cloud development environment
  6. An advanced planning system (don't question it, Pedro; HANA was slated to replace APO, which would make it the first application in human history ever replaced by a database)
  7. A Big Data platform
  8. An entirely new application architecture.
  9. A coffee maker
  10. A popcorn machine

I have been laughing heartily as I have been writing this.

Everything listed above (except the coffee maker and popcorn machine) is a real SAP claim.

If you have any background in software and databases, you cannot repeat the claims made by the HANA crowd without beginning to laugh. Just try to type out the claims.

But it does not stop there. As pointed out by Rolf Paulsen, HANA was not only a database but multiple database “types”, which Rolf accurately lists as the following:

  • 1a. A relational database for all OLAP use cases
  • 1b. A relational database for all OLTP use cases
  • 1c. A NoSQL database
  • 1d. A graph database
  • 1e. A geospatial database

Typing this all out without being seized by the natural compulsion to laugh is what we call "The HANA Challenge."

Taking Responsibility for Lying to Your Clients

Something that really needs to be discussed is how wrong so many people were on HANA. We covered this in the article Who Was Right and Who Was Wrong on HANA?

Now is an excellent time for all of those who had no idea what they were talking about, or who lied for SAP to so many customers, to come clean and admit it. You can look in the mirror and repeat the following three times.

“I am a parrot for SAP.”

“I repeat things I don’t know are true.”

Who Needs To Do This?

That includes Gartner, Accenture consultants, and Forrester; the list is quite long.

The HANA fiasco is not just a matter of HANA not meeting its performance claims; it is one of the most ridiculous proposals in enterprise software history. I am going to reach out to the Computer History Museum in Mountain View to get HANA listed somehow. Future generations have to know what happened, and we have the quotations documented.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.

Search Our Other Who Was Right and Wrong on HANA Content

References

TCO Book

 


Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.
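To make the idea of comparing lifespan costs concrete, the arithmetic can be sketched as follows. All figures and cost categories here are hypothetical placeholders for illustration, not amounts from any actual TCO study:

```python
# Minimal sketch of a multi-year TCO comparison between two applications.
# The cost categories and dollar figures are hypothetical placeholders.

def total_cost_of_ownership(license_fee, implementation, annual_maintenance,
                            annual_infrastructure, annual_support_staff, years):
    """Sum one-time costs plus recurring costs over the software's lifespan."""
    one_time = license_fee + implementation
    recurring = (annual_maintenance + annual_infrastructure
                 + annual_support_staff) * years
    return one_time + recurring

# App A looks cheaper up front but carries higher recurring costs.
app_a = total_cost_of_ownership(100_000, 150_000, 22_000, 30_000, 60_000, 7)
app_b = total_cost_of_ownership(180_000, 120_000, 15_000, 20_000, 40_000, 7)

print(f"App A 7-year TCO: ${app_a:,}")  # higher despite the lower upfront cost
print(f"App B 7-year TCO: ${app_b:,}")
```

Even in this toy example, the application with the lower upfront price ends up more expensive over seven years, which is why a purchase decision based on license fees alone can mislead.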

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a sound investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

Is Bärbel Winkler Correct the Brightwork SAP Layoff Article Was Fake News?

Executive Summary

  • Bärbel Winkler of Alfred Kärcher SE & Co. KG stated that Brightwork created fake news with its layoff and HANA coverage.
  • We analyze this article for logic and information provided.

Introduction

Several months ago we wrote the article SAP’s Layoffs and a Brightwork Warning on HANA, which went viral and brought down our website several times. In the article, we discussed how the layoffs actually fit with areas of weakness at SAP that we had been writing about for years. Surprisingly, a rebuttal appeared on SAP’s blog.

This article is our analysis of this rebuttal.

Connecting the Brightwork Article with Fake News?

Bärbel Winkler begins by discussing the topic of fake news.

“In this age of ‘fake news’, which so easily spreads thanks to social media and other outlets, I thought it might be worthwhile to share some resources and pointers of how to avoid accidentally contributing to this epidemic.

What prompted this blog post?

A couple of days ago, I happened upon a new comment posted on Paul Hardy‘s Reasons not to Move to S/4 HANA blog post. The comment links to an article written by Shaun Snapp about SAP’s Layoffs and a Brightwork warning on HANA which I then read. While doing so, I had some ‘warning flags’ go off in my mind as it contained rather emotive language and at least some of the paragraphs read a lot like conspiracy theories.”

Let us review the definition of the term conspiracy.

“A conspiracy is a secret plan or agreement between persons (called conspirers or conspirators) for an unlawful or harmful purpose, such as murder or treason, especially with political motivation,[1] while keeping their agreement secret from the public or from other people affected by it.”

Brightwork’s Article on Layoffs Implied an Unlawful Purpose?

Well, I don’t recall accusing SAP of conspiring to engage in murder or treason. But they do keep these agreements secret from the public.

Now let us look at the definition of a conspiracy theory.

“A “conspiracy theory” is a belief that a conspiracy has actually been decisive in producing a political event of which the theorists strongly disapprove. Another common feature is that conspiracy theories evolve to incorporate whatever evidence exists against them, so that they become, as Barkun writes, a closed system that is unfalsifiable, and therefore “a matter of faith rather than proof””

When has anything I have ever written fallen into this category? I will wait for a quote where I stated anything like this.

Is SAP’s Marketing of Hasso Plattner’s Fake PhD a Conspiracy Theory?

I have pointed out, for example, that SAP has a large number of consulting firms that will repeat whatever it says. But does this produce a “political event”? Also, as opposed to the definition, I provide proof for my claims, so that cannot be an accurate description of the article which Bärbel Winkler read, or of any other article at Brightwork. The story of how Hasso falsified the origins of HANA to put himself in a more exalted position would in no way qualify as a conspiracy theory. Something not discussed by Bärbel Winkler, but stated in the article, is that Hasso Plattner has a fake PhD. For at least a decade and a half he has been claiming to have a real PhD when he has two honorary PhDs, as we covered in the article Does SAP’s Hasso Plattner Have a PhD? Does that claim also read like a conspiracy theory, even though it is easily verified by looking at Hasso Plattner’s Wikipedia page? Does the fact that SAP and Hasso Plattner agreed to misrepresent his honorary PhDs as a legitimate PhD qualify as a conspiracy theory? Not according to the definition listed above, but it is dishonest.

SAP routinely makes exaggerated claims along a variety of dimensions, and it does so because of financial motivations. Not every contradiction of these claims qualifies as a conspiracy theory. Bärbel Winkler seems to have conflated the term “conspiracy theory” with the term “fact-checking.”

In Bärbel Winkler’s mind, anything that disagrees with the official story presented by SAP is a conspiracy theory.

Is SAP’s Contradiction of Oracle’s Claims Around the Autonomous Database a Conspiracy Theory?

Let us take a look at SAP’s article in Forbes, which we covered in Forbes Allows SAP to Vent Against the Oracle Autonomous Database for Money. In that article, SAP disputes Oracle’s claims about the Autonomous Database (and we happen to agree with SAP, as we covered in the article How Real is Oracle’s Autonomous Database?). However, is SAP’s contradiction of Oracle’s claims about the Autonomous Database a conspiracy theory? If not, why not? It contradicts the official claim by Oracle, so could we not say that SAP is peddling conspiracy theories, at least if we apply the definition of a conspiracy theory as proposed by Bärbel Winkler?

Given this line of reasoning, couldn’t one say that any position that disagrees with one’s own is also a conspiracy theory, with the term “conspiracy theory” being nothing more than an ad hominem rather than a legitimate response to a contention? If SAP can say that Oracle has exaggerated what the Autonomous Database is capable of, doesn’t this imply a conspiracy on the part of Oracle? Doesn’t any contradiction of an official claim then require the creation of a conspiracy theory?

More Emotive Language than, Say, Bill McDermott?

Bärbel Winkler spends much more time associating the article with fake news and talking about a mysterious bias (whose source she cannot prove or even guess at) than addressing the facts presented in the article. I personally found Bärbel Winkler’s writing style very boring, and it could probably be improved by the use of some emotion. However, I did not let this factor into my analysis of the content. I don’t care much about writing style, because my focus is the content, and this is not a debate about writing styles. But furthermore, does Bärbel Winkler actually have a problem with emotion in language?

Let us take a look at the following quote from an article on Bill McDermott.

“But at this point in his story, many readers will be watching for the thunderbolt — his journey had to this point been so straight and direct, so perfectly well-groomed and lucky, so driven by hard work, one becomes uneasy.

And BOOM!  The thunderbolt strikes. His wife Julie, struck by cancer. Suddenly McDermott’s world had limits, exhausting days followed by bad nights. “Alone on that dark hill up in Connecticut, with two little kids depending on me, it was grim, pretty grim.”

Relief came in the form of a call to sunny California from Siebel Systems. His wife’s cancer caught in time, at this point McDermott had feet firmly planted in the tech world, and again, momentum and drive were in his favor.”

Here is Bill McDermott on SAP’s values.

“Throughout our history SAP has been committed to leading with purpose. Our enduring vision is to help the world run better and improve people’s lives. Together, as citizens, companies and countries, we need to be committed to working together to drive greater progress. There is no time to wait. That’s why we are proud to have signed the Compact for Responsive and Responsible Leadership. With shared commitment and resolve, the best days are ahead for the economy, the environment and society as a whole.”

Have I ever told a story or written any analysis with this type of emotion anywhere in any Brightwork article?

Are these quotes from Bill McDermott something that Bärbel Winkler would critique, or are this heartwarming story and these unachieved ethical goals not emotive according to Bärbel Winkler?

This book by Bill McDermott is filled with emotive language, far exceeding anything ever written at Brightwork Research & Analysis. Hint: we are researchers with technical expertise, while Bill McDermott is 100% sales and uses emotion to manipulate other people. Yet somehow the problem of too much emotion lies with a research entity and not with Bill McDermott.

On this basis, can we expect an extensive critique of the book from Bärbel Winkler, or is she afraid of critiquing Bill McDermott? We ask because nearly all SAP resources seem to be absolutely petrified of saying anything against Bill, which gives him the freedom to say the most false things without ever being called out for them. I have yet to get any SAP resource to agree with or critique McDermott’s comment about HANA being 100,000x faster than any competing technology. SAP resources are petrified of going on record calling out McDermott’s lies.

Bärbel Winkler’s claim about emotive writing appears to be an attempt to judge the authority of the article based upon a superficial impression. All of Bärbel Winkler’s objections can be classified as superficial dislikes. Furthermore, the article states that SAP has been lying for years about the newer products that are supposed to be its future. SAP is aggressively lying to customers about SAP Cloud and about Leonardo. Is one supposed to stay completely unemotional about this?

Go With Your Gut?

Bärbel Winkler goes on to explain how she relied on her gut to tell her the Brightwork article was fake news.

“Before I could put my gut feeling into words, Paul Hardy had already responded mentioning some of the items I had noticed as well:”

Is Bärbel Winkler having an emotional reaction to the article, or analyzing the content it presents?

First, does Bärbel Winkler have the background to critique the article? Has Bärbel Winkler put in the hundreds of hours over more than three years needed to determine whether what the article states is true?

In debates with SAP resources about the article, some of the most aggressive detractors have said:

“I don’t know enough about the topic to contradict the article, but I know it must not be true.”

This seems to be where Bärbel Winkler is coming from as well. To a researcher like myself, this is an extremely bizarre statement; it translates to “the article contradicts my beliefs” rather than being a statement of any substance.

Interestingly, we analyzed the statements of one of the major proponents of HANA, John Appleby, and found he had an accuracy of less than 6.5%, as covered in The Appleby Accuracy Checker: A Study into John Appleby’s Accuracy on HANA. That research alone took more than a week, and many hundreds of hours of prior work were required to support it. There is no other research entity that has put anywhere near this effort into fact-checking the claims made about HANA.

Comment from Paul

“If the articles that Shaun writes were true then everyone who works with SAP should be very worried. However, not everything on the internet is true, as we know.

As it is, the articles read rather like a conspiracy theory i.e. “I am the only one in the world telling the truth, every single other person on the planet are conspiring together to hide the truth”.”

Let us address the issue of the level of concern SAP resources should have.

SAP’s growth stage is over. So in that sense, they should be concerned, but SAP will continue to make a lot of money for many years. But the growth is over because SAP’s products outside of ERP are simply very poor in quality. This is covered in the article How SAP is Now Strip Mining its Customers.

SAP’s Future Growth Prospects

Some of SAP’s non-ERP applications are commercially successful, like SAP BW, but SAP BW is one of our lowest-rated BI products. SAP’s acquisitions are mostly unrelated to SAP’s core. Eight years after the SuccessFactors acquisition, SAP still has not fully integrated SuccessFactors into ECC. SAP has not shown capabilities outside of ERP, and this can only be covered up with consulting partnerships for so long. SAP’s implementation of type 2 indirect access is essentially due to desperation for license revenue, because SAP cannot meet its revenue targets by following the rules.

If there were truly competitive market forces, SAP would have shrunk a great deal already. But SAP is highly protected by its “ecosystem” of consulting companies (which don’t care about what is good for customers) and media and analyst entities (which sell their opinions to SAP). This ecosystem will repeat anything SAP says, no matter how false; for evidence, see the Run Simple program in the article Is SAP’s “Running Simple” Real?

Only Brightwork Speaks the Truth on SAP?

I previously addressed the conspiracy topic. But the quality of information on SAP is terrible. We can trace back the financial bias of nearly every entity that provides information on SAP. And both Paul and Bärbel Winkler have this financial bias as well. Even Nucleus Research, a firm that used to be critical of SAP, has been brought into the fold and now publishes inaccurate information in return for money.

SAP has a standard practice of paying analyst firms so that it performs well in a study, and then showcasing its good performance without disclosing that it paid for the study. Nucleus Research’s conclusion is false: S/4HANA Cloud is not a leader and, in fact, is barely used. Did Bärbel Winkler call this study by Nucleus fake news? Of course not, because it is pro-SAP. Bärbel Winkler does not care what is true or false, only whether something is pro-SAP or negative toward SAP.

SAP has Gartner and media entities on the payroll, and there are so many SAP consulting firms repeating whatever SAP says that my answer is yes: very few people speak the truth on SAP. However, I don’t know why this is so shocking. Pharmaceutical companies do very little research; they parasitize taxpayer-funded research at universities and keep most of the profits for themselves. This is also kept quiet. Financial advisors pretend to look out for their clients, but in the US only around 10% of them are registered as fiduciaries, which means they are obligated to put their clients’ interests ahead of their own. There is a large amount of corruption in economies overall.

Furthermore, as far as telling the truth on SAP goes, I read an article by Paul on S/4HANA, and he clearly censored the article and used humor to keep from offending SAP.

Let us take a look at one of Paul’s quotes from his article Reasons Not to Move to S/4HANA, to see how much he tells the truth on SAP.

“The PR people keep saying that S/4 HANA is being adopted faster than any other product in SAP’s history and that may be so, but the giant pink elephant in the room is that if (at TECHED for example) the presenter asks for a show of hands as to which organisation is on what release of SAP they are shocked and disappointed to find out just how many are still on older releases. Some are still on 4.6C.

If it is so good, why hasn’t everyone dropped every other project and moved to adopt S/4 HANA like they were buying a hot cake?”

Hmmm, that seems like an interesting pause, or an unwillingness to connect the dots. So is SAP lying about the number of go-lives or not?

Is there something Paul would like to say? Because he moves off the subject right after this. This is an example of Paul, like just about every other SAP resource or information provider, not telling the truth, or at least not the whole truth, on SAP. And Paul’s quote here is about as honest as any SAP resource will get; that is, the quote is a complete outlier. Vinnie Mirchandani stated in his book SAP Nation 3.0:

“S/4 has not been a runaway success but SAP’s competition has not exactly gone for the jugular either.”

And that is truly an understatement. Also, how is SAP’s competition supposed to “go for the jugular”? SAP customers are locked into ECC, and they are not going to move away from it.

Let us take a step back and fully analyze the ridiculousness of the claim that S/4HANA is the fastest-growing product in SAP’s history. For S/4HANA to be the fastest-adopted application in SAP’s history, it would have to outpace R/3 in the 1990s, the product that made SAP what it is today. Yet S/4HANA has an extremely small number of go-lives.

So isn’t SAP’s claim impossible?

That is a mighty big pink elephant Paul is referring to, but it is also known in common parlance as a straight-up lie (by SAP, not Paul). Paul is not lying, but he is changing the subject. SAP has been exaggerating S/4HANA numbers since it introduced S/4HANA, as we covered in How Accurate Was SAP on S/4HANA’s Go Live Numbers?

You see, we don’t stop at calling an obvious lie a “pink elephant.” So while Paul did more than most who write about SAP, Paul is clearly concerned about publishing material that is too critical of SAP. And this is most likely for career reasons. We don’t shy away from writing what the research says on SAP.

But the topic brought up here by Paul is not really an argument against the article. It is merely the second ad hominem, or personal attack.

Let us be sure to keep count.

The first ad hominem was that the article was fake news. Actually, I want to address this issue. A trusted contact who reviewed this article pointed out that Bärbel Winkler never directly calls the Brightwork layoff article fake news, but the effect of constantly associating the article with fake news is such guilt by association that I do not see the distinction. Bärbel Winkler, it seems, is trying to have it both ways by setting up a framework in which she creates the association but then gives herself a way out, being able to say, “well, I never actually said it.” There is no reason for me to accept this underhanded tactic. Bärbel Winkler is clearly of the mind that the article is fake news.

The second was that the article seemed to have emotional language. The third was that the article seemed conspiratorial.

  1. Fake News
  2. Emotional language
  3. Conspiratorial

We are probably 1/4 of the way through the article and we have three ad hominems and zero statements addressing the claims in the article.

Brightwork Has Been Incorrect in its Predictions?

“He has been predicting things for many years that keep not happening e.g. SAP ditching HANA, which is rather like the people who keep predicting the end of the world and when it doesn’t happen on the target day, they set a new date.”

We predicted that SAP would have to backtrack on requiring HANA for S/4HANA, in the article Why SAP Will Have to Backtrack on S/4HANA on HANA. That means that SAP will at some point port S/4HANA to AnyDB. However, we never set a date or target date for when this would occur, and therefore we never changed a date. How would we know when SAP would change this policy? Check the article, and if you find a date there, be sure to let us know, because we can’t find one.

We made the prediction based upon the fact that HANA is limiting the market for S/4HANA. This is the first factual claim made in Bärbel Winkler’s article, and it is incorrect.

Secondly, the analogy to an end-of-days preacher is not correct. Why are our research-infused predictions being compared to those of a cult leader who bases his predictions on reading a religious text? This seems to be the fourth ad hominem: that Brightwork is similar to a cult rather than a research entity. What is left out is how accurate we have been in our predictions. SAP would love to have our accuracy, which you can check for yourself in A Study into SAP’s Accuracy.

HANA has not seen anywhere near the success predicted by SAP. It is difficult for anyone to deny this is the case.

The point of the article on the layoffs was that layoffs hit HANA particularly hard, and for an obvious reason. The primary point is entirely unaddressed by Bärbel Winkler.

In 2011, SAP claimed that it would be the number two database vendor by 2015.

This is a listing from DB-Engines, which tracks database popularity. Where is HANA? At #20, with roughly 4% of the share of Oracle. Where is SAP Adaptive Server? At #16, with roughly 5% of the share of Oracle.
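As a rough illustration of how such relative shares are derived from popularity scores, the arithmetic looks like this. The scores below are hypothetical placeholders chosen to match the percentages above, not actual DB-Engines figures:

```python
# Hypothetical popularity scores in the style of a DB-Engines ranking;
# these numbers are illustrative placeholders, not real figures.
scores = {"Oracle": 1250.0, "SAP HANA": 50.0, "SAP Adaptive Server": 62.5}

# Express each SAP database's score as a percentage of Oracle's score.
for name in ("SAP HANA", "SAP Adaptive Server"):
    share_of_oracle = scores[name] / scores["Oracle"] * 100
    print(f"{name}: {share_of_oracle:.0f}% of Oracle's score")
```

Note that DB-Engines measures popularity (search interest, job postings, mentions), not revenue or installed base, so these ratios understate or overstate market share depending on the metric chosen; either way, they are nowhere near a #2 position.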

Does this look like SAP is the #2 database vendor? And this is now four years after 2015. Are Bärbel Winkler or Paul going to critique this prediction, or will this ridiculous prediction go unnoticed because neither of these individuals holds SAP accountable for false claims, due to their financial bias? Have both Bärbel Winkler and Paul analyzed the predictions made by Brightwork and those made by SAP, and concluded that the accuracy problem resides with Brightwork?

SAP stopped reporting quarterly HANA numbers back in 2015, as we covered in the article Why Did SAP Stop Reporting HANA Numbers After 2015?

John Appleby stated that SAP would be

“finished on Oracle,”

as we covered in the article How Accurate Was SAP on SAP Being Finished on Oracle? 

HANA has roughly 7,370 customers according to iDatalabs, and holds roughly 1.25% of the database market.

Who looks like they have done a better job predicting the trajectory and growth of HANA, SAP or Brightwork?

Who is the insane “end of days” preacher in this scenario, Brightwork or SAP and Hasso Plattner?

Contradictory Claims?

Next, Paul accuses Brightwork of making contradictory claims.

“Some of the accusations are contradictory – for example he both accuses Hasso Plattner of designing a rubbish database and at the same time claims Hasso did not design it all, but acquired it from a mysterious un-named company.”

This is a manufactured contradiction.

Let us review the facts.

  1. Hasso Plattner did design a “rubbish database.” And by the way, Hasso Plattner lacks the qualifications to be a database designer, so it should be no surprise that this was the outcome.
  2. I never stated that Hasso did not design some parts of HANA. But the parts that Hasso proposed are the parts that don’t work. Hasso proposed that all tables should be columnar, a major error that was reversed in SPS08, as we covered in How Accurate Was John Appleby on HANA SPS08? He proposed that all data could be processed in memory, and it can’t be. He proposed removing all aggregates, as we covered in Is Hasso Plattner and SAP Correct About Database Aggregates? All of these were terrible ideas and are the subject of frequent lampooning by people who understand databases. But that does not mean that HANA was not based upon pre-existing technology. In fact, if Bärbel Winkler or Paul had read the link, which is the article Did Hasso Plattner and His Ph.D. Students Invent HANA?, they would have found that HANA was based partially on P*Time and TREX, two acquisitions made by SAP roughly a year before Hasso claims to have begun his work on HANA. Teradata is also suing SAP, claiming IP theft that ended up in HANA, as we covered in How True is SAP’s Motion to Dismiss the Teradata Suit? Is the Teradata lawsuit also fake news?

So let us review the second and third factual claims made in Bärbel Winkler’s article.

  1. The claim that Hasso designed HANA badly is contradicted by Brightwork’s statement that HANA was based upon acquisitions: (FALSE)
  2. The two acquisitions were mysterious and unnamed: (FALSE)

I don’t know Paul personally, but I have read several of his articles and he is clearly a smart person. However, Paul has not put in the work to research HANA the way that I have. And he is constantly subjected to faulty information about HANA because of the company he keeps.

Bärbel Winkler Cannot Verify The Article?

“On my own, I cannot really verify how much – if any – truth is actually contained in Shaun Snapp’s article, but just going by the language and emotiveness I would bet on “not really much” being the correct assessment.
How can ‘fake news’ or misinformation be identified?”

Is this not the definition of a stupid statement? Can an article’s accuracy be gauged by its language and emotiveness? And what “language” is Bärbel Winkler going by? I have no idea how to respond to this, as I don’t know what it means. In normal parlance, “language” would mean improper language or swear words. But that can’t be right, as the article does not contain any profanity. And how many times is Bärbel Winkler going to discuss her interpretation of the article’s writing style?

If Bärbel Winkler can’t verify any of the information contained in the article, how is she in a good position to claim the article is fake news? In fact, the last person who should be fact-checking an article, or categorizing it as anything, is a person who lacks domain expertise in the subject of the article. This is why I don’t call articles on quantum physics “fake news” just because I don’t like the way they are written.

And not only does Bärbel Winkler lack the domain expertise to verify the article; the other commenters (as we will see) also lack it. However, all of them seem to know that the article is fake news. How convenient: by surveying a group of HANA know-nothings with a 100% SAP bias, Bärbel Winkler has found that an article highly critical of SAP and HANA must not be legitimate. Will the next article cover how scientists paid by Exxon don’t agree with global warming? No one knows where Bärbel Winkler will take this crackerjack method of determining “fake news” next.

Climate Science and Fake News?

“Through my involvement with climate science and the misinformation (aka ‘fake news’) spread in that particular topic area, I’m in touch with researchers who study this very topic. Underestimating the importance to not fall for and accidentally spread ‘fake news’ is not really possible, so here are some pointers of how to verify the credibility of sources and/or articles before sharing them. The information below is from the blog post Threading the Fact-Checking Needle on the Pro-Truth Pledge (PTP) website, using Shaun Snapp’s article as a case study.”

That is curious; I wonder why the conclusion as to who is providing misinformation in climate science is not declared. The answer is that the misinformation in that area comes from those on the petrochemical payroll.

It is curious that one can’t tell from the quote, however. Does the author’s position on this topic need to be hidden for some reason? Or is there self-censorship going on here because the author does not want to lose the audience that agrees with Donald Trump that climate change is a hoax perpetrated by the Chinese? (No, we are not making that up; this is Trump’s explanation.)

If an author is afraid even to declare what they mean and who is responsible for false information in the climate area, which is entirely black and white by the way, one has to wonder about that author’s forthrightness.

What follows is Bärbel Winkler following or attempting to follow rules of how to identify fake news from the PTP-Blog.

Rule #1: Is the Brightwork Article Timely?

Is the article timely?

“For the article in question, this one can be answered “Yes” as it uses the recent layoffs from SAP as the hook.

From the PTP-blog:
“Many items may be interesting but if they’re more than a few months old, chances are most of those with whom you would share have moved on to other news. It’s generally best to give these a pass.””

It is difficult to see what currency has to do with whether something is true or false. What about Newton’s theory of gravity? Its origin can be traced back to 1686.

Should we give the theory of gravity a pass?

Rule #2: Does the Brightwork Site Have Credibility?

“Is it published on a site generally agreed to be accurate, precise and reputed for its integrity?

I wasn’t able to find any “credibility” analysis for Shaun Snapp’s Brightworkresearch website, so cannot really answer this with a “Yes”. Even though Shaun Snapp regularly refers to “we” in the article or the tweets shown in the sidebar, I’m not really sure that Brightworkresearch is actually more than a one-man-endeavor (anybody know for sure?).”

What is a credible website in the SAP space? G2Crowd, Gartner or Forrester?

They may appear credible, but they all take money from SAP, which they do not declare, and the quality of their analysis is quite poor. Many of G2Crowd’s SAP reviews are written by SAP consultants who are looking to promote SAP products. And G2Crowd serves as lead generation for vendors who pay them. All of this is undisclosed to readers.

Forrester will publish anything that SAP pays them to publish. We cover why we are a far better source on SAP than Gartner or Forrester in the article Why is Brightwork Better than Gartner or Forrester?

Brightwork Research & Analysis is currently composed of Shaun Snapp and Ahmed Azmi, although we occasionally use subcontractors on projects. So yes, we are certainly tiny. But one cannot measure accuracy by size. If one could, then Infosys or Wipro would be highly credible providers of information on SAP. Does anyone think that is true?

In fact, the lack of intellectual property development at the major consulting firms is staggering given that some of these firms employ hundreds of thousands of people. None of these firms could produce any research, because any research they did produce would be instantly gerrymandered by a senior partner to maximize some sales goal.

The Term “We”

The term “we” could be used even if Brightwork were one person. This is called the “royal we,” as explained by Wikipedia.

“In the public situations in which it is used, the monarch or other dignitary is typically speaking not only in his or her personal capacity but also in an official capacity as leader of a nation or institution.”

I am a researcher. Ahmed is an advisor. But Brightwork Research & Analysis is a legal entity. “We” is essentially shorthand for the entity of Brightwork Research & Analysis. I could write out the full name every time, but that would be tedious and would bring accusations of grandstanding or of writing overly promotional articles.

However, if the claim by Bärbel Winkler is that Brightwork Research & Analysis is small, that is true, and no claim otherwise has ever been made. But she seems to be using smallness as a criticism. Again, see Wipro for the relationship between size and accuracy on SAP.

One of the most important features of Brightwork Research & Analysis is completely overlooked by Bärbel Winkler, which is that we take no money from any vendor. This is a feature not shared by the entities that serve as approved analysts for SAP. It is curious that nowhere does Bärbel Winkler note that media output is highly correlated with financial connections to vendors. Apparently, this all-important point does not interest her.

Is the Brightwork Article Opinion?

“Does it appear to be an opinion piece or a factual narrative?

Due to the rather emotive language it reads more like an opinion piece to me. Something the author seems to want to hide by calling his website “Brightworkresearch“.”

And back to the emotive ad hominem once again. With all of the information provided in the article and all of the links to supporting information, the article is an opinion piece?

Opinions are nothing more than conclusions, each based upon data points or discrete facts. One cannot magically eliminate contrary views by calling them opinions. When a doctor or lawyer provides an analysis of a patient or a case, what do we call that? That is right: a medical or legal opinion. But we do not seek to undermine their conclusions by calling them “just opinions.” The credibility of an opinion comes from the evidence presented and the domain expertise of the presenter. We have a great deal of HANA research. In fact, we have research that neither Bärbel Winkler nor anyone included in her article can even understand (at least judging by the responses to the article). If Bärbel Winkler is barely able to understand anything about HANA, how is she in a good position to say the research is not substantial?

Does the Brightwork Article Make Outrageous Claims?

“From the PTP-blog:
“If it’s an op-ed and you choose to share, you probably want to add a comment to that effect when sharing, as well as clarifying what you believe about the op-ed: do you agree with it, disagree with it, or agree with some parts, and disagree with others.”

Does it make any absolute or outrageous claims?

I’d put the conspiracy minded claims of how HANA got created and referring to it as “fake innovation” into the “outrageous claims” category.”

Our critique of HANA is that SAP faked innovation. Unlike Bärbel Winkler, we investigated the claims made for HANA and determined that Hasso added little to what already existed, outside of the removal of aggregates and the declaration that all tables should be column-oriented (even to support transaction processing). Read the article How to Understand Fake Innovation in SAP HANA. Also, see the article on how SAP has backward engineered other databases for years: Did SAP Just Reinvent the Wheel with HANA?

While admittedly not knowing anything about HANA, she categorized our claim around SAP innovation as outrageous. And how would Bärbel Winkler know one way or the other?

Is the Brightwork Article Too Categorical?

“From the PTP-blog:
“Here, you’re looking for trigger words or phrases like “always”, “never”, “everyone”, “no one”, or “all the time.” These are red flags for claims that are likely to be, at least to some degree, wrong. Rarely is truth absolute enough to hold in every case or falsehood so absolute as to be false in every instance.

If your answer is yes, then it’s probably best to pass on sharing.”

Does the article please you?

No.

From the PTP-blog:
“If so, it may be playing to your subconscious biases. Make extra efforts to check if it is true.”

Does it make you angry?

No. As mentioned at the beginning, it made me wonder how factual it actually is.”

That is curious, because the original article called into question many of the SAP products that were supposed to be the future of SAP: HANA, Leonardo, and SAP Cloud. Layoffs were concentrated in areas that have failed to meet expectations. As an ABAPer, Bärbel Winkler was not angry? Was she concerned? Did she welcome the news with an open mind?

Is Brightwork Research Bringing an Uncomfortable Truth?

“From the PTP-blog:
“Try to determine why. It may simply be that your anger is being triggered by an “uncomfortable truth”.

In either case, any article that stirs strong emotions (positive or negative) needs checking – as the emotions themselves may lead you to share as a result of your own biases.

Note here that an article or site may be intentionally designed to incite anger, revulsion, or outrage. The aim isn’t clarity or edification but rather to get your goat, basically a form of trolling. If this appears to be the case then you can safely ignore it. It’s simply not worth your time.”

More questions to ask and answer (from the PTP-blog):

Does the text support the title?
Are there any actual facts cited within it?
Are they well-supported?
If so, how reliable are those sources and can you trace them back to original articles or studies?

Judging from Paul’s reaction, at least some of the claims made in the article can be easily falsified, making them not really “well-supported”:

“In addition, some of the “facts” can be checked against reality e.g. the claim that an in-memory HANA database runs just the same speed or slower than a traditional disk based one. There are many organisations out there that have moved to a HANA database and they have all seen improvements, albeit not the over-stated claims made by SAP marketing, but improvements nonetheless.”

First, what about the claims that they thought were supported? When we analyze an article, we review the entire article, or at least its most salient points. Bärbel Winkler’s article stops at trying to fact-check two of the article’s many contentions. Even if both were wrong, the article is based upon many contentions and supporting articles. Why does Bärbel Winkler not mention the points that she, Paul, and the others think may be supported? If they are supported, then it is deeply problematic and even unethical to call the entire article fake news. Disagreeing with “some of the claims” is not going to cut the mustard. Bärbel Winkler could have said that (A, B & C) are claims she disputes while (X, Y & Z) seem true. But instead, she looks to invalidate the entire article while barely addressing any of the points in it.

Now let us get to the two areas that Paul disputes.

Brightwork Stated that HANA Underperforms Disk Based Databases?

We have multiple reports of ECC on HANA underperforming ECC on Oracle. And this is specifically for transaction processing, which is most of what ECC does. We covered this in the article HANA as a Mismatch for S/4HANA and ERP. Paul is restating what SAP says, but he is not adding case studies or reality to the analysis. Even if some customers benefited from analytics improvements, that would not invalidate our claim regarding transaction processing performance.

Secondly, we never stated that performance for analytics goes down. It goes up. We covered this in the article SAP HANA Overview: How SAP HANA Is Such a Fast Database (For Analytics). But the story is much more involved than just that. Remember, both Oracle and IBM reached equivalence with HANA in 2014, and SQL Server did in 2016, when they added in-memory and columnar capabilities. Something not mentioned by Paul is that much of the improvement in performance seen in HANA has come from increases in hardware, as we covered in the article How Much of Performance is HANA?
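The mechanism behind the analytics speedup is not mysterious, and it is not unique to HANA. A minimal sketch (illustrative only, with invented data; this is not a HANA benchmark) of why column orientation favors analytical scans:

```python
import random
import time

# A "row store" keeps each record together; a "column store" keeps
# each attribute in its own contiguous array. An analytical query
# (e.g., SUM over one attribute) only needs one column.
N = 200_000
rows = [(i, random.random(), "text-" + str(i)) for i in range(N)]  # row layout
amounts = [r[1] for r in rows]  # the same attribute stored as one column

t0 = time.perf_counter()
total_row = sum(r[1] for r in rows)  # row store: touch every full record
t1 = time.perf_counter()
total_col = sum(amounts)             # column store: scan one array
t2 = time.perf_counter()

assert total_row == total_col  # same answer either way
print(f"row-layout scan:    {t1 - t0:.4f}s")
print(f"column-layout scan: {t2 - t1:.4f}s")
```

In a real column store the gap is larger still, because the column is contiguous in memory and compresses well, which is exactly why Oracle, IBM, and Microsoft could add the same technique to their databases.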

Is Paul Accusing SAP’s Marketing of Making False Claims?

Paul also seems to be critiquing SAP marketing. He seems to be saying that SAP marketing made false claims. So is Paul saying that SAP lied when they made these claims?

This is the second time in recent memory that a pro-SAP resource has claimed SAP marketing provides inaccurate information. The first was Hasso Plattner, whom we covered in the article John Appleby, Beaten by Chris Eaton in Debate and Required Saving by Hasso Plattner. When called out for inaccuracies on HANA, Hasso states:

“please don’t think that marketing or false statements will have any impact over time. there is nothing more transparent than application using an in memory data base in the cloud. nothing can be hidden away.”

Why is Hasso correlating claims made about HANA by SAP marketing with false statements? False statements have an enormous impact over time. They lock decision makers into lies told to them by the vendor, and as such make them invest in the lie and protect the lie.

Why is this ok?

Furthermore, Hasso Plattner cannot hold himself apart from SAP marketing. Hasso was the primary message shaper for HANA to begin with, and he has ceaselessly repeated false claims around HANA.

Our conclusion from reviewing multiple data inputs is that HANA has no advantage over the competing databases in dual-mode processing. And of the databases in question, HANA is far more expensive than any of the competing offerings and far higher in maintenance; even in 2019, it severely lags the competition in stability and is constantly being patched. Furthermore, we have reports of HANA losing in performance to SQL Server! And this is on a far smaller hardware footprint. Do readers have any idea how large the price discrepancy is between HANA and SQL Server, and the hardware cost difference between HANA and SQL Server? It is enormous. HANA was supposed to usher in a new day of transformative data processing and entirely in-memory applications. An entirely new way of developing applications, with (according to Hasso Plattner) the amount of code being cut in half. Why is HANA losing in performance to SQL Server? This would be like LeBron James losing a game of one-on-one to a midget.

One would have to say,

“maybe LeBron has lost a step.”

If LeBron can’t beat a midget, the Lakers would have to begin asking themselves,

“Why are we paying LeBron all this money?”

This is how weak HANA’s value proposition is, and HANA comes with indirect access liabilities that none of the competing databases carry, as we covered in the article The HANA Police.

Given HANA’s expense, the promises made for it are truly scandalous. SAP owes many companies refunds for the discrepancy between the promises of HANA and reality.

Where are the Missing Benchmarks for HANA?

Secondly, if HANA’s superiority is so proven and such a settled issue, why has SAP refused to perform the standard SD benchmark on HANA, as we covered in the article The Hidden Issue with the SD HANA Benchmark? Why does SAP disallow competitive benchmarks for analytics, as we covered in The Four Hidden Issues with SAP’s BW-EML Benchmark?

What did John Appleby say as to why SAP had not performed a benchmark for HANA using SD?

“You know, you get no-nonsense from me Bill.

I’ve not run the benchmark but I believe it’s because:

1) SD doesn’t run well on HANA

2) SD doesn’t accurately represent how customers actually use systems in 2014

3) HANA does run well for how customers really use systems in 2014″

That is curious: so SD benchmarks have been suppressed for HANA because HANA does not perform transaction processing very well. That is, the results are embarrassing. Read the full analysis in the article How Accurate Was John Appleby on HANA Not Working Fast for OLTP?

Brightwork Stated that Code Pushdown Does Not Speed Performance?

Now we will go to Paul’s next statement.

“In regard to pushing code down to the database, once again, it is possible to verify this by doing an experiment where you write two versions of a query, one with code pushdown, and one without, and see which runs faster. Then it becomes a straight yes/no rather than a matter of guesswork and speculation.”

When did we say this is not true?

Again, the article by Bärbel Winkler is arguing against claims we did not make. Is this how to dispute “fake news,” by contradicting things that were not said?

Let us review what we wrote. We wrote that code pushdown is, first, not innovative, as Oracle has had this capability for many years, as we covered in the article How Accurate are SAP’s Arguments for Code Pushdown? In fact, SAP did not use the term stored procedure, instead opting for a new term it created called “code pushdown.” This is explained by Rolf Paulsen.

“in fact, SAP did not use the term stored procedure”, you may add “or (complex/aggregate) view” because what can be easily done by database views SELECT… GROUP BY…” for decades in database views just couldn’t be used on the ABAP platform due to limitations of OpenSQL and classical ABAP views , therefore a lot of stupid aggregation had in ABAP code, nested loops, which are now “pushed down” into CDS views like it was possible on other platforms for years, just taking a database view as query-source”

This was done specifically because it would make it sound like SAP was doing something new. SAP has been pushing the innovative narrative with HANA, while backward engineering other databases as we covered in the article Did SAP Just Reinvent the Wheel with HANA?

The problem with code pushdown is that it locks the buyer into the database, which is one reason why it is difficult for companies to move away from the Oracle database, and is one reason that Oracle has promoted the use of stored procedures. Likewise, SAP is promoting placing code in the database in part to block out competing database vendors and to have a reason to restrict database portability, so that it can push other database vendors out of the market.

In some very extreme situations, there may be a necessity to push code into the database, but the maintenance goes up and portability also goes down. Therefore it should be done with a full consideration of the implications.
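Paul’s proposed experiment is easy to run, and we agree with its premise. A minimal sketch, using Python’s built-in sqlite3 as a stand-in for any SQL database (the table and data are invented for illustration): aggregate inside the database (“pushdown”) versus fetching raw rows and aggregating in application code.

```python
import sqlite3
import time

# Hypothetical experiment: pushdown (SQL GROUP BY) vs. app-side aggregation.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("R" + str(i % 10), float(i % 100)) for i in range(100_000)],
)

t0 = time.perf_counter()
# "Code pushdown": the database does the aggregation.
pushed = dict(conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
))
t1 = time.perf_counter()

# Application-side: pull every row over and aggregate in a loop.
totals = {}
for region, amount in conn.execute("SELECT region, amount FROM sales"):
    totals[region] = totals.get(region, 0.0) + amount
t2 = time.perf_counter()

assert pushed == totals  # same answer either way
print(f"pushdown:   {t1 - t0:.4f}s")
print(f"in the app: {t2 - t1:.4f}s")
```

The pushed-down version will typically win on speed, which is the point we never disputed; our critique concerns the maintenance and lock-in cost of placing logic in the database, not whether it works.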

Paul does not appear to have read what we said about HANA.

Bärbel Winkler lacks an understanding of the topic and does not know whom to ask for input.

Sharing a Fake News Article?

Satisfied that the entire article has been falsified, Bärbel Winkler jumps to what she really wants to do, which is brand the article fake news and then spend time discussing the dangers of fake news.

“Why is it important to be able to judge the credibility of an article before sharing it?

This is where the Irish Times article Combating ‘fake news’: The 21st century civic duty which actually prompted me to write this blog post, comes in. As the authors of the article, Stephan Lewandowsky (link to an interview about fake news) and Joe Lynam, put it:

“But truth is under threat, thanks to the practitioners of disinformation and the four D’s of deception: distort, distract, dismay and dismiss.”

“This never-ending flow of distraction, distortion, dismissal and dismay creates the fifth and most consequential D: Doubt. Doubt in our newspapers, doubt in our governments, doubt in our experts and doubt in the truth itself.”

“The 21st century equivalent of clearing snow from the path of your elderly neighbour should be preventing your neighbour believing (and spreading or sharing) every outraged headline.”

“This new civic duty becomes more urgent every day. False information spreads faster on Twitter than the truth. But information does not spread by itself, it requires people to spread it. So before sharing anything on social media, do some fact-checking.””

Let us slow down, Bärbel Winkler. Let us look at the claims made in your article.

The Factual Challenges Contained in the Bärbel Winkler Article

Article Claim: He has been predicting things for many years that keep not happening.
Reality: Brightwork has the best record on predicting SAP. See our direct comparison to SAP’s accuracy.

Article Claim: Brightwork predicted a specific date that SAP would open S/4HANA to AnyDB.
Reality: No date for S/4HANA’s move to AnyDB was ever predicted.

Article Claim: Brightwork predicted SAP would ditch HANA.
Reality: This is never a prediction we made.

Article Claim: Brightwork stated that HANA underperforms disk-based databases.
Reality: We never made such a statement. See the article for the full explanation.

Article Claim: Brightwork stated that code pushdown does not speed performance.
Reality: We never made this claim, and never would, as we know it does. Our critique is more nuanced.

The Ad Hominem (Personal Attacks) Contained in the Bärbel Winkler Article

Ad Hominem: The Brightwork article is fake news.
Reality: Neither Bärbel Winkler nor anyone interviewed or quoted is in a position to contradict our claims.

Ad Hominem: The article contains emotional language, which means it is fake news.
Reality: This is a highly subjective determination. Other writing from SAP that is far more emotive (such as from Bill McDermott) is never fact-checked by SAP resources. By this standard, any article could be declared fake news in part by its tone, use of grammar, etc.

Ad Hominem: The Brightwork article deals in conspiracy theories.
Reality: Nothing in the article even comes close to the definition of a conspiracy theory. See the included definition of conspiracy theory.

Ad Hominem: Brightwork is similar to an end-of-days preacher.
Reality: Who is more accurate on SAP, SAP or Brightwork? Did Run Simple happen? Is HANA 100,000x faster, etc.?

Ad Hominem: Brightwork uses the term “we” in articles.
Reality: True, but meaningless. “We” is standard English usage to describe an entity, or the overall work of an entity, rather than one individual.

Ad Hominem: Brightwork is small.
Reality: Also true, but also meaningless to knowledge or accuracy. See the Infosys example.

Ad Hominem: The article is an opinion piece.
Reality: The author would not know; the claim is based upon supposedly “emotive” language rather than on reading the content.

Ad Hominem: The article made an outrageous claim.
Reality: The claim that HANA is fake innovation is hardly outrageous, and we have the research to support it. The author made no attempt, and is not equipped, to falsify our claim.

This is an incompetent article on the part of Bärbel Winkler. Her claim that our article is fake news is baseless. It is also incredibly lazy, proposing to have come to a conclusion without doing the most elementary research to determine whether that conclusion is true.

First contradict what the article actually said; then the conversation about fake news can begin. Not before. It is quite apparent that Bärbel Winkler would like our article to be fake news, and, working backward from this conclusion, she assembled a very weak combination of inaccurate factual challenges and ad hominems to try to prove it.

Conclusion

Bärbel Winkler has used a problematic term, “fake news,” to describe the article. The term fake news was first used (at least in recent times) by Trump to discredit fact checkers without providing counter-evidence. And this is the same approach employed by Bärbel Winkler in this article. SAP is similar to Trump in that both SAP and their adherents believe that SAP has the right to tell any lie and never be fact-checked on their lies.

It took very little mental effort for Bärbel Winkler. She had some type of “feeling” about the article, obtained quotes from resources with a pro-SAP bias and then included people without the domain expertise to contradict or even faithfully restate the claims of the article.

Secondly, Bärbel Winkler entirely left out her financial bias from the equation, pretending (as nearly all SAP resources do when they write) that she is some independent entity who does not rely upon SAP for her livelihood. Overall, the article is filled with logical errors and extremely fuzzy thinking.

Because the level of thought behind the article is so weak, it actually took me a long time to write this rebuttal, because I am being forced to do the thinking for both Bärbel Winkler and the people quoted in the article, who don’t appear capable of doing much more than working from hearsay. The incorrect claims in Bärbel Winkler’s article (that is, counterclaims to claims I did not make) mean that a substantial section of people in the SAP space are not capable of understanding the content of Brightwork articles and have to work off of only their superficial elements. Bärbel Winkler seems to spend about 50% of her mental energy observing the emotional nature of the writing.

Since HANA was first introduced, there have been many more people repeating statements about HANA than ever understood the validity of the claims or ever put any time into understanding either HANA or even databases. Even SAP’s most prominent resources, like John Appleby and Thomas Jung, repeatedly make false claims about HANA, and even in 2019, the general knowledge of foundational database topics among SAP resources is incredibly weak. See the article John Appleby, Beaten by Chris Eaton in Debate and Required Saving by Hasso Plattner, where Appleby is beaten in a debate by a DB2 expert. John Appleby not only fakes his HANA expertise but also his DB2 expertise, and is questioned on this topic by Chris Eaton.

SAP and SAP resources intend not to be fact-checked, and intend to call the fact-checkers into question, Brightwork and UpperEdge being just about the only ones (email us other fact-checkers of SAP and we will add them to the list if we agree). SAP is almost never fact-checked, as the majority of those that provide information on SAP are connected to SAP financially. SAP and SAP resources do not want entities that are not paid off providing analysis of SAP. Rather, they would prefer that readers gobble down entirely SAP-controlled information, as we covered in the article Are SAP’s Layoffs Due to an Impressive Transformation?

The most important feature of the article, which is what the layoffs mean, is entirely unaddressed by Bärbel Winkler. Furthermore, Bärbel Winkler proposes a type of censorship where SAP or Bärbel Winkler determine what links can be shared. The SAP space is already highly censored, but it seems that Bärbel Winkler thinks it needs to be kept this way. We have critiqued many sources on SAP, but we have never said that some sources should never be read at all. One has to wonder why Bärbel Winkler is so concerned with limiting other readers’ exposure to non-conformist SAP information.

The Problem: A Lack of Fact-Checking of SAP

There are two fundamental problems around SAP. The first is the exaggeration of SAP, which means that companies that purchased SAP end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About SAP

We can provide feedback from multiple SAP accounts with realistic information around SAP products, and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict it. When SAP or their consulting firms are asked to explain these discrepancies, we have found that they lie further to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

Financial Disclosure


Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.


References

https://blogs.sap.com/2019/03/25/how-to-avoid-accidentally-spreading-fake-news-when-sharing-links-on-sap-community/?

https://en.wikipedia.org/wiki/Newton%27s_law_of_universal_gravitation

https://en.wikipedia.org/wiki/Conspiracy

https://news.sap.com/2018/10/sap-s-4hana-leader-2018-erp-value-matrix-by-nucleus-research/

https://www.informationweek.com/applications/sap-well-be-no-2-database-player-by-2015/d/d-id/1101825

https://www.industryweek.com/companies-executives/quest-simplify-manufacturing-it

https://blogs.sap.com/2018/12/10/reasons-not-to-move-to-s4-hana/

https://www.biznews.com/global-investing/2017/07/13/sap-bill-mcdermott

TCO Book

 


Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

Appleby’s False HANA Statements and the Mindtree Acquisition

Executive Summary

  • For years John Appleby provided information about HANA to the market.
  • There was a specific timing to when Appleby released false information around HANA.

Introduction

We covered in The Appleby Accuracy Checker: A Study into John Appleby’s Accuracy on HANA the extremely low accuracy of information provided by John Appleby over a period from roughly 2012 all the way up to the present day. However, the false information was provided at its highest intensity from 2013 to 2015, and then declined. Something very specific happened in 2015 that may have strongly motivated Appleby to release false information.

Appleby’s Role as Shill for SAP for HANA

John Appleby was “The Global Head of HANA” for the consulting firm Bluefin Solutions, and he was the primary shill for HANA, particularly from 2013 to 2015. John Appleby routinely published false information around HANA in what appears to be a classic shill relationship with SAP.

SAP’s support for what Appleby was doing was evidenced even in the comments made by Hasso Plattner. Appleby is repeatedly bailed out by Hasso Plattner at specific instances when Appleby is losing debates against commenters, as we covered in the article John Appleby, Beaten by Chris Eaton in Debate and Required Saving by Hasso Plattner.

How Appleby’s Prominence Helped Build the Bluefin Solutions Brand

Prior to John Appleby building a prominent role in the media with his HANA posts, most people who worked in SAP would have been unaware of Bluefin Solutions as a company. There is little doubt that Bluefin Solutions’ prominent (but now obviously misleading) role in promoting HANA was what made the firm particularly attractive as an acquisition target.

This was referenced in the acquisition press release.

The Acquisition of Bluefin Solutions by Mindtree

On July 16, 2015, Mindtree announced that it had signed a definitive agreement to acquire Bluefin Solutions.

“Headquartered in the UK, Bluefin delivers solutions to some of the world´s most prestigious companies and has earned multiple awards including the SAP Pinnacle Award, SAP HANA Partner of the Year, SAP CRM Partner of the Year and SAP BI Partner of the Year. Bluefin offers one of the industry’s most highly regarded team of experts for transitioning to SAP HANA, digital and real-time-analytics.”

Well, this is exaggerated, as most press releases are. But notice the focus on Bluefin’s expertise in transitioning customers to SAP HANA.

Appleby’s Double Financial Incentive to Lie

Appleby would already have had a strong motivation to exaggerate HANA. Nearly all of the SAP consulting companies did, but Appleby’s articles put him in a special category as the most prominent proponent of HANA. However, with the Mindtree acquisition on the horizon, Appleby, as a senior member very likely to receive a substantial payoff, would have had an even stronger incentive to lie. Of the articles that we chose to evaluate for The Appleby Accuracy Checker: A Study into John Appleby’s Accuracy on HANA, notice the dates on each article.

Date | Article Subject
November 13, 2013 | What in Memory Database for BW
December 27, 2013 | A Historical View of HANA
March 6, 2014 | TCO of HANA
April 17, 2014 | On HANA Replacing BW
June 2, 2014 | On HANA SPS08
July 21, 2014 | On HANA Versus DB2
September 9, 2014 | HANA FAQ
November 18, 2014 | HANA Performance with OLTP
December 15, 2014 | 2014 HANA Predictions
March 19, 2015 | The BW-EML Benchmark
April 18, 2015 | What Oracle Won't Tell You About HANA
June 25, 2018 | 10 Customer Use Cases

This is not an exhaustive list of Appleby’s articles, but these were some of the most notable.

Furthermore, Appleby’s production of articles dipped sharply after 2015. However, we did not draw a correlation to the Mindtree acquisition until after we completed the Appleby Accuracy Checker and checked the date of the Mindtree acquisition.

It now appears that Appleby made a particular effort to exaggerate HANA prior to the Mindtree acquisition, with his prominent articles tailing off after mid-2015. As an insider, he would have had knowledge of the negotiations with Mindtree (and perhaps other potential buyers) many months before the acquisition was announced. By exaggerating HANA and Bluefin Solutions’ involvement in HANA projects, Appleby would have considerably increased the price of the buyout and thus his compensation. In financial manipulation, pushing up the price of an asset that you know will decline after you sell it is called a “pump and dump.” And it looks like this is what Appleby did with Bluefin Solutions.

Appleby’s Departure from Bluefin Solutions

Appleby convinced the world of how great the Bluefin Solutions HANA practice that he led was. Yet after the acquisition, Appleby left Bluefin Solutions along with other founders. Why? To us, it seems quite likely that post-acquisition, Mindtree found out the truth about Bluefin Solutions’ HANA practice. And recall that Bluefin Solutions was also being sold on “explosive” growth in HANA. Mindtree was buying the future, and this was a future that never materialized.

The overstatement of the HANA practice, along with growth failing to meet the pre-acquisition expectations, would have created friction with the new owners. While it is not that unusual for founders to leave a consulting firm, if Appleby was a major asset to Bluefin Solutions, and part of the reason for its acquisition, it seems odd that Mindtree did not incentivize him to stay.

Bluefin Solutions 4 Years Post Acquisition

Bluefin Solutions has seen its prominence in the SAP market decline since the acquisition. At the time of the acquisition, HANA had great growth plans, but in 2015, SAP stopped reporting customer numbers, as we covered in the article Why Did SAP Stop Reporting HANA Numbers After 2015?

The reason for this was simple.

SAP was able to publish inflated customer numbers in the 2012 to 2015 timeframe, with various consulting companies being counted as customers. However, after the ecosystem had been counted as customers, it was no longer possible to show a growth story, because HANA never took off. If SAP had continued to publish customer numbers for HANA per quarter as it had been, it would have been apparent that HANA’s growth had leveled off, and it would have undermined the previous statements that HANA’s growth was “explosive” (Ron Enslin) or that HANA was “the fastest growing product in SAP’s history” (McDermott, Appleby). After this point, SAP switched to publishing S/4HANA numbers.

Mindtree could have been forgiven for thinking it was purchasing an up and coming consulting firm, one with a special relationship between its “Global Head of HANA” and Hasso Plattner. But what Mindtree did not know, and could not have known as it was not a firm with SAP knowledge (Mindtree was purchasing Bluefin Solutions to gain entry into the SAP market), is that most of the information John Appleby and Bluefin Solutions were releasing to the market was false. This would eventually have blowback, and if Bluefin Solutions was selling privately to customers on the same basis as the claims Appleby was making in print, it is an absolute certainty that Bluefin Solutions ended up with many disappointed customers.

Even though Bluefin Solutions is listed as having somewhere around 200 employees, its website traffic dropped to less than 10,000 page views in March of 2019, and it is clearly an SAP consulting company in decline.

Who is to Blame?

Those within Bluefin Solutions that we have spoken with primarily blame the “incompetent Indian management” of Mindtree for their decline, and the fact that the Mindtree management does not understand the SAP market.

However, the decline of HANA, the main item that differentiated Bluefin Solutions, has also hurt. Not only did the HANA market level out, but the second implementation market that Bluefin Solutions had intended to enter by leveraging its HANA media prominence, S/4HANA, never materialized.

“Our S/4HANA team has certified SAP HANA specialists across 33 countries including SAP Mentors and HANA Distinguished Engineers. We are SAP’s preferred implementation partner in the S/4HANA space and are fast becoming the leading S/4HANA subject matter expert and S/4HANA delivery partner in the market.”

The quotations on Bluefin Solutions’ website, even in 2019, are odd. Bluefin is a company of roughly 200 people, yet it claims HANA specialists in 33 countries. Even at an average of only 3 HANA specialists per country, this would amount to 99 HANA specialists in total.

Even though extremely few companies are live on S/4HANA, Bluefin Solutions is also “the leading S/4HANA subject matter expert.” Consulting firms are not entirely composed of consultants; they also have salespeople, administrative staff, and so on. Bluefin Solutions’ statements would mean that the firm would rapidly run out of consultants.

Here are the solutions where Bluefin Solutions claims expertise.

  • BI/BW
  • Enterprise Performance Management
  • S/4HANA
  • Travel and Expense Management
  • Leonardo
  • C/4HANA
  • Fiori and SAPUI5
  • SAP Cloud Platform
  • SAP Managed Services

Bluefin Solutions continues to distribute false information about HANA with or without Appleby, as we covered in the article How Accurate Was Bluefin Solutions on C4HANA? Notice that this service offering page describes a solution, C/4HANA, that is still being developed and that is made up of acquisitions.

C/4HANA is made up of the following acquired solutions.

“In a press release today SAP stated: “Following the completed acquisitions of market leaders Hybris, Gigya and CallidusCloud, SAP now ties together solutions to support all front-office functions, such as consumer data protection, marketing, commerce, sales and customer service.””

Yet Bluefin Solutions wrote an article about C/4HANA as a functioning product right after it was announced. Does Bluefin Solutions actually have resources in Hybris, CallidusCloud and Gigya? Again, how do all these various areas of expertise fit within the number of consultants that Bluefin Solutions has?

Conclusion

It looks extremely likely that Appleby was particularly motivated to exaggerate HANA’s capabilities and Bluefin Solutions’ HANA practice due to an impending acquisition by some company (the winner being Mindtree).

Now here is where the ethical determination comes in.

Mindtree is just another of the bottom-feeding and unethical Indian consulting companies. Therefore, was it unethical for John Appleby to exaggerate HANA in order to get a better price for Bluefin Solutions?

When I found out that the acquiring company was Mindtree, I chuckled. Is it wrong to lie if you are stealing from an Indian consulting company? Mindtree, just like Infosys, Wipro, TATA and the rest, is scamming the H1-B program and offers the most atrocious consulting performance. In my view, Indian consulting companies should be stripped of their business licenses. The best thing for both US domestic employees and US companies would be if all the Indian consulting companies went out of business, along with many of the domestic consulting companies. Therefore, on balance, I have to applaud what Appleby did, to the degree that he was able to shiv Mindtree.

As for the second factor, there are repeated cases where Appleby lists customers doing things with HANA that do not match up with our research into HANA. These cases seem to show Bluefin Solutions standing up massive HANA instances, almost as if its clients had no limit to their budgets. Brightwork Research & Analysis has performed more research on HANA than any other entity, and we find it highly unlikely that Appleby and Bluefin Solutions were doing what they claimed with HANA.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions or incentives of any nature are allowed.

References

https://www.bluefinsolutions.com/news-and-press/mindtree-to-acquire-bluefin-solutions

https://www.bluefinsolutions.com/technology/technology-platform/sap-s-4hana-solutions

https://www.bluefinsolutions.com/technology/customer-engagement/sap-c4hana

https://www.nohr1044.com/about

TCO Book

 


Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

The Appleby Accuracy Checker: A Study into John Appleby’s Accuracy on HANA

Executive Summary

  • For years John Appleby provided information about HANA to the market.
  • This is a study that measures John Appleby’s accuracy.

Introduction

HANA was introduced in 2011, although little actually happened with HANA in 2011. In 2012, and particularly from 2013 onward, John Appleby of Bluefin Solutions published large amounts of inaccurate information about HANA. Appleby’s inaccuracies were so extensive that this study was created in order to catalogue many (but certainly not all) of Appleby’s statements about HANA and to measure the typical accuracy level of those statements.

The Method

This study is a document analysis of some of Appleby’s most popular and influential blog posts. That is one half of the study. The other half is the reality of HANA. Brightwork Research & Analysis has performed the most extensive research on HANA, and this website contains over 250 articles on HANA and S/4HANA (the ERP system mated to HANA). That is with the caveat of research outside of benchmarking. The entities with the benchmarking evidence around HANA are the database and hardware vendors. However, because of licensing terms and partnerships, these benchmarks cannot be published, and in the one case where a benchmark was published, it was heavily redacted and rigged in favour of HANA, as we covered in the article The Problems with the Strange Lenovo HANA Benchmark.

Brightwork Research & Analysis is also one of the only information-providing entities covering SAP that is not financially tied to SAP (or to any other vendor); for example, Forrester, Gartner, Nucleus Research and all of the US-based IT media outlets are financially connected to SAP. This lack of financial relationship allows us to publish our analysis of accuracy without concern for retaliation on the part of SAP.

Understanding the Retaliatory Measures Against Independent Research

The standard retaliatory measure is for the vendor to threaten to remove or reduce funding unless the offending analyst is fired, as Oracle demanded in the case of David DeWitt when he published a benchmark that displeased Oracle because Oracle was not ranked at the very top. Oracle went further in the DeWitt case by threatening the University of Wisconsin with no longer hiring its graduates. The position of firms like SAP and Oracle is that those who publish research that is not consistent with their marketing materials should be removed from performing research. And in the case of Oracle, the company was intent on bringing this pressure against a major university.

The Method Used

The method was to fact-check statements made in Appleby’s articles and rate each statement for accuracy. Averaging these ratings produced an overall accuracy score.
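As a minimal sketch of this scoring method (using a short list of hypothetical ratings, not the actual study data), the overall score is simply the arithmetic mean of the per-statement accuracy percentages:

```python
# Hypothetical per-statement accuracy ratings, in percent. The actual
# study rated well over one hundred statements, most of them at 0%.
ratings = [0, 0, 50, 0, 10, 100, 0, 0]

# The overall accuracy score is the simple arithmetic mean of the ratings.
average_accuracy = sum(ratings) / len(ratings)

print(f"Average accuracy: {average_accuracy:.2f}%")  # Average accuracy: 20.00%
```

The same calculation applied across all of the statements in the table below produces the study’s overall figure.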

What Statements Are Measured

Appleby is very well connected to SAP and has received a steady stream of information from SAP since he developed a relationship with SAP’s upper reaches. In his article 10 HANA Predictions for 2015, Appleby made “predictions,” but these were just things that SAP development was working on at the time. We did not measure these statements for accuracy.

In some cases, Appleby made statements such as that HANA worked with S/4HANA, which is true but would be like crediting someone for saying “the sun rises in the East.” Therefore, the study measured only statements where something more than an elementary and universally agreed-upon observation was made.

The Evidence for the Percentage Rating

  • The measured statements are listed below in the Appleby Accuracy Checker Table (which is searchable; see the search box at the top).
  • The evidence for each statement’s accuracy percentage is found in the article link, which in each case points to a Brightwork Research & Analysis article where the statement is analyzed. Using CTRL-F in the analysis article for the item you are looking for should bring you directly to the section with the supporting analysis.
  • Because Appleby would not have made claims that were not cleared by SAP, and because most of the claims were also made by SAP, this study can also be used as a rough approximation of SAP’s accuracy on HANA. Our Study into SAP’s Accuracy covers that topic separately.

Appleby Accuracy Checker Table

Appleby's Statement | Accuracy % of the Comment | Link to Analysis Article
HANA runs on BW up to 100x faster. | 0% | What In Memory DB for BW
SP06 of HANA had all the capabilities of a mature database. | 0% | What In Memory DB for BW
HANA was the only database with full HA. | 50% | What In Memory DB for BW
HANA simplifies the environment by installing multiple applications on a single HANA instance. | 0% | What In Memory DB for BW
DB2 BLU does not accelerate all of the queries that HANA does. | 0% | What In Memory DB for BW
There is a 1 TB limit on DB2 BLU. | 0% | What In Memory DB for BW
In Nov 2013 DB2 BLU did not support BW. | 0% | Appleby Beaten by Chris Eaton
Maintenance is reduced with HANA. | 0% | Appleby Beaten by Chris Eaton
HANA only has two patches (of any kind) per year. | 0% | Appleby Beaten by Chris Eaton
HANA had constant innovations. | 0% | Appleby Beaten by Chris Eaton
In 2013 HANA had 24 months of stability behind it. | 0% | Appleby Beaten by Chris Eaton
SAP did not purposefully withhold information from customers that would have shown DB2's certification. | 0% | Appleby Beaten by Chris Eaton
Bluefin did substantial DB2 work. | 0% | Appleby Beaten by Chris Eaton
HANA would never need separate row and column stores for transactional data and reporting data. | 0% | Appleby Beaten by Chris Eaton
IBM BLU is not well suited to OLTP workloads. | 0% | Appleby Beaten by Chris Eaton
HANA disrupted the database market. | 10% | Appleby Beaten by Chris Eaton
Microsoft's "in memory" database is incremental compared to HANA's. | 0% | Appleby Beaten by Chris Eaton
HANA is SAP's fastest growing product ever. | 0% | A Historical View of HANA
The Model T was not quickly adopted. | 0% | A Historical View of HANA
HANA was priced in a way to gain widespread adoption -- like the Model T. | 0% | A Historical View of HANA
HANA was designed similarly to the Model T to reduce database maintenance. | 0% | A Historical View of HANA
HANA, like the Model T, eliminates inefficient industries that surround databases. | 0% | A Historical View of HANA
SAP has a club of customers that have seen 100,000 times performance improvement from implementing HANA. | 0% | A Historical View of HANA
Oracle, IBM and Microsoft have formidable sales and marketing teams to push their databases (but, by inference, SAP does not). | 0% | A Historical View of HANA
HANA reduces the number of components required for development. | 0% | A Historical View of HANA
TCO is only the license and hosting cost. | 0% | Appleby's TCO of HANA
Because Appleby is not paid by SAP, he has no pro-SAP bias and no pro-HANA bias. | 0% | On HANA Replacing BW
All duplicated data between ERP and the data warehouse is waste. | 0% | On HANA Replacing BW
HANA is a data platform and application platform. | 0% | On HANA Replacing BW
HANA does not provide a number of functions required for a complex enterprise data warehouse. | 100% | On HANA Replacing BW
BW takes care of SAP security and authorizations, complex hierarchies, non-cumulative data for stock and many other capabilities. | 100% | On HANA Replacing BW
SAP is still investing in BW. | 100% | On HANA Replacing BW
HANA supports cold data. | 0% | On HANA Replacing BW
BW is being implemented in the cloud. | 10% | On HANA Replacing BW
HANA won't replace BW. | 100% | On HANA Replacing BW
HANA was doing things no other database could do. | 0% | HANA SPS08
Microsoft cannot get traction in the SAP market, which is why SAP does not support MySQL or PostgreSQL. | 0% | HANA SPS08
Oracle produces FUD, but SAP only provides information. | 0% | HANA Versus DB2
SAP has to be sensitive to positioning HANA because of reseller agreements with IBM and Oracle. | 0% | HANA Versus DB2
John Appleby is unbiased because he does not work for SAP. | 0% | HANA Versus DB2
HANA can solve problems that cannot be solved in DB2 or Oracle. | 0% | HANA Versus DB2
HANA is a platform, not just a database. | 0% | HANA Versus DB2
A 34 question form can only function properly if supported by HANA. | 0% | HANA Versus DB2
Only SAP writes software that takes advantage of HANA's design. | 0% | HANA Versus DB2
HANA is enterprise ready (in 2014). | 10% | HANA Versus DB2
Oracle and IBM need to catch up to HANA. | 20% | HANA Versus DB2
HANA can be made reasonable in price. | 0% | HANA Versus DB2
SAP apps run far faster on HANA. | 0% | HANA Versus DB2
HANA running slower than competing databases is entirely due to poor code. | 0% | HANA Versus DB2
HANA is an application platform. | 0% | HANA FAQ
HANA runs from 10 to 1000x faster than a regular database like Oracle. | 0% | HANA FAQ
HANA is a reinvention of the database. | 0% | HANA FAQ
HANA is based upon 30 years of technology improvements. | 0% | HANA FAQ
HANA allows the build of applications not possible on traditional RDBMSs. | 0% | HANA FAQ
HANA allows the renewal of existing applications like the SAP Business Suite. | 0% | HANA FAQ
HANA simplifies SAP's applications. | 0% | HANA FAQ
No other database vendor could have added a column store to their databases. | 0% | HANA FAQ
HANA is not just a database. | 0% | HANA FAQ
HANA offers more out of the box than Oracle or DB2. | 0% | HANA FAQ
HANA was built from the ground up. | 0% | HANA FAQ
HANA was built from the p*Time and TREX acquisitions. | 100% | HANA FAQ
HANA stores all data in memory. | 0% | HANA FAQ
HANA stores all data in a columnar format. | 0% | HANA FAQ
HANA can reduce the data footprint by 95%. | 0% | HANA FAQ
The real benefits of HANA come when the logic from the applications is optimized and pushed down to the database level. | 0% | HANA FAQ
The real benefits to HANA come from simplified applications. | 0% | HANA FAQ
HANA supports all applications on the same instance of data at the same time. | 0% | HANA FAQ
HANA perfectly serves the needs of all applications with one "system of record" instance. | 0% | HANA FAQ
(Adopted from McDermott) HANA is attached to everything SAP has. | 0% | HANA FAQ
HANA reduces TCO. | 0% | HANA FAQ
HANA simplifies landscapes. | 0% | HANA FAQ
HANA performance, advanced analytics (predictive, geospatial, text analytics) and simplicity often mean a business process can be changed to be different compared to competitors. | 0% | HANA FAQ
HANA provides risk mitigation. | 0% | HANA FAQ
A modern database should be a database and a platform. | 0% | HANA FAQ
HANA is an entirely new database, while databases from IBM, Oracle and Microsoft are "traditional" and therefore dated. | 0% | HANA FAQ
Companies are looking to replace "traditional" databases. | 0% | HANA FAQ
SAP tried to keep SAP licensing simple. | 0% | HANA FAQ
Oracle requires 30 SKUs to equal HANA. | 0% | HANA FAQ
HANA has more functionality than Oracle's entire software stack. | 0% | HANA FAQ
HANA works out of the box for data warehouses. | 35% | HANA FAQ
The fact that ByDesign was ported to HANA is evidence that this is a major factor for ByDesign. | 0% | HANA FAQ
HANA was designed to replace DB2 and Oracle at SAP customers. | 100% | HANA FAQ
It is feasible to scale HANA to very large sizes. | 15% | HANA FAQ
HANA stores data primarily in a column format, with 99% of the tables for ERP being column oriented. | 0% | HANA FAQ
HANA is a good Big Data platform. | 0% | HANA FAQ
HANA leads all databases in HA. | 0% | HANA FAQ
Appleby's explanation of his testing demonstrating that his test proved OLTP performance. | 0% | HANA Performance with OLTP
HANA has simpler data structures. | 0% | HANA Performance with OLTP
Simpler data structures lead to 5 to 10x faster performance than AnyDB. | 0% | HANA Performance with OLTP
A test is sufficient to prove OLTP performance even if it was incorrectly set up. | 0% | HANA Performance with OLTP
HANA is going to kill AnyDB. | 0% | HANA Performance with OLTP
SD does not accurately represent how customers use systems in 2014. | 0% | HANA Performance with OLTP
The BW-EML benchmark should be discussed in conjunction with the SD benchmark. | 0% | HANA Performance with OLTP
Endorsing Hasso's prediction that there would be large scale customer adoption of Suite on HANA. | 0% | 2015 HANA Predictions
HANA would be used for consolidated workloads. | 0% | 2015 HANA Predictions
Transaction processing is 2x compared to competing DBs out of the box. | 0% | 2015 HANA Predictions
HANA is the most comprehensive data platform. | 0% | 2015 HANA Predictions
How customers use SD has changed, making the old SD benchmark completely inapplicable. | 0% | BW-EML Benchmark
The BW-EML benchmark should be considered a replacement for the previous SD benchmarks. | 0% | BW-EML Benchmark
The BW-EML benchmark should be discussed for HANA without consideration for other databases being compared to HANA. | 0% | BW-EML Benchmark
Benchmarks become out of date as technology and business needs change, and this is why the BW-EML benchmark was created. | 0% | BW-EML Benchmark
There is a clear move to OLTAP databases. | 0% | BW-EML Benchmark
BW-EML is a cross vendor benchmark. | 0% | BW-EML Benchmark
HANA is less expensive than Oracle. | 0% | What Oracle Won't Tell You About HANA
The inclusion of ASE means HANA is lower cost than Oracle. | 0% | What Oracle Won't Tell You About HANA
Because SAP bundles various items with HANA, this means HANA is lower cost than Oracle. | 0% | What Oracle Won't Tell You About HANA
SAP on Oracle has no future. | 0% | What Oracle Won't Tell You About HANA
SAP will only be supported on Oracle until 2025. | 0% | What Oracle Won't Tell You About HANA
HANA enables a new class of applications. | 0% | What Oracle Won't Tell You About HANA
HANA SPS09 provided a ton of innovation. | 0% | What Oracle Won't Tell You About HANA
In S/4HANA, SAP achieved a 70% data reduction. | 0% | What Oracle Won't Tell You About HANA
Oracle locks you in to engineered systems. | 15% | What Oracle Won't Tell You About HANA
SAP HANA is a cloud platform. | 0% | What Oracle Won't Tell You About HANA
HANA is less expensive than Oracle. | 0% | 10 Customer Use Cases
Server architecture would be able to run a typical large scale database in-memory. | 0% | 10 Customer Use Cases
SAP is building a digital platform powered by HANA and the Cloud Platform. | 0% | 10 Customer Use Cases

Conclusion

On average, for the statements measured, Appleby had a 6.43% accuracy rate in his statements about HANA.

Appleby had, and continues to have, a strong financial incentive to promote HANA. Individuals repeatedly say that just because they have a financial incentive to say something does not mean that this financial bias impacts their statements and positions. We have been told that paid placements don’t impact media content. We have been told that consulting companies that staff SAP projects have no incentive to guide their clients to SAP solutions. Yet when we test such individuals (or companies), we find the opposite is true.

SAP customers continue to use sources of information with 100% SAP bias when making their SAP investment decisions, regardless of all the evidence of the outcomes of financial bias. This is why we consider SAP (and most other IT areas) to be entirely non-scientific.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions or incentives of any nature are allowed.

References

https://blogs.saphana.com/2013/11/13/comparing-sap-hana-to-ibm-db2-blu-and-oracle-12-in-memory-for-sap-bw/

https://news.ycombinator.com/item?id=15886333

“That time Larry Ellison tried to have a professor fired for benchmarking Oracle” (discussion thread)

https://en.wikipedia.org/wiki/David_DeWitt

“In essence, a DeWitt Clause forbids the publication of database benchmarks that the database vendor has not sanctioned. The original DeWitt Clause was established by Oracle at the behest of Larry Ellison. Ellison was displeased with a benchmark study done by David DeWitt in 1982, then an assistant professor, using his new Wisconsin Benchmark, which showed that Oracle’s system had poor performance.” – Wikipedia

https://en.wikipedia.org/wiki/SAP_HANA


John Appleby, Beaten by Chris Eaton in Debate and Required Saving by Hasso Plattner

Executive Summary

  • John Appleby made bold predictions on HANA.
  • We review how he was badly beaten by Chris Eaton in a debate on HANA.

Introduction

John Appleby’s article on the SAP HANA blog was titled What In-Memory Database for SAP BW? A Comparison of the Major Vendors and was published on Nov 13, 2013. In the comments section of this article, let us see how Appleby fared against Chris Eaton of IBM in defending his claims.

The Quotations

Comments by Chris Eaton of IBM

“You make a claim that there is a 1TB limit on BLU.  That is not the case.  Perhaps you are getting this from our beta program where in fact we asked clients to initially limit their testing to 10TB of data (not 1TB)  but at the end of that program we tested with clients of 100TB and in the GA version there is no hard limit on BLU tables. In fact IBM has a sizing guide for clients that helps clients size their system. The guide actually has recommendations from the smallest systems up to 100TB and guidance for beyond that.  One thing I find curious in your HANA column is that you list 500TB raw and 100TB compressed. Does that mean HANA only has 5x compression? Also on the Enterprise Readiness section there are solutions today with BLU for HA and DR that keep the column organized tables fully available after a failure of the primary server (without the need to rebuild anything). And as you have noted, IBM is not standing still and is improving this story even further as we speak. In fact if you look at enterprise readiness it is of course more than just HA and DR. It typically encompasses transparent encryption, separation of duties, workload management, minimizing downtime, rolling patches and much much more…that’s exactly why IBM put BLU in the already enterprise robust DB2.

Now on to perhaps conjecture rather than fact, I’m surprised that you list “Maintenance is reduced” under HANA but I guess you mean data movement because I’m shocked to be honest at the number of patches (which I call maintenance) and the number of outrages (Chris means “outages”) required to apply those patches compared to BLU which follows the regular IBM fixpack cycles for DB2.

On the availability you bring up a very good point. But you may not be aware that in fact the technical certification work is complete. In fact IBM did some work to optimize BLU for BW and shipped those optimizations in FP1. These optimizations were actually developed jointly between IBM and SAP engineers so that our joint clients could benefit from the combined value. In fact SAP has added capabilities in the DBA cockpit and in BW itself to exploit BLU. So all of the technical work is complete and passed with flying colours (as I understand it) and both sides know that the products work together because both sides worked together to make sure that was the case for our mutual clients (as our joint clients expect us both to continue to do).  And yet for some reason the official OSS Note has not been published. I’m left wondering why SAP is arbitrarily withholding value from their clients (our mutual clients) who can drive their own business benefits from our joint technology.  If I were a customer, I think I would be upset with a vendor withholding value that they have already paid for (and in fact are entitled to if they are under current support contracts).  Hopefully, for our mutual clients benefit, this paperwork will be released shortly.”

On the last point, this is the problem with SAP acting as the gatekeeper for database approvals while also selling a database it is pushing customers to use. Naturally, SAP has the ability to withhold information from customers to make the story look better for HANA.

Let us see how Appleby responds.

Appleby’s Response

“This is a blog around BW primarily, and BLU is definitely not ready for BW and won’t be supported until at least January now. Even when it is supported I have serious doubts to its viability for the current release of DB2.

So all the documentation I have seen for BLU limits it to 1TB and that support for larger systems would come in a future release. Could you clarify the following questions, in case I’m confused? Can we talk about physical RAM rather than the amount of CSVs you can load?

– Does IBM DB2 BLU now support columnar tables in multiple nodes (scale-out)?

– If not, what is the maximum amount of RAM supported for p-Series/AIX and x64/Linux?

– Is there support for data across multiple p-Series books in e.g. p795?

On your patch point, I really don’t get why IBM keep going on about this. DB2 has two major patches a year. HANA has two major patches a year. In addition, HANA makes bi-weekly builds available to customers which allow faster innovation during development cycles. Then customers lock down your revision when you are ready to go live, and patch periodically. With HANA you get the best of both worlds: the latest innovations when you need it, and the choice not to patch if you don’t want to. With DB2, you have to choose a Fix Pack. When customers find bugs, IBM ships customers interim patches. This means as a customer, when you find bugs, you have to run a version of the database that potentially no-one else is running and hasn’t been properly regression tested.”

Here he contradicts Eaton on BLU’s readiness for BW, even after Chris told him about the updates in the FP1 release. Appleby ended up being wrong on this.

Appleby then contradicts Chris’ statement regarding the 1 TB limit, where Appleby was wrong again.

Appleby then lies about the number of patches that HANA has per year. HANA had two “official patches” per year but had many more unofficial patches per year. Particularly at this time, HANA was being patched in a continuous fashion, because HANA’s development was so shoddy. And the reason that IBM “keeps going on about this” is because HANA was incredibly unstable at this time and is still nowhere near DB2’s stability even in 2019.

Appleby continues:

“On a related note, I think IBM has a serious problem that we won’t see until the next release of DB2. With HANA SP7, SAP HANA has a ton of new innovations that differentiate its ability to deliver apps. With DB2 10.5 FP2, there is no innovation and DB2 BLU is an impressive version 1 product but the HANA has 24 months of stability behind it, and the development team is moving that much faster. Let’s see what HANA SP08 and DB2 10.5 FP3 have to offer!”

These items turned out not to be innovations. They were development. We covered this in the article Did SAP Just Invent the Wheel with HANA?

Secondly, HANA does not qualify as innovative as we covered in How to Understand Fake Innovation in SAP HANA.

“Your accusation is pretty damning – you are accusing SAP of deliberately withholding IBM innovations from their customers. This isn’t the SAP that I know, and they immediately certified DB2 10.5. As you say, the IBM DB2 team and SAP NetWeaver team have a close working relationship. Hopefully someone at SAP can chime in on this.

I suspect that this is not the case and that the reality is that the DB2/BW team missed the drop for the last SP Stacks of NetWeaver in Aug/Sept and therefore they will have to wait until the next releases in Jan/Feb (there’s a specific calendar week for each NetWeaver release). The NetWeaver team has a published schedule at https://service.sap.com/sp-stacks (login required) and unfortunately when a sub-team misses a deadline, that code does not make it out to customers.”

Appleby claims that HANA has 24 months of stability behind it, which we know was untrue at the time; Chris’ earlier statement about HANA outages is accurate. At this point, HANA did not have a single month of stability behind it.

Appleby also fails to credit DB2 with having many more years of stability behind it and with being the far more stable database, both in 2013 and today in 2019, when this article is written.

Appleby accuses Chris of making an “outlandish” claim that SAP withheld information from customers that would have helped IBM. Appleby does this knowing that Chris has to be careful in how he presses this claim, as Chris works for IBM. Allow us to make the claim instead: SAP definitely withheld this information to benefit HANA versus DB2. SAP has furthermore been hiding HANA from any benchmark competition since its inception, as we covered in the article The Hidden Issue with the SD HANA Benchmark.

Now let us see what Chris Eaton says in reply.

Chris Eaton’s Response

“Now I can say a bit more than I alluded to in my previous comments. As of today (not January/Feb as you claim) BLU Acceleration is supported by SAP NetWeaver BW 7.0 and above (as you have now partially updated). All a client needs is the support packages listed in SAP Note 1825340 and then apply the correction noted in SAP Note 1889656 released today to effectively “turn on” the ability to use BLU that is already in those support packages. Now the first 3 bullets in your comparison above are still factually inaccurate and somewhat misleading and the second 3 are subjective opinions (not based on facts). If you really want to be factual you should give DB2 a full circle and HANA a 3/4 circle in the availability category since BLU is actually available for BW 7.0 and higher where as HANA is only available on 7.3 and higher. Your descriptions no longer match your circles. Or perhaps you want to rewrite this blog to come up with another set of arbitrary comparison points.”

Yes, as is typically the case in every comparison we have reviewed from Appleby, the comparison is rigged in favor of HANA. Here Chris Eaton is calling him out on it, and Chris Eaton is correct.

Let us see what Appleby says in reply.

Appleby’s Reply

“I’ve already updated the blog with information taken from publicly available information issued by IBM and SAP. If you have specific factual inaccuracies, then please send me an email and I will make corrections. No need to air your dirty laundry in public.”

That is curious, as Chris Eaton called out the inaccurate comparison chart, and that chart was not changed. In fact, it remains unchanged to this day in 2019. Chris Eaton just told Appleby what his specific factual inaccuracies were, and now Appleby is responding by gaslighting him.

At this point, it appears that Appleby is growing uncomfortable with Chris Eaton’s commentary. Appleby moves to take the discussion offline. The framing is clear. Appleby can make any claim he wants no matter how false, but if Chris Eaton contradicts Appleby in a public forum then Chris Eaton is breaking the rules and airing his “dirty laundry.”

Let us see Chris Eaton’s Response

Chris Eaton’s Reply

“Not sure how factual errors in your blog are considered “my dirty laundry”.  In any case here are some of the factual errors and misleading statements about DB2 in the blog posting (even after you have changed it).

1) Availability – you give DB2 a 1/4 circle and HANA a full circle and yet even you wrote that DB2 10.5 is available on more versions of BW than HANA is.

2) Scalability – In the HANA column you quote database sizes yet in the DB2 column you quote memory sizes and then claim a difference. Misleading at best.

3) Enterprise Readiness – You compare the capability (high availability and disaster recovery) with a feature which in DB2 we call HADR. Yet as I mentioned in other postings, today’s DB2 10.5 support both HA and DR. In fact, we can do this today with the same RPO and RTO as HANA without the use of our HADR feature. You say in the paragraph about DB2 that there is no “HA or DR” and that is incorrect. Again misleading to give HANA a full circle and BLU a quarter circle.”

Chris Eaton clearly knows DB2, and he again calls out Appleby on the incorrect information in his article, which, contrary to what Appleby just claimed, has not been removed.

Let us see Appleby’s Response.

Appleby’s Response

“Chris – please take this offline, I’m happy to respond to your emails and make any corrections, but this comment thread is getting pretty tiresome.

1) It’s my opinion that whilst BLU is supported, that support is very limited due to all the restrictions and the maturity of the BW on BLU code-line. And whilst BW can be run on BLU with BW 7.0, I don’t think that is a useful benefit. There are specific reasons why SAP only brought support for BW 7.3.x+ and this is largely because of the OLAP Compiler: BW 7.0.x was specifically tuned for BWA and running BW on HANA for BW 7.0.x would provide limited benefits over BWA.

2) They are both RAM – HANA supports an unlimited number of nodes with Tailored Datacenter Integration. Both DB2 and HANA require significant query processing memory for in-memory analytics so you cannot use all RAM for your active data.

3) HA/DR is not supported with BW on DB2 BLU. This is a hard requirement according to the joint IBM and SAP team. Please see the following, from SAP Note 1825340: “Your database server does not use HADR”. If you think this is incorrect, then please take it up with your team.”

Yes, it is quite tiresome to get fact-checked. Also, how likely is it really that Appleby will make corrections if Chris Eaton “takes it offline”?

Chris Eaton’s Response

“Hi John, since you work for a 3rd party (not IBM or SAP) and I’m assuming you are not just making stuff up, then I’m left with wondering if someone from one of our two companies is feeding you inaccurate information. I’m sure you have contacts within SAP, but it would appear that maybe those folks may not be aware of all the facts here either. I’d be glad to talk with you directly about the reality here (you already have my contact information as in a previous post I told you that if you want to check facts you can reach out to me anytime) or you could ask to speak with someone at SAP that has actually worked on the DB2 optimizations inside of BW. I’m pretty well connected here as I work in the same office as the DB2 BLU development team. This also happens to be the same office where much of the technical certification work is also done by both SAP and IBM employees (certification work takes place in both Toronto and Waldorf). This would allow you to correct some of the misinformation you list above if you actually do the research with someone in the know (like me).

If you get a chance to speak with someone at SAP who actually knows the details here, ask them about the contents of SAP NW 7.00 SP29, or 7.01 SP14, or 7.02 SP14, or 7.11 SP12, or 7.30 SP10,  or 7.31/7.03 SP09 or or 7.40 SP04. These are support packages that are already out there today, available to clients, but as you know there is no “published” note that says this combination is officially supported. Hence my wondering why there is any withholding of value from clients. We do agree however on one thing…”This isn’t the SAP that I know” (or at least have always known from my dealings with some really great technical people there).”

Our analysis of John Appleby’s role with HANA is that he was Hasso Plattner’s hand-picked shill for HANA. Let’s take a look at the definition of a shill.

“A shill, also called a plant or a stooge, is a person who publicly helps or gives credibility to a person or organization without disclosing that they have a close relationship with the person or organization. Shills can carry out their operations in the areas of media, journalism, marketing, politics, confidence games, or other business areas. A shill may also act to discredit opponents or critics of the person or organization in which they have a vested interest through character assassination or other means.

In marketing, shills are often employed to assume the air of satisfied customers and give testimonials to the merits of a given product. This type of shilling is illegal in some jurisdictions, but almost impossible to detect. It may be considered a form of unjust enrichment or unfair competition, as in California’s Business & Professions Code § 17200, which prohibits any “unfair or fraudulent business act or practice and unfair, deceptive, untrue or misleading advertising”.[7] – Wikipedia

Appleby repeatedly published false information about HANA, and it is difficult to believe that this was done without the express support of SAP. As we will see later, Hasso Plattner even shows up in this post to try to back Chris Eaton down and to support Appleby. And this is not the only Appleby article to carry a comment from Hasso Plattner.

SAP did not hide such database certification notes when it did not have a database to sell. But once it did, SAP switched gears and began protecting HANA in any way that it could. As a long-time SAP consultant, I don’t think there is a way of lying yet invented that SAP has not engaged in, and this is the SAP that I have known since 1997. So I find these statements about “not the SAP I know” ludicrous.

John Appleby’s Response

“Hey Chris,

If you recall, I did reach out you and Paul and the response I got was “we’ll get back to you after quarter close”, and you didn’t reply to any of my questions above either. I thought they were pretty specific in their nature? Are you able to respond to those?

You are on shaky ground, accusing SAP of anti-competitive behavior in the in-memory database market. I thought IBM and SAP had a good relationship and it’s sad to see that IBM feels this way.

I pulled open a BW 7.31 SP09 system and searched the source code for DB2 BLU code and can’t find anything mentioning “ORGANIZE BY COLUMN” (which is the syntax to create a column table in DB2 BLU) so I’m not sure where you get your information from. Perhaps you can share the name of the function modules, programs, or transactions in NetWeaver that would allow the conversion of row objects to BLU column objects?

John

P.S. they did originally release some code in the 7.01 SP14 code-line for BLU, but they then removed it. See SAP Note 1881171 – DB6: Activation of an InfoCube or a DSO fails

P.P.S there is an official statement from SAP and IBM on DB2 BLU tables, updated 17th October 2013

SAP Note 1851853  – DB6: Using DB2 10.5 with SAP Applications. The use of column organized tables is currently not allowed with SAP applications. The use of column organized tables with SAP NetWeaver BW is currently under investigation”

SAP is not only guilty of anti-competitive behaviour in the in-memory database market; the entire business model of SAP is based upon anti-competitive behaviour. SAP’s application of type 2 indirect access is quite obviously a violation of the tying arrangement clause of US antitrust law, as we covered in the article SAP Indirect Access as a Tying Arrangement Violation. The lawsuit by Teradata against SAP specifically accuses SAP of anti-competitive behaviour, as we covered in the article How True is SAP’s Motion to Dismiss the Teradata Suit?

IBM consulting has a very good relationship with SAP. IBM has made enormous amounts of money in implementing SAP. However, the IBM database group stopped having a good relationship with SAP once SAP entered the database market with HANA and began making enormously false claims about HANA as we covered in the article IBM Finally Fighting Back Against HANA.
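Appleby’s check in his reply above, searching a NetWeaver system’s extracted source for DB2 BLU code, can be sketched as a short script. “ORGANIZE BY COLUMN” is the DB2 BLU DDL clause cited in the thread itself; the directory layout, file handling, and function name below are our hypothetical illustration of that kind of scan, not actual SAP or IBM tooling.

```python
# Sketch of the kind of source scan Appleby describes: search extracted
# source files for the DB2 BLU column-table DDL marker.
# The directory and function name are hypothetical, for illustration only.
import os

MARKER = "ORGANIZE BY COLUMN"  # DB2 BLU clause for column-organized tables


def find_marker(root: str, marker: str = MARKER) -> list[str]:
    """Return paths of files under `root` whose text contains `marker`."""
    hits = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    if marker in fh.read():
                        hits.append(path)
            except OSError:
                continue  # unreadable file; skip it
    return hits
```

Finding no hits, as Appleby reports, would only show the marker is absent from the sources scanned; it says nothing about code shipped elsewhere, which is exactly the gap Eaton points to with the support-package list.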

Hasso Makes An Appearance!

We find the timing of Hasso’s appearance a bit curious.

Appleby is being called into question by Chris Eaton, and suddenly Hasso appears. Did Appleby reach out to Hasso for a lifeline? We don’t know, but it is certainly peculiar.

Now let us see what Hasso says.

“isn’t it great chris, that you wont see any of these support packages any more once the system is running on the hana enterprise cloud. and this will be done at a fixed monthly price.  we should discuss the future here, the optimization of the installed base is very important but doesn’t let us leap forward.”

Hasso’s comment is that there will be no more support packages once the system is running on the HANA Enterprise Cloud (now the SAP Cloud), which even in 2019 has almost no production HANA instances running on it. So this future state envisioned by Hasso has still not come to pass roughly 5.5 years later. And overall, this comment has nothing to do with the thread: Chris Eaton is asking Appleby to answer questions and to correct his false article.

Hasso is doing nothing here but pivoting the conversation. So not only is the timing peculiar, but Hasso’s comment is purely diversionary. As we will see, though, this diversion will not last for long.

Chris Eaton’s Response

Chris is now on notice that he is interacting with Hasso Plattner. For people who work in the SAP space, even interacting with Hasso through blog comments is a big deal. In terms of the power dynamic, it is all with Hasso: all Hasso has to do is reach out to a high-level person at IBM, and Chris can easily be spanked for this comment thread.

Given this context let us see what Chris Eaton says. At first, it seems like Chris Eaton takes the bait.

“I agree Hasso, it will be nice when I can just log into my cloud service provider and ask to run my chosen SAP Application and choose to store the data in BLU Acceleration for Cloud (or wherever 🙂 and not have to worry about issues like does this module fit with this widget and do I have to read this OSS note or that one. Clients expect that the company you advise for and my company work together so that they can make their choices and move forward to run their businesses and compete in their markets. Having both our offers working well together (as they have for a very long time now even in an era of co-opetition) is what our mutual clients expect.”

Well, neither SAP nor IBM has done much in the cloud since this time, and in desperation, IBM made a very expensive acquisition of Red Hat. See our analysis of why IBM’s acquisition of Red Hat will likely not work out in the article What is the Real Story of How IBM will Use Red Hat?

Overall, there is just not much to this quote. But on the other hand, Hasso has changed the subject. So there is not much for Chris Eaton to respond to.

Hasso Plattner’s Response

“I fully agree with the working together approach. but i can’t see where db2/blu and hana should work together.

if ibm wants to run e.g. sap bw on db2/blu in the cloud for some customers, ibm has to swallow the much higher system costs and that is ok. if we talk about an installation at the customers site, sap should inform the customer about the reality in performance, cost of operation and system costs.”

And all of this is for one simple reason. SAP planned to use its application control over the account to push other database vendors out of SAP accounts, with zero technical justification for doing so.

Here Hasso lies, and in a major way. HANA is far more expensive in both license and TCO than DB2.

Chris Eaton Responds

“I personally agree with you on the first statement.  On the others, well this is business as usual I think. Each company would advise clients on what they believe to be the business value of their offerings and customers decide.  I believe one solution has lower costs, you believe the opposite…that’s fine…that’s how the world works and in the end customers decide (we don’t decide for them). And I know our two companies have always had a great collaborative relationship when it comes to ensuring clients success (even when there are some products on both sides that compete on capabilities with each other) and I’m sure as a member of the advisory board you will do whatever you can to make sure that continues so our mutual clients benefit from the great work being done at both companies.”

Brightwork Research and Analysis is completely independent of all vendors, including all of the vendors discussed here. Unlike Forrester, we did not take money from SAP to create a fake TCO study before HANA was implemented almost anywhere, as we covered in the article How Accurate Was The Forrester TCO Study? In fact, we have performed more HANA research, and published more detailed analysis of this topic, than any other independent entity, and we can say definitively that HANA is the highest-TCO database sold today. DB2 is not even close to the cost of HANA.

Chris Eaton’s Response

“To answer your specific question

1) No

2) 16TB

3) Yes

But don’t forget that with BLU you are not limited in size by the amount of active data that can fit in real memory. So folks have the option to run on any size system they want and choose the price/performance point that is the right one for their business.  I’m not sure what documentation people are feeding you but there is no IBM document that states there is a 1TB cap on BLU…if it did that document would be wrong. I know, I work in this code so take it from an expert here.  In our past exchanges I pointed out that you were being given false information (the blog about setting the record straight) and looking back at those comments you actually stated that you received incorrect information and corrected the posting (and I respect that).

Yet this time I’m telling you as a DB2 subject matter expert that there is no 1TB limit and yet you insist on sticking to that point. I have to ask why?”

Here Chris Eaton pivots back to the questions that Appleby asked. So he only accepted the diversion by Hasso temporarily.

Chris Eaton then contradicts what Appleby previously stated about the amount of active data that can be placed into memory for DB2 BLU. Chris Eaton also points out that even after being corrected, Appleby sticks to the 1TB limit. To us, the reason for this is obvious: Appleby wants to tell his customers that DB2 has a 1TB limit. Appleby is, after all, in the business of selling consulting services for HANA, not DB2.

Chris Eaton continues.

“On your comments about FP2, you are making conjectures about things I think you know nothing about (since you are a HANA expert I wonder who is sending you these kinds of inaccurate claims to try to make some fallacious argument). FP2 was shipped to support some changes in the client side for DB2 for z/OS v11. It had nothing to do with DB2 BLU or SAP or LUW for that matter. It was actually shipped 6 weeks after FP1 because it was for updates to the client for the DB2 z/OS release so when you make claims about innovation and then refer to a client side release for a different product…well I can only say that I’m sure you are close to the HANA developments but you don’t have the depth on the DB2 development and I would therefore suggest that you rely on the knowledge of those that do (and correct the false claims made above).”

More corrections, and clear annoyance with Appleby for writing about topics he does not know and has not researched. This is related in part to the fact that Appleby does not put adequate effort into any investigation he performs.

Chris Eaton continues:

“On your latest comments. I haven’t accused SAP of anything. I’ve asked why they are not supporting two products that they know work together (since their development team in conjunction with the IBM development team worked in close partnership to make that happen). On the development side the two companies have a fantastic partnership…the teams work closely together to drive value for clients (and that’s why I said above that people involved in the technical partnership here are great and are doing the right thing for customers. I have seen first hand the value customers are deriving from this joint technology partnership). I have always felt that it’s customers first!”

I have to interject here: after working with IBM on several projects, I have never seen IBM put customers anywhere but last. In all of my IBM experiences, they seemed very comfortable lying to customers about SAP. They also gave the distinct impression that if I did not toe the IBM line, they would try to get me replaced on the project. The line for IBM consulting is maximizing the billing hours for IBM resources on the project. Now, I know some good people out of IBM’s database group, but I have never worked with them on a project. IBM’s reputation is that they are utterly cutthroat.

Chris Eaton continues:

“And having worked for IBM for a very long time I’ve been in an environment of co-opetition for almost 20 years now (almost any product in IBM has a working relationship with another company that also has products that compete with other offerings from IBM … HW, SW, services). But that’s what helps customers get value when companies (like IBM and others) are open to supporting what customers are asking for.  I hope SAP will do the same shortly.  I’m not taking shots at any product or company here. I’m pointing out that you are making false claims about DB2 in this posting and that is what I’m asking you to correct so that clients who are trying to make an informed decision actually make it with factual information and not FUD.”

Chris Eaton really did accuse SAP of cheating, and we agree that SAP most likely did cheat. SAP is always cheating in some way or another, so it is normally a good assumption to make.

As we said, when it comes to benchmarking HANA, SAP cheated from 2011 until the time this article was written in 2019.

Appleby’s Response

“Thanks for the response Chris. I will always update my information but given that your first response was incorrect “100TB of data and no hard limit”, I’m glad I waited. There is a hard limit of 16TB of RAM for BLU then, of which you must retain a good quantity for the sort heap in my experience.

Both HANA and BLU have a very similar architecture for hot vs cold data and HANA, like BLU, also only requires the working set to be in memory. For BLU this is done at a block level and HANA it is done at a partition-column level and each has their pros and cons. HANA aggregates much more quickly than BLU as a result, and BLU has slightly lower active memory requirements. Swings and roundabouts.

I like to think of myself as a technology expert and I’m a long-time DB2 user capable of reading the DB2 10.5 FP2 release notes. They are available here. So I repeat my point… there is no innovation in DB2 10.5 FP2, just some client updates. I think we agree on this point.

If you don’t think you’re accusing SAP of anti-competitive behavior then you should re-read your posts before pressing submit.”

Appleby would looooove to bring this conversation to a close.

Finally, we get to something Appleby says that is true: Chris Eaton did accuse SAP of anti-competitive behavior. Which puts Chris Eaton and IBM in good company.

Chris Eaton’s Response

“Oh John, this is now in the realm of the laughable.  Let me help you out here so you don’t continue to make false, misleading or unsubstantiated claims.  If you really want to tell people that BLU is limited in database size and memory, here are limits based on today’s technology.  As pointed out above the maximum server supported by BLU today fits 16TB of memory.  When I plug that into our best practices sizing guideline spreadsheet (I had to make a new row because we didn’t bother to go that high) 16TB of memory in BLU would be the recommended amount of RAM for 1.8PB (yes that’s 1800TB) of uncompressed user data. Honestly, I don’t expect anyone to run BLU with 1800TB of user data today and as such I really don’t expect anyone to buy 16TB of RAM to support that size database. If you are wondering, the physical limit on the amount of data that can be stored in a DB2 single partition database is 2048PB (2,048,000TB). Which is why I said previously there is no hard limit (should have said, no realistic or practical hard limit). Feel free to quote me if you like that the maximum size database for DB2 BLU today is 1800TB if you want to be accurate and follow IBMs best practices for performance with BLU. I wonder what the size and cost of a HANA system would be to support 1800TB of user data? Theoretically is there a certified configuration that supports the RAM required for that much user data? If you think my facts are wrong, I would like to invite you up to Toronto for a sit down with me and the other BLU experts to discuss the facts here with you. I’d be glad to host you here to discuss how you as a “a long-time DB2 user” (although looking at your LinkedIn profile I don’t see you listing any DB2 experience and don’t see any DB2 endorsements from anyone, nor do I see Bluefin with any information on DB2 practices) but I’m sure you can share your experiences when we sit down and educate you on BLU. 
So you set the date and I will have a line-up of experts to show you the reality of BLU’s capabilities. I’m not interested in bashing other products here but you should be aware of the capabilities of DB2 if you want to make claims like this blog posting.”

Hmmm… it looks like we aren’t the only ones to accuse Appleby of

“false, misleading or unsubstantiated claims.”

This comment, in particular, is humorous.

“I wonder what the size and cost of a HANA system would be to support 1800TB of user data?”

What Chris Eaton is getting at here is that HANA is exorbitantly priced per GB. An 1800TB database priced under HANA’s licensing model would be unthinkably expensive; HANA is also the only database priced this way. Notice further on that Appleby will not engage at all on the price of HANA.
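To make Eaton’s point concrete, here is a back-of-the-envelope sketch using only figures that appear in this thread: his BLU sizing guideline (16TB of RAM recommended for roughly 1800TB of uncompressed user data) and the roughly 5x compression ratio he inferred from Appleby’s own chart (500TB raw to 100TB compressed). The comparison of a working-set-in-RAM design against a whole-compressed-database-in-RAM design is our illustration, not a vendor-published sizing formula.

```python
# Back-of-the-envelope sizing sketch using figures quoted in the thread.
# Illustrative assumptions only, not vendor-published sizing formulas.

RAW_DATA_TB = 1800       # Eaton: 16TB RAM sized for ~1800TB of uncompressed data
BLU_RAM_TB = 16          # Eaton: largest server BLU supported at the time
COMPRESSION_RATIO = 5    # Eaton's inference from Appleby's chart (500TB -> 100TB)

# BLU's guideline implies this many TB of raw data per TB of RAM:
blu_data_per_ram = RAW_DATA_TB / BLU_RAM_TB
print(f"BLU guideline: ~{blu_data_per_ram:.1f}TB of raw data per TB of RAM")

# A design that keeps the entire compressed database resident in RAM
# would instead need roughly this much RAM for the same data:
in_memory_ram_tb = RAW_DATA_TB / COMPRESSION_RATIO
print(f"Fully in-memory at {COMPRESSION_RATIO}x compression: ~{in_memory_ram_tb:.0f}TB of RAM")
```

At roughly 112TB of data per TB of RAM versus roughly 360TB of RAM for a fully memory-resident copy, the gap Eaton is gesturing at is orders of magnitude in hardware, before any per-GB licensing is even applied.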

Next, Chris Eaton questions Appleby’s claimed DB2 experience. We don’t know what Appleby’s DB2 experience actually is, but whatever he claims, we can be confident the reality is less.

Chris Eaton continues.

“When we sit down to talk perhaps you can tell me what the pros are for having to hold an entire column-partition of data in memory even if the query doesn’t need to process all of that data for the query results (like BLU with data skipping and page level memory caching). I’m not a HANA expert so perhaps you would like to discuss this in person with me.  And you can enlighten me on claims that aggregations are faster by storing unnecessary data in memory (rather than using that same memory to hold other, more useful data).

So are you planning to update this blog post with the facts?  Are you planning to call those people (that you say ask you for comparisons) back to let them know they may have been misinformed? By the way you also have some errors of omission on the Oracle In-Memory stuff too and if you like I can talk to you about that in Toronto too (if you look me up on linked in you can see that the 3rd most endorsed skill I have is actually Oracle).”

Yes, and this is the illogical nature of HANA, which originated with what amounts to a crazy idea from Hasso Plattner. The entire database should not be loaded into memory.
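To make Chris Eaton's "data skipping" point concrete, here is a minimal sketch of min/max (zone-map) pruning, the general technique he is referring to. This is a hypothetical illustration, not DB2 BLU's actual implementation: each page of a column carries a small synopsis, and a range query only pulls into memory the pages whose synopsis overlaps the predicate.

```python
# Minimal sketch of min/max "data skipping": a column is stored in pages,
# each with a (min, max) synopsis. A range query loads only pages whose
# synopsis overlaps the predicate, instead of the entire column.
# Hypothetical illustration only -- not DB2 BLU's actual implementation.

def build_pages(values, page_size=4):
    """Split a column into pages, each annotated with a min/max synopsis."""
    pages = []
    for i in range(0, len(values), page_size):
        chunk = values[i:i + page_size]
        pages.append({"min": min(chunk), "max": max(chunk), "rows": chunk})
    return pages

def range_query(pages, lo, hi):
    """Return matching values and the count of pages actually loaded."""
    hits, pages_loaded = [], 0
    for page in pages:
        if page["max"] < lo or page["min"] > hi:
            continue  # skipped: this page is never brought into memory
        pages_loaded += 1
        hits.extend(v for v in page["rows"] if lo <= v <= hi)
    return hits, pages_loaded

# Sorted or clustered data makes the synopses highly selective.
pages = build_pages(list(range(100)), page_size=10)   # 10 pages of 10 rows
hits, loaded = range_query(pages, 42, 45)
print(hits, loaded)   # rows 42..45 found while loading only 1 of 10 pages
```

This is exactly why a database can answer a selective query without holding the whole column-partition in memory, which is the design point Eaton is contrasting with HANA's approach.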

And Chris Eaton finishes off by rebutting, with evidence, not only Appleby's claims around DB2 but also his claims around Oracle.

The response from Appleby is going to be…..interesting. But note that Chris Eaton did not address the accusation that SAP engages in anti-competitive practices. Chris likely wants to stay away from this point, as he works for IBM. However, this is the problem: eventually, reality intervenes.

Before Appleby can respond, Chris Eaton receives an endorsement from Julian Stuhler. Earlier we questioned Hasso Plattner's getting involved. Is it possible that Chris Eaton called on Julian Stuhler to endorse him? We think this is far less likely, as it is still Appleby who is on the ropes, not Chris.

Julian Stuhler’s Comment

“Jon

I am not really qualified to comment on any of the Hana-specific debate above, but having known Chris Eaton for more than 10 years I am in a great position to speak to his DB2 technical knowledge and character. As well as being one of the most respected IBM technical authorities on DB2 for Linux, Unix and Windows, Chris served as a member of the International DB2 User Group (IDUG) Board of Directors for 10 years (I was also on the Board for several of those, and IDUG President for one of them). He is also in the IDUG Speaker Hall of Fame, one of a few people around the world who have consistently been voted as the best speaker by his peers at IDUG technical conferences. One of Chris’ specific areas of expertise is how key DB2 technologies stack up against competitor’s offerings, and I’ve personally heard him talk knowledgeably and in depth on many aspects of Oracle.

I’ve worked with DB2 as an independent consultant for over 25 years and Chris is definitely one of my “go to” guys in the event of any DB2 for LUW technical issue or query: I greatly respect both his in-depth technical knowledge and his personal integrity.

I like a good technical debate as much as the next guy, but when it descends to attempting to undermine the technical credibility and honesty of someone like Chris, you’ve either lost the argument or lost sight of the original discussion.”

That is easy: Appleby has lost the argument.

Appleby’s Response

“Hey Chris,

I can only assume IBM must be feeling the squeeze by how aggressively you guys are defending yourselves on a SAP blog. I have no problem with updating any factual issues, don’t worry.

The IBM sizer assumes that your working set is 10% of the total data store. It also assumes 10:1 compression with BLU. My experience has shown neither of those to be true in real-world customer scenarios but it obviously depends on the data and the business scenario.

One of the key points that people forget about is that SAP fits its entire ERP for 80,000 users onto one IBM x3950 with 80 cores and 4TB RAM running SAP HANA (2TB data, 2TB calculation memory). 4TB RAM certainly goes a long way! With ERP on HANA, we of course don’t need separate row and column stores for transactional data and operational reporting data. Plus as Hasso Plattner says, we can then use the DR instance for read-only queries for operational reporting.

I will blog separately around this but BLU is, as you well know, not well suited to OLTP workloads, so if you want to run ERP on DB2, you must run it on the DB2 row store. You can (in theory) then use the BLU column store and periodically update BLU with the contents of the row store. But you need a separate row cache, column store with BLU acceleration and ETL. That’s both complex, and wasteful, and introduces latency into the process.

I’m not going to get involved with a “mine is bigger than yours” discussion around skills and LinkedIn endorsements. Not sure why you went there. But for the sake of clarity, Bluefin is a Management Consultancy for customers who run SAP technologies, and 17% of SAP ERP customers run the DB2 RDBMS.”

If Appleby had no problem updating factual issues, we would not have recorded over 100 false statements by Appleby in our Appleby Papers.

Next, Appleby moves into HANA promotion, and then states:

“With ERP on HANA, we, of course, don’t need separate row and column stores for transactional data and operational reporting data. Plus as Hasso Plattner says, we can then use the DR instance for read-only queries for operational reporting.”

That is what SAP thought at the time, but then in SPS08 SAP suddenly added row-oriented tables to HANA. Yes, running an entirely column-oriented database for a transaction processing system never made any sense. Appleby himself describes this change in our critique of his article in How Accurate Was John Appleby on HANA SPS08? The article we are reviewing here was written in November 2013, and by June 2014, only seven months later, SAP had already had to change its design.

Therefore, what Appleby states here is reversed. HANA's performance for ERP systems must have been absolutely atrocious before row-oriented stores were added.
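The row-versus-column trade-off that forced SAP's hand can be sketched in a few lines. This is a toy model, not HANA or DB2 internals: a row store appends one contiguous record per insert, while a pure column store must touch one array per column, which is exactly what makes single-row OLTP writes against a wide ERP table expensive.

```python
# Toy illustration (not HANA or DB2 internals) of why single-row OLTP
# inserts are cheaper in a row store than in a pure column store:
# a row store appends one contiguous record, while a column store must
# perform one write per column.

class RowStore:
    def __init__(self):
        self.rows = []          # each row is one contiguous tuple
    def insert(self, row):
        self.rows.append(row)   # a single append per insert

class ColumnStore:
    def __init__(self, n_cols):
        self.cols = [[] for _ in range(n_cols)]
    def insert(self, row):
        writes = 0
        for col, value in zip(self.cols, row):
            col.append(value)   # one write per column per insert
            writes += 1
        return writes

wide_row = tuple(range(100))    # stand-in for a wide ERP table, 100 columns
rs = RowStore()
rs.insert(wide_row)             # one contiguous write

cs = ColumnStore(len(wide_row))
print(cs.insert(wide_row))      # 100 separate column writes for one row
```

The same structure that makes columnar scans fast for analytics scatters every transactional insert across the whole table width, which is why a 100% column-oriented design for ERP workloads was always questionable.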

Appleby Continues

“I will blog separately around this but BLU is, as you well know, not well suited to OLTP workloads, so if you want to run ERP on DB2, you must run it on the DB2 row store. You can (in theory) then use the BLU column store and periodically update BLU with the contents of the row store. But you need a separate row cache, column store with BLU acceleration and ETL. That’s both complex, and wasteful, and introduces latency into the process.

I’m not going to get involved with a “mine is bigger than yours” discussion around skills and LinkedIn endorsements. Not sure why you went there. But for the sake of clarity, Bluefin is a Management Consultancy for customers who run SAP technologies, and 17% of SAP ERP customers run the DB2 RDBMS.”

It isn’t? DB2 was a row-oriented store for decades before it added column-oriented tables and “in memory.”

Appleby does not want to get involved in any discussion where his qualifications are questioned. This is a pattern we have observed with Appleby: wherever he is weak, he states that he lacks interest in going into that particular topic. Chris Eaton only "went there" because he suspected that Appleby was exaggerating his DB2 knowledge.

Let us be clear. When it comes to DB2, Chris Eaton’s is clearly bigger than John Appleby’s.

Finally, Appleby's statement about the percentage of Bluefin's customer base that runs DB2 is highly misleading. At this time Appleby led Bluefin's HANA practice, so his only incentive was to sell HANA consulting services. Secondly, HANA was what Bluefin was primarily known for. Chris Eaton rightly pointed out that Bluefin had little DB2 work.

Chris Eaton’s Response

“Hopefully we are getting to the end of this arduous thread.  I’m not sure what you mean by squeeze…all I’m trying to do here is make sure that facts are clear and that you (or any others) are not making false statements about DB2 (intentionally or unintentionally misleading, like 6TB limit above…that’s not a limit on the database size at all as I said in my comment above, that’s how much memory servers currently support and is exactly why we optimize not just for all active data in memory).  By the way on the 10x compression there are several references that will all say the same thing about 10x or more data compression…you can watch my blog for clients speaking on this very topic.

What Appleby means by "squeeze" is that IBM is feeling the pressure of DB2 being removed in favour of HANA (on the basis of what turn out to have been false claims).

Chris Eaton continues…

Also as I have said many times in other posts, for DB2, BLU is just one of a multitude of capabilities. For OLTP IBM already has an in memory scale out solution that leads the pack called pureScale. It supports up to 128 nodes in a cluster and has centralized in memory component called the CF (taken from the parallel sysplex on the mainframe) which use memory to memory direct write protocols for hot pages and locking. This blog was about BW so I didn’t bother to bring it up but pureScale is already supported by SAP for OLTP.

And finally the only reason I brought up experiences is because you make claims about DB2 that are wrong and quote your credentials of long time user of DB2 to support your claims. I have my doubts that this is true and that’s why I have invited you to come and learn about DB2 and you can share with me your experiences (no need to do that on this blog I don’t think).  It was Jon that went down a slippery slope but as Julian points out I do have the credentials to talk about DB2 at great depth.”

Oooohh, here Chris Eaton flat out accuses Appleby of falsifying his DB2 background. Actually, this is completely within Appleby’s behavioral pattern, but it is surprising to see Chris Eaton accuse him of this, especially considering Hasso Plattner is likely monitoring his responses.

And in fact…..

Hasso Plattner Magically Reappears!

We now have the second serendipitous appearance of Hasso Plattner!

“chris, if you want to see what a company could do with 25 tera bytes of dram to analyze 8.000.000.000 records of point of sales data, please come and visit the hpi in potsdam. you guys know the team there. when you see students intelligently analyzing pos data while new data is streaming in at high speed, you might change your mind with regards to scale out and dram sizes. btw the whole system (1000 nodes, 25 tera bytes dram) cost less than 1 million us and was assembled in the us. only if the data is in memory applications like you would see are possible. unfortunately the company supporting the research doesn’t want to go public. the discussion with my phd candidates is completely academic and free of any bias. -hasso “

We learn a lot about Hasso from this quote. Apparently, he cannot be bothered with any type of capitalization. Later Hasso discusses his position as a professor of computer science, which is odd, as we have yet to run into a Ph.D. who refuses to use caps.

Hasso makes some absurd claim which is “just Hasso being Hasso.”

The amusing part is where Hasso declares that his Ph.D. students lack any bias. How amusing! I wonder how your graduation prospects look at HPI if you perform an analysis which shows that HANA underperforms Oracle, or if you question using 100% column-oriented tables in a database meant to support ERP.

I also wonder: if Larry Ellison created a Larry Ellison Institute, could one say that its Ph.D.s were also free of bias?

Appleby’s Response

“I built a similar system for a customer with 8bn POS records on a 4-node 160-core 2TB HANA appliance based on IBM hardware – the cost of this system on the open market is around $500k. We found that 8bn transactions is around 200GB of DRAM, and we replicated master data into all nodes to avoid join colocation problems.

With HANA we expect about 18m aggregations/sec/core so we get real-world performance of around 3 seconds for any aggregation with just 160 cores.

Why would you need 25TB and 1000 cores to do the same? The numbers feel intuitively wrong.

John”

So this builds on Hasso’s pivot of the conversation.
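Appleby's numbers can at least be checked for internal consistency; taking his claimed figures at face value (we do not endorse them), the arithmetic does hang together:

```python
# Sanity-checking the internal arithmetic of Appleby's claim, using his
# own figures as inputs (the figures themselves are his, unverified).
records = 8_000_000_000          # "8bn POS records"
rate_per_core = 18_000_000       # "18m aggregations/sec/core" (claimed)
cores = 160                      # "4-node 160-core" appliance

scan_seconds = records / (rate_per_core * cores)
print(round(scan_seconds, 2))    # ~2.78 s, consistent with "around 3 seconds"
```

So the "around 3 seconds" follows from his other two numbers; whether 18m aggregations/sec/core is achievable in practice is a separate question that the thread never resolves.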

There are several more comments after this from different participants that we skip as they are not relevant to the main discussion thread we are tracking. And we pick it up with Chris Eaton again.

“Well you can tell that it’s not Thanksgiving day today in Germany or Canada 🙂   All our USA friends are eating turkey and not reading blog posts but you and I are not 🙂

I’m happy to see you here writing about the values of SAP and even of HANA.  I have no issues when someone wants to write blogs to tell the world what they perceive as the value of their solutions.  But the issue here Hasso is that John’s initial blog posting was full of inaccurate claims about DB2 and so I corrected him but still this blog contains inaccurate claims about DB2.  Note above that I said that the recommendation for 16TB was that much raw data and I also said that I don’t expect anyone to use it for that. But when John claims there is some limit on a BLU system at 1TB (initially) and now 6TB people will read that and think that’s the database size limit (especially because in the column just to the left he talks about 100TB and 500TB for HANA…clearly not the amount of memory on those systems 🙂  So storage to storage comparisons I’m happy with, memory to memory comparisons fine too as long as they are accurate. But when someone makes a false or misleading claim or comparison, that’s when I feel compelled to make sure the public knows these are not facts but rather errors (either unintentional or with the intention to mislead, I can’t say, but I assume the former and that’s why I comment to make people aware they are in error).

On the point, yes “every single byte has to come to dram memory before it can be processed” yes of coarse we agree there. But it A) doesn’t have to stay in dram for long if it’s not needed by other workloads and B) it doesn’t have to come into dram at all if we can know in advance the query does not need those bits.  So yes of course the database can be much larger than available memory when you look only at the amount of data that needs to be processed at any given time for optimal performance (and even larger when you look at many of today’s databases running on Oracle for example where the data is not compressed at all or poorly compressed).”

We took out the last part of the comment because it began to discuss hockey and the hockey team Hasso owns.

But it will be interesting to see how Hasso responds.

Hasso Plattner’s Response

“that’s true. but i am in bermuda and we will have turkey later.

let’s talk about col store. in my definition a cold store contains only table entries, which under normal circumstances cannot be updated any more. inserts are only taking place when data from the hot store moves to cold store (once or a few times a year). active data has to remain in the hot store. any analysis including cold data has also to access the hot store in case of accounting, sales orders, purchasing orders etc. the cold store has an empty delta store and doesn’t need any secondary indices because all queries will be handled with full column scans. yes, not all data has to be in memory in the cold store. the usage patterns will be monitored and intelligent data provision is possible. as we heart here several times, hana loads ables by requested attributes only. a purge algorithm releases table memory, if teal wasn’t used for some time. if there is not enough memory left to load the requested columns tables will be pushed out of memory based on usage patterns. to make it absolutely clear, hana is not different in principle to any other in memory db when the cold store is concerned.

to filter data in the io-channel, as oracle, sybase iq, and others (blu?) are doing is possible and could be an option for the hana cold store. i don’t know whether the sysbase iq option has been ported already. again we have to protect our cores from doing low level work. it is more important in my opinion to improve repetitive use of large data sets. instead of reducing data for multi dimensional analysis, we can build temporary result sets and reuse them. i described this already for the hot store. this applies to the cold store as well.

i totally agree that facts should be facts. an in memory data base is mathematically fully predictable and all facts have to be very precise. let’s work on that together.

your hockey analogy has a flaw, you really believe you can program the intuition of a coach? good luck.

but be careful, if a player comes to early on the ice you get a penalty, if the player comes to late, you might get scored on. happy thaksgiving”

This is a content-free series of paragraphs, at least within the context of this discussion.

Hasso pivots away from defending Appleby's comments and then states that they will work together on making facts precise. Coming from Hasso, who lies about as much as Appleby, this is not something we take seriously.

Appleby’s Response

“The original point of my blog was simple – and I think the point was simply made: DB2 BLU is not ready and not supported on an EDW like NetWeaver BW.

Technology moves quickly and a 1TB limit was moved to a 16TB limit. The factual error (since corrected) doesn’t have any impact on the original point and meaning of the post.

In fact the Summer 2013 release of DB2 10.5 BLU reminds me very fondly of HANA 1.0 SP01, from Summer 2011. HANA SP01 was well suited to standalone data-marts, as is the current version of BLU. HANA 1.0 SP01 didn’t support scale-out, didn’t have a text analysis engine, or spatial engine, or graphing engine, or planning engine. It couldn’t handle OLTP workloads, much like the DB2 BLU column store. All of that makes a lot of sense for a Version 1 product. I’m not critical for BLU for being a Version 1 product.

As we know, in the last 2 years, HANA grew up – and now you can run complex transactional and analytic workloads on one store, with no duplication of data with indexes or aggregates. This drives huge operational and strategic benefits to those customers with a bold vision to transform their businesses.

What I’m less clear on – and interested in – is what the roadmap for DB2 is in this context. What is clear is that the database market is changing. HANA and Hadoop have killed the EDW market – Teradata and Netezza are in real trouble. HANA has disrupted the database market to some extent, in that Oracle, IBM and Microsoft have build column stores which mimic some of HANA’s capabilities. This is flattering.”

HANA really did not kill the EDW market. Hadoop did take over the Big Data market, but there is an open question as to whether data science will uncover the predicted value of all of those giant data lakes. It is easy to throw data into a bin; it is much more difficult to deliver value from that data investment. And in 2019, Hadoop is being taken over by other databases.

According to Teradata, one reason they are in trouble is that SAP was, at this time and for years afterward, pushing them out of SAP accounts using anti-competitive tactics, as we covered in the article How True is SAP's Motion to Dismiss the Teradata Suit.

HANA never did disrupt the database market. Let us look at a listing of the most used databases.

Does this look like disruption? Where is HANA? Let us say HANA is right below Apache HBase. That would be less than 1.7% of the market. Yes, Oracle should be very afraid. 

Let us look at DB Engines.

Does this look like disruption? 

Who is disrupting the database market? Not HANA, but PostgreSQL, MongoDB, Redis, and AWS's RDS. But Appleby's Bluefin does not implement any of these, therefore the true "disruptor" of the database market is of course……whatever Appleby is responsible for selling consulting services for. (insert in ______)

Appleby continues…

“And what’s also clear is that Oracle, IBM and Microsoft and SAP all have strong customer bases for database, being the 4 leading players.

If I were to simplify the strategies of the top 4 players, I would put them like this:

Oracle: Bolt on a column store that allows customers to replicate row store data and accelerate OLAP workloads. Use this to drive engineered systems revenue. Market heavily.

Microsoft: Provide simple columnar in-memory functionality and make it easy to adopt. Rely on the SME market not wanting to invest and be happy with an incremental improvement.

IBM: I’m really not sure. Seems like different parts of the business want different things.

SAP: Build a platform that is truly different and attempt to disrupt Oracle, Microsoft and IBM.”

We aren't supporters of any vendor, much less Oracle. However, does Appleby actually believe that Oracle "markets" more heavily than SAP? Oracle's marketing copies SAP! We created an explanation, normally sent to Oracle resources, of the word "hypocrisy." But in this case, we recommend for Appleby the article Teaching Oracle About Hypocrisy on Lock-In.

There is no evidence that Microsoft's "in-memory" is any more incremental than HANA's "in-memory." SAP won't allow benchmarks to be published against any competing database (SQL Server or otherwise). Therefore it is difficult to take Appleby's claims seriously.

Appleby continues….

“In the meantime, Hadoop and MongoDB are quietly eating up market share for analytic workloads but aren’t being heavily used for OLTP workloads.

What SAP, Hadoop and MongoDB have in common is that they don’t have a legacy database customer base to keep happy and retain market share from (unless you count ASE and IQ, but SAP is much more invested in HANA).

All of this makes life very interesting.”

Well, finally Appleby mentions MongoDB. We are perfectly in favor of open-source databases replacing commercial databases, but Appleby's point makes no sense. SAP's only argument here is that it offers a younger database than its competitors.

Also, SAP has a complete legacy ERP and application customer base that it needs to protect. Should it then follow that SAP cannot innovate? Actually, that turns out to be the case. But is this Appleby's point? Appleby seems to like to position SAP as some type of startup.

Chris Eaton’s Response

“Then why the full circle for HANA and the quarter circle for DB2 still?  Why the statement of 500TB on the HANA side (i.e. storage) and the memory limit on the DB2 side (i.e. memory)? Why the HA inaccuracies still?

By the way I would be surprised if the “strategy” from SAP is to “attempt to disrupt Oracle, Microsoft and IBM”.  My guess, and Hasso is likely more appropriate to comment here, is that SAP believes that their strategy is to provide value for clients.  In fact the “strategy” of most successful IT companies is to provide value.

Each may go after it in a different way, but company A provides value to a client and in return the client provides monetary value back to that company…that’s how the IT economy works.

That is hypothetically how the IT economy works, but it is not actually how the IT economy works, at least not among the largest IT providers, and this is because of a little thing called "monopoly power."

Chris Eaton continues…

Having a strategy to just disrupt another company won’t get you very far (there isn’t much value in that for clients).”

Ahhh…Chris Eaton has caught Appleby in more inconsistencies.

This is where we depart from Chris Eaton. SAP distinctly planned to take as much database market share as possible from Oracle, Microsoft, and IBM, and to do so without providing any more value than any of its competitors. SAP has done this before, using its ERP system to push out far more capable vendors in other application areas, allowing SAP to sell its vastly inferior applications, as we covered in How the ERP System Was a Trojan Horse. Furthermore, IBM (its consulting division, that is) has been complicit in helping SAP distribute false information in order to push SAP solutions for which IBM has trained consulting resources.

Therefore, IBM has a long history of going to market by supporting SAP’s lies. But now, SAP has turned on IBM.

Appleby’s Response

“Well, this blog is my opinion and I don’t agree that DB2 BLU is scalable for BW workloads, even if it was supported. If you have proof otherwise, I’d love to see it but since DB BLU doesn’t work at all with BW, we are unfortunately left with opinion and hearsay.”

SAP did not and will not allow IBM to provide this proof, because SAP will not allow competitive benchmarking against its products. No SAP-published benchmark for HANA shows other databases compared against HANA.

Appleby continues…

“Both HANA and DB2 BLU are actually memory-bandwidth bound for Data Warehouse applications like SAP BW. This is why SAP mandate 128GB RAM per Intel E7 CPU for BW workloads – the Intel E7 platform and specifically DDR-3 memory, can’t get information to the CPU fast enough for any more. BLU will have precisely the same problem on the E7 platform.

On the Power 7 platform, BLU will fare slightly better, because as you know, the Power 7 CPU has better memory bandwidth. From what I read, P7 is about 50% better than E7 core-core running BLU. You can do 256 cores in one Power795, so that’s equivalent to say 384 E7 cores. Now we are already in the realms of hearsay, which is frustrating, but I hear BLU aggregates around 3-5x less efficiently than HANA so you only have the equivalent of 128 HANA E7 cores, which means you max out at around 1.6TB of active RAM in the real BW world. Neither Hasso or I believe your 1.8PB story for BW workloads because the data is mostly hot.

In the meantime, SAP have tested 1PB of active data in a 100TB HANA cluster, and it works great for BW workloads. I’ll gladly revisit this blog when BW on BLU support is announced and we can do comparisons with real-world data.

SAP have publicly stated that they wanted to disrupt the legacy RDBMS market for the benefit of customers because none of the database vendors were doing this. You can spin this how you like.”

Yes, this last sentence is partially correct. SAP did intend to disrupt the RDBMS market, but it was never for the benefit of customers. It was for money, which is also the only reason that Appleby published false information about HANA.
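For completeness, Appleby's earlier 1.6 TB figure only works out if one assumes 10-core E7 CPUs, so that the mandated 128 GB per CPU becomes 12.8 GB per core. The following simply traces his chain of hearsay numbers under that assumption; every input is his claim, not an established fact:

```python
# Tracing the chain of assumptions behind Appleby's hearsay estimate,
# purely to show how the 1.6 TB figure is derived. Every input here is
# his claim (plus our assumption of 10-core E7 CPUs), not a fact.
p7_cores = 256                    # "256 cores in one Power795"
p7_vs_e7 = 1.5                    # "P7 is about 50% better than E7 core-core"
e7_equivalent = p7_cores * p7_vs_e7            # 384 E7-class cores
blu_penalty = 3                   # low end of the "3-5x less efficiently" hearsay
hana_equivalent = e7_equivalent / blu_penalty  # 128 "HANA E7 cores"
gb_per_core = 128 / 10            # 128 GB per E7 CPU / assumed 10 cores per CPU
active_tb = hana_equivalent * gb_per_core / 1024
print(hana_equivalent, round(active_tb, 1))    # 128.0 cores -> ~1.6 TB
```

In other words, the headline number rests on an unsourced 3x efficiency penalty and a hardware assumption; change either input and the conclusion moves with it.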

Appleby Fact Check

Amusingly, someone commenting under the name "Appleby Fact Check" made the following comment.

First, he noted a previous quote from Appleby.

“Both HANA and DB2 BLU are actually memory-bandwidth bound for Data Warehouse applications like SAP BW. This is why SAP mandate 128GB RAM per Intel E7 CPU for BW workloads – the Intel E7 platform and specifically DDR-3 memory, can’t get information to the CPU fast enough for any more. BLU will have precisely the same problem on the E7 platform.

On the Power 7 platform, BLU will fare slightly better, because as you know, the Power 7 CPU has better memory bandwidth. From what I read, P7 is about 50% better than E7 core-core running BLU. You can do 256 cores in one Power795, so that’s equivalent to say 384 E7 cores. Now we are already in the realms of hearsay, which is frustrating, but I hear BLU aggregates around 3-5x less efficiently than HANA so you only have the equivalent of 128 HANA E7 cores, which means you max out at around 1.6TB of active RAM in the real BW world.”

Then “Appleby Fact Check” stated the following.

“Even if we accept the dodgy 3x to 5x claim which is opposite to IBM claims (let’s actually wait on new benchmarks?)…

128 cores x 128 GB mandated per core = 16384MB = 16 TB active RAM.

This would support far larger than 16TB warehouse in most real world scenarios, i.e. where not all data is very hot.  On DB2 with BLU of course, not HANA.”

This is quite true.
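The commenter's arithmetic is easy to verify; the quoted "16384MB" is evidently a typo for 16,384 GB:

```python
# Verifying "Appleby Fact Check"'s arithmetic: 128 cores at the mandated
# 128 GB of RAM per core (the comment's "16384MB" is clearly a typo for GB).
cores = 128
gb_per_core = 128
total_gb = cores * gb_per_core
print(total_gb, total_gb / 1024)  # 16384 GB = 16 TB of active RAM
```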

But Appleby was none too amused by this comment or the name of the commenter.

Appleby’s Response

“To those who follow my writing:

If I regret one thing about this sequence of articles, it has been the negative impact to the community of being in the firing line between two tech giants. It is a sad day when a blog has responses written by someone who calls themselves “Appleby Fact Check”.

I hope that 2014 brings a more collaborative working environment. The responses to this blog show the darker side of enterprise software.

Happy New Year!”

This is extremely humorous to us. But there can’t be a collaborative working environment on this topic. Appleby sees no future for DB2 for SAP. SAP’s stated plan was for every database to eventually be replaced by HANA. How are other vendors supposed to collaborate on that?

But nothing we can write is as amusing as what Appleby Fact Check writes in response.

Appleby Fact Check’s Response

“Whereas you feel that dishing out endless misinformation about products that are not called SAP HANA is collaborative and enlightened?

I don’t work for a tech giant, therefore that is just another meaningless assertion.  Since you like opinions so much, I would venture that you might simply be “in the firing line” of people who don’t like your modus operandi.  Putting out such a large amount of misinformation is not an accident.

Happy New Year and here’s hoping for a modicum more objectivity and accuracy in 2014.”

Ooohh, nice Appleby Fact Check.

Before Appleby can reply a new commenter jumps in.

Paul Vero’s Comment

“John Appleby wrote: “I can only assume IBM must be feeling the squeeze by how aggressively you guys are defending yourselves on a SAP blog”

Paul’s comment is in reply to the copied quote above.

Paul goes on to say..

Interesting claim. I’ve been reading John’s posts for a while, let’s consider some evidence:

a) Endless anti DB2 BLU posts from John, IMO bordering on rants.  I don’t know whether this is off his own bat, but judging by his statements he has a very close relationship with SAP so for all we know he’s their attack dog by proxy.  Certainly at every opportunity he spins against DB2 10.5 with BLU Acceleration.

Yes, this is our conclusion also.

Appleby was and is a proxy or shill for SAP, and the things he wrote were approved by SAP.

b) The withdrawal of SAP Notes that contained details of DB2 BLU support, along with the failure to announce support for BW even though (according to Chris Eaton) the certification tests were actually fully passed some time ago.

SAP has a strong commercial motivation to undermine DB2 by any means necessary.

This is the problem with SAP controlling the note system while also having a product that competes against DB2. We illustrate this with the following graphic.

IBM is simply experiencing the same anti-competitive behaviour that SAP has used against other vendors for decades. Ever since SAP diversified from its ERP system into other applications, SAP has argued that no non-SAP application can work with SAP ERP the way an SAP application can, and that functionality is secondary to integration.

Again, IBM has no problem with this anti-competitive behaviour, and even supports it, when the victims are other vendors: IBM's consulting arm specifically supports SAP in pushing out non-SAP applications. But when the same tactics are used against IBM's database business, it is no longer so amusing.

c) The announcement by SAP that they will not be publishing any standard SD benchmarks for HANA, despite its supposedly superior OLTP performance. So large customers with very high OLTP throughput needs cannot check the OLTP performance of HANA with their current databases but instead must cross their fingers and hope. Obviously DB2 has a massive lead on the 3-tier SD benchmark, and my guess is that SAP’s problem is that HANA cannot even get close, hence the announcement.

Yes, and they still, as of 2019, have not published these benchmarks, as we covered in the article The Hidden Issue with the SD HANA Benchmark.

The pattern is clear: SAP will allow no benchmark to be published that does not show HANA as superior to other databases. This explains why no benchmark has been published even 5.5 years after this article.

d) SAP says that instead “[they] are working on a new set of benchmarks, which is more suitable for Suite on HANA”. Interesting. If they were truly confident that HANA has the superior OLTP performance that they claim, not just great OLAP performance, then why not publish HANA results for the SD benchmark as well instead of all the bluster?  Shouldn’t customers be allowed to compare?

No, customers do not have the right to compare… according to SAP. SAP has what it feels is the right to profit-maximize. It might even be said, under capitalist theory, that SAP has a fiduciary responsibility to its shareholders to rig the benchmarks. Let us ask: under the rules governing private companies, what duty does SAP have to release information about IBM's database? In fact, what duty does SAP have to release any accurate benchmark of any kind? Many of those who support the currently accepted capitalist model would say none. A company can release any information, no matter how false, as long as it meets one objective: to maximize profits.

What this means is that profit-maximizing companies cannot be trusted to provide accurate information to the market. Rather, they provide self-serving information. Therefore, all consumers of this information should be aware that private profit-maximizing entities are unreliable. This is why we recommend fact-checking every one of these private entities, and why companies should find fact-checking entities that have made a concerted effort both to be independent from those they fact check and to not profit maximize.

If one looks at the charter of corporations, their responsibility is to their shareholders first. Customers, employees, and everything else are subordinate, and increasingly distant from the goal of maximizing profit.

Furthermore, both IBM and SAP work this way: both operate under the principle of applying leverage to maximize how much they can extract from customers. It is logical for IBM to ask SAP to behave ethically, but also highly hypocritical, as IBM, like SAP, has a long tradition of putting its interests in front of customers’ interests.

For those interested, this video is quite interesting on the history of maximizing shareholder value.

Paul Vero continues..

e) The non-appearance of said “more suitable” benchmark, originally promised for Q3 yet still to appear halfway through Q4. It seems it’s proving harder than they thought to find a benchmark that showcases HANA’s superiority!  SAP should publish an SD benchmark for HANA in the interim.

Obviously HANA OLTP performance will improve in time, since SAP will rewrite its code to push logic into the database (i.e. a stored procedure approach) while opting not to do the same for rival databases.  This will help disguise the fact the column store is inefficient for OLTP purposes.  Meanwhile, it seems that hype and spin must cover the gaps.”

This is all true. However, the benchmarks SAP came up with, the BW-EML (now called BWAML) and the BWH, have no published comparisons against other databases, even in 2019.

This approach of pushing its code into the database layer is exactly what SAP followed. SAP, of course, claimed it was necessary for performance; however, the real effect is to eliminate portability. And it is anticompetitive.

Although, we wonder whether IBM pushes the customers of its own applications to DB2 over competing databases. We don’t know, but we suspect IBM does something similar. Another question we have is whether Oracle pushes the customers of its applications to the Oracle database. How about Microsoft? As for code pushdown, Oracle created a language called PL/SQL and has been a very big proponent of stored procedures. This coincides with Larry Ellison constantly saying how difficult it is for companies to move away from the Oracle database. Well, one reason for this is that application-layer code has been pushed into the database.

It is curious that the only anticompetitive behaviour any of these companies can see is that of their competitors. I have had many conversations with people from SAP, Oracle, IBM and Microsoft on the topic of anticompetitive practices, and in every single instance when the subject switches to their own company, I am told that

“It is just good business.”

It appears impossible to perceive anticompetitive behaviour if the company engaging in it is also sending you checks.

The Federal Trade Commission could put a stop to all of this. It could make it illegal for any application company to own a database, which would require each vendor to sell off its database business. Corporations are by their nature unethical, and a long history shows us that when companies can abuse power in the market, they will do so unless stopped.

The Return of Hasso Plattner!

Once again, when Appleby is called out, Hasso returns just in the nick of time.

“Hey guys keep your calm. let’s look at the benchmark issue first.

the current sd application of the sap erp suites reads tables without projection (didn’t matter in the past), maintains multiple indices (some via db, some as redundant tables), maintains materialized aggregates to achieve a decent response time for oltp reporting and still has some joining of tables trough loops in abap. all this is bad for a columnar in memory db. the current select ‘single’ is 1,4 times slower for a normal projection

(equal for a projection with one attribute, significantly slower for a projection of all attributes of a table with hundreds of attributes. the oltp applications have a large amount of supposedly high speed queries and transactional reporting. some of this had to be moved in the past to the bw for performance reasons. also planning activities should be part of the transactional scope, just think about the daily delivery planning.

without any major changes to the sap erp logic, we achieved an average performance gain beyond a factor two, which was the threshold  sap agreed upon with our pilots (that happened at sapphire 2011). the operational savings are substantial, but already the next release of the sap erp suite on hana will take advantage of no aggregates, less indices, no redundant tables , adequate projections, use of imbedded functions in hana for planning and other functionality. these changes will come to sd and other application areas soon. only then it will make sense to work on a new sd benchmark. the current system is at work at sap for more than two month now and i think anybody who really wants to know how it works is more than welcome. definitely our partner ibm has all the connections to do so. the upcoming savings in disk storage (permanent footprint), in the necessary in memory capacity, in simplicity of the data model, the power of using views on views for reporting are mind boggling. to some extend hana and traditional databases cannot be compared any more. databases with two data representations (row AND column) cannot compete with databases with one data representation (row OR column).”

It seems the only time it is important to keep things calm is when Appleby is being called out for lying. If Appleby is lying about HANA, Hasso considers that “calm.” But if Appleby is challenged, suddenly things are no longer “calm.” Hasso could say…

“Please allow Appleby to lie about HANA, DB2, Oracle and SQL Server unimpeded.”

Hasso is really just diverting attention away from Paul Vero’s comment. And then he finishes off by repeating the marketing inaccuracies around HANA. His comment..

“to some extend hana and traditional databases cannot be compared any more”

is entirely undermined by the fact that even after DB2, Oracle and SQL Server had in-memory column capabilities added, SAP still did not allow the publication of competitive benchmarks.

Hasso continues

“the notion of using stored procedures is true.  twenty years ago i decided against using them in the otlp system, because we couldn’t find a common ground between oracle, ibm, msft and sap’s db. with the unbelievable scan and filter speed of in memory column store (if data is in memory) older principles of data processing are coming back: filter, sort, join, merge etc. the best way to verify this is when you convert brand new applications to hana and achieve orders of magnitude improvements.”

Hasso decided against stored procedures (or perhaps others at SAP decided and Hasso is taking credit for someone else’s decision, as Hasso has never been a primary technical mind at SAP) because SAP had no database of its own and was trying to maximize the potential customer base for its ERP system by porting it to as many databases as was feasible.

Hasso then switches back to HANA promotion mode. However, his comment is not relevant: there is no reason other database vendors could not also see these benefits after adding their own column stores. But he/SAP is preventing customers from seeing how other databases would compete with HANA.

Hasso continues..

“that hana is becoming a platform with numerous prefabricated algorithms for planning, genomes, predictive analytics etc is a deviation from the old paradigm. the other data base vendors have to follow (i think sap could give them the specs) or be at a disadvantage. customers using hana can choose whether they want to use only the data base functionality (standard oltp AND olap, plus analytics of unstructured data) or exploit the power of the platform.”

Looking back 5.5 years after this was written, we can say definitively that this did not happen. SAP made this claim on several occasions, as we covered in How Accurate Was Vishal Sikka on the Future of HANA? But none of it came true.

Hasso appears to treat a question about why SAP won’t back up its claims about HANA with benchmarks as an opportunity to restate exaggerated claims around HANA. Even in 2019, HANA had yet to master OLTP and OLAP in one database, as we covered in the article HANA as a Mismatch for S/4HANA and ERP. And even by 2019, HANA had no significant presence in the analysis of unstructured data, as we covered in How Accurate is SAP on Vora?

Hasso continues..

“now we come to ‘data has to be in memory’ or ‘intelligent prefetching’. hana can do both. but since the compression works so well (yes it could be even better, if hana works for olap in non realtime mode only) and the savings through changes in the data models are huge, sap sees no reason not to keep 80-90% of all transactional data in memory (table purging on fixed schedule). sap once has had a seven tera byte data footprint. currently the system runs at one tera byte and it is easily kept in memory. the projection for next year is that he 100% in memory footprint will shrink to less than 500 giga bytes. in ware house scenarios we expect much higher numbers and obviously not all data has to be permanently resident in memory. several scale out scenarios are currently being tested and again main memory doesn’t seem to be the bottleneck.”

And more distraction from the question of proving these unceasing claims of superiority.

Hasso continues..

“with regards to the frequency of corrections i can only say i have never seen a more rigorous testing environment than the one around hana. since hana is doing a lot of leading  edge applications (not possible two years ago) it might experience critical feedback from customers. the hana team reacts instantaneously and has a sophisticated escalation procedure up to the executive board.  even i on the supervisory board get escalation reports.”

This is not what has been reported to us from the field. We were not tracking HANA back in 2013, but for the past several years we have been. If you have pressure from SAP sales, you can get good HANA support, but the problem is that HANA is a fragile database even in 2019. HANA has led to data loss at clients. Even if HANA could outperform competitors in benchmarks (which is doubtful considering how far SAP lags competitors in database knowledge and database developers), that could not make up for the long-term instability of HANA.

Hasso continues..

“i fully understand that our old partners are upset, that sap invaded their space, but oracle did it with erp, ibm did it with the dream team (siebel,i2,ariba) and microsoft with dynamics. each company has to fight for their own destiny and the best product offering for their customers.”

This is partially true, but in part quite false.

None of these companies (and none of them are paragons of ethical behaviour) made the number of false claims about their products that SAP did with HANA. It is curious that the strongest point Hasso has, that these vendors also push their own application customers to their own databases, was not used by SAP. Using it would open up the topic that all of these companies engage in anticompetitive practices, and that all of them should be broken up by the Federal Trade Commission. SAP is based in Germany, but the FTC has oversight over SAP’s behaviour in the US, or it could work with European Union authorities.

Hasso continues..

“please don’t think that marketing or false statements will have any impact over time. there is nothing more transparent than application using an in memory data base in the cloud. nothing can be hidden away.”

This is curious. Hasso directly correlates marketing with false statements. So Hasso agrees that SAP’s marketing equals false statements? It appears that we and Hasso finally agree on something. Also, why is it OK for SAP to make false statements? And if Hasso is correct and false statements have no impact over time, then why do Appleby and Hasso seem so dead set on making them?

“please stick to the facts and be willing to accept new insights.

i wrote this this blog as a professor of computer science at the university of potsdam and as an adviser in the sap hana project. i am not an employee of sap any more.

any comments are welcome. -hasso plattner”

This portion of the comment illustrates what we have found from extensively analyzing Hasso, which is that he is simply dishonest. His comment is in response to Paul Vero asking why SAP won’t allow competitive benchmarking, and to Paul Vero calling out Appleby for providing false information to the market, which is what the Chris Eaton interaction was also about. And after spending a large chunk of text reiterating the marketing pitch for HANA instead of answering the question, Hasso asks the reader to “stick to the facts.”

Wasn’t that the point of Paul Vero’s comment? That Appleby was not sticking to the facts?

Then Hasso adds on

“be willing to accept new insights.”

Again, Chris Eaton and Paul Vero stated what we can confirm: Appleby, Hasso, and SAP for that matter do not have new insights on databases. First, neither Appleby nor Hasso knows his subject matter when it comes to databases; secondly, both men are lying fire hydrants.

Hasso then lies again by stating that he is writing this as a

“professor of computer science at the University of Potsdam.”

Really? So Hasso is writing this with no connection to SAP and none of his wealth tied up in it? Hasso is now completely independent of SAP, and the HPI is not heavily invested in the HANA storyline? This is amazingly dishonest. Secondly, how is Hasso Plattner a professor of computer science anywhere? Hasso has no undergraduate or graduate degree in computer science, and he has only an honorary Ph.D., as we covered in the article Does SAP’s Hasso Plattner Have a Ph.D.?

Let us go over how the academic system works. You require an advanced degree to be a professor, and you must have studied in the area in which you teach. Hasso worked out a deal with the University of Potsdam under which he was given a professorship if he located his HPI there. But that is corruption; it is not a legitimate professorship.

Paul Vero’s Response

“Thank you for your post. It’s hard to know where to start to reply to so many interesting points, so I’ll start by summarizing what I hope are the areas of apparent agreement, as overall I think there is probably a lot more agreement than disagreement among most of the posters here.”

This seems to be a nod to Hasso’s power, because having read this entire thread, it is entirely untrue. There is massive disagreement. Appleby has been called a liar, one who falsifies his background, and someone who disseminates false information about HANA and competitors; SAP has been accused of hiding SAP Notes that show certification of DB2; and so on. It is hard to look at this thread and see “a lot more agreement than disagreement.” But when talking to a person much more powerful than ourselves, it is natural to defer to them.

Therefore, the reality is bent in order to not offend the more powerful entity. Hasso has had the benefit of this distortion field for decades and it has meant that Hasso now entirely overestimates his knowledge in the areas that he covers.

Paul Vero continues

“Firstly, I think there is general agreement that column-organized databases including HANA cannot compete with row-organized databases on *raw* OLTP performance.  However this won’t matter in SAP systems (and many other systems) since there are many mitigations as Hasso has described.  Regarding updates, the elimination of indexes and aggregations for reporting.  Regarding data retrieval, the elimination of SELECT * in favour of selecting only the required attributes.

On this last point, elimination of SELECT * will of course also significantly improve the performance of row-organized databases (though nowhere near to the same extent as column-organized databases). Unfortunately some programmers ignore this point; changing culture is difficult.”

This is in fact true, but SAP did not agree with it at the time. SAP stated that OLTP processing could also be better met with a 100% column-oriented design, which rather shows the limits of SAP’s database knowledge, and of anyone who agreed with them at the time. As previously discussed, SAP moved away from this position only six months later when it added row-oriented tables to HANA in SPS08, as we covered in the article How Accurate Was John Appleby on HANA SPS08?
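Paul Vero’s projection point can be made concrete with a toy sketch (ours, not code from the thread; the table and attribute names are invented): in a column layout, reading one attribute touches only that column’s values, while a row layout drags every attribute of every row through the scan.

```python
# Toy illustration (not SAP code): why selecting one column is cheap
# in a column store, while a row-organized scan touches every field.

# The same 1,000-row, 4-attribute table, stored two ways.
rows = [{"id": i, "name": f"n{i}", "qty": i % 7, "price": 1.5 * i}
        for i in range(1000)]                         # row-organized
columns = {key: [r[key] for r in rows] for key in rows[0]}  # column-organized

def sum_qty_rowstore(table):
    # Must walk each row object; all four attributes are co-located,
    # so the scan pulls every field through memory.
    touched, total = 0, 0
    for r in table:
        touched += len(r)        # all attributes of the row
        total += r["qty"]
    return total, touched

def sum_qty_colstore(table):
    # Reads exactly one column; the other three are never visited.
    col = table["qty"]
    return sum(col), len(col)

total_r, touched_r = sum_qty_rowstore(rows)
total_c, touched_c = sum_qty_colstore(columns)
assert total_r == total_c
print(touched_r, touched_c)   # 4000 values touched vs 1000
```

This is also why eliminating SELECT * helps a row store somewhat (less data shipped to the client) but helps a column store far more (less data read at all), which is the asymmetry Paul Vero describes.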

Paul Vero continues.

“Secondly, the superiority of column-organized databases for analytics is a given.

Thirdly, it is a long-established principle that performing operations in the database, i.e. the stored procedure approach, is far faster (often orders of magnitude) than working externally. Unfortunately again most programmers ignore this fact and prefer to work outside the database.”

Yes, column-organized tables are better for analytics. But the trade-off stated here regarding the performance of stored procedures is not so cut and dried. There are overhead issues with placing stored procedures in databases: an immediate loss of database portability, long-term maintenance issues, etc. Paul Vero makes it sound as if programmers are illogical when they don’t use stored procedures. This is not true. For example, we are on the other side of the stored procedure debate and fully aware of the different arguments. Moreover, the other database vendors do not agree that SAP should be pushing code into the database layer, as this eliminates portability and cuts them out of the SAP market for those SAP products.

But secondly, even if a few of these items are agreed to, none of them has been the focus of the discussion up to this point. This comment is a bit like saying that all of the participants agree that water is wet and sand is sandy, so let’s first state the topics on which we agree. This simply postpones getting to the actual topic. It is unlikely this preamble would be necessary unless the comment were being written in response to Hasso Plattner.
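The kernel of truth in Paul Vero’s “operations in the database” point can be sketched neutrally with Python’s built-in sqlite3 (our illustration; the table and values are invented): aggregating inside the database returns a single value, while aggregating in the application first ships every row across the database boundary.

```python
# Toy sketch (ours) of the "push work into the database" trade-off:
# the database engine can scan and return one aggregate row, while
# application-side aggregation moves every row across the API first.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, float(i)) for i in range(10_000)])

# In-database: the engine scans and ships back a single number.
total_db = conn.execute("SELECT SUM(amount) FROM orders").fetchone()[0]

# In-application: every row crosses the database boundary first.
rows = conn.execute("SELECT amount FROM orders").fetchall()
total_app = sum(amount for (amount,) in rows)

assert total_db == total_app
print(len(rows), "rows shipped to the app vs 1 aggregate row")
```

The portability cost we describe appears the moment the pushed-down logic stops being standard SQL like this SUM: vendor-specific procedure dialects lock the application to one database.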

Paul Vero continues (and finally gets to the actual topic areas)

“Now for some areas of greater doubt, to me at least. I’ll pose these as questions with my current take, for people to disagree with

Is HANA database really superior to DB2 BLU (column-organized DB2 database)?  Obviously BLU is newer, but from what I’ve read and seen there is already much more similarity than difference. HANA has a lead in some areas, such as support of shared-nothing partitioning across nodes. On the other hand, the DB2 memory management is superior at present and so BLU makes better use of resources. On both these points, it seems generally expected that there will be future convergence.

On the other hand, there are a couple of obvious differences that could persist across releases. With DB2 BLU, they seem to have done a better job with the compression. The fact that BLU can evaluate range predicates on compressed data is a definite advantage, but also an advantage that could be overstated since most predicates evaluate equality (which HANA does support on compressed data). With HANA, the main advantage is better performance for small delta updates to the column-store, allowing operational analytics in the same database. SAP have made much of this and it does seem a significant advantage, certainly in a SAP context. It will be interesting to see whether IBM decide to change tack on this.”

This is all good information. Here Paul Vero is debating the proposed superiority of HANA versus DB2, which was originally proposed by Appleby. Let us see Appleby’s comparison graphic again.

Yes, Appleby has HANA completely dominating the other databases. 

Paul Vero continues and asks a very poignant question.

“To what extent is SAP still committed to supporting other databases that compete with HANA, and if not then will the perception of SAP shift over time to that of a company more like Oracle? Personally I’ve always respected IT companies that push for the use of open standards (unlike Oracle).  I trust that the SELECT * elimination will at least be to the code common to all databases, not just HANA, but why develop a separate HANA procedural scripting language for HANA (SQLScript) rather using the SQL standard, SQL/PSM?  Had SAP adopted established standards then these application coding improvements could have benefitted all compliant database technologies instead of just HANA.”

Here Paul Vero is basically saying that Oracle works to block out competing database vendors by forcing the customers of its applications to use the Oracle database. Let us take this time to insert a Larry Ellison quote.

“It’s not enough to win, all others must lose.”

The confusion on the part of Paul Vero is that SAP only supports open standards when it can use the standard to get more customers.

Let us see what Hasso says in reply.

Hasso’s Response

“to whom it may concern,

it is hard to repeat the same arguments again and again.

seven years ago we analyzed at the hpi in potsdam the database call profile for oltp and olap systems. once we decided to move to insert only, disallowed materialized updates, disallowed performance enhancing redundant tables (e.g. with a different sort sequence), eliminated select *, replaced joining by looping over select single there was not much of a difference in the db call profile  any more. in oltp systems we find a significant amount of applications, which are of the analytical type. if we take into account that many oltp type applications had to be moved to a data warehouse just for performance reasons (not to slow down data entry) and that modern planning/forecasting/predicting algorithms became realtime applications there is no significant difference between an oltp or an olap system with regards to db call profiles at all.”

“the interesting new applications are the ones where massive data input can be analyzed on the fly. an interesting example is to monitor soccer players via sensors in real time with regrads to spacial moves. despite there is most likely no real business to be expected, the fact that 200.000 inserts per second are the highest data entry rate ever experienced in an sap system is remarkable. the system is based on hana.”

Hasso goes back into full parrot mode and again pivots away from the question around whether SAP will certify other databases.

“so, i recommend to stop the argument: row for oltp, column for olap.

older applications, especially the 20 year old sd-benchmark scenario are not representative for modern applications any more. if you don’t trust me (i am pretty neutral, because as a professor i would loose my students being biased) ask the now over 1200 startup companies, who switched from classic rdbs to hana.”

That is not true. SD in ECC or S/4HANA works the way it always did. Neither Appleby nor Hasso explains why this assertion is true; they merely make the assertion.

The most likely explanation is that HANA cannot perform competitively in the SD benchmark.

Secondly, Hasso is not a professor. Yes, he may have wrangled the title from the University of Potsdam in return for setting up the HPI at the university, but he is unqualified to be a professor anywhere he did not found an institute on campus. It is also doubtful that Hasso would lose his students, as they most likely worship him (Hasso is worth roughly $12 billion). And it is unlikely Hasso does much teaching: the number of people worth over $10 billion who still work as university professors is exactly zero. Hasso spends over half of his time on a sailboat, which is likely why he was in Bermuda when he wrote these replies, his enormous sailboat docked in Bermuda’s plush harbor. This is not a man who will be grading many papers.

Does Hasso teach many courses from Bermuda’s Hamilton Harbor?

Secondly, 1,200 startup companies did not switch to HANA. This entire paragraph is lies. And the argument was about Appleby providing false information, not about

“rows for OLTP and columns for OLAP.”

Hasso continues..

“i reported already in this blog, that the sap erp system migrated from a classic rdb to hana with great performance and tco improvements.

to provide possible improvements to the installed base without upgrading to a latest application release not much can be done. but for the latest release of the suite many of the code improvements will be made available to the current row (or row+column) oriented databases. the existing code base will go forward, every function developed for hana will be tested whether it is feasible to run on the other rdbs as well. i think it must be in the interest of sap to move as many customers as possible to the latest release.”

We now know that there are no performance or TCO improvements from moving from the other databases to HANA. The meaning of the second paragraph is difficult to discern. SAP has been intent on blocking out other databases and actively runs down all other databases in the sales process; this statement seems to soft-pedal that fact.

Hasso continues..

“some of the facts you stated are not correct. hana compresses the same way like blu or others, but in order to be oltp ready some of the compression potential was traded for speed. and with regards to data skipping while scanning attributes, i think it was already in terex (one of the forefathers of hana).

based on legal advice sap decided not to use any of the existing sql dialects for stored procedures.

so, that’s enough. if you are really interested in more  details, please read some of the other blogs in this forum or try out hana yourself in the amazon cloud or contact one of the startups.”

HANA does not compress anything like what SAP and Hasso have claimed, as we covered in the article Is Hasso Plattner and SAP Correct About Database Aggregates? So what Hasso is saying has been demonstrated, by many inputs we have received, to be false.

If Paul Vero reads other blogs, by either Appleby or Hasso, all that Paul Vero will receive is more false information.

Appleby’s Response

“Hasso,

Thanks for this. The point of this blog was (originally) just for those customers who run SAP BW: what is their choice for an in-memory database. I think that HANA makes a compelling argument here, even with the (relatively) limited options for business change that are present in a Data Warehouse like BW.

But the wonderful thing about blogs is that they sometimes take a direction that is totally different to the expected, inside the comments stream. This comment inspired me to write a follow-up blog, where we can continue this discussion. It will probably take the team a few hours to make this link live.”

So let us look at this. Appleby, who led Bluefin’s HANA consulting practice at the time, thinks that HANA makes a compelling argument for BW? We can imagine that any person who leads a consulting practice in any other database would make the same argument.

Appleby was called out for providing false information by three different individuals, and was told he did not know much about DB2 by someone who obviously has a great deal of knowledge of DB2.

It is highly doubtful that Appleby was happy with the direction that the comments on this blog took.

Appleby’s Response

“Some interesting points here. I’ll try to make a few observations.

First, I have personally built dual-purpose transactional/analytic systems with SAP HANA that consume >200k events/sec and provide real-time reporting on that data. Whilst you can configure Oracle or DB2 to have better bulk-loading performance than HANA, once you add indexes, HANA is faster for inserts. OLTP/OLAP is a legacy concept that we created in the 1990s to solve the analytic performance of transactional systems.

Second, because SAP is an Oracle, Microsoft and IBM VAR, they have some limitations on what they can do, as well as some anti-competitive legislation to work with. As a result, they announced that they will port any functionality onto those database, if the database supports it. The BW team is working hard to support DB2 BLU.

However DB2 BLU as it stands, only solves part of the story. It won’t accelerate BPC or Integrated Planning substantially for those customers that use that with SAP BW. In its current form it’s not suitable for SAP ERP systems so we can’t talk about accelerating financials or simplification. It remains to be seen what Microsoft and Oracle will provide, but it looks to be similar to DB2.

Third – SAP HANA and DB2 BLU are very similar for data-mart scenarios. In fact, IBM customers with DB2 row-based databases will find immediate value porting those to column-organized tables. In many cases the applications will run many times faster with no application change at all.

But when it comes to designing new transactional/analytic business applications, HANA and BLU are chalk and cheese. With HANA you persist information once, and run your application logic, planning functions, analytics, everything, against one column-oriented version of the data and then you build the integration functions and web app in the XS application server inside HANA. You can’t run BLU as a single persistence for an app (other than a data-mart) because its insert performance is poor. So you think OLTP/OLAP and store OLTP data in a row store, ETL to OLAP in a column store table and then you have a separate application server for the app. Up goes the complexity, up goes the memory usage, up goes the cost.

I really encourage you to do some more investigation into BLU and HANA and you will see how different they really are.”

Once again, up to this point, and up to 5.5 years later, there is not a single published benchmark, outside of a Lenovo study that lacks enough detail to say anything, as we covered in the article The Problems with the Strange Lenovo HANA Benchmark.

Appleby and Hasso make a few more comments, but it is mostly a repeat of what was already said, so we will stop the article here.

Conclusion

  • Appleby was exposed in this interaction with Chris Eaton.
  • We believe that Hasso Plattner was summoned by Appleby to make a comment that would intimidate and partially divert the discussion.
  • Chris Eaton showed that while he was unwilling to challenge Hasso Plattner (and for good reason), he brought the debate back to Appleby’s false claims at every possible occasion.
  • As for Hasso Plattner, he showed enormous dishonesty in his commentary and added to Appleby’s false claims with his own false claims.
  • Appleby Fact Check, whoever that was, provided contradictions to Appleby, and clearly thought that the information from Appleby was so inaccurate that his name needed to be “Appleby Fact Check.”

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

This video is filled with extensive falsehoods. We will address them in the sequence in which they are stated in the video.

SAP Video Accuracy Measurement

Each entry below gives SAP's statement, our accuracy rating, the Brightwork fact check, and a link to the analysis article.

  • SAP's statement: HANA is a platform. Accuracy: 0%. Fact check: HANA is not a platform; it is a database. See How to Deflect You Were Wrong About HANA.
  • SAP's statement: HANA runs more "in-memory" than other databases. Accuracy: 10%. Fact check: HANA uses a lot of memory, but the entire database is not loaded into memory. See How to Understand the In-Memory Myth.
  • SAP's statement: S/4HANA simplifies the data model. Accuracy: 0%. Fact check: HANA does not simplify the data model from ECC. There are significant questions as to the benefit of the S/4HANA data model over ECC. See Does HANA Have a Simplified Data Model?
  • SAP's statement: Databases that are not HANA are legacy. Accuracy: 0%. Fact check: There is zero basis for SAP to call all databases that are not HANA legacy. See SAP Calling All Non-HANA DBs Legacy.
  • SAP's statement: Aggregates should be removed and replaced with real-time recalculation. Accuracy: 0%. Fact check: Aggregates are very valuable, and all RDBMSs have them (including HANA); they should not be removed or minimized in importance. See Is Hasso Plattner Correct on Database Aggregates?
  • SAP's statement: Reducing the number of tables reduces database complexity. Accuracy: 0%. Fact check: Reducing the number of tables does not necessarily decrease the complexity of a database. The fewer tables in HANA are more complicated than the larger number of tables pre-HANA. See Why Pressure SAP to Port S/4HANA to AnyDB?
  • SAP's statement: HANA is 100% columnar tables. Accuracy: 0%. Fact check: HANA does not run entirely with columnar tables. HANA has many row-oriented tables, as much as 1/3 of the database. See Why Pressure SAP to Port S/4HANA to AnyDB?
  • SAP's statement: S/4HANA eliminates reconciliation. Accuracy: 0%. Fact check: S/4HANA does not eliminate reconciliation or reduce the time to perform reconciliation to any significant degree. See Does HANA Have a Simplified Data Model and Faster Reconciliation?
  • SAP's statement: HANA outperforms all other databases. Accuracy: 0%. Fact check: Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. See How to Understand the Mismatch Between HANA and S/4HANA and ECC.
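On the aggregates entry above: the trade-off between a maintained aggregate and real-time recalculation can be made concrete with a toy sketch (our own illustration with invented data, not SAP code). A maintained aggregate pays a small extra cost on every insert and answers a total instantly; recalculating re-scans every detail row on each query.

```python
# Toy illustration (invented data, not SAP code) of the aggregate trade-off:
# a maintained aggregate answers a total with one lookup, while "real-time
# recalculation" re-scans every detail row on each query.
postings = []            # detail rows: (account, amount)
balance_by_account = {}  # the aggregate, maintained on insert

def post(account, amount):
    postings.append((account, amount))
    # Maintaining the aggregate costs one extra update per insert...
    balance_by_account[account] = balance_by_account.get(account, 0) + amount

def balance_via_aggregate(account):
    return balance_by_account.get(account, 0)  # O(1) lookup

def balance_via_rescan(account):
    # ...while recalculating scans all detail rows every time.
    return sum(amt for acct, amt in postings if acct == account)

post("4711", 250)
post("4711", -100)
post("9000", 40)
print(balance_via_aggregate("4711"))  # 150
print(balance_via_rescan("4711"))     # 150 (same answer, more work per query)
```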

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts that provide realistic information around HANA — and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for a database which is exorbitantly priced, exorbitantly expensive to implement and exorbitantly expensive to maintain. When SAP or their consulting firm are asked to explain these discrepancies, we have found that they further lie to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

The major problem with companies that bought HANA is that they made the investment without seeking any entity independent of SAP. SAP does not pay Gartner and Forrester the amount of money that they do so these entities can be independent as we covered in the article How Accurate Was The Forrester HANA TCO Study?

Inaccurate Messaging on HANA as Communicated in SAP Consulting Firm Videos

For those interested in the accuracy level of information communicated by consulting firms on HANA, see our analysis of the following video by IBM. SAP consulting firms are unreliable sources of information about SAP and primarily serve to simply repeat what SAP says, without any concern for accuracy. The lying in this video is brazen and shows that as a matter of normal course, the consulting firms are happy to provide false information around SAP.

SAP Video Accuracy Measurement

Each entry below gives SAP's statement, our accuracy rating, the Brightwork fact check, and a link to the analysis article where one exists.

  • SAP's statement: HANA runs more "in-memory" than other databases. Accuracy: 10%. Fact check: HANA uses a lot of memory, but the entire database is not loaded into memory. See How to Understand the In-Memory Myth.
  • SAP's statement: HANA is orders of magnitude faster than other databases. Accuracy: 0%. Fact check: Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. See How to Understand the Mismatch Between HANA and S/4HANA and ECC.
  • SAP's statement: HANA runs faster because it does not use disks like other databases. Accuracy: 0%. Fact check: Other databases also use SSDs in addition to disk. See Why Did SAP Pivot the Explanation of HANA In Memory?
  • SAP's statement: HANA holds "business data," "UX data," "mobile data," "machine learning data," and "IoT data." Accuracy: 0%. Fact check: HANA is not a unifying database; it is only a database that supports a particular application, and it is not for supporting data lakes.
  • SAP's statement: SRM and CRM are part of S/4HANA. Accuracy: 0%. Fact check: SRM and CRM are not part of S/4HANA; they are separate and separately sold applications. SAP C/4HANA is not yet ready for sale. See How Accurate Was Bluefin Solutions on C-4HANA?
  • SAP's statement: Netweaver is critical as a platform and is related to HANA. Accuracy: 0%. Fact check: Netweaver is not relevant to this discussion. Secondly, Netweaver is not an efficient environment from which to develop.
  • SAP's statement: HANA works with Business Objects. Accuracy: 10%. Fact check: It is very rare to even hear about HANA and Business Objects together. There are few Business Objects implementations that use HANA. See SAP Business Objects Rating.
  • SAP's statement: Leonardo is an important application on SAP accounts. Accuracy: 0%. Fact check: Leonardo is dead; therefore its discussion here is both misleading and irrelevant. See Our 2019 Observation: SAP Leonardo is Dead.
  • SAP's statement: IBM Watson is an important application on SAP accounts. Accuracy: 0%. Fact check: Watson is dead; therefore its discussion here is both misleading and irrelevant. See How IBM is Distracting from the Watson Failure to Sell More AI and Machine Learning.
  • SAP's statement: Digital Boardroom is an important application on SAP accounts. Accuracy: 0%. Fact check: SAP Digital Boardroom is another SAP item that has never been implemented in many places.

Financial Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.

References

https://blogs.saphana.com/2013/11/13/comparing-sap-hana-to-ibm-db2-blu-and-oracle-12-in-memory-for-sap-bw/

https://en.wikipedia.org/wiki/Shill

TCO Book

 

Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

How Accurate Was John Appleby on What In-Memory Database for SAP BW?

Executive Summary

  • John Appleby made bold predictions on HANA.
  • We review how accurate he was in his article on What In-Memory Database for SAP BW?

Introduction

John Appleby’s article on the SAP HANA blog was titled What In-Memory Database for SAP BW? A Comparison of the Major Vendors and was published on Nov 13, 2013.

The Quotations

Appleby Performs Research?

I get a lot of requests for how SAP HANA compares to IBM and Oracle in-memory databases for the SAP BW software stack. I thought that I’d share the insight that I have found from primary research on all three platforms. Here is a high-level view of my analysis:

The Appleby Papers comprise our analysis of over 12 of Appleby’s papers. From that analysis, we have concluded that Appleby does not perform research. See our analysis of his “TCO” study of HANA, How Accurate Was John Appleby on HANA TCO for Cloud vs On-Premises?, and How Accurate Was John Appleby on SAP BW-EML Benchmark? Appleby only rigs half-baked “investigations” to promote HANA.

This graphic above is another example. It is easy to figure out why he has placed these vendors in this ranking. Of the database providers, Oracle has most of the database market for SAP; DB2 is less of a threat. Therefore, Appleby positions Oracle as having none of the capabilities, while HANA has 100% of them.

HANA SP06 Has All of the Capabilities of a Mature RDBMS?

“SAP HANA SP06 is the 6th major release of SAP HANA over 3 years and it contains all the major features of a mature RDBMS. What’s more, there are also customers using it for mission-critical workloads like ERPs and there are hundreds of live customers for BW on HANA. It is the only in-memory database with full HA/DR and the only that massively simplifies operation of the system with a single store.”

This article was written at the end of 2013. In 2019, HANA still does not have the capabilities of a mature database. Back in 2013, HANA was highly unstable and still did not have its row-oriented store implemented. The idea that the late-2013 version of HANA was as mature as DB2 or Oracle is truly laughable. At the time of this publication, HANA probably did have more “in-memory capability,” but this quickly changed. However, HANA’s HA lagged Oracle’s and DB2’s at this time.

Oracle 12c provided the column-oriented store with the in-memory capability and (already) had HA by July 1, 2014, which is 8 months after this article was published. Appleby later claimed that the initial 12c release did not include comparable column store functionality, and that July 2014 is when the true in-memory option was provided.

Secondly, HANA does not simplify the data environment, because companies did not at this time, and have not since then, installed multiple applications on a single data store (that is, multiple applications on one HANA database).

BW on HANA Can be Hundreds of Times Faster than Other Databases?

“Most significantly for BW, SAP wrote an OLAP compiler for HANA within the BW product. This means that when you run a query in BW, it is translated to “calculation engine” operators and not to SQL. For complex queries like exception aggregation, HANA does not have to materialize millions of rows within the OLAP server like it does for any other database: instead, all of the calculations are pushed down into the database layer. Impact: BW on HANA is in many cases 100s of times faster than any other database.”

Every software vendor will state that their method of solving a problem technically is superior to their competition. But SAP has been unwilling to back up these claims by allowing competition in benchmarking against competitive databases as we covered in The Four Hidden Issues with SAP’s BW-EML Benchmark.

DB2 BLU Cannot Accelerate All of the Queries that HANA Can Accelerate?

“DB2 BLU accelerates some of the queries that BWA and HANA accelerate, but not all. Specifically, it accelerates simple query groupings that can be expressed in OpenSQL. However BLU will not accelerate complex queries which cannot be expressed in OpenSQL (e.g. exception aggregation) and this can cause a 100x performance degradation compared to BWA or HANA.”

As with the previous explanation of the complete lack of comparable benchmarks to databases other than HANA, SAP has offered no evidence to support this claim and has deliberately rigged the benchmarks so this comparison could not be made. All we are left with is Appleby’s claims, and Appleby is a lying fire hydrant.
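For readers unfamiliar with the term, “exception aggregation” is a two-level aggregate: an inner aggregation whose results are then aggregated again under a different rule. A toy Python sketch with invented data (not BW or OpenSQL code) shows why it cannot be expressed as a single flat SUM/GROUP BY, which is the gap Appleby is pointing at:

```python
# Toy illustration with invented data (not BW or OpenSQL code) of
# "exception aggregation": an inner aggregate (revenue per customer) whose
# results are aggregated again under a different rule (count of customers
# over a threshold). A single flat sum cannot express the second step, so
# an engine must materialize the intermediate per-customer rows.
from collections import defaultdict

sales = [
    ("ACME", 400), ("ACME", 700),
    ("Globex", 300),
    ("Initech", 900), ("Initech", 200),
]

# Step 1: inner aggregation -- revenue per customer.
per_customer = defaultdict(int)
for customer, amount in sales:
    per_customer[customer] += amount

# Step 2: outer ("exception") aggregation -- count customers above 1,000.
big_customers = sum(1 for total in per_customer.values() if total > 1000)

print(big_customers)  # 2 (ACME at 1,100 and Initech at 1,100)
```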

Oracle 12 In Memory

“Oracle talked up the “ungodly” speeds of their Oracle 12 in-memory option at Openworld 2014. But, it doesn’t exist yet and is unlikely to until some time in 2014 – there is no release date. Oracle are a smart company and they have decided to offer the in-memory columnar option as a complimentary option to the row store. Yes, it will keep a synchronized copy of row- and column-store data. This will cause massive inefficiencies and storage costs, which I’m sure is great for Oracle’s engineered systems, which they will no doubt try to sell with it. SAP will presumably support this once it is available, probably some time in 2015. I included an analysis of Oracle 12 in-memory, because Oracle keep telling people that it exists.”

Oracle caught up to HANA with its mid-July 2014 release. Oracle did not do this because it was truly a need on projects, but simply to catch up to SAP, as SAP had created a demand for this in the market. There is still, even in 2019, little need for a mixed OLAP/OLTP database. We covered this in the article How Accurate Was Bloor Research on Oracle In Memory?

Bloor Research and we reached the same conclusion: the “in-memory” capability that was added to SAP, to DB2, and to Oracle was a gimmick that increased the maintenance overhead of those databases.

Only 2 Years Ahead?

“In my opinion, SAP HANA is approximately 2 years ahead of all the other mainstream in-memory databases on the market. It is very clear that Oracle, Microsoft and IBM are very busy building their solutions and I believe they will be good solutions, in time. But in 2013 and probably through 2014, if you want to invest in an in-memory database for SAP BW, there is only one option: SAP HANA.”

Appleby is clearly trying to make the entirety of the database about the in-memory portion of the functionality, which is the only area where HANA led at this time. However, the rest of HANA entirely lagged competing databases in every other dimension. Oracle caught up to HANA in mid-2014, and DB2 did so at the end of 2014. That is not two years, but roughly a year after Appleby made this prediction. Appleby wanted companies to completely turn over their database in order to gain between 6 months and 1 year of better functionality in one area of the database? And after companies did this, they would not only have a more expensive database than either Oracle or DB2, but a far less mature and far higher-maintenance database than either. All of this to gain access to a short-lived lead in a single area of functionality.

This demonstrates the extreme financial bias of Appleby. It is clear that the only thing Appleby cared about was getting his company more HANA business.

Conclusion

This article receives a 0.5 out of 10 for accuracy. It repeats inaccurate SAP marketing material through a third party, which makes it appear as if the information does not come directly from SAP.

How Accurate Was John Appleby on SAP BW-EML Benchmark?

Executive Summary

  • John Appleby made bold predictions on HANA.
  • We review how accurate he was in his article on the BW-EML Benchmark.

Introduction

John Appleby’s article on the SAP HANA blog was titled Behind the SAP BW EML Benchmark and was published on March 19, 2015.

To some readers, this article may appear harsh in its judgment of John Appleby. However, by the time we evaluated this article, we had read probably 20 articles from John Appleby and had noticed concerning patterns that are now very easy for us to pick up on. This is why we refer to previous articles from John Appleby where he used very similar or identical tricks.

The Quotations

The Lenovo BW-EML Benchmark?

“Yesterday, Lenovo released a new record for the SAP BW-EML Benchmark and I thought that some might be interested in the purpose, history and details behind the benchmark.

The SAP HANA platform was designed to be a data platform on which to build the business applications of the future. One of the interesting impacts of this is that the benchmarks of the past (e.g. Sales/Distribution) were not the right metric by which to measure SAP HANA.

As a result, in 2012, SAP went ahead and created a new benchmark, the BW Enhanced Mixed Workload, or BW-EML for short.

Appleby stated something very odd about this benchmark in his comment in the article which we covered in How Accurate Was Appleby on HANA Working Fast for OLTP?

Appleby stated the following in the comment section.

“I’ve not run the benchmark but I believe it’s because:

1) SD doesn’t run well on HANA

2) SD doesn’t accurately represent how customers actually use systems in 2014

3) HANA does run well for how customers really use systems in 2014

SAP are in the process of creating a new benchmark which I understand will include mixed-workload OLTAP queries.

The BW-EML benchmark was designed to take into account the changing direction of data warehouses – a move towards more real-time data, and ad-hoc reporting capabilities.”

This is why this new benchmark was introduced: because HANA could not compete on the previous benchmark. There is also zero evidence that SD has changed, so this is a made-up topic by Appleby.

This is what SAP previously stated about its benchmarking.

“Since then the SAP Standard Application Benchmarks have become some of the most important benchmarks in the industry. Especially the SAP SD standard application benchmark2 can be named as an example. The goal of the SAP Standard Application Benchmarks is to represent SAP business applications as realistic as possible. A close to real life workload is one of the key elements of SAP benchmarks. The performance of SAP components and business scenarios is assessed by the benchmarks and at the same time input for the sizing procedures is generated. Performance in SAP Standard Application Benchmarks is determined by throughput numbers and system response times. The throughput numbers are defined in business application terms. For the SAP SD benchmark, this would be for example fully processed order line items per hour. The unit for the measurement of CPU power is SAP Application Performance Standard (SAPS)3 . SAPS is a hardware independent measurement unit for the processing power of any system configuration. 100 SAPS is defined as 2,000 fully processed order lines items per hour which equals 2,000 postings or 2,400 SAP transactions.” – SAP, BW-EML SAP Standard Application Benchmark
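The SAPS definition in this quotation is simple arithmetic: 100 SAPS equals 2,000 fully processed order line items per hour, so 1 SAPS equals 20 line items per hour. A small sketch (the 50,000 figure below is an invented example, not a published result):

```python
# The SAPS arithmetic from the quotation above: 100 SAPS is defined as
# 2,000 fully processed order line items per hour, so 1 SAPS = 20 line
# items per hour.
LINE_ITEMS_PER_HOUR_AT_100_SAPS = 2000

def saps_from_throughput(line_items_per_hour):
    """Convert a measured SD throughput into a SAPS rating."""
    return line_items_per_hour * 100 / LINE_ITEMS_PER_HOUR_AT_100_SAPS

# Sanity check against the definition itself:
print(saps_from_throughput(2000))   # 100.0

# A system sustaining 50,000 line items per hour (an invented example,
# not a published result) would rate at:
print(saps_from_throughput(50000))  # 2500.0
```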

So it seemed SAP was quite satisfied with its SD application benchmark… that is, until they came up with HANA.

As we will see, SAP has yet to benchmark S/4HANA SD even up to 2019.

How Does the SAP BW-EML Benchmark Work?

“BW-EML is quite a straightforward benchmark, and quite elegant in some ways. It was designed to meet with the Data Warehousing requirements of customers in 2012, and there were two key goals:

1 – A focus on near-real-time reporting, with the capability to update data every 5 minutes in key reporting areas.

2 – Ad-hoc reporting requirements, with the ability to ask a question on any data slice without having the define aggregation levels in advance.

The SAP BW-EML Benchmark looked to achieve these goals with the following constructs:

1 – A scale-factor with the total number of rows (500m,1bn, 2bn or more). The 500m scale-factor is 600GB of generated flat files.

2 – A common data model is used with 7 years of DSO and 3 years of InfoCube, each representing 10% of the scale factor.

3 – 8 Web Reports are used (4 on the DSO MultiProvider and 4 on the InfoCube MultiProvider), running queries that are randomized both by characteristic and filter value and drill-down.

4 – Data Loads are run every 5 minutes during the run, loading and additional 0.1% of the total data volume over 60 minutes (1m for the 1bn scale-factor, or 100k rows per object).

5 – A benchmark driver logs in a large number of users (typically ~100) and runs reports for a 60 minute interval called the “high load” period.

The benchmark result is the total number of query navigations run over 60 minutes. What’s nice about BW-EML is that whilst the queries are random, they are chosen with groups of cardinality, so the variance in runtime has been shown to be <1%.”
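The load volumes Appleby quotes are internally consistent and can be checked from the stated parameters. A short sketch using only numbers given in the quotation (the per-5-minute figure is derived, not stated):

```python
# Checking the BW-EML load arithmetic against the parameters stated in the
# quotation (only the per-5-minute figure is derived rather than stated).
scale_factor_rows = 1_000_000_000   # the 1bn scale factor
objects = 7 + 3                     # 7 years of DSO + 3 years of InfoCube

rows_loaded_per_hour = scale_factor_rows // 1000     # 0.1% of total volume
rows_per_object = rows_loaded_per_hour // objects
rows_per_5_min_load = rows_loaded_per_hour // (60 // 5)

print(rows_loaded_per_hour)   # 1000000 -- "1m for the 1bn scale-factor"
print(rows_per_object)        # 100000 -- "100k rows per object"
print(rows_per_5_min_load)    # 83333 -- per load interval (derived)
```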

This is interesting.

Let us see what Oracle has to say about the BW-EML.

“Oracle in conjunction with our hardware partners has published over 250 SAP standard application benchmarks in the last 20+ years and has always been able to gain SAP certification in a very rapid and problem-free process.

SAP is now promoting HANA as the database of choice for their applications and clearly has a conflict of interest when it comes to certifying benchmark results that show better performance than HANA. Of the 28 SAP standard application benchmarks, SAP has chosen to only publish results for HANA on the BW-EML benchmark (emphasis added).

Curiously, they have only certified one non-SAP database result for BW-EML benchmark, which was an uncompetitive result published by IBM on DB2 for the I-series platform.

Unfortunately, SAP won’t allow such a head-to-head comparison to be made to HANA because SAP has not published a single benchmark result for any of its transaction processing applications running on HANA. Why not? Customers should ask SAP: What are you trying to hide?

SAP formulated a new benchmark, BW-EML, in 2012 to evaluate the performance of databases running its Business Warehouse (BW) analytics application. The benchmark is intended to measure typical demands made on the SAP Business Warehouse database from users generating real-time reports and ad hoc queries.

This is the only SAP standard application benchmark for which SAP publishes results with HANA. And the results of Oracle’s recent BW-EML benchmark demonstrate that Oracle Database 12c In-Memory runs this benchmark twice as fast as HANA does.”

This graphic from Oracle is problematic to us. HANA is sitting on Dell, but Oracle is sitting on Exadata, which is a highly expensive engineered system, and most Oracle databases do not use Exadata. Therefore the performance difference could be attributed in part to the hardware difference. There is no denying this: Oracle itself promotes Exadata as far more performant than any other hardware for its database.

The question we have is this: why did Oracle not simply run its database on the same Dell hardware that HANA used during the benchmark?

In fact, this question is brought up by SAP.

“FUD factor #3: An Oracle Exadata Database machine is much more expensive than a commodity Dell or Lenovo server that databases often run on.”

And Oracle’s answer is the following.

““Sure, it is,” Colgan says. But for many enterprise workloads, the performance advantages provided by Oracle Exadata machines—optimized hardware-software systems preconfigured with compute, storage, and networking hardware—more than make up for the difference.”

This answer appears to be a pivot, and it does not address the core issue, which is that the Exadata machine is certainly responsible for part of the performance difference between HANA and Oracle. The intent of a benchmark should be to hold the hardware constant so that the software can be compared in isolation.

SAP elaborates further on a point on which we agree with them.

“Oracle’s claims are based on nonequivalent systems. From a hardware perspective, the SAP HANA platform runs on well-priced hardware from 12 partners, including Dell and Lenovo. The Oracle product runs on expensive proprietary servers that make customers dependent on Oracle. On the software side, Oracle compares the speed of a one-node setup on a current Exadata server against a one-node Dell server benchmarked in October 2014. A fair comparison would be one in which the Dell server ran the same newer SAP software that was used on the Oracle Exadata machine. We do not want to speculate on the potential performance improvement if the Dell benchmark ran on the same version of the SAP NetWeaver technology platform as the Oracle benchmark. Yet the Exadata machine costs 20 to 30 times more than the Dell server and yields just 2 times the performance. Does it make sense to pay that kind of premium for little improvement in database performance?”

We cannot verify SAP's statement about the cost differential between Exadata and Dell. It is most likely exaggerated, but Exadata is certainly significantly more expensive. The primary point is that Exadata is higher-performing hardware than the Dell hardware used for HANA.

  • The essential point is that Oracle did not isolate the effect of the hardware from the effect of the software, which is the most elementary rule around testing.
  • Oracle clearly used Exadata to get the highest possible score, rather than trying to produce a comparable test.
  • HANA uses extremely wasteful hardware setups and has a problem addressing the memory in its hardware configurations, as we covered in the article How HANA Takes 30 to 40 Times the Memory of Other Databases. Therefore the Dell server would already have been quite expensive; the idea that the price goes up from there for Exadata is mind-boggling.

SAP’s Missing Benchmarks

Where are the rest of the benchmarks, the other 27?

Where did they go?

Are they now all obsolete because HANA performs so poorly on them? Appleby makes it appear as if publishing the BW-EML benchmark is a great accomplishment, but as is typical with Appleby, he deceives readers by leaving out all the information that provides the full picture. As the Oracle database has similar "in-memory" functionality, it is quite odd that SAP won't compete against a database that it repeatedly calls out as "legacy."

According to SAP, HANA should be able to easily beat Oracle in an open competition, yet it dodges any competition. Why would SAP do this unless everything that SAP says about HANA’s performance is false?

What is Permissible in the Benchmark?

Now we go back to Appleby for another quote.

“Anything which a customer might do is permissible, which includes any supported configuration and platform. This includes indexes, aggregates, and any other performance constructs. Anything that a customer can do is fair game.

There are pros and cons to this, but it drives a behavior of irrational optimization in many benchmarks. CPU manufacturers have been known to tune microprocessors to run SPEC benchmarks faster. Database vendors create parameters to turn off important functionality, to gain a few extra points in TPC-C.

One thing that surprised me on the SAP HANA database is that the configuration used by published results is basically the stock installation. What’s more, there are no performance constructs like additional indexes or aggregates in use.

From a benchmarking perspective that’s insignificant (benchmarkers routinely spend months tuning a database), but it is hugely significant from a customer DBA and TCO perspective.”

That is extremely odd, because in real-life implementations HANA is a very high-overhead database.

Appleby is proposing an out-of-the-box scenario, which is very commonly how salespeople speak. We have participated in a number of SAP sales initiatives where the term "out of the box" or some similar oversimplifying term was used, and we cannot recall a single time where this ended up being true when the time came to implement, as we covered in the article The Myth of ERP Being 90% Out of the Box.

BW-EML Scale Factor

“Getting back to the benchmark, the Scale Factor is an important point of note. BW-EML is like TPC-H, and runs at a factor. The minimum factor is 500m (50m per object), and this grows to 1bn (100m per object), 2bn (200m per object) and beyond.

Caution: benchmark results with different Scale Factors cannot be compared! From a SAP HANA perspective, they can be compared, because HANA performs linearly with respect to data volumes. Therefore you can safely assume that if you get 200k navigational steps with 1bn scale factor, you will get ~100k steps with the 2bn scale factor.

But you cannot apply this logic with other databases, because you cannot assume linearity of performance. For instance there is an IBM i-Series result at 500m scale factor, and a SAP HANA result at 1bn scale factor. These results are not directly comparable – who knows whether i-Series will be linear.”

Again, this appears to be more filler designed to impress the reader.
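Filler or not, the linearity claim is at least easy to state precisely: if throughput is inversely proportional to data volume, doubling the scale factor halves the navigation steps per hour. A minimal sketch of that assumption (the 200k-at-1bn figure is Appleby's; the function is illustrative, and as he himself notes, the assumption does not transfer to other databases):

```python
def estimate_steps_per_hour(known_steps: float, known_scale: float,
                            target_scale: float) -> float:
    """Estimate BW-EML navigation steps/hour at a new scale factor,
    under the assumption that throughput scales inversely with data
    volume (claimed for HANA; not safe to assume for other databases)."""
    return known_steps * known_scale / target_scale

# Appleby's example: ~200k steps at the 1bn scale factor
# implies ~100k steps at the 2bn scale factor.
print(estimate_steps_per_hour(200_000, 1_000_000_000, 2_000_000_000))  # 100000.0
```

The sketch also makes the limitation obvious: the formula is only as good as the inverse-proportionality assumption baked into it.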

Lenovo World Record

“Today, Lenovo released a world record in BW-EML with > 1.5m navigational steps per hour for the 1bn scale factor. That’s an incredible 417 sustained queries/second benchmarked over an hour.

Even more impressive is that it beats the previous highest single-node result by >10x, despite only having 7x the hardware. The remainder of the improvement comes from innovations in BW 7.4 SPS09 and SAP HANA SPS09, which is equally impressive.

It’s worth noting that the larger scale-factors have not yet been run for BW-EML. For instance, many of my customers have typical DSOs of 1bn+ rows each, which means a scale-factor of 10bn would provide very interesting results. The capability to do 150k navigation steps per hour on a large-scale data warehouse would be deeply impressive.”
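The headline conversion in the quote is at least internally consistent, which a one-liner confirms (the figures are from the quote; the code is ours):

```python
steps_per_hour = 1_500_000                 # Lenovo's reported BW-EML result (1bn scale factor)
steps_per_second = steps_per_hour / 3600   # one hour = 3,600 seconds
print(round(steps_per_second))             # 417, matching the quoted "417 sustained queries/second"
```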

And how does this compare to other databases?

Appleby seems to be content describing how HANA performs for the BW-EML benchmark without considering any other database. This seems to change the definition of what a “benchmark” is.

Hardware Cost

“Some benchmarks (e.g. TPC-H) require the hardware cost to be listed as part of the benchmark submission. That’s a pretty important metric, but not one which BW-EML requires.

A rule of thumb for BW on HANA hardware is $100k/TB is a good target price. For other databases, the hardware price will no doubt vary significantly.”

This is not helpful in any way that we can see. It describes the price of HANA hardware, when the only important factor is whether comparable hardware was used for both databases. And even that is moot if the benchmark is rigged in the first place.

Benchmarks Continually Fall Out of Date?

“The fascinating thing about benchmarking is that they become out of date as technology and business needs change.”

There is no evidence presented for this. It is certainly possible, but then why is SAP still using the same SD benchmarks for other databases? Again, Appleby and SAP are looking for cover or an excuse to change the benchmarks.

If SAP were operating in good faith, it would publish results for the previous benchmarks as well as any new benchmark. If HANA is as world-beating as SAP says it is, SAP should have no problem beating any database on any benchmark, old or new. Let us recall that this is the claim: every other database is legacy compared to the amazing feats HANA is capable of.

There is a Clear Move Towards OLTAP Databases?

“In the market right now there is a clear move towards hybrid-workload or OLTAP systems in the Enterprise, which combine transactional processing and analytic processing. This represents a challenge though because SAP’s strategy for this is HANA-only, and there are two orthogonal goals to benchmarking.

The first is to size systems. Given that the future of SAP Business Applications is on S/4HANA, I expect that there will be a S/4HANA S4-2tier benchmark, like the SD-2tier that came before it. This will allow customers to use the SAP QuickSizer to estimate the size of SAP HANA required. Let’s call it a unit of S4PS, which will relate to on-premise deployments and cloud deployments. In all likelihood it will be mixed-workload with analytics and core transactions, driven by the SAP Gateway API layer.”

This is not a "clear move to combined transactional and analytic processing." SAP, followed by IBM, Oracle, and Microsoft, added column-oriented/"in-memory" capabilities to their databases, but there is no evidence that this type of processing increased. This is because the applications above them stayed roughly the same. ERP systems are still primarily processing transactions. Companies have not removed their data warehouses and increased reporting off of the ERP system. Actually, S/4HANA even in 2019 is barely live anywhere, with most of the implementation go-lives falsified or covered up, as we covered in the article S/4HANA Implementation Study. Therefore extremely few customers anywhere are even using S/4HANA on HANA.

Appleby would like his proposal to be true because his income comes from promoting combined transactional and analytic processing, but he presents no evidence that it is true, and of course, he has a massive financial bias to misrepresent reality.

We also question whether the future of SAP is HANA. In 2019 it is apparent that SAP has been nowhere near as successful as SAP intended, and that it is very likely that SAP will have to reverse course and certify S/4HANA on Oracle and IBM as we covered in the article Why SAP Will Have to Backtrack on S/4HANA on HANA.

SD Benchmarks

SAP has produced benchmarks for HANA on transaction processing — but not for S/4HANA.

Notice these SD benchmarks.

All of these benchmarks are for ECC SD on HANA on different hardware; there is not a single S/4HANA SD benchmark. Why? S/4HANA is supposed to be what SAP is pushing customers towards, so how is that objective served by failing to publish a single S/4HANA benchmark?

Well, as pointed out in the earlier comment from Appleby, S/4HANA SD likely does not perform well in this benchmark, so of course SAP excludes it.

But why would S/4HANA SD perform worse than ECC SD? Well, S/4HANA is far less mature than ECC.

This is circumstantial evidence of S/4HANA's immaturity. That immaturity is the primary reason S/4HANA has had such a high failure rate, almost a complete failure rate actually, a subject that we have covered in detail in previous articles such as How Accurate Was SAP on S/4HANA Being Complete?

Why is BW-EML the Last Cross-Database Benchmark?

“The second is to perform cross-vendor bake-offs, either for hardware or for the database layer. Customers really want to benchmark HANA against other databases, or even to know what HANA appliance is faster. It seems likely that given the future of SAP applications is HANA-only, that the BW-EML benchmark will be the last cross-vendor benchmark.”

Appleby should say “the first and the last” or perhaps the term “only” might apply.

SAP thinks so little of its customers that it hides its benchmarks from them, unable to prove HANA is superior to any of the competing databases. This is a bit like a gunfighter claiming to be the fastest gun in the West, who then refuses to leave his room for three weeks.

Notice SAP’s own declaration around its benchmarks.

“To help the market easily and quickly make these judgments, SAP offers standard application benchmarks. When used consistently, these benchmarks provide impartial, measurement-based ratings of standard SAP applications in different configurations with regard to operating system, database, or hardware, for example. Decision makers trust these benchmarks to provide unbiased information about product performance.”

It's really impossible to read this, having analyzed the rest of the material thus far, and not be amused. How exactly is SAP impartial with benchmarking if it financially benefits from rigging the benchmark against all other database vendors?

If decision makers trust these benchmarks to “provide unbiased information” those decision makers are fools.

Final Words

“The BW-EML benchmark isn’t perfect. It doesn’t have enough result submissions, especially at scale (I hope the incredible Lenovo result will cause other vendors to submit). It’s also really quite hard to run and requires both benchmarking and SAP BW expertise.

There are also not any certified BW-EML benchmark results by Oracle, IBM DB2 BLU or Microsoft SQL Server. One might assume this is because they can’t get good results, but that is conjecture.

Hopefully in the future those vendors will make submissions so customers can understand both the relative performance of those systems against SAP HANA, but because the SAP Benchmark Council have a full disclosure policy on all benchmark methodology, also the amount of tuning effort required to get good performance.”

"Isn't perfect" is a distinct understatement. The BW-EML is an entirely rigged affair.

SAP will clearly disallow any database vendor that produces a BW-EML result superior to HANA's, by asking for this or that technical feature to be disabled until the desired outcome is obtained. SAP has put itself in the position where it has to lie about any benchmark because it has made such exaggerated claims around HANA. Appleby finishes the quotation in the most disingenuous fashion, pretending that SAP just wants to help its database competitors improve "customer understanding."

It would be interesting to know: if beaten in benchmarks, would SAP be willing to apologize for years of false statements about HANA?

Let’s assume the answer is no.

Secondly, Appleby entirely misleads readers by hiding the fact that a large number of benchmarks that SAP has historically required other vendors like Oracle and IBM to perform for certification have not been performed or at least published for HANA. If SAP followed the same rules that it makes other database vendors follow, then HANA could not be certified! 

This point is parallel to a statement from Kuen Sang Lam of Oracle in his article on the topic of benchmarking from 2016.

“You will be able to find SD benchmarks for ALL the different databases supported by SAP including Oracle, DB2, MS SQL, MaxDB, SAP(Sybase) ASE, and even Informix. The only database which did not have any published SAP SD benchmark was SAP HANA. It was very strange for the owner of the application to not to have its own designed benchmark published using their own In-memory database.”

Conclusion

The primary purpose of Appleby's article was to distract the reader from the fact that the BW-EML benchmark has been rigged to make HANA look far more effective than it is. Appleby entirely misleads his readers, leaving out critical information so he can engineer his desired conclusion. He fills the article with paragraphs that have nothing to do with the benchmark, and the overall article has the distinct feel of someone tapdancing around the real issue.

This article receives an accuracy of 0 out of 10. It is intended to mislead readers as to a new rigged benchmark by SAP.

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

This video is filled with extensive falsehoods. We will address them in the sequence they are stated in this video.

SAP Video Accuracy Measurement

Each entry below lists SAP's statement, our accuracy rating, the Brightwork fact check, and the linked analysis article.

  • "HANA is a platform." Accuracy: 0%. HANA is not a platform; it is a database. (Analysis: How to Deflect You Were Wrong About HANA)
  • "HANA runs more 'in-memory' than other databases." Accuracy: 10%. HANA uses a lot of memory, but the entire database is not loaded into memory. (Analysis: How to Understand the In-Memory Myth)
  • "S/4HANA simplifies the data model." Accuracy: 0%. HANA does not simplify the data model from ECC, and there are significant questions as to the benefit of the S/4HANA data model over ECC. (Analysis: Does HANA Have a Simplified Data Model?)
  • "Databases that are not HANA are legacy." Accuracy: 0%. There is zero basis for SAP to call all databases that are not HANA legacy. (Analysis: SAP Calling All Non-HANA DBs Legacy)
  • "Aggregates should be removed and replaced with real-time recalculation." Accuracy: 0%. Aggregates are very valuable; all RDBMSs have them (including HANA), and they should not be removed or minimized in importance. (Analysis: Is Hasso Plattner Correct on Database Aggregates?)
  • "Reducing the number of tables reduces database complexity." Accuracy: 0%. Reducing the number of tables does not necessarily decrease the complexity of a database; the fewer tables in HANA are more complicated than the larger number of tables pre-HANA. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)
  • "HANA is 100% columnar tables." Accuracy: 0%. HANA does not run entirely with columnar tables; it has many row-oriented tables, as much as 1/3 of the database. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)
  • "S/4HANA eliminates reconciliation." Accuracy: 0%. S/4HANA does not eliminate reconciliation or reduce the time to perform reconciliation to any significant degree. (Analysis: Does HANA Have a Simplified Data Model and Faster Reconciliation?)
  • "HANA outperforms all other databases." Accuracy: 0%. Our research shows that not only can competing databases do more than HANA, they are also a better fit for ERP systems. (Analysis: How to Understand the Mismatch Between HANA and S/4HANA and ECC)

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts that provide realistic information around HANA — and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for a database which is exorbitantly priced, exorbitantly expensive to implement and exorbitantly expensive to maintain. When SAP or their consulting firm are asked to explain these discrepancies, we have found that they further lie to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

The major problem with companies that bought HANA is that they made the investment without seeking any entity independent of SAP. SAP does not pay Gartner and Forrester the amount of money that they do so these entities can be independent as we covered in the article How Accurate Was The Forrester HANA TCO Study?


Inaccurate Messaging on HANA as Communicated in SAP Consulting Firm Videos

For those interested in the accuracy level of information communicated by consulting firms on HANA, see our analysis of the following video by IBM. SAP consulting firms are unreliable sources of information about SAP and primarily serve to simply repeat what SAP says, without any concern for accuracy. The lying in this video is brazen and shows that as a matter of normal course, the consulting firms are happy to provide false information around SAP.

SAP Video Accuracy Measurement

Each entry below lists SAP's statement, our accuracy rating, the Brightwork fact check, and the linked analysis article where available.

  • "HANA runs more 'in-memory' than other databases." Accuracy: 10%. HANA uses a lot of memory, but the entire database is not loaded into memory. (Analysis: How to Understand the In-Memory Myth)
  • "HANA is orders of magnitude faster than other databases." Accuracy: 0%. Our research shows that not only can competing databases do more than HANA, they are also a better fit for ERP systems. (Analysis: How to Understand the Mismatch Between HANA and S/4HANA and ECC)
  • "HANA runs faster because it does not use disks like other databases." Accuracy: 0%. Other databases also use SSDs in addition to disk. (Analysis: Why Did SAP Pivot the Explanation of HANA In Memory?)
  • "HANA holds 'business data,' 'UX data,' 'mobile data,' 'machine learning data,' and 'IoT data.'" Accuracy: 0%. HANA is not a unifying database; it is only a database that supports a particular application, and it is not for supporting data lakes.
  • "SRM and CRM are part of S/4HANA." Accuracy: 0%. SRM and CRM are not part of S/4HANA; they are separate and separately sold applications, and SAP C/4HANA is not yet ready for sale. (Analysis: How Accurate Was Bluefin Solutions on C-4HANA?)
  • "Netweaver is critical as a platform and is related to HANA." Accuracy: 0%. Netweaver is not relevant to this discussion, and it is not an efficient environment from which to develop.
  • "HANA works with Business Objects." Accuracy: 10%. It is very rare to even hear about HANA and Business Objects; few Business Objects implementations use HANA. (Analysis: SAP Business Objects Rating)
  • "Leonardo is an important application on SAP accounts." Accuracy: 0%. Leonardo is dead, so its discussion here is both misleading and irrelevant. (Analysis: Our 2019 Observation: SAP Leonardo is Dead)
  • "IBM Watson is an important application on SAP accounts." Accuracy: 0%. Watson is dead, so its discussion here is both misleading and irrelevant. (Analysis: How IBM is Distracting from the Watson Failure to Sell More AI and Machine Learning)
  • "Digital Boardroom is an important application on SAP accounts." Accuracy: 0%. SAP Digital Boardroom is another SAP item that has rarely been implemented anywhere.

Financial Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.

Search Our Other John Appleby Fact Checking Content

References

https://blogs.saphana.com/2015/03/19/behind-sap-bw-eml-benchmark/

https://www.springer.com/cda/content/document/cda…/9783319202327-c2.pdf

https://dam.sap.com/mac/preview/a/67/mnPymWPAmmE7yyyXPglwXXl8OnyEAMlAXggXJlJlUDxlyPUv/41356_GB_40939_enUS.htm

https://www.linkedin.com/pulse/does-truth-matter-in-memory-benchmarks-sap-oracle-kuen-sang-lam/

https://blogs.oracle.com/oraclemagazine/the-undisputed-database-champ

There is a great quote which our research supports on the usage of HANA below from this article.

“Colgan agrees that many SAP customers have bought HANA, but she questions how many actually are using it, given the fact that SAP throws in HANA as part of deep-discount programs for its enterprise applications. Those discounts enticed customers to acquire HANA but “didn’t require them to implement it,” she says.”

https://www.oracle.com/technetwork/database/in-memory/overview/dbim-faq-for-sap-bw-eml-benchmark-2701327.html

https://www.sap.com/dmc/exp/2018-benchmark-directory/#/sd?id=eb3bca05-a530-47e3-a308-b8f9eaadbe26

TCO Book

 


Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

How Accurate Was Carl Dubler on Why There is No S/4AnyDB?

Executive Summary

  • Carl Dubler made bold predictions on HANA.
  • We review how accurate he was in his article Why There Isn’t an “S/4AnyDB.”

Introduction

Carl Dubler’s article on the SAP HANA blog was titled Why There Isn’t an “S/4AnyDB” and was published on Oct 11, 2017.

The Quotations

HANA Is Not Getting All of the Growth it Deserves?

S/4HANA momentum continues with 4 major releases and over 1,000 live customers. Still, one of the top questions I get is, “Why doesn’t S/4HANA run on Oracle (or any other database besides HANA)?” This is a natural question, especially since SAP partnered with database vendors for our products prior to S/4HANA. Sometimes there is even speculation of an “S/4Oracle.” It’s not going to happen. Continue reading to learn what makes S/4HANA unique, and why ERP running on legacy database systems are now obsolete with S/4HANA and S/4HANA Cloud.

This is false. In 2019, which is roughly 1.5 years after this article, our research indicates that there are close to no customers live on S/4HANA, even though the SAP count is 2,000 customers. We covered the falsified S/4HANA customers in the S/4HANA Implementation Study. Our prediction is that there will eventually be an S/4Oracle as we covered in the article Why SAP Will Have to Backtrack on S/4HANA on HANA.

None of the previous databases like Oracle or DB2 are legacy, particularly in comparison with HANA, as we covered in How Accurate is SAP in Calling Non-HANA DBs Legacy?

Carl Dubler receives a Golden Pinocchio Award for his assertion regarding non-HANA databases being legacy. Ordinarily, we state that Golden Pinocchio Award winners lie, but with Carl Dubler, it’s unclear whether he knows what he is saying. 

Why HANA?

Several years ago, SAP introduced the motto “Run Simple.” This is a nice sentiment, but how do we make it real? We start with the data model used to run ERP. The data model had to be simplified—this is the foundation of every innovation since.

The data model for S/4HANA is not simplified, as we covered in the article How Much Has HANA Really Been Simplified?

Some concrete examples:

Inventory Management went from 26 tables in ECC 6 on legacy database to 1 table on HANA.

In Finance, the Universal Journal (1 table) eliminates the painful process of FI-CO reconciliation (many tables).

A database’s complexity is far more than simply how many tables it has, as we covered in Does S/4HANA Have a Simplified Data Model? Each of the tables in HANA is more complex than the row-oriented tables it replaces. Financial reconciliation is not due to many tables and is not really a feature of software; the complexity in reconciliation is due to the manual work in judgments, such as what revenue to recognize and when.

Carl Dubler receives a second Golden Pinocchio Award for his assertion regarding the reconciliation benefits of S/4HANA.

Extended Warehouse Management is back in core ERP and no longer needs its own system since the complex model required by legacy databases is gone. The same is true for Transportation Management, Advanced Available to Promise, and Advanced Variant Configuration, among others.

EWM is a dead product. Transportation Management is live at very few locations, so any improvements in these applications have minimal relevance. None of these applications ever required a complex model, and none of the databases used by SAP prior to HANA can be classified as legacy.

These aren’t dreams—these are innovations and productivity gains our customers are getting now. Imagine liberating your finance department from unproductive spreadsheet rodeos like reconciliation. Imagine providing your colleagues in supply chain MRP runs whenever they want, not just once per day. You can do this all while simplifying the IT landscape. With a simpler data model, your IT department is ready for anything, and your business is equipped with new, real-time capabilities.

HANA has multiple performance problems with MRP as we covered in How to Interpret MRP Performance Problems with HANA.

For those interested in running MRP quickly, far more quickly than anything possible in any SAP system, contact us about Real Time MRP.

These gains are simply not possible running on legacy database systems. We had to take the dramatic step of re-designing for an in-memory, columnar database. Obviously, we chose HANA for that. It isn’t good enough to take the 26 tables for Inventory Management and run them in-memory on, say, Oracle 12c. Simply running in-memory isn’t enough—we needed to leverage in-memory to redesign for a single data model. If you have a couple minutes, take a look at my video explanations here, here, and here. If you have more time, get a deep dive here.

Oracle In-Memory outperforms HANA, as does IBM DB2 BLU. Furthermore, reconciliation is not an analytical workload, so there is no reason to apply an in-memory columnar database to this application.
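
The distinction between analytical and transactional workloads is worth making concrete. The sketch below (our illustration, not SAP code; the field names are invented) shows the same postings stored row-wise and column-wise: a columnar layout lets an analytical aggregation touch only the columns it needs, while record-at-a-time transactional access, the kind reconciliation actually involves, is what a row layout already serves well.

```python
# Minimal sketch: why columnar layout helps analytical scans but adds
# nothing to record-at-a-time transactional work. Field names are
# invented for illustration.

# The same four postings stored two ways.
rows = [  # row layout: one record per entry, natural for OLTP
    {"doc": 1, "account": "4000", "amount": 120.0},
    {"doc": 2, "account": "4000", "amount": 80.0},
    {"doc": 3, "account": "5000", "amount": 45.0},
    {"doc": 4, "account": "4000", "amount": 30.0},
]
columns = {  # columnar layout: one array per field, natural for OLAP
    "doc": [1, 2, 3, 4],
    "account": ["4000", "4000", "5000", "4000"],
    "amount": [120.0, 80.0, 45.0, 30.0],
}

# Analytical query: total of account 4000. The columnar version touches
# only two of the three "columns"; a real column store scans less memory.
total_columnar = sum(
    amt for acct, amt in zip(columns["account"], columns["amount"])
    if acct == "4000"
)

# Transactional access: read back one complete posting. The row layout
# returns it directly; the columnar layout must reassemble it field by field.
doc_2_row = rows[1]
doc_2_columnar = {field: columns[field][1] for field in columns}

print(total_columnar)               # 230.0
print(doc_2_row == doc_2_columnar)  # True: same data, different layouts
```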

This is also a recognition that with the move to a pure in-memory architecture, the two layers of database and application are merging. In the past, we depended on a separate database layer to organize data for transactions and analytics. Now the application can access the data with plenty of speed in-memory and not need to have it organized in advance. Trying to make this work with every database vendor’s different architecture would have limited the innovation and delayed the benefits for our customers.

The application layer and the database layer are not merging. SAP is trying to use its monopoly power in the application layer to push into the database layer dominated by Oracle and IBM DB2. SAP has it backward: HANA lags Oracle and DB2 across the board in database capabilities and is also far less stable than either database.

This is why we won’t have an “S/4AnyDB.” We can’t move our customers back into the complex data models of the past. We partner with database vendors for ECC 6, which we continue to support until 2025. This explains why you might see agreements with them: it’s to support the past, not enable the future.

If there is no S/4AnyDB, the only reason is that SAP plans to push out vendors to capture database market share. There is no technical reason AnyDB cannot support S/4HANA and support it better than HANA.

Is It Working?

The real-world results are proving this was the correct strategy. The impact of a simpler data model is amazing. Business processes are transformed everywhere (see illustrations here). The best proof, however, is in the unprecedented adoption numbers for S/4HANA, the fastest-growing product in SAP’s 4-decade history. In a bit over 2 years, more than 6,300 customers have purchased it, and over 1,000 of them are already live (check out their stories). The transformation from traditional DB to the new HANA architecture has not been a limiting factor.

In 2019, 1.5 years after this article was published, there are almost no real S/4HANA implementations, and S/4HANA is one of the least successful of SAP’s upgrades to its widely used ERP system. Even in 2019, SAP estimates only 2,000 “live” customers; the roughly 7,000 other customers are not even estimated as live. The reason for this is that S/4HANA is still an immature application.

Carl Dubler receives a Golden Parrot Award for repeating false information around S/4HANA. It seems that Carl can be made to repeat anything that SAP tells him to say. 

Further Comment from Dubler

“IDC and ASUG both have surveyed and found about 2/3 of SAP customers plan to go to S4 starting 2018. There’s data in the ASUG survey about plans for on prem vs cloud, etc.”

Unmentioned by Dubler is that neither IDC nor ASUG have any independence from SAP. ASUG is simply a marketing arm of SAP at this point as we cover in How ASUG Lost its Way and Sold Out to SAP.

IDC was purchased by a Chinese construction firm and counts SAP as a customer as we covered in Can You Trust IDC and Their Now China Based Owners?

Comment by Tim Main of IBM

“Whilst also understanding IBM has had sample S/4 Simple Finance code successfully running over Db2 BLU for in excess of 24 months now, I personally believe the SAP strategy to only offer S/4 over SAP HANA (now 2.0 vs 1.0 before) at release 1709 vs prior releases of 1510, 1611 etc is largely a marketing choice that seeks to force a top to bottom SAP stack over a Linux platform (either Intel or indeed HANA on Power).”

Tim is quite right. SAP is using HANA to displace other database vendors, and the technical reasoning for customers having to use HANA is a way of SAP removing choice from SAP customers. Dubler responded to Tim Main, but his response was content free, so there is nothing to discuss or even analyze in Dubler’s comment.

Response from Carl Dubler

“It is natural to ask tough questions of an enterprise product before buying it. That is why we offer several free, personalized tools to help customers understand what S/4HANA will mean for them, including Business Scenario Recommendations and S/4HANA Readiness Check. S/4HANA momentum is very strong (as the numbers in the article indicate) across all customer types, so I hope the client you heard from will take another look. S/4HANA also comes with a complete choice, including SaaS, private cloud, on premise, and IaaS (AWS, Azure, Google Cloud) so they can balance their needs for customization/control with accelerated deployment.”

Carl can’t engage in any discussion beyond being a sales parrot, so he needs to try to get out of the discussion, which is what he does here.

Conclusion

This article receives a 0 out of 10 for accuracy. The entire article is false from beginning to end, and Carl Dubler appears to be a mindless parrot who simply repeats what he is told to say.

SAP’s Inaccurate Messaging on S/4HANA as Communicated in SAP Videos

Fact-Checking SAP Information on S/4HANA

This video is filled with extensive falsehoods. We will address them in the sequence they are stated in this video.

SAP Video Accuracy Measurement

Statement: S/4HANA is what allows key processes to be digitized.
Accuracy: 0%
Explanation: ECC was already fully digitized across key business functions.
Analysis article: The Problem with Using the Term Digital Transformation on IT Projects

Statement: HANA is a platform.
Accuracy: 0%
Explanation: HANA is not a platform; it is a database.
Analysis article: How to Deflect You Were Wrong About HANA

Statement: Fiori is a major advantage for S/4HANA.
Accuracy: 10%
Explanation: In S/4HANA implementations, Fiori is infrequently used.
Analysis article: How Accurate Was SAP on the Number of Fiori Apps?

Statement: Fiori is far more efficient than what came before.
Accuracy: 10%
Explanation: In testing Fiori and S/4HANA, Sven Deneken's statements did not hold up. There was a particular weakness in making changes after noticing something needed to be changed, and we found the efficiency below that of ECC with, of course, SAPGUI.

Statement: S/4HANA is innovative as it brings "real-time inventory."
Accuracy: 0%
Explanation: Sven Deneken brings up "real-time capabilities," but there is nothing particularly real-time or different in reaction from ECC. Whenever you make an entry in ECC, or any other ERP system for that matter, the entry is real-time. Sven Deneken states that "the physical inventory is the same as the digital inventory," but under what system would this not be true?
Analysis article: What Happened to the Term Perpetual Inventory?

Statement: S/4HANA is innovative because it allows access to supplier information.
Accuracy: 0%
Explanation: Sven Deneken states that information about the supplier is "just a fingertip away." Sven Deneken may be familiar with ECC, where supplier data is also a fingertip, or say a mouse click, away. It is called the Vendor Master in ECC.

Statement: The cycle could be changed to daily or sub-daily.
Accuracy: 0%
Explanation: Why would that occur? This is a very strange scenario to lay out.

Statement: S/4HANA is innovative because it allows MRP to be rerun interactively for a product location.
Accuracy: 0%
Explanation: Sven Deneken is extremely confused when he states that S/4HANA allows a fresh MRP run to be performed for a specific product location and that this is a differentiator for S/4HANA. There is no ERP system that cannot run MRP for a single product location. Furthermore, re-running MRP does not remove uncertainties; MRP can be re-run when something changes, for example, when the forecast changes.
Analysis article: Performance Problems with HANA and MRP

Statement: This demo shows SAP has reimagined inventory management.
Accuracy: 0%
Explanation: All of this functionality, save for several of the graphics shown in the video, has been available in ECC for many years, in fact, decades.

The Problem: A Lack of Fact-Checking of S/4HANA

There are two fundamental problems around S/4HANA. The first is the exaggeration of S/4HANA, which means that companies that purchased S/4HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About S/4HANA

We can provide feedback from multiple HANA accounts that provides realistic information around S/4HANA, and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for a database that is exorbitantly priced, exorbitantly expensive to implement, and exorbitantly expensive to maintain. When SAP or their consulting firms are asked to explain these discrepancies, we have found that they lie further to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

Financial Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.

S/4HANA Implementation Research

We offer the most accurate and detailed research into S/4HANA and its implementation history. It is information not available anywhere else and is critical to correctly interpreting S/4HANA, as well as to moderating against the massive amounts of inaccurate information pushed by SAP and their financially biased consulting ecosystem.

Select the description that best matches you.

Option #1: Do You Work in Sales for a Vendor?

See this link for an explanation to sales teams.

Option #2: Do You Work for an Investment Entity that Covers SAP?

See this link for an explanation for investment entities. 

Option #3: Are You a Buyer Evaluating S/4HANA?

For companies evaluating S/4HANA for purchase, see this link for an explanation for software buyers.


References

https://blogs.sap.com/2017/10/11/why-there-isnt-an-s4anydb/

TCO Book

 


Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

How Accurate Was John Appleby on a Historical View on HANA?

Executive Summary

  • John Appleby made bold predictions on HANA.
  • We review how accurate he was in his article on a Historical View on HANA.

Introduction

John Appleby’s article on the SAP HANA blog was titled What is SAP HANA? A Look Through The History Of The Automobile and was published on Dec 27, 2013.

The Quotations

HANA Is Not Getting All of the Growth it Deserves?

Quite often, I wonder why there aren’t more people using SAP HANA. Don’t get me wrong, it’s SAP’s fastest ever growing product, it has a run rate of $1bn, there are a ton of developers, partners and appliances, and products and customers. I’ve been asked to do a webinar on why SAP HANA is different and it got me thinking.

This is often repeated but is false. The fastest growing product in SAP’s history is R/3/ECC. It is the product that built SAP into the company it is today. Here is HANA’s growth according to DB Engines.

HANA is not only not the fastest-growing product in SAP’s history; it is not even a particularly fast-growing product. HANA’s growth plateaued in 2015, as we covered in Why Did SAP Stop Reporting HANA Numbers After 2015?

If HANA were anywhere close to the fastest growing product in SAP’s history, HANA would have taken the database market by storm and this question Appleby is asking here would not be asked.

Also, Bill McDermott can’t seem to decide whether S/4HANA or HANA is the fastest growing product in SAP’s history. Neither is; in fact, neither is even close to ECC/R/3’s growth in the 1990s.

Barriers to Mass Adoption?

Do you think that when Henry Ford created Tin Lizzie, he was thinking “great now people will be able to drop down to the Jersey Shore for the weekend”? I suspect that he may have been that forward thinking. He certainly understood the impact of his innovation:

“If I had asked people what they wanted, they would have said faster horses.”

And this is certainly the case with databases. Oracle asked their customers what they wanted, and they built a faster horse, with Exadata. SAP founder Hasso Plattner took a different approach, and built a car, with SAP HANA. This is the problem: there are many barriers to getting mass adoption of a transformational innovation.

The Model T was very quickly adopted.

“Although automobiles had already existed for decades, they were still mostly scarce, expensive, and unreliable at the Model T’s introduction in 1908. Positioned as reliable, easily maintained, mass-market transportation, it was a runaway success. In a matter of days after the release, 15,000 orders were placed.” – Wikipedia

The Model T is still one of the ten most popular automobiles ever sold, and this was in a time with a far smaller overall automotive market, showing the Model T’s dominance. The Model T entirely revolutionized the automotive industry.

Appleby states:

there are many barriers to getting mass adoption of a transformational innovation.

Yet, Ford faced no such problem with the Model T. He literally built a better mousetrap and he introduced it at a good time. HANA is not a better mousetrap.

So the comparison between the Model T, a genuine success, and what is really a copycat database in HANA makes little sense.

However, the second proposal, that SAP did much more with HANA than just focus on speed, is also odd, because speed was always HANA’s primary selling point.

There is another major problem with comparing HANA to the Model T. The Model T was priced to sell many units, as is explained in this video. Ford did not produce the first car; he produced the first affordable car with the functionality that the Model T possessed. This also allowed for mass production. For years, no car company could compete with Ford on price, in part because they lacked Ford’s economies of scale, which were themselves due to the fact that the Model T was more a platform than a distinct car.

Model T advertisement. From the collections of The Henry Ford and Ford Motor Company (04/22/08). This advertisement shows just a small number of options that the Model T could be purchased in. 

The Model T had an enormous number of variations (truck, sedan, convertible).

The Model T even filled the tractor market. 

..and also the tanker truck market. 

Essentially, the Model T’s variants filled every major car category that we have today, and more, and did it all from the Model T platform.

HANA is quite the opposite: it is a highly expensive product, the most expensive database on the market.

The Model T was also a simple design with built-in low maintenance. Once again, HANA is the opposite. HANA is a complicated design with high maintenance. The entities in the database space that match the Model T are the open source vendors, not HANA. Or AWS and GCP, which are focused on continual cost cutting and driving improved efficiencies. HANA is nothing like this, rather it follows the Oracle model of extracting the most from the customer for the database.

Stables, Stableboys, Ironmongers and Coachmen

There are entire industries that the car industry destroyed, whilst creating an entire new industry. The same is true with the database industry. There is a whole industry of disk storage, database administrators and performance experts, all keeping traditional RDBMS running. SAP HANA doesn’t need expensive high-end disks, DBAs or tuning.

It does however create industries of high-end RAM production and the build of a whole new collection of business applications, which has the capability to stimulate a much bigger growth than the industries which will be cannibalized.

This is a nice try, but it does not fit with what we know about HANA. HANA’s administration overhead is higher than that of databases like Oracle and DB2, and far higher than that of a database like SQL Server. HANA resources also bill out at the top of the market, and HANA requires the most expensive hardware of any database. This entire explanation by Appleby is simply false, and he certainly knows it is false.

How HANA helps create new applications is a mystery. First, companies should not be developing with SAP except to the degree they are mandated to for customizations. Something else that should be mentioned is that development is not particularly tied to a database: developers can typically write an application to work with a variety of databases. That is, there is independence between the two layers that Appleby is meshing together.
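
The independence of application code from the underlying database is routine in practice. As one hedged illustration (the table and function here are invented for the example), Python's standard DB-API (PEP 249) defines a common connect/cursor/execute pattern: the same application function runs unchanged whether the driver underneath is sqlite3, or a driver for Oracle, DB2, or PostgreSQL (only the parameter placeholder style varies by driver).

```python
# Sketch of database independence: application code written against
# Python's DB-API (PEP 249) does not care which engine is underneath.
# sqlite3 stands in here; drivers for other databases expose the same
# connect/cursor/execute pattern (placeholder style may differ).
import sqlite3

def count_open_orders(conn) -> int:
    # Works against any DB-API connection, not one vendor's database.
    cur = conn.cursor()
    cur.execute("SELECT COUNT(*) FROM orders WHERE status = ?", ("OPEN",))
    return cur.fetchone()[0]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, "OPEN"), (2, "CLOSED"), (3, "OPEN")])
print(count_open_orders(conn))  # 2
```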

Appleby receives a Golden Pinocchio Award for his statements around HANA’s development usage and capabilities.

Roads (i.e. Infrastructure)?

Can you imagine driving your 2013 Ford Focus on the roads of the early 1900s? It wouldn’t last long!

The parallel is true with SAP HANA: despite what some people say, SAP HANA is only OK at running existing applications. You can move your existing large-scale ERP onto SAP HANA, but it will only deliver modest benefits out the box. But just like when the roads started to improve in the 1920s and 1930s, you get massive improvements when you start to optimize your application for SAP HANA.

The roads were quite poor in the early 1900s, but the Model T was part off-road vehicle and also performed quite well in the snow. The Model T did not require a high-quality road network to be utilized. A Ford Focus is designed to drive at much higher speeds, but only on modern roads. Therefore, while the Model T did create the incentive to improve roads, it succeeded, particularly in the beginning, because it did not require an entirely new infrastructure. But here Appleby is essentially asking for a new application “infrastructure” to be created just for HANA.

Beyond the fact that Appleby’s claim that applications redesigned for HANA run much better is untrue, this arrangement puts the application behind the database, making it subordinate to the database.

Why should companies accept this dynamic? In fact, the most important dynamic in application development today is microservices, which use specialized databases, rather than one “Swiss Army Knife” database as proposed by SAP, Oracle, IBM, and Microsoft. Appleby is pushing HANA as the basis for a new application infrastructure, when in fact HANA is really a monolithic database.

HANA does make major dislocations in the applications that work on top of it, but the problem is that it pays off very little even if the application were re-coded to work specifically with HANA.

Also, as HANA is primarily an analytics database, some applications, like BW, require very few adjustments to work with HANA.

SAP and the 100,000x Performance “Club”

SAP has a “100,000x” club, for customers who have improved process efficiency by at least 100,000x, which I’ve always felt was somewhat missing the point. Put in business terms: you can reduce stock-outs, improve predictions of customer demand, reduce the time to close the books and reduce debtor days. And that’s just the start.

This echoes the statement made by Bill McDermott that HANA works 100,000 times faster than any competing technology, which we covered in the article How Accurate Was SAP About HANA Being 100,000x Faster Than Any Other Technology in the World?

This proposal is completely false and is something we have routinely lampooned Bill McDermott for saying. It also shows how untethered Bill McDermott is from reality. Therefore, it is amusing to see Appleby embrace this statement here; it demonstrates that, like McDermott, Appleby is really just a salesperson.

We are waiting for the evidence that HANA improves anything by 100,000x.

The Horse Itself

There are now around 100m horses in the world, with around 27m in Africa alone, which has far fewer cars per capita. There are now far fewer horses in the western world than there were a century ago, and this no doubt slowed the adoption of cars. A typical horse would last 10-12 years of working life, so why buy a car when you have a perfectly well working horse?

Once again the same applies to the database. There are hundreds of thousands of Oracle, IBM DB2 and Microsoft SQL systems out there, all doing their thing. Why would you want to replace them with SAP HANA, when they work – at least until the infrastructure they are on needs replacing.

The opportunity of new industries

We don’t think anything of going to the shopping mall, filling the car full of shopping, going to the movies and filling up with gas on the way home. All of these industries were created by the car, including motels, drive-thrus, Formula-1, and a hundred other industries.

And this is the tough thing – how do you imagine up a new industry? This is known as the second-order effects of technology, which most often happens by accident. Certainly this is true of my experience with SAP HANA – you have to drive the road and take a leap of faith, to see the value. For instance in Capital Markets, you can run an Order Management System which also provides real-time P&L and Risk Management, on SAP HANA. These are the new industries made possible by technology.

This seems to fall into incoherence.

The central premise, that non-HANA database is like a horse to HANA’s Model T is inaccurate, because HANA underperforms all of the databases that it is compared to. And SAP is hiding HANA from competitive benchmarking that would demonstrate anything that Appleby is claiming is true as we covered in the article The Four Hidden Issues with SAP’s BW-EML Benchmark.

Appleby shows a distinct lack of interest in supporting claims with evidence and an outsized interest in simply making claims.

Why is SAP HANA the Car Rather than a Faster Horse?

This is the hardest question to answer, and it’s one that I feel technology leaders like Hasso Plattner and Vishal Sikka also struggle with. I know that I do, and it is extremely difficult to explain new paradigms in a clear and consistent way. I believe there are three important dimensions:

First, SAP HANA is an application platform and not a database. It contains a database, and a number of additional engines including text, sentiment and search, mathematics and predictives, spatial and graph, ETL and streaming, plus application and integration services. You can build everything in one place where other platforms would need 10-15 different components and systems.

Second, SAP HANA is very fast, and this means you only need to store information once and then you can present it in any format or aggregation on demand. Because of this, you can use all of the functions above, on the same data set. This causes a dramatic simplification.

Third, because SAP HANA is a very fast platform with a lot of functionality, you build business applications that you couldn’t imagine. As a retailer you can predict product demand based on social sentiment, and automate distribution to reduce stock-outs, based on what is selling right this second and weather data. This is the second-order effect of technology.

Each of these proposals is false. HANA is a database, not a platform. HANA is slower than competing databases and has much less stability than the competing databases. HANA does not enable development more than any other database, and secondly, development and databases are separate from one another. Any developer should be able to develop code for different databases.

Comment by Hasso Plattner

“The ideas for what later became HANA were developed with an application perspective. how would a new enterprise software suite look like, if we only had a database with zero response time, was the research topic. quickly we came to the assumption that all redundant data structures like aggregates, duplicate tables with different sorting sequences, secondary indices for faster access to sets could be replaced by expressions using sql with extensions for analytics and other reusable functions. the first test programs showed a dramatic reduction in complexity. with views on top of other views and the introduction of data structures for hierarchies nearly all known reporting replaced. other mathematical concepts as predictive analytics, cluster analysis, planning functions were added. the closest thing to a database with near to zero response time was an in memory database. the main reason for dividing databases into ones for OLTP and others for OLAP – write versus read performance – disappeared on the drawing board.”

HANA is not any faster than any of the competing databases and does not have zero response time or anywhere near to it.

Removal of aggregates, duplicates, or secondary indices does not speed up HANA or reduce complexity; we covered the complexity topic in the article How Much Has HANA Really Been Simplified? The rest of the quote is nonsense, as Hasso essentially demonstrates how little he knows about databases.
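
To make concrete what "replacing aggregates with views" means in Hasso's quote, here is a minimal sketch using sqlite3 as a stand-in (the table and account numbers are invented). A view recomputes totals at read time and so is never stale, but the work does not disappear; it simply moves from write time to read time, which is why this is a trade-off rather than a removal of complexity.

```python
# Sketch (sqlite3 as a stand-in) of the idea in the quote above:
# replacing a maintained aggregate table with a view computed on demand.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE line_items (account TEXT, amount REAL)")
conn.executemany("INSERT INTO line_items VALUES (?, ?)",
                 [("4000", 120.0), ("4000", 80.0), ("5000", 45.0)])

# Traditional approach: a stored aggregate that must be updated
# alongside every insert into line_items.
conn.execute("""CREATE TABLE account_totals AS
                SELECT account, SUM(amount) AS total
                FROM line_items GROUP BY account""")

# "Aggregate-free" approach: a view that recomputes totals at read time.
conn.execute("""CREATE VIEW v_account_totals AS
                SELECT account, SUM(amount) AS total
                FROM line_items GROUP BY account""")

# A new posting: the view reflects it automatically, while the stored
# aggregate is stale until it is explicitly maintained.
conn.execute("INSERT INTO line_items VALUES ('4000', 30.0)")

view_total = conn.execute(
    "SELECT total FROM v_account_totals WHERE account = '4000'").fetchone()[0]
stored_total = conn.execute(
    "SELECT total FROM account_totals WHERE account = '4000'").fetchone()[0]
print(view_total, stored_total)  # 230.0 200.0
```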

“We were lucky and got two prototypes from sap: P*time for row store and Terex for columnar store. we ask SAP to implement dynamic parallelism for the columnar store and as a result experienced unprecedented performance for complex queries.”

This is false. Both P*time and TREX were purchased around a year before the date Hasso states work began on HANA. They are never mentioned as acquisitions, and their history is explained in the article Did Hasso Plattner and His Ph.D. Students Invent HANA?

After his portion of the quote is more Hasso Plattner nonsense. And then he finishes with this.

“Yes, the performance is impressive, the reduction of the data footprint valuable but the most important achievement is the reduction in program complexity.”

The data footprint of a database is not relevant to companies, except to those that pay per GB for their database, which is how HANA is priced. A reduced data footprint also has little to do with program complexity. Hasso Plattner lacks a sufficient understanding of either databases or programming to be in this discussion.

These types of statements illustrate how little Hasso knows. Hasso’s statements in this area are quite robotic; without an authentic understanding of the subject, he routinely falls back into making the same illogical claims.

Appleby’s Response

“Hi Hasso,

I couldn’t agree more. I recently did some research to look at the effects of trying to build apps on several traditional RDBMS, that encompass transactions, analytics and higher-order functionality like predictive and spatial. What we found was both interesting and shocking: even in 2013 it is impossible to build a DBMS app which combines both transactions and analytics, let alone higher order functionality. To get good analytic performance, you have to create indexes and aggregates, and these destroy insert performance and create massive load and duplication.”

Hasso made what are nonsensical assertions, and of course, Appleby agrees. As we pointed out, Appleby was and is still a shill for SAP, presents at SAP conferences and most likely jointly coordinated his posts and sent them for approval to SAP before publishing them.

Let us review the definition of a shill.

“In marketing, shills are often employed to assume the air of satisfied customers and give testimonials to the merits of a given product. This type of shilling is illegal in some jurisdictions, but almost impossible to detect. It may be considered a form of unjust enrichment or unfair competition, as in California’s Business & Professions Code § 17200, which prohibits any “unfair or fraudulent business act or practice and unfair, deceptive, untrue or misleading advertising“.” – Wikipedia

Yes, this is quite accurate.

Appleby is being directed by Hasso Plattner, so if Hasso stated the moon was made of cheese, Appleby would agree with this also.

This quotation from Appleby is extremely odd: Appleby again sets up a comparison in which HANA is entirely different from the databases it competes against.

Let us review the timeline.

Notice that Oracle added column-store functionality in July of 2013 (6 months before this article was written), but its in-memory option did not arrive until July 2014, or 6 months after this article was written. Therefore, if the requirement is for the database to have both a column store and “in-memory” capability, then Appleby is correct. However, even in 2019, there is still little benefit to having a dual-mode database. Nearly all of the HANA implementations that added any value at companies were implemented for BW, which does not require a dual-mode database and works just as well with a pure OLAP database.

Even if we push the date to July 1, 2014, for full combined OLAP/OLTP support in Oracle, why would companies decommission their current Oracle and DB2 databases, which were superior to HANA in every way except the “in-memory” functionality, just to gain 6 months on Oracle and one year on DB2?

Secondly, we are quite confident that Appleby did not perform this testing, because in every other article of Appleby’s that publishes a test, the test is incomplete, a false comparison, or in some way inaccurate. We covered this in the article How Accurate Was John Appleby on HANA Not Working Fast for OLTP? Appleby lacks the ability and the interest to test anything; he simply wants to prove that HANA is the best at whatever the topic of interest is. Furthermore, he has a massive undeclared financial bias to promote SAP.

If anything that Appleby was saying was true, HANA would not lose in performance to the competing databases.

“To do this, even with the latest traditional RDMBS and hardware, you must create layers of complexity to work around this, like separating out transactions, analytics, predictive and spatial analysis into separate databases.

With HANA, we found we could easily build a single system, storing atomic information and using it on demand for transactional, analytic, predictive and spatial. We spent no time massaging database performance or building artifacts to try to improve performance (but which actually destroy performance).”

That is not at all the model of the monolithic database vendors like Oracle, IBM, and Microsoft. That is actually the approach that we recommend, which we covered in the article How to Understand the AWS Multibase Versus HANA.

Like SAP, Oracle, IBM, and Microsoft tell companies they can meet all of their needs with a single database type. HANA not only loses in performance testing against other monolithic databases but loses decisively, in both cost and performance terms, against specialized databases. Specialized databases like Redis or DynamoDB, combined with an open source row-oriented database like MariaDB or PostgreSQL, not only handily beat HANA in performance, but their costs are extremely low.
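The "multibase" pattern described above routes each workload to a store built for it. A minimal sketch of the idea, using in-memory stand-ins rather than real PostgreSQL or Redis clients (all class and key names are invented for illustration):

```python
# Sketch of the multibase pattern: route each request type to the store
# specialized for it. RowStore stands in for a row-oriented transactional
# database; KeyValueCache stands in for a low-latency key-value store.
# No real client library is used here.

class RowStore:
    """Stand-in for a row-oriented transactional database (system of record)."""
    def __init__(self):
        self.rows = {}
    def insert(self, key, record):
        self.rows[key] = record
    def get(self, key):
        return self.rows[key]

class KeyValueCache:
    """Stand-in for a specialized low-latency key-value store."""
    def __init__(self):
        self.data = {}
    def set(self, key, value):
        self.data[key] = value
    def get(self, key, default=None):
        return self.data.get(key, default)

class Multibase:
    """Front end that sends each operation to the appropriate store."""
    def __init__(self):
        self.oltp = RowStore()
        self.cache = KeyValueCache()
    def record_order(self, order_id, record):
        self.oltp.insert(order_id, record)                        # durable write
        self.cache.set(f"latest:{record['customer']}", order_id)  # fast lookup index
    def latest_order_for(self, customer):
        order_id = self.cache.get(f"latest:{customer}")
        return self.oltp.get(order_id)

mb = Multibase()
mb.record_order(1, {"customer": "A", "amount": 100.0})
mb.record_order(2, {"customer": "A", "amount": 250.0})
print(mb.latest_order_for("A"))
```

Each store stays simple and cheap because it only serves the access pattern it is good at, which is the cost argument made against the one-database-for-everything model.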

“What surprises me is that the traditional RDBMS vendors have not (in my opinion) understand a word of what you have been saying for the last three years, and are instead building faster horses with saddlebags, coaches and bigger whips. SAP should be very thankful for this, although those vendors do have formidable sales and marketing teams that have stables continuing to sell their “new, faster horses” all over the world and trying to convince customers that the car is a fad.”

Appleby spent the three years before this article, and the years after it, primarily lying about what HANA can do. Furthermore, both SAP and Appleby know far less about databases than the "traditional" database vendors, so there is very little reason for any database vendor to listen to Appleby. Finally, this comparison of databases like Oracle and DB2 to faster horses, and HANA to a car, is completely false.

Appleby receives a second Golden Pinocchio Award for comparing other better databases to faster horses, and HANA to a car.

“I write these essays to improve my own communication style and to create a clearer message. It’s by writing that I clear my mind, clarify thoughts, and get feedback. Your simplicity point is important though, and one thing I learnt from Steve Lucas this year is that most people can only consume information in “threes or fives”.”

Appleby writes for one reason: to increase sales of HANA services for Bluefin Solutions. His articles never become clearer, because they are almost entirely based upon false statements. And if you are learning something from Steve Lucas, whom we have recorded as making some of the least informed comments of any executive who ever worked at SAP, then we would suggest finding someone of substance from whom to take pointers. But this shows Appleby's need to show deference to SAP executives.

Conclusion

This article receives a 0 out of 10 for accuracy. Not only is it littered with typical Appleby inaccuracies, but the comparison of HANA to the Model T is also wrong; HANA is actually the opposite of the Model T in nearly every dimension. It is difficult to see how Appleby could have found a more ironic point of comparison for HANA.

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

This video is filled with extensive falsehoods. We will address them in the sequence they are stated in this video.

SAP Video Accuracy Measurement

Each entry below gives SAP's statement, our accuracy rating, the Brightwork fact check, and the related analysis article.

  • "HANA is a Platform." Accuracy: 0%. HANA is not a platform; it is a database. (Analysis: How to Deflect You Were Wrong About HANA)
  • "HANA runs more 'in-memory' than other databases." Accuracy: 10%. HANA uses a lot of memory, but the entire database is not loaded into memory. (Analysis: How to Understand the In-Memory Myth)
  • "S/4HANA simplifies the data model." Accuracy: 0%. HANA does not simplify the data model from ECC, and there are significant questions as to the benefit of the S/4HANA data model over ECC. (Analysis: Does HANA Have a Simplified Data Model?)
  • "Databases that are not HANA are legacy." Accuracy: 0%. There is zero basis for SAP to call all databases that are not HANA legacy. (Analysis: SAP Calling All Non-HANA DBs Legacy)
  • "Aggregates should be removed and replaced with real-time recalculation." Accuracy: 0%. Aggregates are very valuable, all RDBMSs have them (including HANA), and they should not be removed or minimized in importance. (Analysis: Is Hasso Plattner Correct on Database Aggregates?)
  • "Reducing the number of tables reduces database complexity." Accuracy: 0%. Reducing the number of tables does not necessarily decrease the complexity of a database; the fewer tables in HANA are more complicated than the larger number of tables pre-HANA. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)
  • "HANA is 100% columnar tables." Accuracy: 0%. HANA does not run entirely with columnar tables; it has many row-oriented tables, as much as 1/3 of the database. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)
  • "S/4HANA eliminates reconciliation." Accuracy: 0%. S/4HANA does not eliminate reconciliation or reduce the time to perform reconciliation to any significant degree. (Analysis: Does HANA Have a Simplified Data Model and Faster Reconciliation?)
  • "HANA outperforms all other databases." Accuracy: 0%. Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. (Analysis: How to Understand the Mismatch Between HANA and S/4HANA and ECC)
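On the aggregates point above: the reason aggregates are valuable is that a maintained aggregate answers a query in constant time, while "real-time recalculation" rescans the base table on every query. A minimal sketch, with invented table and column names:

```python
# Why precomputed aggregates matter: a running total that is maintained on
# insert answers a query in O(1), while recalculating from scratch rescans
# every row each time. Names and values are invented for illustration.

line_items = []          # the base table
running_total = 0.0      # the maintained aggregate

def insert_line_item(amount):
    """Insert a row and keep the aggregate current (cheap, incremental)."""
    global running_total
    line_items.append(amount)
    running_total += amount

def total_by_recalculation():
    """The 'no aggregates' approach: rescan the whole table per query."""
    return sum(line_items)

for amount in (100.0, 250.0, 75.0):
    insert_line_item(amount)

# Both approaches give the same answer; the difference is the per-query cost.
assert running_total == total_by_recalculation()
print(running_total)
```

This is why every mainstream RDBMS, HANA included, offers some form of maintained aggregate (materialized views, summary tables, and similar structures).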

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts that provide realistic information around HANA — and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for a database which is exorbitantly priced, exorbitantly expensive to implement and exorbitantly expensive to maintain. When SAP or their consulting firm are asked to explain these discrepancies, we have found that they further lie to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

The major problem with companies that bought HANA is that they made the investment without seeking any entity independent of SAP. SAP does not pay Gartner and Forrester the amount of money that they do so these entities can be independent as we covered in the article How Accurate Was The Forrester HANA TCO Study?


Inaccurate Messaging on HANA as Communicated in SAP Consulting Firm Videos

For those interested in the accuracy level of information communicated by consulting firms on HANA, see our analysis of the following video by IBM. SAP consulting firms are unreliable sources of information about SAP and primarily serve to simply repeat what SAP says, without any concern for accuracy. The lying in this video is brazen and shows that as a matter of normal course, the consulting firms are happy to provide false information around SAP.

SAP Video Accuracy Measurement

Each entry below gives the statement made in the video, our accuracy rating, the Brightwork fact check, and the related analysis article where one exists.

  • "HANA runs more 'in-memory' than other databases." Accuracy: 10%. HANA uses a lot of memory, but the entire database is not loaded into memory. (Analysis: How to Understand the In-Memory Myth)
  • "HANA is orders of magnitude faster than other databases." Accuracy: 0%. Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. (Analysis: How to Understand the Mismatch Between HANA and S/4HANA and ECC)
  • "HANA runs faster because it does not use disks like other databases." Accuracy: 0%. Other databases also use SSDs in addition to disk. (Analysis: Why Did SAP Pivot the Explanation of HANA In Memory?)
  • "HANA holds 'business data' and 'UX data' and 'mobile data' and 'machine learning data' and 'IoT data.'" Accuracy: 0%. HANA is not a unifying database; it is a database that supports a particular application, not one for supporting data lakes.
  • "SRM and CRM are part of S/4HANA." Accuracy: 0%. SRM and CRM are not part of S/4HANA; they are separate and separately sold applications, and SAP C/4HANA is not yet ready for sale. (Analysis: How Accurate Was Bluefin Solutions on C-4HANA?)
  • "Netweaver is critical as a platform and is related to HANA." Accuracy: 0%. Netweaver is not relevant to this discussion, and it is not an efficient environment from which to develop.
  • "HANA works with Business Objects." Accuracy: 10%. It is very rare to even hear about HANA and Business Objects; few Business Objects implementations use HANA. (Analysis: SAP Business Objects Rating)
  • "Leonardo is an important application on SAP accounts." Accuracy: 0%. Leonardo is dead, so its discussion here is both misleading and irrelevant. (Analysis: Our 2019 Observation: SAP Leonardo is Dead)
  • "IBM Watson is an important application on SAP accounts." Accuracy: 0%. Watson is dead, so its discussion here is both misleading and irrelevant. (Analysis: How IBM is Distracting from the Watson Failure to Sell More AI and Machine Learning)
  • "Digital Boardroom is an important application on SAP accounts." Accuracy: 0%. SAP Digital Boardroom is another SAP item that has rarely been implemented anywhere.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions or incentives of any nature are allowed.

Search Our Other John Appleby Fact Checking Content

References

https://blogs.saphana.com/2013/12/27/what-is-sap-hana-a-look-through-the-history-of-the-automobile/

https://en.wikipedia.org/wiki/Shill

https://en.wikipedia.org/wiki/Ford_Model_T

https://www.everettcurrierfarm.com/apps/photos/photo?photoid=205397102

https://www.earlywineauctions.com/vehicles/787/1923-ford-model-t-tractor

https://myautoworld.com/ford/history/ford-t/ford-t-5/ford-t-6/ford-t-7/ford-t-7.html

TCO Book

 


Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.
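The core arithmetic of a TCO comparison is simple; the difficulty, as the paragraph above notes, is knowing which cost categories to include. A minimal sketch, with figures invented purely for illustration, of how a cheap license can still lose on lifespan cost:

```python
# Illustrative TCO comparison: license price is only one component, and
# implementation plus years of maintenance can dominate it. All dollar
# figures here are invented for the example.

def tco(license_cost, implementation_cost, annual_maintenance, years):
    """Total cost of ownership over the application's lifespan."""
    return license_cost + implementation_cost + annual_maintenance * years

# Application A: cheap up front, expensive to implement and maintain.
tco_a = tco(license_cost=100_000, implementation_cost=400_000,
            annual_maintenance=90_000, years=7)

# Application B: pricier license, but lower ongoing costs.
tco_b = tco(license_cost=250_000, implementation_cost=200_000,
            annual_maintenance=40_000, years=7)

print(tco_a, tco_b)  # A costs substantially more over seven years
```

Comparing only the two license lines would pick A; comparing lifespan TCO picks B, which is the point of the analysis the book describes.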

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

How Accurate Was John Appleby on 10 SAP HANA Customer Use Cases?

Executive Summary

  • John Appleby made bold predictions on HANA.
  • We review how accurate he was in his article on 10 SAP Customer Use Cases.

Introduction

John Appleby’s article on the SAP HANA blog was titled 10 SAP HANA Customer Use Cases from SAPPHIRE NOW and was published on June 25, 2018.

The Quotations

SAP HANA is Less Expensive than Oracle?

This year was my tenth SAPPHIRE, and also the tenth time that Hasso started to talked about HANA. It was in the 2005/6 timeline when a senior executive at SAP Labs walked up to a whiteboard and wrote one number: 2010. The room of architects was puzzled – what did 2010 signify? This, he explained, was when Moore’s law predicted that server architecture would be able to run a typical large-scale database in-memory. The Hasso Plattner Institute had already completed initial prototypes of an in-memory database, and they knew the race was on to build a new database architecture: the clock was ticking.

No, it is 2019, and this still has not happened. Secondly, it is irrelevant whether a database can be entirely stored in memory, as there is a Pareto relationship between the data in a database and the data that is actually used. Even today, when purchasing HANA, SAP still discusses hot, warm, and cold storage.
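The Pareto point can be made concrete with a toy simulation: when access is skewed, keeping only a small "hot" fraction of the data in memory serves nearly all requests, so the whole database never needs to fit in RAM. The workload below is synthetic and the 90/10 skew is an assumption for illustration:

```python
# Sketch of the hot/warm/cold argument: with Pareto-like access skew, a
# small in-memory hot set captures most reads. The workload is synthetic.
import random

random.seed(0)
num_rows = 10_000
hot_rows = set(range(1_000))     # the 10% of rows held "in memory"

hits = 0
requests = 100_000
for _ in range(requests):
    # 90% of accesses go to the hot 10% of rows (a Pareto-like skew).
    if random.random() < 0.9:
        row = random.randrange(1_000)
    else:
        row = random.randrange(1_000, num_rows)
    if row in hot_rows:
        hits += 1

hit_rate = hits / requests
print(f"in-memory hit rate: {hit_rate:.1%}")
```

With 10% of the data in memory, roughly 90% of accesses are served from it, which is exactly why vendors (SAP included) ended up tiering data rather than loading everything into memory.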

Secondly, there is no clock ticking. Oracle, IBM, and Microsoft have added column-oriented tables and "in memory" capabilities like those contained in HANA, but these mixed designs are rarely used because the demand is not there. Combining specialized databases, such as a row-oriented database like PostgreSQL with a specialized analytics database like Redis, is far more effective than trying to perform multiple types of database processing in a single database. We covered this in the article How to Understand the AWS Multibase Versus HANA.

HANA and Digital Platform?

Bill Gates famously said “Most people overestimate what they can do in one year and underestimate what they can do in ten years”. Another thing is for sure: SQL databases are hard to build. The main reason for this is that the ANSI SQL is really complicated. The SQL:1999 standard, for example, is over 1000 pages long. It has been 8 years since we released SAP HANA 1.0 to the market, and we just released HANA 2.0 SP03.

During these 8 years we added an incredible amount of functionality: HA, DR, integrated engines, for search, graph, spatial, text. We integrated the XS and XSA application servers, data quality, and streaming. We then added predictive and machine learning capabilities, and

We are now focused on building the Digital Platform for the Intelligent Enterprise, powered by the SAP HANA Data Management Suite and the Cloud Platform.

Why do I mention all of this? in SAPPHIRE 2018, I now see customers, at scale, doing the most incredible things with SAP HANA. I can’t show you all of them, because there were around 600 sessions at SAPPHIRE focused on HANA, but here are my personal favorites, in no particular order.

Appleby has told an enormous number of lies about HANA since he first started writing about it in roughly 2013. It is amazing to see him still saying the exact same things that have already been disproven. HANA is still just a database.

Gustave Roussy Cancer Research

This is one of the most incredible stories I have heard: Fabien Calvo, Chief Scientific Officer at French Cancer Research Center Gustave Roussy joined me on stage to discuss what they are doing with SAP HANA. They connect the biological, molecular, genomic, radiation, surgery and pharmaceutical treatment data generated at the center, and with SAP Connected Health platform running on SAP HANA and SAP Medical Research Insights, Gustave Roussy can integrate different data types: electronic medical reports, demographics, biological data; genomic data; text analysis; imaging, including (in the future) radiology and pathology.

Amazing work – no wonder they were a winner of a SAP HANA Innovation Award!

There is no reason to think that HANA is better than any other database at this work. We have covered this extensively, and there is also no evidence that this type of cancer research benefits from a primarily analytics database.

SAP promised running genetic algorithms and a health-focused solution, as we covered in the article How Accurate Was Vishal Sikka on the Future of HANA?

None of this came true.

Siemens PLM Software

Siemens realized that they needed a mechanism to allow their organization to build applications, whilst ensuring that data could be trusted. They used a Hybrid BW/4 and SQL data warehouse – using standard SAP Business Content extractors to pull in SAP data, and Smart Data Integration to bring data from other applications like Essbase.

On top of that, they built a Logical Data Warehouse using the Data Warehouse Foundation, which means they keep a single copy of the data, and all the models are built on top of that, connecting the various data sources into one version of the truth.

They then allow their business units to self-serve by building their own SAP Web IDE-based applications within the HANA platform. This is all integrated with Githib, Jira and Bamboo for application development and testing.

Bryan Hunter from Siemens explains how they did this and how it dramatically simplifies application development.

This case study makes little sense. Any type of development in SAP is quite inefficient.

Adidas

adidas CIO Michael Voegele describes how they are using sport to change people’s lives. He talks about increasing the speed of the supply chain and building direct relationship with the consumers with SAP Board Member, Adaire Fox-Martin. This is a very business-focused keynote where Michael describes technology as the enabler, and it is of course powered by SAP HANA.”

HANA is an analytics database; how does it speed up the supply chain and build a direct relationship with consumers?

This is the presentation that Appleby is referring to at SAPPHIRE. There are virtually no technical details in it, which is apparently what Appleby considers a "business-focused keynote."

Suiker Unie – Smart Farming

This customer is not only a SAP Innovation Award Winner, but also Huub Watervaal, Chief Executive Officer at Nextview, who joined me on stage, is a member of our Startup Focus program, which provides access to SAP HANA for free, engagement with our 340k customers, support from our experts, all with no financial commitment.

Huub explained to me all about how Suiker Unie was looking for ways to improve sugar beet production. A small increase in production is a dramatic increase in efficiency and therefore margin. They use the HANA Rules Framework to plant the right beet, IoT sensors to predict crop diseases and reduce pesticides, they have an alerting system and are now looking to satellite imagery to further improve production.

We have been studying HANA since 2016 and have never heard of the HANA Rules Framework. Again, no matter what was done with HANA, one could have done it at far lower cost using open source databases and, if desired, running them on GCP or AWS. When custom development is to be performed, one should disengage entirely from SAP, or the costs will spiral and the outcome will be far worse than using open source options. Open source databases like PostgreSQL work far better in the cloud than HANA does.

Itron

Itron are a manufacturer of Smart Meters, and they have partnered with us to deliver the Itron Enterprise Edition Meter Data Management (IEE MDM) powered by SAP HANA. Senior Program Director Moustafa Nazif joined me on stage to discuss their solution. Two things about Itron fascinate me in particular.

First, this is a fully functional translytical application (OLTP and OLAP), streaming data from millions of smart meters in real time. There are direct benefits to the solution, in simplicity of application design, and improved analytic capabilities.

Second, I have written about the second derivative effects of technology before, which is the unexpected benefits of technology (Henry Ford could not have predicted the success of Loves Travel Stops, for example, because it is a second derivative effect – highways being the first derivative). This is playing out with Itron – they are just starting to think about the second derivative effects. For example, could you link smart meters to healthcare: we know the customer is elderly, and the lights don’t turn on in the morning, should we call emergency services?

We don’t have a comment on this case study. The provided link is blocked and there just isn’t enough information to analyze.

Costco

SVP and Head of Fresh Foods at Costco, Jeff Lyons joins SAP Board Member Jen Morgan to tell the story of how Costco are using SAP technologies to reduce waste and predict demand. How do they do that? Of course, with the advanced analytic capabilities and machine learning algorithms using SAP HANA.

This video is filled with aphorisms without any detail and seems to be an advertisement for Costco. Jeff Lyons is told at the end of the video that he is a class act.

E*TRADE

When E*TRADE sought to re-platform their system for online trading, they looked to a next-generation DBMS like SAP HANA to solve the problem of application complexity and performance. Director of Operations Technology, Adam Yuan, joined me on stage to discuss how there really wasn’t an alternative to using HANA from his perspective: it was the only enterprise system which was able to dramatically simplify the application.

Really?

Well, that would show how little Adam Yuan knows. As we covered in How Much Has HANA Really Been Simplified?, HANA increases the complexity of databases. HANA is the highest-overhead database that we track.

MemorialCare

Dan Exley from MemorialCare joins our GM and Global Head of Platform & Data Management, Irfan Khan, to discuss how MemorialCare used SAP HANA to meet the demands of modern requirements in healthcare, especially in an age where patients expect to have access to data.


This presentation is not from Dan Exley as stated, but from Irfan Khan of SAP sales, and what follows is a typically content-free sales presentation.

Vodafone

Ignacio Garcia, CTO of Vodafone Shared Services, joined me on stage to discuss what the British multinational telecommunications conglomerate is doing with SAP HANA. What is remarkable about this is the sheer breadth and depth of their deployment. They have specific LoB solutions like Fraud Management and Margin Assurance, and they are also in the process of moving their SAP estate onto HANA. Like many SAP application customers, they started with analytics, and are now moving onto the core transactional systems.

There is also very little detail in this case study, with the CTO providing such high-level coverage that it is difficult to know what actually happened.

Swiss Federal Railways

Last, but not least, SBB produces its own energy to power its trains. To intelligently manage power demand in its railway system, SBB collaborated with SAP Innovative Business Solutions to develop a unique solution, powered by SAP HANA streaming analytics. Monitoring data points in real time throughout the energy network, it identifies peaks when they occur, determines which loads should optimally be switched off, and conveys this information to other systems that automatically turn off heaters in train cars and on railroad switches.

Markus Halder, Head of Power Demand Management Program joined me on stage to discuss how this enables SBB to better utilize its existing power capacity, postponing the need to build new energy infrastructure. Another winner of a SAP Innovation Award.

SBB likely made a poor decision here, wasting a great deal of money and time on HANA when there were far lower-cost and more capable alternatives; but as so little information was shared about this case study, we cannot get into more detail than this.
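The peak-shaving logic the quotation describes (detect demand peaks, decide which loads to switch off) is straightforward and does not depend on any particular database. A minimal sketch, with capacity figures and load names invented for illustration:

```python
# Sketch of the peak-shaving logic described in the SBB case study: watch a
# stream of power readings, flag readings over capacity, and pick sheddable
# loads to switch off until demand fits. All thresholds and load names are
# invented for the example.

CAPACITY_KW = 1000.0

# Sheddable loads (e.g., train-car heaters, railroad-switch heaters),
# listed in the order they should be switched off.
sheddable = [("car_heaters", 150.0), ("switch_heaters", 80.0), ("hvac", 40.0)]

def loads_to_shed(demand_kw):
    """Return which loads to switch off to bring demand under capacity."""
    shed = []
    excess = demand_kw - CAPACITY_KW
    for name, kw in sheddable:
        if excess <= 0:
            break
        shed.append(name)
        excess -= kw
    return shed

readings = [900.0, 980.0, 1200.0, 1010.0, 950.0]
for kw in readings:
    shed = loads_to_shed(kw)
    if shed:
        print(f"peak {kw} kW: shedding {shed}")
```

The value in the real system is in the sensor network and the integration with the systems that actually switch loads off, not in the decision logic itself.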

Conclusion

This article makes it sound as if there is significant information in these case studies when there isn't. Interestingly, these are Appleby's top ten, which means even the best case studies he could showcase had very little information in them. And many of the companies are only discussing their HANA case studies in order to promote themselves.

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

This video is filled with extensive falsehoods. We will address them in the sequence they are stated in this video.

SAP Video Accuracy Measurement

SAP's Statement
Accuracy
Brightwork Fact Check
Link to Analysis Article
HANA is a Platform
0%
HANA is not a platform, it is a database.How to Deflect You Were Wrong About HANA
HANA runs more "in-memory" than other databases.
10%
HANA uses a lot of memory, but the entire database is not loaded into memory.How to Understand the In-Memory Myth
S/4HANA Simplifies the Data Model
0%
HANA does not simplify the data model from ECC. There are significant questions as to the benefit of the S/4HANA data model over ECC.Does HANA Have a Simplified Data Model?
Databases that are not HANA are legacy.
0%
There is zero basis for SAP to call all databases that are not HANA legacy.SAP Calling All Non-HANA DBs Legacy.
Aggregates should be removed and replaced with real time recalculation.
0%
Aggregates are very valuable, and all RDBMS have them (including HANA) and they should not be removed or minimized in importance.Is Hasso Plattner Correct on Database Aggregates?
Reducing the number of tables reduces database complexity.
0%
Reducing the number of tables does not necessarily decrease the complexity of a database. The fewer tables in HANA are more complicated than the larger number of tables pre-HANA.Why Pressure SAP to Port S/4HANA to AnyDB?
HANA is 100% columnar tables.
0%
HANA does not run entirely with columnar tables. HANA has many row-oriented tables, as much as 1/3 of the database.Why Pressure SAP to Port S/4HANA to AnyDB?
S/4HANA eliminates reconciliation.
0%
S/4HANA does not eliminate reconciliation or reduce the time to perform reconciliation to any significant degree.Does HANA Have a Simplified Data Model and Faster Reconciliation?
HANA outperforms all other databases.
0%
Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems.How to Understand the Mismatch Between HANA and S/4HANA and ECC.

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts that provide realistic information around HANA — and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for a database which is exorbitantly priced, exorbitantly expensive to implement and exorbitantly expensive to maintain. When SAP or their consulting firm are asked to explain these discrepancies, we have found that they further lie to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

The major problem with companies that bought HANA is that they made the investment without seeking any entity independent of SAP. SAP does not pay Gartner and Forrester the amount of money that they do so these entities can be independent as we covered in the article How Accurate Was The Forrester HANA TCO Study?

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

Inaccurate Messaging on HANA as Communicated in SAP Consulting Firm Videos

For those interested in the accuracy level of information communicated by consulting firms on HANA, see our analysis of the following video by IBM. SAP consulting firms are unreliable sources of information about SAP and primarily serve to simply repeat what SAP says, without any concern for accuracy. The lying in this video is brazen and shows that as a matter of normal course, the consulting firms are happy to provide false information around SAP.

SAP Video Accuracy Measurement

| SAP's Statement | Accuracy | Brightwork Fact Check | Link to Analysis Article |
|---|---|---|---|
| HANA runs more "in-memory" than other databases. | 10% | HANA uses a lot of memory, but the entire database is not loaded into memory. | How to Understand the In-Memory Myth |
| HANA is orders of magnitude faster than other databases. | 0% | Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. | How to Understand the Mismatch Between HANA and S/4HANA and ECC |
| HANA runs faster because it does not use disks like other databases. | 0% | Other databases also use SSDs in addition to disk. | Why Did SAP Pivot the Explanation of HANA In Memory? |
| HANA holds "business data," "UX data," "mobile data," "machine learning data," and "IoT data." | 0% | HANA is not a unifying database. HANA is only a database that supports a particular application; it is not for supporting data lakes. | |
| SRM and CRM are part of S/4HANA. | 0% | SRM and CRM are not part of S/4HANA. They are separate and separately sold applications. SAP C/4HANA is not yet ready for sale. | How Accurate Was Bluefin Solutions on C-4HANA? |
| Netweaver is critical as a platform and is related to HANA. | 0% | Netweaver is not relevant to this discussion. Furthermore, Netweaver is not an efficient environment from which to develop. | |
| HANA works with Business Objects. | 10% | It is very rare to even hear about HANA and Business Objects. Few Business Objects implementations use HANA. | SAP Business Objects Rating |
| Leonardo is an important application on SAP accounts. | 0% | Leonardo is dead; therefore its discussion here is both misleading and irrelevant. | Our 2019 Observation: SAP Leonardo is Dead |
| IBM Watson is an important application on SAP accounts. | 0% | Watson is dead; therefore its discussion here is both misleading and irrelevant. | How IBM is Distracting from the Watson Failure to Sell More AI and Machine Learning |
| Digital Boardroom is an important application on SAP accounts. | 0% | SAP Digital Boardroom is another SAP item that has rarely been implemented anywhere. | |

Financial Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.


TCO Book

 


Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is comparing the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? Most companies, however, do not understand which dollar amounts to include in a TCO analysis or where to source those figures. And when using TCO studies produced by consulting and IT analyst firms, they do not know how the TCO amounts were calculated or how to compare TCO across applications.
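The basic mechanics of a lifespan TCO comparison can be sketched in a few lines. This is a minimal illustration, not the book's method: the cost categories and all dollar figures below are hypothetical, chosen only to show how one-time and recurring costs combine over a multi-year lifespan.

```python
# Minimal TCO sketch: one-time costs are counted once, recurring costs
# are multiplied by the number of years in the software's lifespan.
# All category names and figures are hypothetical illustrations.

ONE_TIME = ("license", "implementation", "hardware")
RECURRING = ("maintenance", "support_staff", "training")

def total_cost_of_ownership(costs: dict, years: int) -> int:
    one_time = sum(costs.get(k, 0) for k in ONE_TIME)
    recurring = sum(costs.get(k, 0) for k in RECURRING)
    return one_time + recurring * years

app_a = {"license": 500_000, "implementation": 900_000,
         "maintenance": 110_000, "support_staff": 200_000}
app_b = {"license": 200_000, "implementation": 400_000,
         "maintenance": 60_000, "support_staff": 400_000}

# App B looks far cheaper up front (600k vs 1.4M in one-time costs),
# but its higher recurring costs make it more expensive over 7 years.
for name, costs in (("A", app_a), ("B", app_b)):
    print(name, total_cost_of_ownership(costs, years=7))
```

Running this prints a 7-year TCO of 3,570,000 for application A and 3,820,000 for application B: the application that looked inexpensive at purchase time is costlier over its lifespan, which is exactly the distortion a TCO analysis is meant to expose.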

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making