How to Understand AWS’s Multibase Versus SAP’s Single-Base Approach

Executive Summary

  • SAP has been proposing that all companies should use a single database type and that they should buy HANA.
  • AWS’s CTO explains the benefits of the multi-base approach.

Introduction to Multibase

What is often left out of the analysis of database advice from commercial software vendors is how biased and self-centered it is. Commercial database vendors don’t provide any information to a customer that is not in some way designed to get the customer to invest more deeply in the vendor’s commercial products. As bad as Oracle’s “advice” to companies has been, Oracle at least has respected, although highly self-centered, knowledge of databases. SAP’s rather insane advice to their customers has been far worse, and far more self-centered.

For years SAP has been telling customers that they need to perform multiple types of database processing from a single database. This is wholly false, but that has not stopped either SAP or their partner network from saying it is true. We have covered in detail how SAP’s proposals about HANA have been proven incorrect in articles ranging from What is HANA’s Actual Performance? and A Study into HANA’s TCO to How Accurate Was Bloor on Oracle In-Memory.

In this article, we will expand into a topic which shows how wrong SAP is. The perspective we will address is not brought forward by SAP, Oracle, IBM, or Microsoft, but by the entity providing thought leadership on the future of how databases are used: AWS.

Werner Vogels on Multiple Database Types

Werner Vogels, the CTO of AWS, addressed this in an excellent article. Let us begin with how he starts that article.

“A common question that I get is why do we offer so many database products? The answer for me is simple: Developers want their applications to be well architected and scale effectively. To do this, they need to be able to use multiple databases and data models within the same application.”

Notice the last part of this paragraph, where Werner describes using “multiple databases and data models within the same application.” Wait, what was that? We all know that applications have a single database, right? How does a single application use multiple databases? What is Werner talking about?

Well, it turns out Werner is describing software development that differs from the monolithic model. Werner goes on to say this:

“developers are now building highly distributed applications using a multitude of purpose-built databases.”

The monolithic application we tend to think of is one way of developing, but it is giving way to distributed applications that can access multiple databases. It is an unusual way of thinking about applications for those of us who came up under the monolithic model.

The Limitations of the Relational Database

Werner goes on to describe the limitations of the relational database.

“For decades because the only database choice was a relational database, no matter the shape or function of the data in the application, the data was modeled as relational. Is a relational database purpose-built for a denormalized schema and to enforce referential integrity in the database? Absolutely, but the key point here is that not all application data models or use cases match the relational model.”

We have seen this in the rapid growth of databases like MongoDB and Hadoop that specialize in either unstructured data or data with lower levels of normalization. Werner describes how Amazon ran into the limitations of using the relational database.

“We found that about 70 percent of our operations were key-value lookups, where only a primary key was used, and a single row would be returned. With no need for referential integrity and transactions, we realized these access patterns could be better served by a different type of database (emphasis added). This ultimately led to DynamoDB, a nonrelational database service built to scale out beyond the limits of relational databases.”

Let us remember, AWS has a very fast-growing relational database service in RDS. However, it also has fast-growing non-relational databases like DynamoDB.
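
To make this key-value access pattern concrete, below is a minimal Python sketch of the kind of single-item lookup Werner describes, using the boto3 library against DynamoDB. The table name, key name, and region are hypothetical examples for illustration, not anything taken from Werner’s article.

import boto3

# A minimal sketch of the key-value access pattern: a single item is
# fetched by its primary key, with no joins and no referential integrity
# enforced by the database. Table name, key name, and region are
# hypothetical examples.
dynamodb = boto3.resource("dynamodb", region_name="us-east-1")
orders = dynamodb.Table("Orders")

response = orders.get_item(Key={"order_id": "ORD-10001"})
item = response.get("Item")  # a single item is returned, or None if absent

print(item)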

The Different Database Types According to Werner

Below is a synopsis of the different database types described by Werner, their intended usage, and the AWS database that reflects each (a short code sketch follows the list).

  • Relational: Web and Mobile Applications, Enterprise Applications, Online Gaming (e.g., MySQL)
  • Key Value: Gaming, Ad Tech, IoT (DynamoDB)
  • Document: When data is to be presented as a JSON document (DynamoDB)
  • Graph: For applications that work with highly connected datasets (Amazon Neptune)
  • In Memory: Financial Services, Ecommerce, Web, Mobile Applications (ElastiCache)
  • Search: Real-time visualizations and analytics generated by indexing, aggregating, and searching semi-structured logs and metrics (Elasticsearch Service)
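
As a concrete illustration of the in-memory entry above, here is a minimal Python sketch of the common cache-aside pattern against a Redis endpoint such as one provided by ElastiCache, using the redis-py library. The endpoint, key names, and the load_product_from_database function are hypothetical placeholders.

import json
import redis

# Hypothetical Redis/ElastiCache endpoint; in practice this would be the
# cluster endpoint provisioned in AWS.
cache = redis.Redis(host="my-cache.example.amazonaws.com", port=6379)

def load_product_from_database(product_id):
    # Placeholder for a lookup against the system-of-record database.
    return {"product_id": product_id, "name": "example", "price": 9.99}

def get_product(product_id):
    """Cache-aside read: try the in-memory store first, fall back to the DB."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    product = load_product_from_database(product_id)
    cache.set(key, json.dumps(product), ex=300)  # keep in cache for 5 minutes
    return product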

And actually, it is a bit more complex than even Werner is letting on. This is because some databases that AWS releases, or releases access to, end up being used differently than first intended. This is described in a comment on Werner’s article.

“It turns out that your products are so good that people do end up using them for a different purpose. Take Amazon Redshift. I remember when Amazon Redshift was launched, a question came from the audience if you can use Redshift as an OLTP database, even though it’s OLAP. Turns out using Redshift in an OLTP scenario is one of the major use cases, to build analytical applications. We are one of those use cases, we’ve built an analytical app on top of Redshift. The OLTP use case stretches Redshift once you start putting a serious number of users on it. Even with the best WLM configuration.

To solve for that, we’ve used a combination of Amazon RDS, Amazon Redshift and dblink plus Lambda and Elasticsearch. Detailed write-up on how we did it here:”

The Multi-Application Nature of Solutions Distributed by AWS

The multi-application nature of solutions is explained as follows by Werner.

“Though to a customer, the Expedia website looks like a single application, behind the scenes Expedia.com is composed of many components, each with a specific function. By breaking an application such as Expedia.com into multiple components that have specific jobs (such as microservices, containers, and AWS Lambda functions), developers can be more productive by increasing scale and performance, reducing operations, increasing deployment agility, and enabling different components to evolve independently. When building applications, developers can pair each use case with the database that best suits the need.”

But what are packaged solutions offering? Monolithic applications that are the exact opposite of this. And as SAP is a perfect example of a monolithic application provider, SAP wants customers to use a single database, and further, they want customers to use “their” single database, HANA, which according to SAP can handle all of the processing and all of the different database types described by Werner above. The one problem being, HANA can’t.

The AWS Customers Using Multibase Offerings

  • Airbnb: DynamoDB, ElastiCache, MySQL
  • Capital One: RDS, Redshift, DynamoDB
  • Expedia: Aurora, Redshift, ElastiCache, Aurora MySQL
  • Zynga: DynamoDB, ElastiCache, Aurora
  • Johnson and Johnson: RDS, DynamoDB, Redshift

Werner goes on to say:

“purpose-built databases for key-value, document, graph, in-memory, and search uses cases can help you optimize for functionality, performance, and scale and—more importantly—your customers’ experience. Build on.”

The Problem with SAP and Oracle Cloud and Leveraging the Multibase Approach

SAP and Oracle have been touting their clouds. However, with SAP and Oracle, the cloud is only a pathway to lead customers to SAP and Oracle’s products. This is just as true of databases. SAP and Oracle are closed systems. They dabble in connecting to non-SAP and non-Oracle products, but only to co-opt an area so they can access markets. AWS and Google Cloud are quite different. Notice the variety of databases available at Google Cloud.

There are over 94 databases available at Google Cloud, and far more at AWS. These databases can be brought up and tested very quickly; selecting one of the databases brings up the configuration screen. Furthermore, the number of databases and database types is only increasing with AWS and Google Cloud.

Right after one database is launched, one can bring up a different database type (say NoSQL or graph) and immediately begin testing. Under the on-premises model, this would not be possible. Instead of testing, one would go through a sales process, a commitment would be made, and the customer would be stuck with (and feel the need to defend) whatever purchase had been made. We have entered a period of multi-base capabilities, and AWS and Google Cloud are the leaders in offering these options. This will transform, or is transforming, how databases are utilized. And the more open source databases are accessed, the worse commercial databases look by contrast.
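
As a hedged illustration of how quickly a database can be brought up for testing on an IaaS provider, the Python sketch below uses boto3 to request a small MySQL instance from Amazon RDS. The instance identifier, instance class, credentials, and storage size are hypothetical values, and a real deployment would also involve networking and security configuration.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Request a small test instance; all of these values are hypothetical.
rds.create_db_instance(
    DBInstanceIdentifier="test-mysql-instance",
    DBInstanceClass="db.t3.micro",
    Engine="mysql",
    MasterUsername="admin",
    MasterUserPassword="change-me-immediately",
    AllocatedStorage=20,  # GiB
)

# Wait until the instance is available; it can then be tested and deleted.
waiter = rds.get_waiter("db_instance_available")
waiter.wait(DBInstanceIdentifier="test-mysql-instance")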

Conclusion

Packaged solutions ruled the day for decades. After the 1980s, custom coded solutions were for losers. They were to be replaced by “fantastic ERP” systems that would make your dreams come true. And who agreed to this? Vendors and consulting companies with packaged software and packaged services to sell. Consulting companies became partners with packaged software companies, parroting everything they said, without evidence. Even to the point where almost no one in IT is aware that packaged ERP systems have a negative ROI, as we cover in the book The Real Story on ERP. ERP proved to be a false god, delivering a negative ROI (but a positive ROI for vendors and consulting firms) while saddling companies with systems that put the ERP vendors in the driver’s seat of account control to extract more and more out of their “customers.”

Now, as I read about distributed applications accessing multiple databases, are we entering a period where the pendulum swings back to custom coding again? Under the SAP or Oracle paradigm, you accepted the databases that were “approved” by SAP and Oracle. All competition was driven out of the process. Oracle applications worked with the Oracle database. SAP finally decided to introduce HANA to push the Oracle DB out of their accounts. SAP now thinks that all SAP applications should sit on an SAP HANA database.

Werner is describing a combination of components that are selected and stitched together. Most of these databases are open source, and one can choose from a wide variety offered by AWS. This inherently contradicts packaged applications, because a packaged application uses one database and works in a particular, defined way.

While this is little discussed, AWS/GCP can be viewed as opposed to packaged applications. Sure, leveraging AWS/GCP will start with the migration of packaged applications, but once companies get a taste of freedom, it will begin breaking down the rules enforced by the packaged software vendors. And who will tell this story? Will it be Gartner? No. Gartner receives one-third of its multi-billion-dollar-per-year revenues from packaged software vendors, and it is doubtful that AWS or GCP will pay Gartner to sing their praises the way the packaged software vendors have. Gartner presents SAP Cloud, Oracle Cloud, AWS, and GCP as if they offer basically the same thing, but that AWS is simply “ahead” of SAP Cloud and Oracle Cloud. Gartner has no interest in educating their customers as to the reality of AWS and Google Cloud, as it cuts against their own corrupt revenue model.


References

https://www.allthingsdistributed.com/2018/06/purpose-built-databases-in-aws.html

AWS and Google Cloud Book

How to Leverage AWS and Google Cloud for SAP and Oracle Environments

Interested in how to use AWS and Google Cloud for on-premises environments, and why this is one of the primary ways to obtain more value from SAP and Oracle? See the link for an explanation of the book. This is a book that provides an overview that no one interested in the cloud for SAP and Oracle should go without reading.

How True is SAP’s Motion to Dismiss Teradata’s Complaint?

Executive Summary

  • SAP filed a motion to dismiss Teradata’s complaint.
  • How accurate are the statements proposed in the motion to dismiss?

Introduction to the SAP vs Teradata Lawsuit

Teradata filed a complaint against SAP in June of 2018, asserting many things that Brightwork Research & Analysis has been saying for several years (although our research does not agree with all of Teradata’s allegations).

Naturally, SAP said they did nothing wrong, and filed a motion to have the complaint dismissed. The reporting of the contents of the motion is from The Register. We looked for but were not able to find the actual motion to dismiss. We evaluate SAP’s statements against our own research.

Our Disclosure

We do not have any financial or non-financial relationship with either SAP or Teradata.

Now let us get to the quotes from the motion.

SAP Changing the Topic on the Teradata’s Complaint

“It (the complaint) also made antitrust allegations claiming SAP had attempted to edge Teradata out of the market by locking customers into its tech, noting the German giant’s ERP suite S/4HANA can only run on HANA.

However, SAP slammed these claims in a motion to have the case dismissed for once and for all, which was filed with the District Court of Northern California at the end of last month.

It argued the joint venture, known as the Bridge Project, started because Teradata “had a limited customer base” and wanted to appeal to SAP’s users – but SAP painted Teradata’s push as wildly unsuccessful, saying that just one customer signed up.”

This does not address the heart of Teradata’s complaint. When SAP partners with a vendor, it is never (from SAP’s perspective) to improve that vendor’s ability to sell into SAP’s customers. SAP uses partnerships to neuter competitors and to copy intellectual property, and it normally works against the competing vendor’s interests. As we covered in the article How SAP’s Partnership Agreement Blocks Vendors from Fighting Indirect Access, partnerships with SAP have helped to keep competing vendors from publicly complaining about indirect access.

This is, of course, not to deny that Teradata wanted to appeal to SAP’s users/customers. They certainly did. That is always the motivation for vendors to engage in partnerships with SAP. However, Teradata had been doing this for decades before SAP introduced HANA and began deliberately blocking out other database vendors. Teradata’s complaint is that SAP effectively blocked them out of accounts that they shared by using HANA, and the restrictions around HANA, to do so. We covered this topic in the article The HANA Police.

Therefore, in this argument, SAP is attempting misdirection.

HANA is Innovative?

“SAP had been working on its own database product for years before that deal, it said, and branded “the assertion that HANA is the result of anything but SAP’s technological innovation, investment, and development is factually groundless”.

Teradata was only bringing the lawsuit because it has “fallen behind” the competition, SAP claimed.”

SAP did not “work on its own database product for years before the deal.” SAP had several databases for years, and they also acquired Sybase, but those are not related to this topic. What ended up becoming HANA were two small acquisitions purchased roughly a year before HANA was released. We covered in the article Did Hasso Plattner and His Ph.D. Students Invent HANA? that while SAP falsified a story around Hasso Plattner and his students creating HANA from scratch, the supporting technologies for HANA were purchased with the intent of making them into HANA. SAP’s big addition to the design was to remove aggregates and indexes. Neither Hasso Plattner, nor Vishal Sikka, nor Hasso Plattner’s Ph.D. students ever contributed anything that could be called intellectual property to the exercise.

How do we know?

We analyzed what was claimed by Hasso Plattner in his books and in the SAP marketing/sales material where these contentions were made.

Hasso Plattner’s books aren’t so much books as marketing pitches. Riddled with exaggerations and inaccuracies, part of what Hasso Plattner’s books do is create a narrative where Hasso and SAP created some superlative innovation in column-oriented databases. None of Hasso’s claims regarding innovation hold up to scrutiny. All of Hasso’s books (four in total) have one purpose: not to inform, but to sell HANA.

There is little doubt that Teradata had superior database knowledge and that SAP did seek to learn from Teradata and to use the partnership to do so. Furthermore, SAP has a history of doing exactly that with other software vendors. SAP’s xApp program was really an extensive competitive intelligence gathering operation designed to extract IP from vendors so that it could be placed into SAP’s products. We covered the xApp program in 2010 in the article Its Time for the xApp Program to End.

HANA’s design is highly problematic and cannot meet SAP’s statements about it — except in analytics, where it is only better than older versions of competitive databases and only when using far larger hardware footprints as covered in How Much of HANA’s Performance is Hardware? SAP’s statements about HANA’s superiority are false.

The Mystery of HANA’s Lack of Use Outside of SAP Accounts

HANA is not purchased outside of SAP accounts; it is only purchased by accounts controlled by SAP where the IT customers failed to perform their research into SAP’s claims. If the outlandish claims around HANA are true, why aren’t non-SAP customers using it? No other database fits this profile.

  • Oracle sells databases to everyone not just to customers that buy Oracle’s applications and where they have account control.
  • SQL Server is found everywhere, not only on accounts where Microsoft sells their ERP system.

Bill McDermott stated that HANA works “100,000 times faster than any competing technology.” If that is true, why do only SAP customers buy it?

Teradata’s IP Puffery

Throughout the original complaint, Teradata overstates its intellectual property, implying that they have some secret sauce no one else has. Designs similar to HANA are all over the place. AWS has Redshift, which is similar in design to HANA. And both Google Cloud and AWS have Redis, which is also similar to HANA (although in a different dimension). Reading Teradata’s complaint is symptomatic of commercial software companies perpetually overstating how unique their software is. However, the motivation is clear: declarations of uniqueness and innovation are known to correlate positively with commercial software sales. Teradata’s complaint also exclaims how employees are made to sign NDAs so that Teradata’s technology secrets are not distributed outside of Teradata, but neglects to mention how much those same employees add to Teradata’s IP. Apparently, by inference, all of the Teradata IP was created by executives, and not employees. And where did Teradata originally develop its database from? That is right, from using database concepts that were in the public domain.

As with pharmaceutical companies, which commercialize research that was performed by universities and funded by taxpayers through the National Institutes of Health, as soon as a software vendor wants to sell software, the public domain very conveniently recedes into the background, and the narrative of “their IP” is wheeled out front and center.

Big Money Equals More IP Protection?

As readers can tell, we find the IP theft argument made by Teradata to be the least persuasive part of their complaint. Other vendors have far greater claims regarding SAP stealing their IP than does Teradata. But Teradata is a rich software vendor and has the money to bring a case like this. Therefore, their IP concerns are considered relevant, whereas a smaller software vendor’s IP concerns are considered less relevant (perhaps irrelevant?).

Teradata Has Fallen Behind… SAP’s Marketing Department?

SAP states that Teradata has “fallen behind” SAP. However, in technical circles, SAP is still not a respected database vendor. Teradata, although they are known to charge far too much for what they offer and to overpromise, are technically respected. The only place that Teradata has fallen behind SAP in databases is in marketing.

Teradata Cannot Compete with S/4HANA?

SAP goes on to make an assertion so absurd that SAP must believe the judge will make zero effort to fact-check the statement.

“Teradata has not been able to compete effectively with S/4HANA because it only focuses on its flagship analytical database and has failed to offer innovative and relevant compelling products,” the filing stated.

Teradata does not compete with S/4HANA. They compete with HANA.

The reason Teradata has not been able to compete in SAP customers with S/4HANA is that SAP made it a requirement that HANA only copy data to a second instance of HANA. This made Teradata uncompetitive, as it would massively increase the cost (HANA is an exorbitant database in its TCO, which we estimated in the Brightwork Study into SAP HANA’s TCO). This is not merely a Teradata issue; SAP is using these rules against all of its database competitors and using them against SAP customers. Reports of these abuses come in to us from different places around the world.

Therefore, SAP’s statement about failing to offer innovative and “relevant competing products” rings hollow. This is particularly true since HANA is not an innovative product, as we covered in Did SAP Simply Reinvent the Wheel with HANA.

SAP backward engineered other databases and combined this with its acquisitions of other database components. To hide this backward engineering, and to seem innovative, SAP has renamed items that already had generally accepted names. For instance, what SAP calls “code pushdown” is simply the same old stored procedure, as we covered in How Accurate are SAP’s Arguments on Code Pushdown and CDSs.
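
To illustrate the point that “code pushdown” is simply the long-standing practice of executing logic in the database rather than in the application, below is a minimal Python sketch using the generic DB-API. The connection, table, and column names are hypothetical; the contrast between the two functions illustrates the concept, not SAP’s actual implementation.

# Contrast between application-side processing and "pushed down" processing.
# Connection, table, and column names are hypothetical.

def total_open_orders_app_side(conn):
    """Pull every row into the application and sum there (no pushdown)."""
    cur = conn.cursor()
    cur.execute("SELECT amount FROM orders WHERE status = 'OPEN'")
    return sum(row[0] for row in cur.fetchall())

def total_open_orders_pushed_down(conn):
    """Let the database do the work, as a stored procedure or aggregate would."""
    cur = conn.cursor()
    cur.execute("SELECT SUM(amount) FROM orders WHERE status = 'OPEN'")
    return cur.fetchone()[0]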

Teradata Must Develop an ERP System to Compete?

SAP’s sentence about Teradata focusing only on its “flagship analytical database” contains an important assumption that should be fleshed out by the judge during the case. The assumption made clear by this statement is that Teradata should not offer only analytical/database products to compete with SAP, but needs to develop its own ERP system.

This fits within the construct that SAP finds appealing, which is that the ERP vendor should control the entire account. And it is an inherently anti-competitive assumption. What is most curious is that SAP does not even appear to realize how this exposes them as monopolistic in their thought processes. That is not supposed to be the assumption of ERP systems. ERP vendors are entitled to offer the customer more products, but selling the ERP system to a customer does not entitle that vendor to all of the IT spend of that company.

SAP Lacks Power in Its Own Customers?

One has to really stand in awe of SAP’s next proposal to the judge. SAP would like the judge to think that SAP lacks influence in……SAP accounts.

“SAP said Teradata’s allegations that it was monopolising the enterprise data analytics and warehousing market also fell flat, arguing it had failed to even identify SAP’s power in that market.

“The [complaint] alleges nothing more than that Teradata now has to compete in its favored marketplace,” SAP said.”

Here SAP’s attorneys try another sleight of hand. Rather than addressing the actual issue, SAP’s (what must be very highly paid) attorneys prefer to change the subject to see if the judge will notice.

Can judges be hypnotized? If so, SAP has a chance with this argument. 

Teradata’s allegation is that SAP is blocking them out of SAP accounts and that this is anti-competitive. SAP has created false technical proposals, including the incredibly bizarre restriction that HANA data must be copied only to HANA and not to Teradata. I have discussed these limitations with people with decades of database experience, and none of us can make any sense of the restrictions. They are unprecedented and designed merely to capture market share. Those are real impediments to Teradata, and they are meant to be.

Furthermore, these restrictions are costing SAP customers in a major way. SAP wants customers to upgrade to S/4HANA, which comes with HANA, and as soon as they do, they will find themselves subject to all manner of restrictions that did not exist with the previous database they were using (Oracle, DB2, SQL Server). SAP plans to use these restrictions to push out from ERP, making HANA mandatory and “making the customer’s choice for them.”

Teradata need not identify SAP’s “power in the analytics market,” as SAP has enormous and undisputed power in its clients. Anyone who has worked in SAP consulting knows this. Those clients previously were happy to use Teradata and SAP side by side, and did so for many years. But SAP, through these restrictions, made it difficult for Teradata to continue to do business in SAP accounts. In fact, according to the Teradata complaint, many of the customers they shared with SAP gave them ultimatums that they must restore the previous levels of interoperability with SAP, or the customers would leave them. This is quite believable, as SAP greatly reduced the value of Teradata in SAP accounts by making the integration to Teradata so much more expensive.

The entirety of SAP’s restrictive policies is designed to injure competitors and to absorb more income from customers. SAP is in a particularly weak position here now that all of their claims regarding HANA’s superiority have been pierced, as we covered in Articles that Exaggerate HANA’s Benefits and How to Deflect That You Were Wrong About HANA.

S/4HANA and HANA are the Same Product?

“Regarding antitrust claims, SAP said Teradata “does not plausibly allege that SAP coerces its customers into purchasing HANA”. It added assertions that S/4HANA unlawfully ties HANA to ERP software are misguided, as they aren’t separate products.

Rather, it is one integrated product sold to customers as so, compared to separate ERP and database wares.”

SAP’s attorneys should have checked this with the technical resources at SAP because these two paragraphs are unsupportable and make it plain that the attorneys mean to trick the judge.

First S/4HANA is unlawfully tied to HANA because…

  • a) There is no technical reason to restrict S/4HANA to HANA. The evidence is that HANA underperforms the competing database alternatives, as we covered in What is the Actual Performance of HANA?, and…
  • b) Products that are tied together in order to block out competitors are illegal under the tying arrangement clause of US antitrust law. This is the exact clause of our antitrust law used by the DOJ to win a judgment against Microsoft back in the 1990s.

Something else will be difficult for SAP to explain: how are an application like S/4HANA and a database like HANA a single integrated product? Can SAP name another ERP system that is “integrated as a product with its database”? Here is another question: if S/4HANA and HANA are the same product, why are they priced separately and listed as different products in the SAP price list? A third question: is HANA now integrated with BW also? As BW can be deployed on HANA, they must also be a single fused product!

Teradata’s Real Complaint is SAP Would Not Integrate with Teradata’s DB?

“Teradata’s real complaint is that SAP chose to offer this integrated system with HANA, rather than integrating with Teradata’s database; the antitrust laws, however, are designed to prevent injury to competition, rather than injury to competitors,” SAP said.”

This is very strange wording by SAP. The issue SAP is hoping the judge will be confused by is that the restrictions are not technical. Teradata has been integrating with (often Oracle) databases at customers with SAP applications for decades. The issue is not technical; it is how SAP set up the charges and used indirect access to cut off its database from being accessed by Teradata. Indirect access is a violation of the tying arrangement discussed previously and covered in the article SAP’s Indirect Access Violates US Anti Trust Law. Notice Teradata’s use of the specific term tying arrangement in this quote from the complaint.

“On information and belief, SAP has also begun significantly restricting Teradata’s ability to access customers’ SAP ERP data stored in HANA (which is necessary for the functional use of Teradata’s EDAW products), thereby ensuring the success of its tying arrangement in coercing customers to adopt HANA.”

The second part of the paragraph from the SAP quotation, regarding being “designed to prevent injury to competition, rather than injury to competitors,” seems to be some type of wordplay. This would be like saying laws against murder are designed to protect society in general but are not designed to protect any one particular person from murder.

Teradata is being blocked because of SAP’s unwarranted tying arrangement between S/4HANA and HANA. Teradata is a competitor, and SAP is not competing with them by offering customers the choice between HANA and Teradata. SAP is using the SAP ERP system, previous versions of which did not have these restrictions, to impose them.

This is stated in the Teradata complaint.

“Moreover, and on information and belief, SAP has begun significantly restricting Teradata’s ability to access customers’ SAP-derived data. Through this conduct, SAP has deliberately sought to exploit its large, existing ERP customer base to the detriment of Teradata and its customers. Given the extremely high costs of switching ERP providers, SAP’s ERP customers are effectively locked-in to using SAP’s ERP Applications, and SAP is now attempting to lock them into using only HANA in the EDAW market as well.”

This is the exact reason we have argued against ERP systems; they are continually used to take control of the customer’s IT spend through account control, as covered in the article How ERP Systems Were a Trojan Horse. 

The strategy by SAP’s attorneys here is called “muddying the water.”

SAP Requires More Explanation as to Inefficiencies?

“For instance, SAP said the US-based Teradata was vague about the “inefficiencies” it claims to have identified in SAP’s systems it offered; failed to precisely identify what trade secrets were stolen; and failed to allege that SAP breached the contracts drawn up for the Bridge Project.

“To the contrary, much of what the [complaint] alleges as purported misconduct (which SAP denies) is expressly permitted by the relevant provisions of the Bridge Project Agreements,” the ERP giant said.”

Here we agree with SAP on the trade secret allegation.

Conclusion

What can be taken from this motion to dismiss? The arguments related to trade secrets seem correct. SAP most likely did benefit from Teradata’s advice and expertise related to how to improve HANA. SAP would have naturally tried to learn things from Teradata. SAP was very unsophisticated regarding databases, particularly back when they were cobbling together HANA from acquisitions and from ideas gleaned from other database vendors. But this backward engineering was not restricted to Teradata. And we have yet to see evidence that Teradata provided a substantial portion of the IP that eventually became HANA.

Furthermore, HANA is not a competitive product. Therefore, whatever SAP may have taken from Teradata was either not particularly good, or SAP screwed up the implementation of the concept. HANA’s power comes from its association with SAP, not from HANA’s capabilities as a product.

SAP’s arguments against Teradata’s claims regarding anti-competitive behavior go beyond anything reasonable, dance in the area of being insulting, and make one wonder about the attorneys used by SAP. Any person who made these arguments to me would so ruin their credibility that I would never listen to them again.

The impression given is that SAP hopes to find a weak judge who would believe such arguments. A motion to dismiss is automatic, but if this is what SAP came up with (assuming there was no miscommunication between SAP and its attorneys), this case appears to be a substantial risk for SAP. Teradata is essentially asking for SAP to change the way it does business. Teradata’s request is entirely consistent with demanding that SAP follow the normal rules of competition. SAP is asking that the US courts allow them to use tricks and deception to push vendors out of “their” customers, whom SAP claims to own because it has sold them an ERP system. Teradata is asking the courts to bar some of SAP’s behaviors, as covered in the following quotation from the Teradata complaint.

“Teradata therefore is entitled to an injunction barring SAP’s illegal conduct, monetary damages, and all other legal and equitable relief available under law and which the court may deem proper.”

US courts are not the best place for antitrust enforcement. One question might be, why is the FTC not investigating SAP? The exact issues listed by Teradata in their complaint have been reported to us for years. But as the FTC is no longer interested in enforcing antitrust law, this is Teradata’s only option. The US economy is increasingly dominated by larger and larger entities, something which reduces competition and depresses wages.

Other vendors should show an interest in this case because SAP is claiming the vendor selling the ERP system has the right to push the other vendors from the account. If the US courts allow them to do it to Teradata, which is a vendor with large amounts of resources, they can do it to anyone.


References

https://www.theregister.co.uk/2018/09/03/sap_response_teradata_lawsuit/

https://www.businesstoday.in/current/corporate/day-after-teradata-filed-ip-theft-suit-against-sap-vishal-sikka-terms-charges-baseless-outrageous/story/279442.html

https://assets.teradata.com/News/2018/2018-06-19-Complaint.pdf

How HANA Takes 30 to 40 Times the Memory of Other Databases

Executive Summary

  • HANA consumes enormous amounts of memory compared to competing databases.
  • HANA has continual timeout issues that are partly due to HANA’s problems managing memory.

Introduction to HANA’s Problems with Managing Memory

SAP’s database competitors, like Oracle, IBM, and Microsoft, have internal groups that focus on memory optimization. Memory optimization (in databases) is how tables are moved into and out of memory. SAP tries to push more tables into memory than are necessary (though not as many tables as they state they do, that is, not “all the tables”). However, SAP does not have the memory optimization capabilities of the other database vendors.
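
As a toy sketch of what memory optimization means here, the Python snippet below keeps only the most recently used tables in memory and evicts the least recently used one when a memory budget is exceeded. This is a simplification of what database buffer managers do, not SAP’s or any other vendor’s actual implementation; all names and sizes are hypothetical.

from collections import OrderedDict

class TableCache:
    """Toy LRU buffer: keep hot tables in memory, evict cold ones."""

    def __init__(self, memory_budget_mb, load_table):
        self.memory_budget_mb = memory_budget_mb
        self.load_table = load_table           # loads a table from disk
        self.resident = OrderedDict()          # table name -> (data, size_mb)
        self.used_mb = 0

    def get(self, table_name):
        if table_name in self.resident:
            self.resident.move_to_end(table_name)   # mark as recently used
            return self.resident[table_name][0]
        data, size_mb = self.load_table(table_name)
        # Evict least recently used tables until the new one fits the budget.
        while self.resident and self.used_mb + size_mb > self.memory_budget_mb:
            _, (_, evicted_mb) = self.resident.popitem(last=False)
            self.used_mb -= evicted_mb
        self.resident[table_name] = (data, size_mb)
        self.used_mb += size_mb
        return data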

High Memory Consumption with HANA

HANA’s high memory consumption is acknowledged in SAP’s HANA Troubleshooting and Performance Analysis Guide, where SAP states the following.

“You observe that the amount of memory allocated by the SAP HANA database is higher than expected. The following alerts indicate issues with high memory usage.”

And…

“Issues with overall system performance can be caused by a number of very different root causes. Typical reasons for a slow system are resource shortages of CPU, memory, disk I/O and, for distributed systems, network performance.”

It is odd for SAP to observe shortages of resources, because HANA has the highest hardware specification of any competing database, and the comparison is not even close. This is pointed out again by SAP regarding memory.

“If a detailed analysis of the SAP HANA memory consumption didn’t reveal any root cause of increased memory requirements it is possible that the available memory is not sufficient for the current utilization of the SAP HANA database.”

Conclusion

The same question arises: with so much memory usually included in the initial sizing, why is undersized memory such an issue with HANA?

Brightwork Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.


The Risk Estimation Book

 

Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project’s risk is a poor way to make decisions about either projects or software. But that’s the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies commonly used by most companies for mitigating software project risk–and learn why these plans don’t work–and then acquire practical and realistic strategies that will help you to maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model

How to Understand HANA’s High CPU Consumption

Executive Summary

  • HANA has high CPU consumption due to HANA’s design.
  • The CPU consumption is explained by SAP, but we review whether the explanation makes sense.

Introduction to HANA CPU Consumption

A second major issue with HANA, in addition to memory overconsumption, is CPU consumption. When so much data is loaded into memory, it causes the CPU to spike. This is why CPU monitoring, along with memory monitoring, is considered so necessary for effectively using HANA. SAP offers a peculiar explanation for CPU utilization.

“Note that a proper CPU utilization is actually desired behavior for SAP HANA, so this should be nothing to worry about unless the CPU becomes the bottleneck. SAP HANA is optimized to consume all memory and CPU available. More concretely, the software will parallelize queries as much as possible to provide optimal performance. So if the CPU usage is near 100% for query execution, it does not always mean there is an issue. It also does not automatically indicate a performance issue”
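
To illustrate the intra-query parallelism described in the quote, here is a minimal Python sketch that splits a column scan across all available CPU cores, which is why near-100 percent CPU during query execution is not, by itself, a sign of trouble. The data and the aggregation are invented for the example and do not represent HANA’s internals.

from concurrent.futures import ProcessPoolExecutor
from os import cpu_count

def scan_partition(partition):
    """Aggregate one partition of a column; runs on its own core."""
    return sum(value for value in partition if value > 0)

def parallel_scan(column):
    """Split the column across all cores, then combine the partial results."""
    workers = cpu_count() or 1
    chunk = max(1, len(column) // workers)
    partitions = [column[i:i + chunk] for i in range(0, len(column), chunk)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(scan_partition, partitions))

if __name__ == "__main__":
    print(parallel_scan(list(range(-1000, 1000000))))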

Does This Statement Make Sense?

This entire statement is unusual, and it does not explain why HANA times out. If an application or database is continually consuming all resources, then the likelihood of timeouts apparently increases. This paragraph seems to attempt to explain away the consumption of hardware resources by HANA that, in fact, should be a concern to administrators. This statement is also inconsistent with other explanations about HANA’s use of memory, as can be seen from the SAP graphic below.

(SAP graphics: notice the pool of free memory in each.)

This is also contradicted by the following statement.

“As mentioned, SAP HANA pre-allocates and manages its own memory pool, used for storing in-memory tables, for thread stacks, and for temporary results and other system data structures. When more memory is required for table growth or temporary computations, the SAP HANA memory manager obtains it from the pool. When the pool cannot satisfy the request, the memory manager will increase the pool size by requesting more memory from the operating system, up to a predefined Allocation Limit. By default, the allocation limit is set to 90% of the first 64 GB of physical memory on the host plus 97% of each further GB. You can see the allocation limit on the Overview tab of the Administration perspective of the SAP HANA studio, or view it with SQL. This can be reviewed by the following SQL statement:

select HOST, round(ALLOCATION_LIMIT/(1024*1024*1024), 2) as "Allocation Limit GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION”
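
For completeness, here is a hedged Python sketch of running the quoted query through SAP’s hdbcli driver; the host, port, and credentials are placeholders.

from hdbcli import dbapi

# Placeholder connection details for a HANA system.
conn = dbapi.connect(address="hana-host", port=30015,
                     user="MONITOR_USER", password="change-me")

cursor = conn.cursor()
cursor.execute(
    "SELECT HOST, ROUND(ALLOCATION_LIMIT/(1024*1024*1024), 2) "
    "AS \"Allocation Limit GB\" "
    "FROM PUBLIC.M_HOST_RESOURCE_UTILIZATION"
)
for host, limit_gb in cursor.fetchall():
    print(host, limit_gb)

cursor.close()
conn.close()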

Brightwork Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.



Risk Estimation and Calculation

See our free project risk estimators, which are available per application. They provide a method of risk analysis that is not available from other sources.

HANA’s Time in the Sun Has Finally Come to an End

Executive Summary

  • SAP has been forced to move HANA into the background of its marketing focus for various reasons.
  • Trends shifted away from proprietary databases towards open source databases.
  • Marketing claims about HANA being groundbreaking turned out to be false. HANA had no offerings that gave it an advantage over competing solutions and it proved to have the highest TCO among its competitors.

Introduction: The Real Story on SAP’s HANA Focus

In this article, we will cover how SAP has finally moved HANA into the background of its marketing focus.

SAP’s Marketing Transition Away from HANA

In 2011, HANA became the primary marketing tentpole for SAP, replacing NetWeaver, which had been the primary focus up to that time.

The official date when HANA was displaced from this position in SAP’s marketing orbit can be marked as June 5th, 2018, the first day of SAPPHIRE 2018. This is because HANA was noticeably less prominent at SAPPHIRE 2018 than it had been since its introduction.

SAP Thought it Had Cracked Oracle’s Code

SAP’s obsession with HANA reached a fever pitch in the 2011 to 2018 timespan. SAP had actually (I believe) convinced itself that it had done something that it had not come close to doing, which is come up with the “killer app” of the database market. SAP thought that combining a column-oriented database (partially column-oriented, it later turned out) with more memory had never been executed the way SAP had done it. As is well known, SAP acquired all of the technology for this design, as my analysis, partially documented in the article Did Hasso Plattner and His Ph.D. Students Invent HANA?, shows.

SAP’s contribution to the combined analytics processing and transaction processing database was to market it. This powerful marketing by SAP caused Oracle and IBM to move resources into developing such functionality in their databases — a move which I think was a misallocation of resources. Research by Bloor Research, which I analyzed in the article How Accurate Was Bloor on Oracle In-Memory, covered the extra overhead of Oracle’s in-memory offering.

SAP Pays Forrester to Make a New Database Category

SAP paid good money to try to make mixed OLTP and OLAP from one database “a thing,” going so far as to pay Forrester to create a new faux database category called the “transanalytical database.”

And surprise, surprise, HANA was declared a leader in this new database category! We covered this in the article What is a Transanalytical Database? (It is a new database category, specifically for those that don’t know much about databases.)

This is something that Bill McDermott crowed about on the Q4 2017 earnings call but failed to point out that SAP had paid Forrester for this study.

One wonders how much market cap was added because of this report, and how much that added to the value of the stock options exercised by the top executives at SAP. Even if it were a very small number of percentage points, it would still make whatever SAP paid Forrester an absolute steal.

The Trend Away from HANA

Databases have become increasingly diversified since HANA was first introduced, and because of IaaS providers like AWS and Azure, it is now increasingly easy to spin up multiple database types and test them. Moreover, the biggest trend is not toward proprietary databases but toward open source databases. Since HANA’s initial introduction, the trend toward open source databases has only grown, offering more database types than ever before.

There is now even a database, CockroachDB, focused on horizontal scalability and disaster recovery. There is more opportunity than ever before to access specialized databases with different characteristics. And due to IaaS providers, these databases are far simpler to provision and test than in the past. Open source databases can be spun up, tested, and distributed like never before. Yet SAP presents to customers that there are only two database processing types (analytics and transactions) they need to worry about, and that HANA covers both. SAP could not have picked a more incongruous message and strategy around databases if it had set out to do so from the beginning of HANA’s development.

The best transaction processing database is a row-oriented database. The best analytic database is a database like Redis or Exadata. If one tries to get both out of a single database, compromises quickly ensue, and maintenance costs go up.
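
A toy Python sketch of why this is a trade-off: the same data laid out row-wise (good for fetching one complete record, as in transaction processing) and column-wise (good for aggregating one attribute across many records, as in analytics). This is a conceptual illustration, not any vendor’s storage engine.

# Row-oriented layout: each record is stored together.
rows = [
    {"order_id": 1, "customer": "A", "amount": 100.0},
    {"order_id": 2, "customer": "B", "amount": 250.0},
    {"order_id": 3, "customer": "A", "amount": 75.0},
]

# Column-oriented layout: each attribute is stored together.
columns = {
    "order_id": [1, 2, 3],
    "customer": ["A", "B", "A"],
    "amount": [100.0, 250.0, 75.0],
}

# Transactional access: fetch one complete order -- natural for rows.
order = next(r for r in rows if r["order_id"] == 2)

# Analytical access: total one attribute across all records -- natural for columns.
total = sum(columns["amount"])

print(order, total)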

What the HANA Experiment Illustrated

For many years after HANA was introduced, quite a few people who had little experience with databases told me (and anyone who would listen, actually) how earth-shattering HANA would be. These bold statements were brought forth by people who often had no experience with databases. It became evident to me through many conversations that these people were often simply repeating what they had heard from SAP. But the problem was that SAP made statements about HANA and databases in general that were in error.

What this taught me was that a sizable component of the population in enterprise software is willing to not only discuss things but to be highly confident in presenting claims that they have no way of knowing are correct. It means that a large component of those who work in SAP are faking knowledge.

The proposals I was subjected to were so off the wall that they needed their own laugh track. I had partners from Deloitte and Capgemini and several other firms, people who would not know the definition of a database index, who have not been in anything but the Microsoft Office suite in several decades, telling me with great certitude how HANA would change everything.

“You see Shaun, once columnar in-memory databases are used for transaction systems, the entire BI system goes away.”

Many of the statements by SAP executives or by SAP consulting partners seem very much like the Jimmy Kimmel Live segment called Lie Witness News. I could describe it but see for yourself.

These people have an opinion on the US invasion of a fictional country, without asking “where is Wakanda?” If you ask these same people “do you think SAP’s HANA database is very good and better than all other databases,” what answer would we get? 

Time to Admit That Wakanda Does Not Exist (and That HANA is Not Groundbreaking)?

The issue is a combination of dishonesty and the assumption that something must be true because it is presented and proposed by a large entity (in this case, the US military being in Wakanda). This happens all the time, and the people who propose things without checking are often quite experienced. This is, of course, doubly a problem when consulting companies see jumping on whatever new marketing freight train SAP has as critical to meeting a quota.

The upshot is that a very large number of people who repeated things about HANA, without either having the domain expertise to know or bothering to check, should be highly embarrassed at this point. And these are people in a position to advise companies, which is where the term “the blind leading the blind” seems most appropriate. It is important for SAP customers to know that if Capgemini, Deloitte, Accenture, Infosys, etc. were trying to get you to purchase and implement HANA, they had no idea what they were talking about, or did not care what was true. They were, as required by the SAP consulting partnership agreement, and driven by the sales quota incentives they have, repeating what SAP told them. And for nearly everything SAP proposed about HANA, they simply made it up. SAP not only made up the benefits offered by HANA, but they made up a fictitious backstory of how HANA was “invented,” as we covered in the article Did Hasso Plattner and His Ph.D. Students Invent HANA?

The rise of HANA led to many people with only a cursory understanding of databases talking a lot about databases. Naturally, their statements, promoted by SAP, had very low accuracy. With HANA moving toward the back of SAP’s enormous deck of products, the topic of databases can now shrink back down closer to the group of people who know something about them.

That is a positive development.

HANA Was Going to Change the World?

HANA was supposed to change the world. However, what did it change?

Take just one example: the idea of loading the entire database into memory. If one looks at the vendors with far more experience in databases than SAP, no one does this. The reason is that it is wasteful when only a small percentage of the tables are involved in the activity. This is why each database vendor has a group that focuses on memory optimization. And it was eventually determined that although SAP says it loads the entire database into memory, it does not. Memory optimization still rules the day. This is just one example; I could list more, but outside of marketing, HANA did not change much, and all or nearly all of its projections turned out not to be true.

The Final Outcome of HANA

Customers that implemented HANA now have a higher TCO database, a far buggier database, and they have had to run more databases in parallel. After years of analyzing this topic, I can find no argument for replacing existing databases with HANA, or for beginning new database investments by selecting HANA.

Let us traverse the logic, because it seems to be tricky.

  1. Does HANA perform better in analytics processing than previous versions of Oracle or DB2 that did not have column capabilities and ran on older hardware? Yes.*
  2. Does HANA outperform, or have any other associative capability that gives it an advantage against, the major competing offerings? No.
  3. Does HANA have the most bugs and highest TCO of any of the offerings it competes against? Yes
  4. Is HANA the most expensive of all the offerings it competes against? Yes.

*(SAP has routinely tried to get clients to compare HANA on new hardware against Oracle and DB2 on old and far less expensive hardware; this topic is covered in the article How Much of HANA’s Performance is Hardware?)

Where are HANA’s Sales Outside of Companies that Already Run SAP Applications?

Some explanation is needed for why HANA is not purchased outside of SAP accounts. The sales pitch was that HANA was so much more advanced than competing offerings that not only S/4HANA but other SAP applications could not work to their full extent without it. It was going to be so easy to develop on the HANA Cloud Platform (now SAP Cloud Platform) that developers outside of SAP were going to flock to it because of its amazing capabilities. Right? SAP said all these things and many more.

However, if all of this was true, shouldn’t SAP be able to sell HANA to customers that don’t use SAP applications? Vishal Sikka stated that HANA was instrumental to a wide variety of startups.

Where is that market?

The answer is nowhere. That should give us pause regarding SAP’s claims.

Is SAP Dedicated to Breaking the “Dependency” on Oracle?

SAP justified all of the exaggerations in part by convincing themselves they were going to help their customers “break” their dependency on Oracle (and to a lesser degree DB2). However, one has to question how dedicated one is to “breaking a dependency” when the desired outcome is simply to switch the dependency to SAP. When a customer buys from a competitor, that supposedly is a “dependency.” However, when a customer buys the same item from you, that is a “relationship.” This sounds a bit like the saying that a person can be either a “terrorist” or a “freedom fighter,” depending upon one’s vantage point.

The Logic for the Transition

If we look at the outcome, HANA is not a growing database.

HANA has not grown in popularity since Feb 2017. Moreover, it has not increased significantly since November of 2015. And let us recall, this is a database with a huge marketing push.

It cost SAP significantly to redirect its marketing budget to emphasize HANA over other things it could be emphasizing. In fact, I have concluded that most of HANA’s growth was simply due to its connection to SAP. If HANA had to acquire customers as a startup, it would have ceased to exist as a product a long time ago. The product itself is just not that good.

A second point is that HANA was enormously exaggerated in terms of its capabilities.

Point three is that some HANA purchases were made to satisfy indirect access claims; that is, they were coerced purchases. I am still waiting for a Wall Street analyst to ask the question:

“How much of the S/4HANA and HANA licenses are related to indirect access claims?”

Apparently, Wall Street analysts have a process where they keep away from actually interesting questions.

Diminishing Returns for Focusing on HANA

SAP was not getting a return from allocating so many of its promotional resources to showcasing HANA. Furthermore, customers were becoming “HANA resistant.” The over-the-top HANA emphasis by SAP had become a point of contention and often ridicule at customers (which I learned through my client interactions).

All of this would not have happened without Hasso Plattner. Hasso bet big on HANA, and Hasso was wrong. HANA was allowed to be promoted on tenuous grounds because its champion was Hasso Plattner.

In Hasso Plattner’s book The In-Memory Revolution, he stated the following:

“At SAP, ideas such as zero response time database would not have been widely accepted. At a university, you can dream, at least for a while. As long as you can produce meaningful papers, things are basically alright.”

The problem? Zero latency has not been achieved by HANA to this day. It was never a reasonable goal. And Hasso illuminated something else with this quote: university students are even less willing to push back on Hasso than career database professionals are. A fundamental reason is that they have far less experience.

The True Outcome of HANA

SAP is now stuck with a buggy database unable to come close to its performance promises, and one which has influenced SAP’s development in other areas in a wasteful manner. For example, many of the changes made to S/4HANA to accommodate HANA turn out not to have been necessary and have extended the S/4HANA development timeline. Secondly, the requirement that S/4HANA only use HANA has restricted the uptake of S/4HANA and will continue to do so. S/4HANA could be much farther along overall if it were just “S/4” and ran on AnyDB.

Now companies that purchased HANA will try to justify the purchase (no one likes to admit they got bamboozled) because BW runs faster. However, these same executive decision makers entirely leave out the impact of HANA’s far more expensive hardware from the performance analysis. Companies that purchase HANA for BW do not test, or hire out tests, to determine how much of the performance benefit is due to hardware, that is, how a modern version of Oracle or DB2 would perform on the same hardware.

No, instead they tell me that BW performance improved over an eight-year-old version of Oracle or DB2 running on eight-year-old hardware that had far more disk than memory and cost a small fraction of the HANA hardware. So with BW, companies can hide HANA’s performance limitations (although not its maintenance overhead). However, HANA has many problems meeting customer expectations in the areas where SAP said performance would skyrocket, which include…well, everything but short SQL queries.

This means that SAP will be fighting fires on HANA performance for S/4HANA transaction processing and MRP processing for years. SAP would have none of these problems if it simply did what it had always done: allow the companies that really know databases to provide the database. As pointed out by a colleague:

“The far bigger threat and loss of income and account control to SAP is from IaaS/PaaS providers, not database vendors. “

Conclusion

SAP was never a database company, and now it looks like it will not be a significant one in the future. And there is nothing wrong with that. In retrospect, the money SAP spent on the Sybase acquisition and the other purchases (made very quietly) that ended up becoming HANA could have been invested to much better effect by simply fixing issues in ECC and upgrading product support.

HANA will still be there, but SAP’s marketing focus is moving on to other things. Right now it’s unclear exactly which it will choose. C/4HANA is the new kid in town. S/4HANA is still a centerpiece. SAP has so many products, toolkits, announcements, and concepts to promote that SAP marketing is a beehive of activity. However, the overhyping of HANA to promote S/4HANA has subsided. S/4HANA will now be more sold on its functionality (as ECC always was).

What the Future Holds for HANA

SAP has shifted to making the compatibility argument to customers, but not in public. Publicly, SAP says that the only application for which HANA is a requirement is S/4HANA. However, through its sales reps, SAP repeatedly makes the argument that its applications can only work as intended with HANA. We evaluate these compatibility arguments for clients and are always surprised by the new explanations that SAP comes up with to drive customers to HANA. The accuracy of these private statements to customers should never be taken at face value; they need to be evaluated on a case-by-case basis.

Brightwork Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.


References

TCO Book

Enterprise Software TCO: Calculating and Using Total Cost of Ownership for Decision Making

Getting to the Detail of TCO

One aspect of making a software purchasing decision is to compare the Total Cost of Ownership, or TCO, of the applications under consideration: what will the software cost you over its lifespan? But most companies don’t understand what dollar amounts to include in the TCO analysis or where to source these figures, or, if using TCO studies produced by consulting and IT analyst firms, how the TCO amounts were calculated and how to compare TCO across applications.

The Mechanics of TCO

Not only will this book help you appreciate the mechanics of TCO, but you will also gain insight as to the importance of TCO and understand how to strip away the biases and outside influences to make a real TCO comparison between applications.
By reading this book you will:
  • Understand why you need to look at TCO and not just ROI when making your purchasing decision.
  • Discover how an application, which at first glance may seem inexpensive when compared to its competition, could end up being more costly in the long run.
  • Gain an in-depth understanding of the cost categories to include in an accurate and complete TCO analysis.
  • Learn why ERP systems are not a significant investment, based on their TCO.
  • Find out how to recognize and avoid superficial, incomplete or incorrect TCO analyses that could negatively impact your software purchase decision.
  • Appreciate the importance and cost-effectiveness of a TCO audit.
  • Learn how SCM Focus can provide you with unbiased and well-researched TCO analyses to assist you in your software selection.
Chapters
  • Chapter 1:  Introduction
  • Chapter 2:  The Basics of TCO
  • Chapter 3:  The State of Enterprise TCO
  • Chapter 4:  ERP: The Multi-Billion Dollar TCO Analysis Failure
  • Chapter 5:  The TCO Method Used by Software Decisions
  • Chapter 6:  Using TCO for Better Decision Making

How Real is The Oracle Automated Database?

Executive Summary

  • Oracle made many ridiculous claims about the autonomous or self-driving database.
  • The reason for the creation of the autonomous database is because Oracle is losing business to the AWS RDS managed database service.
  • Oracle makes it sound as if AI in Oracle’s Automated Database is ARIIA from Eagle Eye.
  • Oracle upgrades are not free and upgrades have many complications.

Introduction

Oracle has been making great claims related to automation. They introduced something called the “Autonomous Database.” In this article, we will review the claims for the autonomous or automated database.

The Autonomous Database?

Let us begin by analyzing the claim being made in the name Oracle has given here. Autonomous means something that runs itself. It would mean that no human intervention is required to manage the automated database: not only would it not need an on-premises DBA, it would also not require management by Oracle or any other entity.

We have been working with databases for years, and we have yet to run into such a database. So it should first be established that this is an enormous claim that Oracle is making.

The Self Driving Database?

Another term used by Oracle in their literature is the term “self-driving.” This seems to imply the same thing as the term automated.

Larry Ellison delivers some preposterous quotes in his explanation of the Oracle Automated Database.

“For a long time people really looked at the promise of AI but it never quite delivered to its promise until very recently. With the advent of the latest version of AI, neural networks with machine learning, we are doing things that hitherto have been considered unimaginable by computers.”

Regarding Oracle, which is known as the most expensive database to maintain short of SAP HANA (which has enormous maintenance overhead), Ellison has this to say.

“On an Oracle database running at Amazon, will cost you 5 times what it costs you to run in the Oracle Cloud because it will take you 5 times the amount of computer to do the exact same thing. A Redshift database will cost 10 times more to do the same thing at Oracle Cloud.  And that is not counting the automation of the database function. That is not counting the downtime as Oracle Cloud has virtually no downtime.” 

The Popularity of the Term Automated Database

The following shows Google Trends’ measurement of the popularity of the search term “autonomous database.” Notice the spike in October of 2017. This was when Oracle began a marketing offensive around its autonomous database.

Let us review some of Oracle’s claims.

“Examples of automation Oracle said it would offer are automated data lake and data prep pipeline creation for data integration; automated data discovery and preparation, with automated analysis for key findings; and automation of identification and remediation of security issues in a developer’s code during application development.”

At OpenWorld in 2017, Larry Ellison claimed

“The new database uses artificial intelligence (AI) and machine learning. It’s fully autonomous, and it’s way better than AWS’s database, Ellison said.”

Understanding what AWS Is

Here it needs to be clarified that while AWS has introduced databases like Aurora and DynamoDB, AWS is primarily an IaaS/PaaS vendor. As such, much of AWS’s database revenue comes from managing databases it did not develop: everything from Oracle to SQL Server to open source databases.

So when Ellison says that Oracle’s new database is “way better than AWS’s database,” which database is Larry referring to? And better than which one? Remember that AWS offers managed Oracle. AWS’s RDS service provides a fully managed service for Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB.

For Oracle, AWS’s managed database service offers the following (a brief provisioning sketch follows the list):

  • Pre-configured Parameters
  • Monitoring and Metrics
  • DB Event Notifications
  • Software Patching
  • Provisioned IOPS (SSD)
  • Automated Backups
  • DB Snapshots
  • DB Instance Class
  • Storage and IOPS
  • Automatic Host Replacement
  • Multi-AZ Deployments
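
To make concrete what “managed” means here, the following is a minimal sketch of provisioning an RDS Oracle instance with boto3. The instance identifier, instance class, storage size, and credentials are placeholder assumptions; the BYOL license model shown is the one described in the AWS licensing text quoted further below.

```python
# A minimal sketch of provisioning a managed Oracle instance on Amazon RDS with
# boto3. The identifier, instance class, storage size, and credentials below are
# placeholder assumptions made for illustration.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-oracle-db",   # placeholder name
    Engine="oracle-ee",                         # Oracle Enterprise Edition on RDS
    LicenseModel="bring-your-own-license",      # BYOL: reuse an existing Oracle license
    DBInstanceClass="db.m5.large",              # compute/memory sizing
    AllocatedStorage=100,                       # GiB of storage
    MasterUsername="admin",
    MasterUserPassword="change-me-please",      # placeholder credential
    MultiAZ=True,                               # standby replica with automatic failover
    BackupRetentionPeriod=7,                    # automated daily backups kept for 7 days
)
# From here on, patching, backups, snapshots, monitoring, and host replacement
# (the bulleted features above) are handled by the service rather than by a DBA.
```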

So here Larry even seems to be saying that Oracle DB in the Oracle Cloud is better than Oracle DB managed by AWS. The real distinction between Oracle and AWS is between Oracle Cloud and AWS as platforms, not between Oracle DB and AWS’s database(s).

This perplexing claim is repeated by Steve Daheb of Oracle. In this video, Steve Daheb claims that Amazon databases like Redshift and Aurora are not open and cannot be ported to other IaaS providers. Steve Daheb seems to miss the fact that Redshift and Aurora also cannot be hosted on the Oracle Cloud. Secondly, for a company with very little cloud business as a percentage of revenues (roughly 16%), the discussion of cloud is out of proportion with Oracle’s business. Thirdly, the Oracle Automated Database only works (for some strange reason) if it is managed by Oracle: Oracle 18c is not autonomous if installed on premises, which is where the vast majority of Oracle databases reside.

Furthermore, AWS offers a BYOL, or bring-your-own-license, model. This means that whatever a company purchases from Oracle can be run on AWS.

“Bring Your Own License (BYOL): In this licensing model, you can use your existing Oracle Database licenses to run Oracle deployments on Amazon RDS. To run a DB Instance under the BYOL model, you must have the appropriate Oracle Database license (with Software Update License & Support) for the DB instance class and Oracle Database edition you wish to run. You must also follow Oracle’s policies for licensing Oracle Database software in the cloud computing environment. DB instances reside in the Amazon EC2 environment, and Oracle’s licensing policy for Amazon EC2 is located here.”

Things Oracle Claims AWS Managed DBs Can’t Do?

Ellison went on to say.

“This level of reliability will require Oracle to automatically tune, patch, and upgrade itself while the system is running, Ellison said, adding: “AWS can’t do any of this stuff.”

Again, the only reason that AWS could not do whatever Oracle DB can do is if Oracle does not release its newest DB (Oracle 18) to AWS.

But secondly, AWS already has a managed database service that is considered superior to the Oracle Cloud. So in fact, AWS has been “doing this stuff” for quite some time, but it has been doing it with a managed DB offering.

Ellison is not being merely somewhat inaccurate in this case or engaging in normal puffery; he is misrepresenting what AWS offers as well as misrepresenting what AWS does.

Mark Hurd Doubles Down on the Automated Database Inaccuracy

Now let us check Mark Hurd’s comment in the same vein.

“Oracle CEO Mark Hurd said his company’s database costs less because it automates more. He described AWS’ MySQL-based Aurora database and its open source version, Redshift, as “old fashion technologies.” Oracle’s new database, on the other hand, allows users to “push a button and load your data and you’re done.”

Let us say, for the sake of argument, that Aurora and Redshift are old-fashioned technologies, even though Aurora was developed only in the past few years. But we do not even need to address that issue; it can be left to the side.

Is Mark Hurd aware that AWS provides managed Oracle? This is known to everyone, and so it should have fallen into both his and Ellison’s frame of reference at this point. Why do Hurd and Ellison repeatedly speak as if AWS is primarily a database vendor rather than an IaaS/PaaS vendor? When one software vendor misrepresents the offering from another software vendor, there has to be a specific reason why.

To reiterate, AWS has offered a managed DB service for quite some time.

If Oracle’s new database allows users to push a button, load their data, and be done, why is the earth populated with so many Oracle DBAs? Virtually any database can load data with the push of a button; the question is the maintenance required after the data is loaded. Does Mark Hurd work with databases? How much are the technical people at Oracle sharing with Mark Hurd?

When Did Oracle Begin Emphasizing Automation?

It is also curious that Oracle only began talking about automation after they began losing business to AWS. Is that a coincidence or is there perhaps a deeper meaning there?

We think there might be. In fact, the entire automated database narrative seems to be a reaction to something very specific we will address further on.

AWS’s Automation Versus Oracle’s Explanation of the Automated Database

Oracle is ignoring that AWS also has automated features. See the following quotation from the AWS website describing AWS Systems Manager.

“Systems Manager Automation is an AWS-hosted service that simplifies common instance and system maintenance and deployment tasks. For example, you can use Automation as part of your change management process to keep your Amazon Machine Images (AMIs) up-to-date with the latest application build. Or, let’s say you want to create a backup of a database and upload it nightly to Amazon S3. With Automation, you can avoid deploying scripts and scheduling logic directly to the instance. Instead, you can run maintenance activities through Systems Manager Run Command and AWS Lambda steps orchestrated by the Automation service.”
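
For readers who want to see what this looks like in practice, here is a minimal sketch of starting one of these Automation runbooks with boto3. The document name and parameter values are illustrative assumptions rather than a verified production configuration.

```python
# A minimal sketch of starting a Systems Manager Automation runbook with boto3,
# in the spirit of the AMI-patching example in the quotation above. The document
# name and parameter values are illustrative assumptions, not a verified setup.
import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

response = ssm.start_automation_execution(
    DocumentName="AWS-UpdateLinuxAmi",          # an AWS-provided runbook for refreshing AMIs
    Parameters={
        "SourceAmiId": ["ami-0123456789abcdef0"],                                  # placeholder AMI
        "AutomationAssumeRole": ["arn:aws:iam::111122223333:role/AutomationRole"], # placeholder role
    },
)
print(response["AutomationExecutionId"])        # identifier for tracking the run
```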

But AWS’s claims are far more reasonable than Oracle’s. Yet according to Ellison and Hurd, these automated features do not seem to exist.

The Validity of Oracle’s Claims on AI & ML

Interwoven within the claims around the automated database are AI and ML. At this point, a great swath of vendors is claiming AI and ML capabilities. However, AI is still quite limited in actual usage. Let us take the first, which is AI.

To begin, AI is an enormous claim; it proposes that the software is so close to consciousness that it is nearly undifferentiated from an adult human brain.

Safra Catz discusses the topic as if it is old news. She states…

“We have no AI project; we have AI in every project,”

Having been on quite a lot of projects with the Oracle DB (although not Oracle applications), I have seen no evidence of any AI whatsoever. Not only that, but Oracle is also not known for ML. Where is all that Oracle AI hiding?

It’s on every project according to Safra, but we just can’t see it. A very large number of Oracle customers are running old versions of the Oracle DB and may have minimal Oracle apps. Are these customers also using AI?

AI is contained in Alexa or Google Home, and it does not take very long asking Alexa or Google Home questions to determine that neither is anywhere close to being conscious.

AI is mostly a buzzword which works best for people with less technical backgrounds.

Now let us discuss ML or machine learning.

Oracle and Machine Learning’s Input to the Automated Database

Machine learning is a broad category of predictive algorithms that are not particularly new. The great thing about ML for marketers is that you can add ML functionality without the ML being useful to the customer. That is, you can add old algorithms that do not necessarily have to work, and there are plenty of public domain ML algorithms that can be added quickly to any application, enabling the vendor to state that it does ML.

Here is Google Trends on the interest in ML since 2015. Interest has increased.

Notice the change just in 2017 in the interest in ML. Is this really due to the increases in ML capabilities, or because vendor marketing departments figured out they need to jump on that bandwagon? 

Understanding the Method of Applying ML

The idea is that ML analysis yields insight that did not exist before the ML was performed. However, unlike what Oracle states about its autonomous database, ML is not “self-driving.”

An ML approach or algorithm must first be selected. The most common ML algorithm is linear regression, something with which many people are familiar, and it remains the most widely used technique among data scientists. The algorithm is then set against a dataset (which must also be carefully developed and curated by a human), and the resulting analysis must also be interpreted by a human, as the sketch below illustrates.
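
A minimal sketch of that workflow, using scikit-learn and synthetic numbers purely for illustration, makes the human steps visible:

```python
# A sketch of the workflow described above, using scikit-learn and synthetic
# numbers purely for illustration. Note which steps are human decisions.
import numpy as np
from sklearn.linear_model import LinearRegression

# 1. A human curates the dataset (here: made-up workload vs. processing time).
X = np.array([[10], [20], [30], [40], [50]])   # feature: orders per hour
y = np.array([12.0, 21.5, 33.0, 41.0, 52.5])   # target: minutes of processing

# 2. A human selects the algorithm; ordinary linear regression in this case.
model = LinearRegression().fit(X, y)

# 3. A human interprets the fitted coefficients and any prediction they produce.
print(model.coef_, model.intercept_)           # slope and intercept to sanity-check
print(model.predict([[60]]))                   # forecast for an unseen workload
```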

ML = Eagle Eye’s ARIIA?

Vendors propose that ML is similar to the computer named ARIIA in the movie Eagle Eye, which eventually coordinates humans from across the US to assassinate the US President because its analytics observed a violation of the US Constitution. It makes for great fiction, but nothing like the computer in Eagle Eye has ever existed.

The actual ML processing step is the shortest part of the process, which is why ML capabilities do not scale directly with processing capabilities. Vendors imply a revolution in ML that has never occurred in reality and that mostly occurs on vendor web pages and PowerPoint presentations.

Oracle’s Automated Database as ARIIA for Data?

Now let us look at how Ellison plays directly into this movie script orientation of AI and ML.

“Based on machine learning, this new version of Oracle is a totally automated, self-driving system that does not require a human being either to manage the database or tune the database,” Ellison said, according to a Seeking Alpha transcript of the conference call with investors.

Larry Ellison’s statement is also found in this video by Oracle. It states clearly that the Oracle Automated Database requires no human labor.

That is curious because ML will not enable the automation of a database. Secondly, where have these capabilities been hiding, only to spring forth when every other vendor is also proposing AI and ML capabilities? Oracle addresses this by saying:

“We’ve been developing this for decades,” Loaiza said

If that is true, they have come forward all of a sudden in a peculiar bit of timing. This is addressed by a commenter on an article in The Register.

“They have sure hidden it well then. Oracle DB patches are some of the most painful and complex such exercises I have ever encountered. Versus say SQL Server where it’s click and go! Not to mention having to allow Java to run the installer for Oracle!” – The Register Commenter

After offering the highest-maintenance database in the industry, Oracle now claims autonomy runs all the way through what it is offering. Notice how Oracle combines automation with the cloud.

“The future of tomorrow’s successful enterprise IT organization is in full end-to-end automation,” said Zavery. “We are weaving autonomous capabilities into the fabric of our cloud to help customers safeguard their systems, drive innovation, and deliver the ultimate competitive advantage.”

But According to Oracle None of This Will Impact Jobs?

Notice how Oracle walks back the implications of these supposed changes when it comes to jobs.

“However, the biz has repeatedly emphasized that increased automation will not mean the end of people’s jobs – instead saying it will simply cut out the monotonous yet time consuming day-to-day tasks.”

“This allows administrators to free up their time… do things they were not able to do before,” said Zavery. “They will have to learn some new things beyond what they were doing before.” – The Register

This is also a curious position to take. It also implies omniscience and a lack of bias on the part of Oracle. Oracle developed a video which is designed to make DBAs feel better about this potential loss of jobs.

 Maria Colgan makes the statement that Oracle DBAs will leverage the cloud. However, there is little evidence of Oracle having much cloud business. 

If what Oracle said about their autonomous DB was true, it would allow companies to use fewer DB resources. How does Oracle know how each company would decide to respond to these changes?

*Note to Oracle; companies do like cutting costs.

A More Likely Prediction (If Oracle’s Claims for its Automated Database were True)

It is quite reasonable to expect the work taken over by the hypothetical autonomous database to be captured as cost savings, that is, for database resources to lose their jobs. Oracle does not know either way.

All of this seems to be a way for Oracle to avoid perturbing the very DBAs it would like to have endorse the concept of the autonomous DB.

But there is an extra problem. Ellison contradicted this storyline in a different quotation.

“If you eliminate all human labor, you eliminate human error,” Oracle cofounder and CTO Larry Ellison said during his keynote address today.

So, Ellison seems to be proposing eliminating all human labor related to the Oracle database.

So which is it?

Do Oracle’s automated databases now mean that DBAs will not be performing backups and patches (lower level database functions), but also not focusing on analytics (higher level database functions)?

Ellison appears to be speaking categorically about eliminating labor in the database function. This means that if Oracle customers purchase their automated database, the last task for the innumerable Oracle DBAs will be to perform the upgrade to this database and then transition to new careers as the database is now fully automated. But at the same time, “eliminating all human labor” apparently won’t cost jobs.

Ok Larry.

We gave The Register a low accuracy score on the article these quotations are from, as it provided zero pushback on Oracle’s extravagant claims. Yet, in a different article, The Register did push back on Oracle’s claims.

Oracle’s Explanation for the Sudden Appearance of Automation

Here is how Oracle explains this sudden appearance of such extreme levels of automation in their database.

“We’ve seen lots of mention of machine learning this week. But how much of that is new and amazing as opposed to vanilla automation you’ve been working on for a long time, is not clear. There’s an important distinction to be made between a database that has a number of automated processes and one that is fully autonomous. Customers can choose to just use automation, or to take the plunge and hand over all their management to Oracle’s cloud operations for the autonomous option.” – The Register

  • Here The Register pointed out to its readers a potential blurring of the lines between the low-level automation and the new claims that Oracle has made.
  • But at the same time, The Register blurs the definition between the automated database and a managed database.

Look at this bizarre sentence from The Register:

“Customers can choose to just use automation, or to take the plunge and hand over all their management to Oracle’s cloud operations for the autonomous option.”

Is that a well thought out sentence? Let us think about this for a second.

If a database is autonomous, why would it need to be managed?

It seems like if you spend enough time talking to top executives at Oracle, pretty soon you can’t figure out which way is up.

This presentation is billed as the Autonomous Data Warehouse Cloud, but it does not show any autonomous activities. Rather, George Lumpkin simply shows analytics that are available within the offering. The things that George Lumpkin demonstrates should not have to be performed the way he performs them if the database were autonomous. Larry Ellison and the Oracle documentation say one thing, but the demo shows something different.

Automating Lower Level or Higher Level Database Activities?

But in this article, Oracle made a mistake. In previous articles and materials, they have proposed that even analytics would be automated. Then in this quote, they state something very different.

“Less time on infrastructure, patching, upgrades, ensuring availability, tuning. More time on database design, data analytics, data policies, securing data.”

That is, the more basic items are automated, which in this telling leaves more time for things like analytics. Yet in other Oracle quotations, they state that both lower level and higher level database activities will be automated.

Oracle cannot keep a consistent storyline as to how much is automated; it changes depending on which Oracle source is speaking.

Inconsistencies like this occur when something is not real, that is, when things are being made up.

Secondly, Oracle assumes that the customer always wants the database upgraded. Let us get into some important reasons why automation for things like upgrades is not as straightforward as Oracle is letting on.

Oracle Upgrades are Not Free

Version 12.1.0.2 of the Oracle database, which brought the in-memory capability, came with an option cost estimated at $23,000 per processor.

This is explained by The Register:

“This means that once the release – which has a naming scheme that is typically associated with straightforward patch and performance distributions – has been downloaded by IT and the internal database systems have been updated, a less careful database administrator could create an in-memory database table with a single command, thereby sticking their organization with a hefty bill next time Oracle chooses to carry out a license fee audit.”
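
To make the quoted licensing risk concrete, here is a minimal sketch, using the cx_Oracle driver with placeholder connection details and table name, of how little it takes to switch a table into the separately licensed In-Memory column store:

```python
# A minimal sketch, with placeholder credentials and table name, of how little
# it takes to move a table into the separately licensed In-Memory column store
# once a 12.1.0.2+ binary is installed.
import cx_Oracle

conn = cx_Oracle.connect("scott", "tiger", "dbhost/orclpdb1")   # placeholder connection
cur = conn.cursor()

# One DDL statement enables the In-Memory column store for the table, and with it
# the per-processor In-Memory option license exposure described by The Register.
cur.execute("ALTER TABLE sales INMEMORY")
```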

Therefore, there are implications to upgrades; they cannot necessarily be “autonomously upgraded.” Most of the Oracle instances in the world are still on 11, not even on 12, much less the most recent release of 12. How will the autonomous database work for these customers? Remember, they do not want to be upgraded.

“As a recent Rimini Street survey showed, as much as 74 per cent of Oracle customers are running unsupported, with half of Oracle’s customers not sure what they’re paying for. These customers are likely paying full-fat maintenance fees for no-fat support (meaning they get no updates, fixes, or security alerts for that money).” – NZReseller.

There are reasons these companies are not upgrading to the latest version. A major one is that many customers do not feel the new features are worth the time, effort, or money.

Quite obviously, if Oracle could upgrade all customers instantaneously to 18, it would; doing so would give Oracle a significant revenue increase.

Upgrade Complications

What if the automatic upgrade interferes with something that the customer has set up in the database?

This pushes control to Oracle that the customer does not necessarily want.

  • AWS is offering a fully managed database, which means they are taking full responsibility for the database.
  • Oracle, on the other hand, is offering (with the automated database) some lower level tasks to be controlled by a machine, but this should not be taken to be the same thing that AWS is offering.

Oracle’s Support Quality and IaaS Success?

Furthermore, Oracle has had significant problems with its support quality, choosing to perform cost-cutting rather than maintaining quality. So if Oracle has such an issue with support, then why would they be able to provide high-quality IaaS support with Oracle Cloud? Being a successful IaaS/PaaS provider means being focused on service. Since when in the past 15 years has this been Oracle’s reputation?

The Loss of Control to Automation

Getting back to the loss of control, this is addressed in the following quotation.

“There’s a lot of concern about giving up control,” said Baer. “The initial uptake will be modest, and a lot will just be getting their feet wet …Organisations like banks, which are highly regulated, will be the last to surrender control. Oracle’s Daheb conceded customers might still want to manage something themselves. “They might say, this is dev/test, go ahead, automate that bad boy… this is core, customer-facing – maybe we don’t want to do that anytime soon.” – The Register

This is an inconsistency. Is everything going to be fully automated, as Oracle’s messaging states, or are there examples, perhaps many examples, of things that will not be automated?

“But, he argued, “the big thing” about the autonomous database is that Oracle is offering customers the choice and ability to “get to it at whatever pace makes sense for them”. – The Register

This is a textbook pivot.

Pivoting Away from Automation When Challenged

Reality is conflicting with Oracle’s messaging. The reason for this is Oracle is overstating the degree to which customers will be able to automate Oracle. When faced with questions about this reality, the response is that now the customer has the “choice.”

But that is not the marketing pitch. The marketing pitch is things are about to become incredibly automated with Oracle DBs.

“If Oracle’s customers’ enthusiasm for that change is anything to go by, we will be waiting some time before its autonomous database is the norm.” – The Register

We congratulate The Register for pushing back on Oracle here.

They were able to find out what customers thought and include that in the article.

Why Oracle is Selling This Automation Story

This is telling execs exactly what they want to hear. There is no nuance to the explanation of the Oracle automated database, such as how AWS obtains economies by managing large numbers of DBs and reduces maintenance overhead with its web-based DB administration and elastic offerings. Instead, Oracle’s pitch is that a magic box called Oracle automation will automate everything.

AWS is making real change happen with its approach, and Oracle is off talking about cutting slices of cheese off the moon.

The Automated Database for Selling Oracle’s Growth Story to Wall Street

Our analysis is that there is very little to the autonomous database. However, Oracle is using the autonomous database as a selling point to Wall Street.

“Under Mark Hurd and Safra Catz, who share the chief executive officer title, Oracle has bet its future on a new version of its database software that automates more functions and a growing suite of cloud-based applications. Last quarter’s results were a reminder that the company still faces stiff competition from cloud vendors including Amazon.com Inc., Microsoft Corp. and Salesforce.com Inc.” – Bloomberg

So both the autonomous database story and Oracle’s cloud story are inaccurate. Oracle is seeing very little growth in its cloud business. (Financial analysts have picked up on this second story.)

And in our review of several analysts’ comments around the autonomous database, they seem to lack the understanding of how Oracle databases work in practice that would be needed to validate Oracle’s claims. They simply assume the Oracle autonomous database will become successful. Secondly, the observation we have made, that the autonomous database is the opposite of what Oracle has historically been about, is also absent.

Oracle wins our Golden Pinocchio Award for its claims about the Oracle Automated Database.

Conclusion

Oracle’s claims around the autonomous database do not hold up to scrutiny. In fact, the claims for the Oracle Automated Database are what win it our award.

Secondly, Oracle is a curious source for the autonomous database, as Oracle has throughout its history had what is widely considered the highest-overhead, most complex, and most difficult to manage database. The argument was always that the Oracle DB could do things that other databases could not do. However, part of this was based on the fact that Oracle made such exaggerated claims for its database. But the distinction in upper-end capabilities between Oracle and other databases that are far less expensive to purchase and maintain has declined.

Now that this is becoming a more broadly understood concept, Oracle is marketing against its traditional messaging (and the reality of its database product).

In this way, the automated database marketing strategy looks identical to SAP’s Run Simple marketing program, which attempted to counter SAP’s image of being complicated to run and use, when in fact SAP is without question the most complicated set of applications to run. However, Oracle has not been able to push the claims of the automated database as effectively as SAP pushed the claims of Run Simple, because Oracle does not have SAP’s partner ecosystem or its degree of control over the IT media.

Finally, an article on AWS from Silicon Angle has the following to say several months after this article was published.

“But Oracle’s push doesn’t appear to have had much impact on AWS, whose revenue rose 49 percent in the latest quarter, to $5.4 billion — even faster than the previous quarter. Moreover, Vogels noted that AWS has seen 75,000 migrations from other databases to its own in the cloud since the migration service launched in early 2016, up from 20,000 in early 2017.”

A Review of Sources Provided by Oracle in Response to this Article

As a response to this article, a representative from Oracle provided the following documents.

Article 1: Automated vs. Autonomous (By Oracle)

https://blogs.oracle.com/database/autonomous-vs-automated?

This article is by someone out of product marketing at Oracle and merely serves to repeat the claims made about the autonomous database, comparing it to inventions like the telephone. This is consistent with Larry Ellison’s claim that the Oracle autonomous database will be an innovation similar to the Internet. Here is the exact quote:

“The Oracle Autonomous Database is based on technology as revolutionary as the internet.” – Larry Ellison

So the author of the Oracle article compared the autonomous database to…

  • The Invention of the Telephone
  • The Dawn of the Personal Computer
  • The Internet
  • The iPhone
  • The Self Driving Car

Some comparisons were left out, for instance the internal combustion engine, the discovery of DNA, and the light bulb. But it is not clear why Oracle restricted its claims to only some of the most important discoveries in human history.

Our Conclusion from the Oracle Article

There was nothing new for me to comment on, as the claims were already evaluated earlier in this article. The Oracle article is targeted at people who do not think very deeply about topics.

Article 2: Oracle’s Autonomous Cloud Portfolio Drives Greater Developer Productivity and Operational Efficiency

The second article was from Ovum. We are not familiar with Ovum as a source, but they were introduced to us by the Oracle representative as independent of Oracle.

https://www.oracle.com/us/corporate/analystreports/ovum-autonomous-cloud-4417640.pdf

First, the location of this report is a problem: it is on Oracle’s website. Consumer Reports does not allow the companies it rates to place its results on their websites or in any printed material. Gartner, by contrast, allows exactly this, which is one of many reasons Gartner cannot be considered a true research firm, as covered in the article How Gartner Research Compares to Real Research Entities.

This immediately should raise concern about Ovum’s true independence from Oracle. You will not find any Brightwork Research & Analysis report on any vendor’s website. Why would we? We receive no income from any vendor. Something to understand: as soon as an entity accepts money from a vendor, the study converts from research to marketing propaganda. All of the vendors that have reached out to Brightwork Research & Analysis asking for research to be performed began with the conclusion they wanted the study to reach. The idea was that we would then assemble the information to support that conclusion.

If we take this quotation, it is instructive of the overall approach of the article.

“While, at the top level, the concept of a fully packaged and managed PaaS should ideally include the provisions for the automation of tuning, patching, upgrade, and maintenance tasks, it is the capabilities driving developer productivity and faster time to value that deliver greater value to users. In this context, Oracle has an early-mover advantage and offers a clear differentiation in comparison to its nearest PaaS competitors. This is in line with Oracle’s strategy to embed artificial intelligence (AI) and machine learning (ML) capabilities as a feature to improve the ease-of-use and time-to-value aspects of its software products, and not just focus on directly monetizing a dedicated, extensive AI platform.”

The claims made by Oracle are not so much analyzed in this report as they are assumed to be true. The article does not question how it is that Oracle has appeared with such capabilities so recently after offering such a high maintenance database for decades. The article does not read so much as independent research as “dropped in” Oracle marketing material.

The following is an example of this.

“On the data management side, Oracle offers the ability to rapidly provision a data warehouse, and automated, elastic scaling, with customers paying only for the capacity they use. In the context of security and management, Oracle offers ML-driven analytics on user and entity behavior to automatically isolate and eliminate suspicious users.”

Can it Be Detected if the Sentences are From Ovum or From Oracle?

If this were merely a quotation from Oracle that the Ovum author then analyzed, it would be fine. But it isn’t. This is Ovum’s own statement regarding the autonomous database.

Notice this paragraph uses the same superlatives that Oracle would have used to describe the benefits. There is no outside voice in these explanations. If the source were removed, it would be impossible to tell whether the passage was written by Oracle or by an independent analyst.

Our Conclusion from the Ovum Article

Overall, while we had never read a report from Ovum previously, this report damaged our view of the entity; from this report at least, Ovum cannot be said to have performed any research at all. Ovum merely repeated marketing statements made by Oracle.

It is difficult to see how the report would differ if Oracle had written it themselves.

Article 3: Oracle’s Autonomous Database: AI-Based Automation for Database Management and Operations

https://idcdocserv.com/US43571317

The third report sent to us by the Oracle representative is from IDC. IDG owns IDC. It breaks down thusly:

  • IDG is the overall conglomerate that runs many IT media websites and takes money from any vendor of any reasonable size to parrot their marketing literature.
  • IDC is the faux research arm of IDG. IDC claims to perform research, but we dispute this claim; IDC is quite obviously tightly controlled by the entities that pay either IDG or IDC, which gives it major conflicts of interest. IDG may have been paid directly by Oracle for this article, or may have written it because Oracle is such a large customer of IDC.

IDG is a media conglomerate that is neither a journalistic entity nor a research entity that operates to maximize profits in the current media climate where virtually no income comes from readers, and the media entity must fund itself from industry sources. Neither IDG nor IDC ever disclose this ocean of funding that operates in the background. Masses of IDG ad sales reps are in constant contact with vendors and consulting companies negotiating fees and discussing what industry-friendly article will appear where and at what price.

We have extensive experience analyzing IDG produced material. IDG owns eight of the 20 largest IT media outlets including names like ComputerWeekly and CIO. IDG accepts paid placements and produces large quantities of vendor friendly and inaccurate information and is paid by Oracle both for placements and for advertisements. We covered IDG in the following article.

Normally when we review an IDG article which covers SAP, that article will score between a 1 and 3 out of 10 for accuracy.

Now that we have reviewed the conflicts of interest and credibility problems with IDC/IDG, let us move to analyzing the content of the article:

The first quote to catch our attention was the following:

“Databases and other types of enterprise software have had heuristics for years that provide various levels of operations automations. Oracle is no exception to this. What is new is the use of machine learning algorithms that replace the heuristics. There are numerous reasons for this — lack of sufficient amounts of data needed to train an ML model, lack of compute power to train the model effectively and in near real time, and lack of a sufficient variety of data coming from different types of users and use cases that helps to broaden the applicability of the algorithms.”

As with the Ovum study, this appears to be a copy and paste from Oracle’s provided information.

Secondly, its foundational assumption, that ML is always superior to heuristics, is untrue. The book Rationality for Mortals outlines how heuristics can often defeat more complicated models that analyze far more observations.

In this quote, the article repeats Oracle claims whose purported benefits we disputed earlier in this article.

“In addition to providing all tuning and maintenance functions, most of which are automated, this service also provides regular software patching and upgrading, so the user is always running on the latest software, and knows that, for instance, the most recent security patches have been applied.”

As we stated, most customers are not even on Oracle 12. They are running older versions of the Oracle DB. Many companies have dropped Oracle support entirely because it is considered such a poor value.

Moreover, upgrading a database has a number of implications, and it is not a simple matter of upgrading automatically. The authors of this report do not account for or even mention any of this.

In this quotation from later in the report, IDC makes a false claim about ML.

“Although machine learning libraries have been around for decades and have been offered as part of many of the world’s statistical packages, including IBM’s SPSS, SAS, and so forth, the use of machine learning by enterprises hasn’t been widespread until recently because these algorithms require a lot of data and a lot of compute power.”

That is inaccurate. Let us look into why.

ML Has Risen Due to Recent Advancements in Hardware?

Computers have been fast enough to run ML algorithms for many years now. The majority of the time spent on ML goes into data collection, data munging, and then analysis. The actual time spent processing is normally short unless a very large number of variables is used (and using so many variables, while now popular, raises the question of overfitting).

When I run ML routines, the results are returned in less than 10 minutes, and I am using a seven-year-old laptop. We have had gobs and gobs of processing power for many years now.
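
A quick sketch with synthetic data, assuming scikit-learn and NumPy are installed, shows the point: the fit itself takes seconds even at a million rows, while none of the expensive data collection and preparation appears in the timing.

```python
# A quick sketch with synthetic data: the model-fitting step itself is fast on
# ordinary hardware. The expensive parts of real ML work, collecting and preparing
# the data, do not appear in this timing at all.
import time
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000_000, 20))                  # a million rows, twenty features
y = X @ rng.normal(size=20) + rng.normal(size=1_000_000)

start = time.perf_counter()
LinearRegression().fit(X, y)
print(f"fit time: {time.perf_counter() - start:.1f} s")   # typically seconds, not hours
```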

The reason for the rise in the discussion of ML has not been computer hardware related, but due to marketing departments latching onto ML to help market their products. How can this be proven? Because, according to Google Trends, the most significant rise in the interest in ML was from 2014 to the present. How much did computer hardware increase in speed from 2014 to 2017? Furthermore, the interest in ML was greater in 2004 than in 2014. Were computers faster in 2004 than 2014?

ML/AI is very effective at illustrating value to people without a mathematical background.

Our Conclusion from the IDC Report

Overall, the IDC report is a restatement of Oracle’s claims around the autonomous database without any analysis. There is no explanation for the sudden appearance of AI/ML in Oracle’s database, and no questioning of Oracle’s explanations regarding AI/ML.

Like the Ovum report, this is not research.

Overall Report Conclusions

None of the sources provided demonstrate any independent thinking; they only serve to demonstrate that Oracle has a lot of money to spend on media entities and faux research entities that will take money to repeat whatever Oracle’s marketing team tells them to write.

Brightwork Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.


References

https://www.theregister.co.uk/2017/09/08/ellison_and_cos_equity_now_relies_on_a_80_oracle_stock_price_and_cloud_success/

Unrelated article that shows Oracle’s focus on executive compensation. Executive compensation (overcompensation) driven off of stock prices is a primary reason for the release of false information to the public. Lying in public announcements can also be seen as a way of communicating loyalty to the company.

https://aws.amazon.com/rds/oracle/

https://siliconangle.com/blog/2018/06/21/amazon-cto-cloud-offers-database-need/

https://www.theregister.co.uk/2017/10/08/oracle_openworld_2017_analysus/
In this article The Register pushes back on Oracle’s claims for the automated database.

https://www.forbes.com/sites/oracle/2018/03/30/larry-ellison-oracle-is-revolutionizing-the-database-and-it-service-delivery/#4813b9e87a4d

The Chinese publication Forbes has this article by a Jeff Erickson, who is listed as an “Editor at Large for Forbes.” Its title is Larry Ellison is Revolutionizing Database and IT Service Delivery. One wonders if the author has any conflicts of interest by declaring this? Did Forbes consider this potential conflict? Or did Oracle paying them to publish the article at Forbes assuage these concerns?

This article repeats outlandish claims by Larry Ellison, ensuring Jeff Erickson a good annual review it would seem. Claims include:

“This technology changes everything,” he said. “The Oracle Autonomous Database is based on technology as revolutionary as the internet.”

“To set up, provision, and use Oracle Autonomous Data Warehouse Cloud, a user simply answers a few short questions to determine how many CPUs and how much storage the data warehouse needs. Then the service configures itself typically in less than a minute and is ready to load data.

Once the data warehouse is up and running, its operation also is autonomous, delivering all of the analytic capabilities, security features, and high availability of Oracle Database without any of the complexities of configuration, tuning, and administration—even as warehousing workloads and data volumes change.”

This article written by Oracle comes to a surprising conclusion about AWS. Can you guess what it is before reading it?

“AWS Comes Up Short – At the launch event at company headquarters in Redwood City, California, Ellison showed how Oracle Autonomous Data Warehouse Cloud can run faster than comparable database offerings from Amazon Web Services, while being more scalable, and costing less.”

That is curious. I would have expected an article written by Oracle to praise AWS. How odd.

“In addition to running faster and thus costing less, Oracle Autonomous Data Warehouse Cloud is truly elastic, Ellison said, while the Amazon Elastic Compute Cloud, ironically, is not. With the AWS service, “you pay for a fixed configuration” and when you want to add CPUs, you have to take the database down and wait, he said.”

Well, AWS’s service sounds truly useless. Probably no purpose in investigating it now, is there?

In the following article also by Jeff Erickson…

https://www.forbes.com/sites/oracle/2018/03/27/how-oracles-new-autonomous-data-warehouse-works/#7cf519de5c7f

Titled How Oracle’s New Autonomous Data Warehouse Works

Oracle claims that the Autonomous Data Warehouse Cloud allows a data warehouse to be set up in less than a minute.

“set up a high-powered data warehouse in less than a minute by answering just five questions:

How many CPUs do you want?
How much storage do you need?
What’s your password?
What’s the database name?
What’s a brief description?

“And that’s it,” says Keith Laker, an Oracle lead product manager for the company’s autonomous data warehouse. “Twenty-five seconds and you’ve got a high-performance data warehouse that’s ready to go.”

And once the data warehouse is running, its operation also is autonomous, using the world’s most advanced database platform and machine learning to operate without human intervention, tuning and optimizing itself for top performance and patching itself without taking the system offline.”

Truly amazing. If they have not been already, Oracle should be recommended to the Nobel Committee for consideration for a Nobel Prize.

Finally, after decades, people now have a place to put their data, as evidenced by the quotation above.


And without a hint of the potential for overstatement, Jeff Erickson finishes off the article thusly.

“Autonomous Data Warehouse Cloud Service is the next-generation cloud service for the whole organization, with high performance and reliability and vastly reduced labor costs because it’s autonomous. The service runs as little as $1.68 per CPU hour, with storage as low as $148 per terabyte per month. Oracle customers can also bring their existing on-premises licenses to take advantage of Oracle’s BYOL program for PaaS services. Get details on the pricing page.”

https://www.oracle.com/database/autonomous-database/feature.html
Very little information is provided about the autonomous or automated database at the Oracle website.

https://www.sdxcentral.com/articles/news/oracles-ellison-touts-totally-automated-self-driving-oracle-database/2017/09/
SDX Central simply repeats Oracle’s claims verbatim in this article.

https://read.acloud.guru/why-amazon-dynamodb-isnt-for-everyone-and-how-to-decide-when-it-s-for-you-aefc52ea9476

*https://www.amazon.com/Rationality-Mortals-Uncertainty-Evolution-Cognition/dp/0199747091

Oracle’s new database uses machine learning to automate administration


VentureBeat simply repeats Oracle’s claims for the autonomous or automated database verbatim in this article.

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html

https://forums.theregister.co.uk/forum/1/2017/10/08/oracle_openworld_2017_analysus/
Good comments on Oracle’s claims.

https://www.theregister.co.uk/2017/08/11/number_off_oracle_rounds_up_major_database_release_cycle_numbers/
Explains the jump from Oracle 12 to Oracle 18.

Not on the automated database but on the new versioning; included to explain to readers confused about the jump from 12 to 18.

“So what would have been Oracle Database 12.2.0.2 will now be Oracle Database 18; 12.2.0.3 will come out a year later, and be Oracle Database 19.

The approach puts Oracle only about 20 years behind Microsoft in adopting a year-based naming convention (Microsoft still uses years to number Windows Server, even though it stopped for desktop versions when it released XP).”

https://www.theregister.co.uk/2014/07/24/oracle_in_memory_database_feature/
Describing costs of upgrading to Oracle In Memory

The Risk Estimation Book

Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project’s risk is a poor way to make decisions about either projects or software. But that’s the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies commonly used by most companies for mitigating software project risk–and learn why these plans don’t work–and then acquire practical and realistic strategies that will help you to maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model

Did SAP Simply Reinvent the Wheel with HANA?

Executive Summary

  • SAP made many claims about HANA, but upon analysis, what SAP actually did with HANA is reinvent the wheel.
  • SAP made these claims in order to push customers into replacing Oracle with HANA.
  • We cover SAP proposals ranging from code pushdown to in-memory computing to the fictitious backstory for HANA.

Introduction to HANA as a Derivative Product

Innovation has been a critical selling point of HANA. You will learn how the claims around HANA’s innovation check out.

Understanding SAP’s History with HANA

When SAP first introduced HANA, with enormous fanfare, the idea was that SAP had created a whole new database. In fact, as recently as the Q4 2017 analyst call, Bill McDermott stated the following:

“Back in 2010, we set bold ambitions for SAP. We focused on our customers to be a truly global business software market leader. We set out to reinvent the database industry.

Forrester has now defined the new market for translitical data platforms, and of course, they ranked SAP HANA as the clear number one. We led the market with intelligent ERP, built on an in-memory architecture.”

In this article, we will analyze how much of what SAP created with HANA is new and how much is simply copied from other database vendors and then claimed as innovation.

Important Information About HANA

We have performed extensive detailed analysis of HANA. The more we look, the less we can find that is unique or innovative.

Let us review some of the major points of inconsistencies with the innovation story around HANA.

The Performance of HANA

SAP has vociferously proposed that HANA is faster than any other database. However, they have provided no evidence that this is true. We performed the necessary research into this topic and concluded that SAP’s claims of superiority versus the competing offerings of Oracle, IBM and Microsoft are untrue. We have explained this in the article What is the Actual Performance of S/4HANA? 

The Column Oriented Design of HANA

SAP has proposed that they essentially invented column-oriented databases. Column-oriented databases go back as far as row-oriented databases (often referred to as relational), and SAP acquired Sybase in 2010, before HANA was introduced. And Sybase already had IQ, now SAP IQ, which was a column-oriented database.

Furthermore, SAP made other acquisitions very quietly, to give the impression that they “invented” the technologies that underpin HANA. At the same time, SAP’s marketing documentation was intended to give prospects the impression that SAP had invented a new category of databases. Notice Bill McDermott’s statement about “reinventing the database industry.”

That is, reinventing it with a database design that had been around for decades? Furthermore, this comment was not made in 2011 or 2013, when HANA had yet to be challenged. This comment was made in 2018, after plenty of time had passed to verify the marketing statements about HANA against real implementations, benchmarking and the HANA technical documentation.

SAP has a long history of faking innovation. Faking innovation is a major strategy in both the software and patent drug industries; it describes a process whereby innovations are taken from the public domain (or from competitors) and repackaged as something developed internally.

Calling All New HANA Development “Innovations”

SAP has claimed that because HANA is being actively developed, each development is an innovation.

Yet, the items that HANA is developing already exist in other databases. The definition of innovation is that the item needs to be new. Not new to the software vendor, but new to the world.

While not discussed, the innovation should also be beneficial. Areas where SAP has done things that are new, such as reducing aggregates, have not been demonstrated to be beneficial. Reducing database size only really matters if a company is somehow constrained in the size of its databases, and with the very low cost of modern storage, this is rarely the case.

SAP has proposed through surrogates like John Appleby of Bluefin Solutions that hard disks “take up a lot of space,” and that companies cannot afford the storage space to house the disks, which is either absurd or insulting depending upon your perspective. One has to question the innovation of any company that has spokesmen like John Appleby (whom we cover in the article Why John Appleby Was so Wrong About His HANA Predictions), Vishal Sikka or Hasso Plattner, who are repeatedly found in hindsight to have knowingly released false information into the marketplace. It is normally the case that truly innovative entities do not find it necessary to lie about their innovations.

The Code PushDown of HANA (Innovation or Innovative Terminology?)

A stored procedure is the established term for when the code is moved from the application layer to the database layer, normally for performance reasons. However, SAP decided to come up with a new term, code pushdown. 

Why?

Well as our colleague points out.

“But by using the code pushdown term and not “stored procedures + DB views”, they not only have an innovative term for real “stored procedures” but also obscure that classic ABAP views are extremely far behind REAL Views that exist for decades and that this is one reason why the database is kept so stupid in classical “ERP on AnyDB”.”

This is why analyzing the terminology that SAP uses is so important. SAP uses specialized and often inaccurate terminology in order to lie to customers. This is found in the way that SAP called HANA “in memory,” which we cover in the next section. When a false term is used, it is just the starting point. It can be considered the sound of a gate opening for what will be a torrent of false information.

SAP’s presented logic for code pushdown is performance, but when SAP had no database to sell, they were in favor of performing processing in the application layer. The code pushdown is what has served as the justification for SAP to keep S/4HANA exclusive to HANA. That is curious: an ERP system, which has relatively low performance requirements, must have its code pushed down to the HANA database, but other applications that SAP offers, over which SAP has less account control, still work on AnyDB.

Here the obvious factor in determining which applications are exclusive to HANA and which are not has to do with leverage, not with technical requirements.
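It is worth pinning down what a stored procedure looks like in practice, since the rest of this discussion turns on the term. Below is a minimal sketch, assuming a hypothetical PostgreSQL database with an order_lines table and the psycopg2 driver; it illustrates the generic technique, not SAP's implementation of code pushdown.

```python
# A minimal sketch of "code pushdown" as a stored procedure.
# Assumes a hypothetical PostgreSQL database with an order_lines table.
# The logic (summing an order) runs inside the database instead of the app.
import psycopg2  # assumes the psycopg2 driver is installed

CREATE_PROC = """
CREATE OR REPLACE FUNCTION order_total(p_order_id integer)
RETURNS numeric AS $$
BEGIN
    -- the calculation happens in the database layer
    RETURN (SELECT SUM(quantity * unit_price)
            FROM order_lines
            WHERE order_id = p_order_id);
END;
$$ LANGUAGE plpgsql;
"""

def order_total(conn, order_id):
    """The application layer only asks for the result; no rows are shipped back."""
    with conn.cursor() as cur:
        cur.execute("SELECT order_total(%s)", (order_id,))
        return cur.fetchone()[0]

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=shop")  # hypothetical connection string
    with conn.cursor() as cur:
        cur.execute(CREATE_PROC)
    conn.commit()
    print(order_total(conn, 42))
```

Nothing in this pattern is specific to HANA; it has been available on AnyDB for decades.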

The CDSs of HANA

Core Data Services are a type of code pushdown that is a database view. SAP has introduced CDSs as something new, when in fact they are copying the idea of the dictionaries that have been available in AnyDB databases for decades.

SAP has stated that AnyDB can also “use” CDSs, but the question is why anyone would want to do so. SAP is giving the impression that what is really just catching up with other database vendors is actually something new that does not already exist for AnyDB databases. Here again, SAP’s innovation claims do not pass the smell test.
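For comparison, the database view that AnyDB has offered for decades, and that a CDS view most closely resembles, looks like the following minimal sketch. SQLite is used purely as a convenient stand-in for AnyDB and the tables are hypothetical; this is the plain SQL equivalent of the idea, not CDS syntax.

```python
# A plain SQL view: the decades-old AnyDB feature that a CDS view most
# closely resembles. SQLite is used here only as a convenient stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales_orders (order_id INTEGER, customer TEXT);
CREATE TABLE order_lines (order_id INTEGER, material TEXT,
                          quantity INTEGER, unit_price REAL);

-- the view joins and derives fields once, so every consumer
-- (report, app, OData service) can simply select from it
CREATE VIEW order_values AS
SELECT o.order_id,
       o.customer,
       SUM(l.quantity * l.unit_price) AS order_value
FROM sales_orders o
JOIN order_lines l ON l.order_id = o.order_id
GROUP BY o.order_id, o.customer;
""")

conn.execute("INSERT INTO sales_orders VALUES (1, 'ACME')")
conn.executemany("INSERT INTO order_lines VALUES (?,?,?,?)",
                 [(1, 'M-100', 10, 2.5), (1, 'M-200', 4, 10.0)])

print(conn.execute("SELECT * FROM order_values").fetchall())
# [(1, 'ACME', 65.0)]
```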

Caching of Queries

In the document Boost Performance for CDS Views in SAP HANA, SAP states that it needs to cache queries for performance.

It further states:

“Keep CDS views simple (in particular serviceQuality A and B = #BASIC views)
In transactional processing, only use simple CDS views accessed via CDS key
Expose only required fields define associations to reach additional fields when requested”

This is odd. For an in-memory zero latency database like HANA, why would these limitations need to be put into place?

“Perform expensive operations (e.g. calculated fields) after data reduction (filtering, aggregation)
Avoid joins and filters on calculated fields
Test performance of CDS views. Test with reasonable (= realistic) test data”

This speaks to the need to limit the consumption of computing resources. Again, it should not apply to HANA.

“Stay tuned on caching possibilities of SAP HANA and Fiori apps.”

Caching for Both HANA and Fiori?

Caching for both HANA and Fiori? Impossible! A foundational proposal of SAP since HANA was first introduced was that there should be no caching.

Everything, literally everything is supposed to be in memory. Caching makes no sense with the presented HANA design. The people working at SAP on HANA and who presented this at TechEd 2017 clearly do not understand Hasso’s vision.

According to Hasso Plattner, HANA is and forever will be zero latency. But the techniques that are described in the actual HANA technical documentation show a much more complicated picture, with SAP performing caching in several locations.
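For readers who want to see what such caching amounts to, the sketch below shows garden-variety query-result caching with a time-to-live. The table, query and TTL are hypothetical, and this is not SAP's caching implementation, just the generic technique the documentation is pointing at.

```python
# A minimal sketch of query-result caching. The query and table are hypothetical.
import sqlite3
import time

class CachingConnection:
    """Wraps a DB connection and memoizes read-query results with a TTL."""

    def __init__(self, conn, ttl_seconds=60):
        self.conn = conn
        self.ttl = ttl_seconds
        self._cache = {}  # (sql, params) -> (timestamp, rows)

    def query(self, sql, params=()):
        key = (sql, params)
        hit = self._cache.get(key)
        if hit and time.time() - hit[0] < self.ttl:
            return hit[1]                      # served from memory, no DB work
        rows = self.conn.execute(sql, params).fetchall()
        self._cache[key] = (time.time(), rows)
        return rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kpis (name TEXT, value REAL)")
conn.execute("INSERT INTO kpis VALUES ('open_orders', 1234)")

db = CachingConnection(conn)
print(db.query("SELECT value FROM kpis WHERE name = ?", ("open_orders",)))
print(db.query("SELECT value FROM kpis WHERE name = ?", ("open_orders",)))  # cache hit
```

Caching of this kind is a perfectly sensible engineering technique; the issue is that it contradicts the "everything is already in memory, at zero latency" story.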

Not only can HANA not provide zero latency (surprise, surprise), but testing even optimized demo boxes shows that Fiori running on HANA underperforms open source databases and server technologies like MySQL and Apache as explained in the article Why is the Fiori Cloud so Slow?

Furthermore, the hardware specs that SAP has for HANA are extremely large. The column-oriented store combined with the large quantities of RAM is supposed to be so incredible, that these types of techniques should not be necessary. But HANA underperforms other databases even though it has far more hardware. The Oracle benchmark shows that HANA was only able to come close to Oracle 12c performance with far more hardware. This is, of course, a benchmark produced by Oracle. However, other private benchmarks that have been made available to Brightwork show the same thing.

Everything In Memory and In-Memory Computing?

When HANA was first introduced, SAP stated that the entirety of the database would have to be loaded into memory. However, the technical documentation on HANA shows clearly that only some tables are loaded into memory. Neither the large tables nor the column-oriented tables are immediately loaded into memory. This is peculiar, as it was supposed to be the relationship between column-oriented tables and high-speed memory that would provide HANA with its analytical advantage. Either way, HANA uses memory optimization…surprise, just as do all of the other database vendors that SAP is copying its solution from. As we covered in the article How to Understand Why In-Memory Computing is a Myth, all databases have their tables placed into memory.

However, a database is much more than whether a larger percentage of its tables are placed into memory or whether it uses a column store for more of the tables. This is, by the way, another detail that has come to light as time has passed. Originally, SAP stated the entire database was a column store (which would not have made any sense, by the way); then it was determined that many of the tables in HANA are in fact row-oriented.

Here again, one thing is stated about how HANA works, implying that all other database vendors are backward for using memory optimization, and then once the technical details are read, HANA just does the same thing other databases do. This gets back to the central point that almost nothing that a salesperson or HANA marketing literature says about HANA can be trusted.
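The "memory optimization" that every serious database already performs boils down to keeping hot pages in RAM and evicting cold ones. The sketch below is a toy LRU page cache that illustrates the idea; it is an illustration only, not any vendor's actual buffer manager.

```python
# A toy LRU page cache illustrating the memory optimization that every
# serious database already performs: hot pages stay in RAM, cold pages
# are read from disk on demand. This is an illustration, not HANA or AnyDB.
from collections import OrderedDict

class BufferPool:
    def __init__(self, capacity_pages, read_page_from_disk):
        self.capacity = capacity_pages
        self.read_page_from_disk = read_page_from_disk
        self.pages = OrderedDict()  # page_id -> page data, in LRU order

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)          # mark as recently used
            return self.pages[page_id]
        page = self.read_page_from_disk(page_id)     # cold read
        self.pages[page_id] = page
        if len(self.pages) > self.capacity:
            self.pages.popitem(last=False)           # evict least recently used
        return page

# Usage with a fake "disk"
fake_disk = {i: f"page-{i}-data" for i in range(100)}
pool = BufferPool(capacity_pages=3, read_page_from_disk=fake_disk.__getitem__)
for pid in [1, 2, 3, 1, 4]:   # page 1 stays hot; page 2 is evicted at the end
    pool.get(pid)
print(list(pool.pages))        # [3, 1, 4]
```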

The Obvious Conclusion

Increasingly it simply appears that SAP purchased some database products and then reverse engineered existing databases, while putting an extra emphasis on placing more of the database in memory.

Innovation or Copying while Throwing in Confusing Terminology?

When discussing this topic with several other people investigating HANA, the following insight was given to us.

“They recreate the wheel as an octagon because anyWheel is round and then sell you a cycloid molded road to drive smoothly – but only on their roads.”

What is Truly New in HANA?

It seems like a lot of this is just recreating the wheel, but with the blue SAP bow on top.

This is how Hasso Plattner, SAP and SAP partners would like to present the genesis of HANA. Not as a series of technologies that SAP purchased more than a year before HANA’s development, not as something based upon databases that had been developed more than a decade before HANA, but as divine inspiration by Hasso and his brilliant PhDs. Hasso has repeatedly been referred to as a genius. A genius who “discovered” something that he directed SAP to purchase, and then, after purchasing, immediately invented. This false storyline is laid out very carefully in the book The In Memory Revolution.

HANA, The Only Database With a Purpose Built Fictitious Backstory

In the article Did Hasso Plattner and His Ph.D. Students Invent HANA?, we uncovered (with some helpful hints from someone who reached out to us) that, unlike what was stated by SAP and Hasso Plattner, and unlike what has been repeated ad nauseam by compliant IT media entities and SAP consulting partners, the underlying technology for HANA was purchased, not invented, by SAP. Furthermore, Hasso Plattner and his Ph.D. students added nothing to these technologies except developing rather impractical ideas, such as a database having no aggregates.

SAP did not innovate with HANA. Their primary contribution was to promote the idea of dual-purpose databases, that is, a database that can perform transaction processing and analytics equally well. Yet there is no evidence that this strategy is worthwhile. While doing this, SAP has massively overstated the benefits of such a design while glossing over all of its downsides, one of which is higher overhead. Furthermore, as we covered in the article HANA as a Mismatch for S/4HANA and ERP, it is clear that SAP has not mastered the ability to perform both OLTP and OLAP equally well from a single database.

The four books “written” by Hasso Plattner, which are littered with falsehoods and serve more as marketing collateral for HANA than as books in the traditional sense, were meant to storm the consciousness of prospects with how amazing HANA would be. They are among the first books written that describe the invention of something that had already been invented.

Ding Ding Ding!

SAP receives our Golden Pinocchio Award for first purchasing the technologies that underpin HANA, then reverse engineering other databases and calling it innovation. HANA should be considered a case study in innovation fakery. Why is this not publicly known? The partnership agreements that SAP maintains with other vendors have prevented SAP from being called out for its innovation fakery by vendors that know better but are censored by those agreements. The only entity that could cover this story would have to have complete independence from SAP, which also rules out the IT media entities that cover SAP.

Conclusion

HANA is consistent with what is becoming an established history with SAP of exaggerating its innovations and making it appear that ideas and techniques that it took from other places were developed inside of SAP. HANA does not run 100,000 times faster than all competitive offerings (Bill McDermott). It is not an innovative database.

The primary thing that is innovative about HANA is that SAP tells customers that it is innovative. Once you look under the hood, what you have is a far less mature database than other offerings, and a desire by SAP to push competitors out of “its” accounts by using a falsified storyline about how innovative HANA is, aimed at customers that are soft targets. That is, the less database knowledge prospects have, the more SAP can gain traction in those accounts, propagating very large disruptions to its customers while greatly increasing the TCO of the databases they use.

Brightwork Disclosure

Financial Bias Disclosure

This article and no other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.


References

https://seekingalpha.com/article/4141369-saps-sap-ceo-william-mcdermott-q4-2017-results-earnings-call-transcript

https://www.sap.com/documents/2018/02/2e6393af-f47c-0010-82c7-eda71af511fa.html

https://www.zdnet.com/article/sap-acquires-sybase-for-5-8-billion-but-why/

How to Understand What is a Translytical Database

Executive Summary

  • SAP has made a number of bizarre comments regarding translytical databases.
  • Media entities are being paid by SAP to support SAP’s messaging, Forrester being one example.
  • Convergent IS also provides terrible-quality, 100% promotional information on translytical databases.

Introduction to the Translytical Database (???)

SAP has made some amazing claims regarding a “new database category.” You will learn how a powerful software vendor can create a “new database category” when it has enough money to give to a major brand in the research business.

SAP’s Translytical Database Announcement

In the opening comments on the Q4 2017 SAP analyst call, Bill McDermott made the following statement.

“Back in 2010, we set bold ambitions for SAP. We focused on our customers to be a truly global business software market leader. We set out to reinvent the database industry.

Forrester has now defined the new market for translitical data platforms, and of course, they ranked SAP HANA as the clear number one. We led the market with intelligent ERP, built on an in-memory architecture.”

This simply means databases that are good for both transaction processing and analytics. However, it should be noted that there is far less of a need for this in the market than SAP states and than SAP predicted when they first came up with HANA in 2011.

Paid to Comply

But of course, TechTarget, like Forrester, another media entity that SAP pays to get the word out, has also written on translytical databases. TechTarget is not a normal media entity but controls a series of outlets that have no other function than to capture email addresses to feed a giant marketing automation backend, as we covered in the article How ComputerWeekly is a Front for Marketing Automation.

“Transactional data and analytics can now interact in near-real time, opening up a wealth of new possibilities.

The vehicles for this digital business transformation are called translytical data platforms, according to a recent Forrester report, “The Forrester Wave: Translytical Data Platforms, Q4 2017.” The report defines translytical data platforms as emerging technologies that can “deliver faster access to business data to support various workloads and use cases,” which can then enable new business initiatives. These initiatives are driven by the availability of real-time data from transactional systems, like ERP, and analytical systems in the same platform.”

Forrester is Available for Whatever….

Forrester has a history of writing up research on command when paid the right amount of money. For instance, when SAP wanted an entity to find that HANA, which at the time had no go-lives, had a lower TCO than any alternative, Forrester obliged, as we covered in the article How Accurate was Forrester’s Study into HANA’s TCO.

“The report assessed 12 vendors — Aerospike, DataStax, GigaSpaces, IBM, MemSQL, Microsoft, NuoDB, Oracle, Redis Labs, SAP, Splice Machine and VoltDB — that currently have translytical data platforms available, with SAP and Oracle identified as tops in the Leaders category. This was based on assessments of the strengths and weaknesses of the vendors’ current offerings, their overall strategy and their market presence.”

Notice what Forrester says about HANA.

“SAP HANA, which is the core of SAP’s translytical platform, according to the report, “crushes translytical workloads” and supports a variety of use cases, including real-time applications, analytics, translytical apps, systems of insight and advanced analytics.”

Looking Suspicious

Is that the way a supposed research entity should write about test results — that they “crush” workloads? Also, how would Forrester know this? Forrester does not employ technical resources and would not have any idea either way. Secondly, which workload? Transaction or analytic? It makes a difference, as our research into the area indicates that HANA is very poor at processing transactions, and is somewhat poor in long SQL queries, which we cover in the article HANA as a Mismatch for S/4HANA and ERP.

At this point in the TechTarget article, they go out to an “independent source” which is Convergent IS which is an SAP consulting partner. Therefore, nothing that Convergent IS says about SAP can be published without the approval of SAP, as explained in the SAP partnership agreement, as we covered in the article The Control on Display with the SAP Partnership Agreement.

Convergent IS Used to Provide Some Inaccurate and Highly Promotional Quotes

SAP as a translytical data platform opens up new business possibilities, according to Shaun Syvertsen, managing partner at Convergent IS, a firm based in Calgary, Alta., that provides consulting services for SAP systems, including SAP Fiori and S/4HANA. Convergent IS not only provides these SAP-related services but also runs its business on S/4HANA.

“We moved our business onto S/4HANA about two years ago, and what really appealed to me was that you have a database that you could ask a more difficult question to and you get the answer much more quickly,” Syvertsen said. “This effectively opens the door to asking questions that you could not previously ask and having access to that information more timely than previously possible.”

The question to ask is why a tiny consulting company, which had 35 employees at the time of the S/4HANA implementation, chose this application. Did Convergent IS need S/4HANA? Of course not. Did they implement S/4HANA to use themselves as a reference account so they could get S/4HANA business?

Now we are getting warmer.

We called out Convergent IS’s S/4HANA implementation as a fake case study in the article Convergent IS Case Study; it is one of many SAP consulting partner implementations listed as case studies on SAP’s website.

Secondly, Convergent IS is a small consulting company. How complex are the questions that Convergent IS has to ask about its data?

Conclusion

Translytical is a made-up term from Forrester that was most likely prompted by SAP asking them to start up the category, facilitated by SAP paying Forrester money to do so. There is still little evidence of a real need for databases that combine transactions and analytics in one database, and in fact one database cannot perform equally well at both, due to the inherent trade-offs that come with having to design for both. It also means accepting more maintenance overhead, something that both Forrester and SAP are sure never to bring up.

Brightwork Disclosure

Financial Bias Disclosure

This article and no other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.


References

https://searchsap.techtarget.com/feature/Translytical-data-platforms-emerge-with-SAP-HANA-as-a-leader

https://seekingalpha.com/article/4141369-saps-sap-ceo-william-mcdermott-q4-2017-results-earnings-call-transcript


How Accurate are SAP’s Arguments on Code Pushdown and CDSs

Executive Summary

  • SAP has made bizarre proposals regarding code pushdown (aka stored procedures) and Core Data Services.
  • SAP presents Core Data Services as something innovative when it is just applying new terminology to something quite old.
  • As with Oracle, SAP is using CDSs to restrict the options of its customers to better lock them into buying SAP.

Introduction: How Truthful is SAP Being on Code Pushdown?

SAP has made a lot of noise about how important code pushdown (aka stored procedures) in HANA is. However, an issue has begun to arise in how SAP uses the term to describe things that are not actually code pushdown. You will learn what is true regarding code pushdown.

What is Code Pushdown?

The first place to begin of course is “what is code pushdown.”

Code pushdown is an SAP term for putting code that was previously in the application layer into the database layer.

How SAP is Now Using the Term

As stated by a colleague.

“Code that was pushed down to the database becomes even more pushed down by “filter push down”, “limit push down” and “aggregation push down” This is plain old basic “anyDb” SQL wisdom. Reduce the amount of data as early as possible, especially before you join subqueries, that is at the “most inner” or “lowest” level of your SQL. “Filter push down”: filter each result set to be joined BEFORE you join them, do not join large sets and filter afterwards. I learned this in an “anyDb” administration and performance course about 15 years ago, and even then it wasn’t new.”
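The advice in the quote, reduce the data before you join it, is elementary SQL practice rather than an innovation. The sketch below shows the difference on hypothetical tables; most modern optimizers perform this rewrite on their own, which is part of the colleague’s point.

```python
# "Filter pushdown" is just filtering before joining, which careful SQL
# developers (and query optimizers) have done for decades.
# The tables sales_orders and order_lines are hypothetical.

# Naive form: join everything, then filter the already-joined result.
JOIN_THEN_FILTER = """
SELECT *
FROM (SELECT o.order_id, o.region, l.material, l.quantity
      FROM sales_orders o
      JOIN order_lines l ON l.order_id = o.order_id) AS joined
WHERE joined.region = 'EMEA';
"""

# Pushed-down form: filter the order set first, so the join only touches
# the rows that can possibly appear in the result.
FILTER_THEN_JOIN = """
SELECT o.order_id, o.region, l.material, l.quantity
FROM (SELECT order_id, region FROM sales_orders WHERE region = 'EMEA') AS o
JOIN order_lines l ON l.order_id = o.order_id;
"""
```

Both statements return the same rows; the second simply states the reduction explicitly, which is all the "filter push down" terminology describes.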

Multilevel Push-Down?

The first question that should arise when reading this is how is it that code has to be pushed down multiple levels in the database?

If the code is “pushed,” it is removed from the application layer and inserted into a table. Is there a level below a table? What is this quantum physics?

What SAP means is simply the reduction of data, not actually pushing code.

SAP on Core Data Services

SAP has published a document on ABAP Core Data Services, or CDSs, that can be found at this link.

But some of the things written in the document require further elaboration.

For example, it states that Fiori apps are easy to create.

“Based on the OData exposure of CDS described above, it is then rather straightforward to create an SAP Fiori app using the development framework SAP WEB IDE (either locally or within SAP Cloud Platform). As depicted in Figure 4 the SAP Fiori User Interface connects to SAP Gateway using the OData services.”

The idea being that the Fiori app will connect to the CDS view. The CDS is a database view with stored database functions that enable the retrieval of data. The idea is to improve on the limited ABAP dictionary views versus what anyDatabaseViews have offered for decades.

But instead of presenting CDSs as “catching up” with competing offerings, SAP decided to provide a deceptive explanation for the logic of CDSs. This is taken from the document ABAP Core Data Services on AnyDB.

“The CDS framework was introduced to leverage the computational power of HANA DB. Nevertheless, it can also be used with all other databases that support SAP NetWeaver (called anyDB in the following). This guide gives hands-on information on how to implement, run and optimize CDS based applications on anyDB.”

Reinventing the Wheel?

That makes it sound like SAP is bringing something that did not exist before — both for HANA and for AnyDB. The clear statement here is that other DBs can benefit from CDSs — making it appear as if CDSs are an innovation, not that they are bringing HANA to par (or attempting to do so) with AnyDB libraries.

That is one problem. The second is the phrasing that

“CDSs were designed to leverage the computation power of HANA, but nevertheless they can be used for other databases.”

This implies that other databases do not have HANA’s computational power. But let us leave that to the side for now.

The point we are emphasizing is the presentation by SAP of CDSs as if other databases do not already have what CDSs offer. Clearly, it will take years for CDSs to reach parity with anyDatabaseViews.

Cost to Create Fiori Apps

Normally Fiori apps are extremely expensive to create, as was even pointed out in the Forrester study that SAP paid for, where Forrester states that the standard Fiori apps should be used. This was addressed in the article What is Actually in the Fiori Box?

And we have numerous stories of the costs of making custom Fiori apps. Basically, this is not happening on projects to any significant degree. Customers that are sufficiently misinformed to customize Fiori apps normally soon run out of money. But these are custom Fiori apps with some complexity. Some Fiori apps are much easier to create than others, as observed by our colleague.

“A simple Fiori app consisting of a drill-down table (filterable, sortable, groupable) and navigation to a detail form is quite easy to build as long as you do not need anything extra. Fiori provides “SmartTables” and “SmartForms” widgets, their behavior can be controlled by annotations/properties in the metadata of OData services (e.g. filterable, sortable, label). These annotations can be defined at the level of the CDS View already (@Filterable, @Sortable, @Label) and are propagated to the OData service that can be generated from CDS Views by some SAP magic. This works quite well for Fiori Apps with almost no UI logic (especially drill down tables with detail views, no edit) that are built once and never touched again. However if you modify the data model you usually are best of when you rebuild your Fiori App from scratch.”

Laying Out Intelligent Fiori Usage

So if Fiori is laid out in terms of usage it is:

  • Use the standard Fiori apps, although those Fiori apps are quite limited (see the article Strange Changes to the Number of Fiori Apps to find out how SAP has exploded and exaggerated the number of Fiori apps).
  • Potentially use the SmartTable or SmartForm Fiori widgets to develop reporting apps.
  • Do not customize any Fiori apps that have any complexity, such as business logic, data complexity, etc.

The vision being pitched is to use the CDS and then do what you want in Fiori. There is a big question as to whether that will actually happen. If SAP were happy with customers using an efficient non-SAP app development environment, then maybe; but SAP will dissuade that from happening.

Secondly, another observation about the CDSs is from a colleague.

“This way offers you a clean layered application and the chance to write unit tests without database persistence. SAP are throwing away accepted architecture paradigms. CDS are OK to look at and analyze (join, aggregate, partly filter) data. That’s it. CDS is a framework, not new technology.”

This brings up the topic of whether CDSs will ever really catch on, or whether they will end up being just another item that SAP introduces that falls by the wayside.

The Benefit to Stored Procedures / Code Pushdown

CDSs, like stored procedures, are a form of code pushdown. However, overall, we are still lost as to why SPs are a good thing. We keep pointing out that HANA does not address the hardware it runs on very well. So if you are not even addressing the hardware properly, why are we worried about SPs? Furthermore, SPs bring more portability overhead across different DBs.

This gets into the topic of how most of SAP’s proposals, particularly since the introduction of HANA, when SAP took an abrupt turn away from NetWeaver as their primary marketing tentpole and transitioned to HANA, have been focused on making architecture arguments around performance. This is incongruous, as performance was never an SAP selling point. Today the majority of SAP software performs worse, in terms of speed and the usability of that speed, than competing applications.

Previous points of emphasis for SAP were reliability and business process/functionality coverage. But first with NetWeaver (which focused on integration) and then with HANA, SAP essentially changed its historical message and value proposition.

Restricting Stored Procedures for Competitive Reasons

In fact, SAP’s restriction of the SPs of S/4HANA is the primary limitation on being able to port S/4HANA to AnyDB. SAP is allowing the CDSs to be ported, but not the other SPs.

Code that was taken from the application layer in ECC and was pushed into the SPs used to belong to the customer when they purchased the ECC license. But with S/4HANA that code did not transition to the customer.

SAP has proposed that this is all new code, but it could not be. Else, how did ECC do these things before?

The question that no one seems to be asking is why SAP has control over code that was previously in the application layer but was migrated to the DB layer as a stored procedure. This is code that SAP is supposed to deliver to customers who paid for its software. This is code that the customer should be able to use as they wish under the license. That is, they should not be restricted by code that sits in SPs whose execution SAP controls.

The Proposed Logic of the Importance of the Code Pushdown

Now let us switch from one type of code pushdown (SPs) back to another type of code pushdown (CDSs).

Let’s look at SAP’s logic for the CDSs. Here was a recent argument made in favor of CDSs.

“Without CDS (labeled as “Classic Approach” in Figure 1), intensive calculations are done on the application layer avoiding costly computations in the database. This results in rather simple SQL queries between application and database layer. The drawback is however that lots of data need to be transferred back and forth between those two layers. Often, this is very time-consuming.”

Is it? With 2018 hardware? (or 2014 hardware as not all hardware is new).

What does SAP think its applications do? SAP does not compete in high-performance computing applications. These are not scientific applications, massively parallel scientific processing or even Big Data.

If there is a problem with transferring data between the two layers, then the entire application should be put in the database, or most of it. Is that such a good idea? We have seen software that performs much better than SAP on smaller hardware, and it does so with the standard division: application logic in the application layer, with the database just storing data.
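To make the trade-off described in the quote concrete, the sketch below contrasts the two approaches: ship every row to the application and aggregate there, or let the database aggregate and return a single row. SQLite, the table and the row counts are stand-ins; the only point is how much data crosses the layer boundary.

```python
# The two approaches SAP's "classic vs. CDS" argument contrasts, reduced to
# their essence. The table and figures are hypothetical; SQLite is a stand-in.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE line_items (order_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO line_items VALUES (?, ?)",
                 [(i % 1000, 1.0) for i in range(100_000)])

# "Classic" approach: pull every row across the layer boundary, sum in the app.
rows = conn.execute("SELECT amount FROM line_items").fetchall()  # 100,000 rows shipped
total_app = sum(amount for (amount,) in rows)

# "Pushdown" approach: the database aggregates, one row crosses the boundary.
(total_db,) = conn.execute("SELECT SUM(amount) FROM line_items").fetchone()

assert total_app == total_db == 100_000.0
```

Whether shipping row sets of this size is genuinely “very time-consuming” on current hardware is exactly the question raised above.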

Conclusion

SAP needs to police its use of the term pushdown or it will cease to have any meaning.

Performance is not really why SAP has emphasized the importance of HANA. The evidence is how vastly HANA has underperformed its performance hype, as is covered in the article HANA as a Mismatch for S/4HANA and ERP.

What SAP cares about is using the idea of performance to push customers down the rat maze in a way that benefits SAP the most. SAP can use arguments about performance to make customers accept the logic for the lack of S/4HANA portability to different databases, because the SPs help restrict that portability. That is at least how it appears to the layman: if SAP wanted, it could share the S/4HANA SPs with Oracle, IBM, and Microsoft, and all of these companies would pay the cost of porting the SPs to their respective databases.

But SAP still will not release the SPs, because they do not want S/4HANA ported, because they want to use S/4HANA as a wedge to get database sales that they would not otherwise get in open competition.

Brightwork Disclosure

Financial Bias Disclosure

This article and no other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.


References

https://www.sap.com/documents/2018/02/de1db6cd-ee7c-0010-82c7-eda71af511fa.html

**https://experience.sap.com/fiori-design-web/smart-table/

https://www.bluefinsolutions.com/insights/john-appleby/september-2012/building-the-business-case-for-sap-bw-on-hana

The Suspicious Timing of SAP’s Flip Flop on Processing in the Database

Executive Summary

  • Relational databases are oversimplified by the ERD diagram.
  • Complexities in relational databases mean that they are often not fully leveraged by applications.
  • SAP targets low information buyers and provides a steady stream of inaccurate information around databases.

The graphic above is a detailed explanation of SAP’s internal process of how SAP determines what is true, and what technical viewpoints they will hold. 

Introduction: How Amazing Are Databases

This article began as a discussion with some colleagues around the question of how much an application leverages the capabilities of a modern relational database. I observed that, while developing a new application, going through the data modeling process required a discussion of how the database would process the data that we stored, and therefore how that data would be represented. Some of the things that were explained to me regarding what the relational database could do, and what we could take advantage of, seemed like magic.

If you are like me and you spend most of your time in flat files and configuring SAP applications, it’s easy to forget how amazing databases are. Let us begin there.

Working in the SAP space is like being caught in a time capsule. SAP does not leverage the capabilities of modern relational databases, and all of their promotion about HANA does not change this.

Multi-Tenancy: The Foundational SaaS Functionality

For example, one of the amazing features of modern relational databases is multi-tenancy. However, how many SAP applications that were not acquisitions are multitenant? How much experience does SAP have, in its homegrown products, with multitenancy? Not S/4HANA: while marketed to the hilt, S/4HANA in the cloud not only has very few customers, it will only be multi-tenant for “baby customers” (as is covered in the article Is S/4HANA Designed for the Cloud).
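For readers who have not built for it, the shared-schema flavor of multi-tenancy amounts to the sketch below: one database serves many customers, and every row and every query is scoped by a tenant identifier. The tables are hypothetical and SQLite is a stand-in; this is a generic illustration, not a description of any SAP or cloud product.

```python
# A minimal sketch of shared-schema multi-tenancy: every row carries a
# tenant_id, and every query is scoped to one tenant. Real SaaS platforms
# add isolation, quotas, and per-tenant encryption on top of this idea.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE invoices (
                    tenant_id  TEXT NOT NULL,
                    invoice_id INTEGER NOT NULL,
                    amount     REAL,
                    PRIMARY KEY (tenant_id, invoice_id))""")

conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)",
                 [("acme", 1, 100.0), ("acme", 2, 250.0), ("globex", 1, 999.0)])

def invoices_for(tenant_id):
    # every access path is scoped by tenant; one database serves many customers
    return conn.execute(
        "SELECT invoice_id, amount FROM invoices WHERE tenant_id = ?",
        (tenant_id,)).fetchall()

print(invoices_for("acme"))    # [(1, 100.0), (2, 250.0)]
print(invoices_for("globex"))  # [(1, 999.0)]
```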

There is a segment of people who understand databases at the right level and can leverage their capabilities for new application development. That is I am not talking about tuning an existing database or creating structures to support reporting; I am talking about building entirely new applications.

This is a minority of the overall market for database skills. Naturally, most of the database work is for administrators, whose job is primarily to maintain existing databases.

Do Relational Databases Work as an ERD Diagram?

No. A relational database is far more complicated than can be represented by ERD diagrams. Our touchstone for how relational databases work is the ERD diagram, but ERD diagramming does not do a good enough job of expressing what is going on in the DB itself.

ERD is just what we have today. It allows us to organize the tables, the fields and the relationships in the data model. But it only captures a few dimensions of what the relational database does.

Are Modern Relational Databases Well Leveraged by Applications?

It occurred to me that there is some type of drop-off between application sophistication and database sophistication. That is, most of the applications that I have evaluated do not leverage enough of the capabilities of the RDBMS. And here I am referring to an open source RDBMS, not Oracle, which of course has more functionality than what I am developing my application on.

Let me give one example of an application that does an excellent job of leveraging what the database has to offer, so you can see what I mean.

Oh did you want something out of SAP BW? Well, roll up a sleeve. On SAP projects working in SAP BW is a bit like getting your blood drawn. 

SAP came out with the idea of placing BW on HANA to improve performance, but lost in the hoopla around HANA was that I had been using an application for years that was able to blow BW and DP (DP uses the same data workbench as BW) away in performance. I could load up as many attributes as I wanted, and create any hierarchy that I wanted. I was and am still able to perform forecast testing without applying any of the complicated rules about data setup that are required in BW and DP. And my performance on a laptop was far higher than the company could attain with their far more massive server.

Leveraging the Database

That is called leveraging the database. My hardware was tiny. My database was open source. It slew the performance and usability of SAP. The SAP BW that I competed against was on Oracle, so there was no problem with the database. The problem is that SAP BW was unable to properly take advantage of Oracle. This must be well known within Oracle’s database development group: SAP is quite poor at developing applications that leverage Oracle’s database. SAP’s primary product is still ERP, and ERP is not an unusually performance-heavy application. ERP’s most intensive process is running MRP and DRP.

And let us get into that topic a bit as the details are important.

Is MRP Processing a Big Deal?

MRP was first developed to run on systems that were pre-hard disk. That is right; the first MRP systems ran on tape-based systems!

“For those that are old enough, remember the introductory sequence to the 1970s TV show the 6 Million Dollar Man, where there is a brief shot of a 1970s tape storage system in action.”

So MRP is not only a “pre-advanced database” mathematical routine, it is a “pre random access” mathematical routine. And once again, I can run MRP faster on a laptop with a specialized application than huge companies can run MRP in SAP ECC.
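To be concrete about how light this arithmetic is, below is a minimal single-item netting sketch with hypothetical figures; real MRP adds BOM explosion, lead-time offsetting and lot sizing, but the core calculation stays this simple.

```python
# A minimal single-item MRP netting run with hypothetical figures.
# Real MRP adds BOM explosion, lead-time offsetting and lot sizing,
# but the core remains light arithmetic like this.
def mrp_net(on_hand, gross_requirements, scheduled_receipts):
    planned_orders = []
    projected = on_hand
    for period, demand in enumerate(gross_requirements):
        projected += scheduled_receipts[period] - demand
        if projected < 0:
            # plan a receipt in this period to cover the shortfall
            planned_orders.append((period, -projected))
            projected = 0
    return planned_orders

demand   = [20, 30, 10, 40]   # gross requirements per period
receipts = [0, 25, 0, 0]      # already-scheduled receipts per period
print(mrp_net(on_hand=35, gross_requirements=demand, scheduled_receipts=receipts))
# [(3, 40)] -> plan a 40-unit receipt in period 3
```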

So while David versus Goliath was a fable, it does exist in systems. And it exists when the designers of one “team” have an advantage over the designers of the other “team.” 

**DRP is a method similar to MRP, but less processing intensive as only stock transfers are created, and by that stage the production process is complete.

Most of the rest of what ERP systems do is transaction processing (updating a financial account, posting a goods issue, bing bing bing, small database updates, all day long).

The modern ERP system has its origins in banking systems. After the military, banks were the next major area to be computerized. And of course, when it comes to financial transactions, the movement of money into accounts must be 100% reliable. The focus of transaction processing is reliability, not performance.

How the Applications Beat SAP’s BW

This is because the application was so well written and knew how to leverage the database. SAP’s development team, with all of its resources, lacked the development capability of a small vendor with no more than five people working for it. CIO and ComputerWeekly don’t cover that kind of story; it’s bad for their funding. If you want coverage in the major IT media outlets, don’t forget to bring a big wad of cash.

So that is just leveraging MySQL or PostgreSQL.

It occurred to me to wonder whether in the database community there is a discussion and sentiment that goes something like this…

“Can you believe that that is all the application was able to leverage from what we have to offer?”

To this point, one of my colleagues responded…

“Yep. The programmers will only use what he knows. That’s why we make it transparent to the programmers. The DBA will analyze which tables should be in memory and do it.”
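What the colleague describes is, for example, how Oracle Database In-Memory is administered: the DBA marks the tables that should live in the in-memory column store, and the application code does not change. The sketch below assumes the python-oracledb driver and hypothetical table names and connection details; the ALTER TABLE ... INMEMORY clause is the documented Oracle syntax.

```python
# The DBA, not the application programmer, decides which tables live in the
# in-memory column store. Table names and connection details are hypothetical;
# ALTER TABLE ... INMEMORY is the documented Oracle Database In-Memory syntax.
import oracledb  # assumes the python-oracledb driver is installed

conn = oracledb.connect(user="dba", password="...", dsn="dbhost/orclpdb")
cur = conn.cursor()

# Analysis showed SALES is a hot analytic table: keep it in the in-memory store.
cur.execute("ALTER TABLE sales INMEMORY PRIORITY HIGH")

# A rarely read archive table stays on disk only.
cur.execute("ALTER TABLE sales_archive NO INMEMORY")

# Application SQL is unchanged; the optimizer uses the in-memory store transparently.
```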

Being Misinformed by SAP

Concerning SAP, we are going through a period of excellent database miseducation courtesy of SAP. Much of what SAP says about databases is for competitive reasons and is not true.

“Oh, my database compresses.”

“Oh, I can run the database faster.” (“100,000 times faster” – Bill McDermott. “People will work between 10 and 10,000 faster because of HANA” – Steve Lucas.)

Neither claim turns out to be true, but was database performance even a problem for SAP’s customers before they introduced HANA? This is covered in What is the Actual HANA Performance?

This is the problem when a company places profit maximization before communicating what is true. Furthermore, what SAP is telling companies to focus on concerning databases is just plain wrong.

“Simplified data models lead to simplified business processes.” (Covered in Does SAP S/4HANA Actually Have a Simplified Data Model & Faster Financial Reconciliation?)

There is far more exciting stuff in databases to emphasize. For example, multitenant functionality is fascinating. And all manner of decisions, with trade-offs to be made.

HANA’s Multitenancy?

Why doesn’t SAP emphasize HANA’s multitenancy capabilities? Because no software vendor would be stupid enough to develop with HANA.

**Actually, that is a question customers of SAP should ask. If HANA is so great, if HANA’s performance is so amazing, if its TCO is so low (lower than MySQL even?, so low that SAP pays you to use it?) why can’t you find software vendors that use HANA to develop anything? (Hint, to SAP customers, software vendors know more about software development than you do.)

How SAP Customizes its Messaging for Low Information Buyers

Let us face facts: HANA is for low information buyers, which means buyers that don’t work in software and that can be coerced and tricked into buying the database through their allegiance to SAP, for career reasons, etc. That is, those companies that allow themselves to be victimized…excuse me, I mean to write, “take guidance”…by Deloitte or Accenture.

The upshot being, SAP does not emphasize multitenancy, as multitenancy is functionality for software vendors. SAP can’t make money on the functionality, so who cares? One might even expect an article from SAP on why multitenancy is completely overrated.

So Much False Information Being Spread

And the problem with having SAP spread so much false information about databases is that most of the people who repeat what SAP says have no idea if it is true. Many are…

  • Just looking for a good job with health insurance, and they don’t rock the boat. They have families and do not have time to research things for themselves.
  • Many will work in sales, whose only real job requirement regarding information is to relay what SAP says. If they don’t, they are in dereliction of their duty.
  • Deloitte and Accenture are consulting arms of SAP and repeat what SAP says. They don’t care if it is true but have determined it is profit maximizing to do so (see the case study of how the major SAP consulting companies humiliated themselves lending their support for SAP’s ludicrous Run Simple campaign, as we covered in the article Is SAP’s Running Simple Real?).
  • The IT media system employs mostly journalists with no technical background, and no editorial leeway to question what SAP says (SAP is the customer of IT media; the reader is the audience and increasingly contributes nothing financially to the content development process).

Therefore, the entire system is based on the elite opinion which is created by SAP, and then it is unthinkingly propagated throughout that system. These entities assume that what SAP says is true. Therefore, what SAP says forms the basis of what is true for a large number of people and organizations.

Should One Process in the Application or the Database?

During this discussion, another colleague brought up the following point.

“Just an anecdote: at the beginning of this millennium I worked in an PL/SQL Oracle project. The whole business logic in PL/SQL stored procedures. Some UI logic in Oracle Forms. Our leading company SAP ABAP architect told us that letting the database do the job is a bad idea. Let the powerful highly scalable application server do the work for you. Load the database table records into an internal ABAP table, sort, sum, do work, etc. and write the result back to the database. Now SAP has its “Code Pushdown” and sells it like their “innovation” that business logic is (partly – when needed) running in the database layer. Great to know that I was working with 2017 bleeding edge technology 15 years ago.”

I thought this was a great anecdote.

Getting A Real Understanding of How SAP Formulates Technical Opinions

SAP does not make decisions based upon technical reasons. So when they had all applications and no DB, their “technical” opinion was that the application should do the work, and the DB was just a container for data. But then when they develop a DB, and they can use the DB for marketing reasons, and when by using stored procedures they have an excuse not to port to other DBs, then they are in favor of doing the work in the DB!

SAP’s Flip Flop on Advanced Planning Systems

I witnessed this same flip-flop in the late 1990s. At that time I was working for a supply chain planning vendor.

SAP’s position was that all supply chain planning should be done in its ERP system. Then SAP developed APO. And at that exact point, SAP began to extoll the virtues of performing planning outside of ERP.

It’s all about “what is good for us,” and has zero to do with what is true technically!

The present economic theory holds that technical accuracy is a distant second to profit maximization. This means that the technology must follow profit maximization, and to not do so is a great insult to shareholders. And according to the US court system, lying in commercial settings is not lying; it is “puffery,” as explained in the court decision of Marin County versus Deloitte.

Additionally, the promotion of what is true over what is profit maximizing will cause Tinkerbell’s light to go out.

Conclusion

It is difficult to learn something from an entity that completely lacks integrity and is merely intent on pushing you down a path that makes it money.

This is what taking information from SAP is like.

One must listen with extreme skepticism and verify every statement, particularly given SAP’s track record for accuracy, which we have evaluated in the article A Study into SAP’s Accuracy.

Brightwork Disclosure

Financial Bias Disclosure

This article and no other article on the Brightwork website is paid for by a software vendor, including Oracle and SAP. Brightwork does offer competitive intelligence work to vendors as part of its business, but no published research or articles are written with any financial consideration. As part of Brightwork’s commitment to publishing independent, unbiased research, the company’s business model is driven by consulting services; no paid media placements are accepted.


References

https://www.bluefinsolutions.com/insights/john-appleby/september-2012/building-the-business-case-for-sap-bw-on-hana

The Real Story on ERP Book

The Real Story Behind ERP: Separating Fiction From Reality

How This Book is Structured

This book combines a meta-analysis of all of the academic research on the benefits of ERP with on-project experience.

ERP has had a remarkable impact on most companies that implemented it. Unplanned expenses for customization, failed implementations, integration, and applications to meet the business requirements that ERP could not meet have added up to a higher total cost of ownership for ERP than expected, and account control on the part of ERP vendors is now a significant issue affecting IT performance.

Break the Bank for ERP?

Many companies that have broken the bank to implement ERP projects have seen their KPIs go down, but the question is why this is the case. Major consulting companies are some of the largest promoters of ERP systems, but given the massive profits they make on ERP implementations, can they be trusted to provide the real story on ERP? Probably not. This book was written by the Managing Editor of SCM Focus, Shaun Snapp, an author with many years of experience with ERP systems. A supply chain software expert well known for providing authentic information on the topics he covers, he can be trusted to provide all the detail that no consulting firm will.

By reading this book you will:

  • Examine the high failure rates of ERP implementations.
  • Demystify the convincing arguments ERP vendors use to sell ERP.
  • See how ERP vendors take control of client accounts with ERP.
  • Understand why single-instance ERP is not typically feasible.
  • Calculate the total cost of ownership and return on investment for your ERP implementation.
  • Understand the alternatives to ERP.

Chapters

  • Chapter 1: Introduction to ERP Software
  • Chapter 2: The History of ERP
  • Chapter 3: Logical Fallacies and the Logics Used to Sell ERP
  • Chapter 4: The Best Practice Logic for ERP
  • Chapter 5: The Integration Benefits Logic for ERP
  • Chapter 6: Analyzing The Logic Used to Sell ERP
  • Chapter 7: The High TCO and Low ROI of ERP
  • Chapter 8: ERP and the Problem with Institutional Decision Making
  • Chapter 9: How ERP Creates Redundant Systems
  • Chapter 10: How ERP Distracts Companies from Implementing Better Functionality
  • Chapter 11: Alternatives to ERP or Adjusting the Current ERP System
  • Chapter 12: Conclusion