How to Evaluate the Proposed Benefits of SAP HANA

Executive Summary

  • HANA is normally discussed by those who promote HANA.
  • Why it is important to separate generalized statements from specific improvements.

Introduction

As anyone who works in SAP knows, HANA has been the largest SAP marketing point of emphasis for several years now. However, much of the information comes from a one-dimensional perspective, one that is more promotional in nature than focused on enhancing the understanding of how and when HANA is appropriate to be deployed.

This article is intended to help readers apply a framework for asking the right questions about HANA.

What is SAP HANA?

If one could sum up HANA briefly, it is essentially a combination of two things: moving the application and data from hard drives to solid-state drives (SSDs), combined with attempting to move clients that currently use Oracle databases over to Sybase (SAP’s acquisition). SAP uses the standard argument that its database and applications are better integrated and will therefore deliver higher performance. This part of the program is false, as Oracle has been providing effective databases for SAP applications for decades, and it is nothing more than a sales program to increase sales of Sybase. As for the first part of HANA, software vendors like Teradata have been moving to SSDs for some time, without developing an enormous marketing program as SAP has for HANA. We began purchasing only SSD-based computers around four years ago, but we did not put out a press release.

A History of False Assertions

SAP has a long habit of misinforming its customers about technology, and HANA is merely another example of this. It also has a dangerous aspect, described in the following points:

  1. Future Selling: HANA will be used (in fact, is already being used) to provide false hope to many problematic SAP implementations. As in: “If you are concerned right now, don’t worry, because HANA is coming.”
  2. Confusion as to HANA’s Benefits: HANA, which operates at the database and hardware layer, is generalized as improving the actual application, such as its business logic. HANA has nothing to do with business logic, but SAP is implying that it does, and KPMG is not educating its clients as to why this isn’t true.

Item #1: Working Out HANA’s Expense

HANA is a more expensive solution than the alternatives, for several reasons. First, HANA uses more expensive hardware. In fact, much of the performance boost in analytics is due to hardware upgrades, a fact which we cover in the article How Much is Hardware Responsible for HANA’s Performance. Outside of hardware, it is difficult to determine any performance improvement from HANA, and HANA underperforms competing databases that come at a far lower cost. One area of performance issues is covered in the article How to Understand Why HANA is a Mismatch for S/4HANA and ERP.

Secondly, it is more expensive because of the premium that SAP charges for HANA software. This is covered in the article How to Understand the Pricing of S/4HANA and HANA.

Thirdly, it is more expensive because HANA is still being worked out under the covers, and this means more implementation complications than with other SAP alternatives that have had more implementations.

Fourthly, in cases where HANA is implemented for an already live system, the migration is itself a change, and changes cost money. Something I have yet to see elaborated upon in print, although I think it is quite obvious, is that anyone can gain more performance by increasing the input or investment into a domain; the input or investment must therefore be accounted for when making statements regarding improvement. As an example, I can say that a Maserati performs better than a Honda.

However, if we leave the price out of the equation, then it’s not really a fair comparison. Too many articles and conference presentations give little attention to costs and instead are entirely focused on explaining the benefits of HANA. One recent article that I read showed the cost savings that come from HANA from the improved capabilities that a company would obtain, but nowhere in the article were costs discussed. It almost appeared as if HANA along with its implementation were simply free.

Item #2: Different Benefits Per Application

There is no doubt that business intelligence applications benefit greatly from HANA. But outside of this single processing type, HANA underperforms the other alternatives. And SAP is recommending HANA for all applications, whether their primary processing type is analytics or not.

And this means that more implementations have occurred for SAP BI than for other SAP applications. Using HANA for SAP BI is also much easier: the way business intelligence applications work, versus transaction-processing applications like SAP’s ERP system, makes the effort involved in migrating to HANA far smaller. Secondly, the benefits differ by type of application.

For this issue analyzed from multiple dimensions, see our article archive on HANA on BI/BW.

Item #3: Part of HANA’s Benefit is Simply Buying Faster Hardware

Part of the benefit that SAP attributes to HANA is a natural transition to simpler data structures, one that interoperates with improvements and cost reductions in random access memory and solid-state storage. But SAP has massively oversold the changes to actual usage. Even before HANA was introduced, the biggest bottleneck in analytics systems was not processing time, but the complexity of report and analytics creation. HANA does not address this. It reduces the work in creating star schemas, but that was also not the bottleneck in report creation.

SAP decided to make HANA its central marketing pillar by focusing on this change, replacing the previous marketing platform of “NetWeaver.” However, we can learn quite a bit by analyzing the history of technological improvements in information technology. The servers presently being purchased for business intelligence applications may have 500 gigabytes of RAM in addition to multiple terabytes of solid-state storage, along with traditional hard-disk storage. Yet even servers with these specifications are not particularly expensive anymore. With this type of capacity, it makes less sense to place data into cube structures to enhance query performance. Therefore, much of what SAP is doing with HANA is simply taking credit for hardware developments that are available to all software vendors.
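
The tradeoff described above can be sketched with a toy example. This is an illustrative sketch only (the schema and figures are invented, and sqlite3 stands in for any relational database): once the whole dataset fits in memory, an on-the-fly aggregate over a flat fact table returns the same answer that a pre-built cube/summary table would.

```python
import sqlite3

# Illustrative sketch (invented schema): with ample RAM, aggregating a flat
# fact table at query time can replace a pre-built cube/summary table.
conn = sqlite3.connect(":memory:")  # entire database held in memory
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", "A", 100.0), ("EMEA", "B", 50.0),
     ("APAC", "A", 75.0), ("APAC", "A", 25.0)],
)

# Cube-style approach: maintain a pre-aggregated summary table.
conn.execute(
    "CREATE TABLE sales_cube AS "
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region"
)
cube = dict(conn.execute("SELECT region, total FROM sales_cube"))

# Flat-table approach: aggregate on the fly at query time.
flat = dict(conn.execute("SELECT region, SUM(amount) FROM sales GROUP BY region"))

assert cube == flat  # same answer; the cube is an optimization, not a necessity
print(sorted(flat.items()))  # [('APAC', 100.0), ('EMEA', 150.0)]
```

The cube only pays off when scanning the fact table is too slow, which is exactly the constraint that cheap RAM has relaxed.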

Whenever hardware makes a great leap forward, it takes time for the software to unwind the complexity that was built into it in order to accommodate the previous generation of hardware. So companies that no longer need the cube structures they have built will continue to use them – simply because it takes more time to pull the data out of cubes that they have already built and place them into simpler structures. Hardware can change a lot faster than software.

HANA does not Mean Benefits to All Application Elements

As a person who works on both the sales side and the implementation side, I have reservations about how HANA is described as improving every facet of SAP’s applications. For salespeople, it is generally desirable to be able to make such statements, but it sets up implementation problems, because the implementation must then manage highly generalized expectations for improvements in too many areas. While it may be considered visionary to generalize the benefits of HANA, applications are a series of moving parts, each with its own heritage, current state of development, and impact on the function of the application. HANA is an infrastructure element, a good analogy being the suspension of a car. To take this analogy further, most people would intuitively understand that improving the suspension of a car does not improve every other element of the car’s performance. Quite obviously, it does not make the engine more powerful or improve the radio or the visibility from the driver’s seat.

Some other areas may improve; for instance, a better suspension can have a positive effect on the steering and controllability of the car, but the specific improvement to other elements should be explained in terms of cause and effect and not simply assumed. However, statements regarding HANA’s speed improvement are quite frequently generalized to all other areas of the application without any specific explanation as to why this should be the case. In the same way, HANA has no impact on the business logic of SAP or its user interface.

HANA puts infrastructure at the center of the conversation, but the infrastructure cannot continually be the center of the conversation, because it is only one piece of the puzzle. And so far, SAP has presented no compelling evidence that their database is superior to the databases of competitors, which is not all that surprising as SAP is a database novice.

A Specific Example of Falsehoods About HANA for Supply Chain Planning

In addition to providing false information around the technology capabilities of HANA, SAP and SAP consulting companies have been providing false information around business benefits. This example will demonstrate how SAP’s claims can be deconstructed.

Here is the quotation from an SAP sourced blog article:

“Our customers adopted HANA for its ability to deliver significant net new business value.  For example, a customer used HANA as an agile datamart to predict potential out of stock situations during promotional periods by analyzing several millions of rows of point of sale data.  The business benefit of this use case is improved forecast accuracy and more precise replenishment.  For this customer, the POS analysis helped them successfully predict potential out of stock situations thereby, increasing customer satisfaction and revenue.  More importantly they were also able to reduce their replenishment lead time from 5 days to 2 days resulting in lowering inventory levels in their supply chain.  What is it worth to you if you can remove a day’s worth of inventory from your supply chain?  What is the impact on your working capital?  There are numerous examples like the one above across our customer base where HANA was able to unlock significant net new business value.”

This story does not hold up to scrutiny, but let us break it down to specific points:

  • Promotion Planning
  • Promotions Come from Sales/Marketing
  • What Happened to Replenishment Lead Times Again?
  • What Benefits Accrued?

Promotion Planning

The datamart that was created was used for what is referred to as promotional forecasting.

Most companies perform promotional forecasting very poorly, but there is little reason to create a data mart to do this, because promotion forecasting functionality already exists in forecasting applications. I have a soon-to-be-published book on promotions forecasting, and if someone were to propose creating a data mart to perform promotions forecasting, the first question I would ask is “why?” Applications already exist to manage promotions, although it is true that neither ECC nor DP is the right tool for the task.

There is no law that states that a company must use SAP applications where more suitable applications exist. Why was this company creating a data mart to review data that is not useful for promotional forecasting when there are so many other tools available that are designed to do this job?

Secondly, this type of analysis is not very processing intensive. Even if the company wanted to build a data mart, they would not need an analytical database like HANA to do so.

This is a very poor example of how to leverage an analytical database.

Promotions Come from Sales/Marketing

Promotions are known within the company — as they are creating the promotion (unless the intent here was to determine the promotional effect of competitor promotions — which is unlikely). The promotional effect of previous promotions — which can then be used to create uplifts is already in the demand history.

How analyzing POS data helped this company determine stock out likelihoods is extremely difficult to understand.
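
As a hedged illustration of the point that promotional uplift is derivable from the demand history itself, with no POS datamart required, a minimal uplift calculation looks like this (all numbers below are hypothetical):

```python
# Illustrative sketch (hypothetical numbers): the promotional "uplift" used
# in forecasting comes straight from the demand history itself.
baseline_weeks = [100, 104, 98, 102]   # weekly demand with no promotion
promo_weeks = [150, 156]               # weekly demand during past promotions

baseline_avg = sum(baseline_weeks) / len(baseline_weeks)  # 101.0
promo_avg = sum(promo_weeks) / len(promo_weeks)           # 153.0
uplift = promo_avg / baseline_avg                         # ~1.51x during promotions

# Forecast for a future promoted week = baseline forecast * uplift.
forecast = round(baseline_avg * uplift, 1)
print(round(uplift, 2), forecast)
```

Nothing in this calculation is processing-intensive, which is the article's point: an analytical database is not needed to derive uplifts that are already implicit in the history.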

What Happened to Replenishment Lead Times Again?

Lowering inventory levels in the supply chain does not reduce replenishment lead times. The two just don’t have very much to do with one another.

If the supply chain were so filled with inventory that extra trucks had to be kept in the warehouse yard, interfering with getting product through the supply network, or material were stacked in the aisles so that workers could not get through them, then that would be a broader issue of incompetence on the part of supply chain operations. This is an SAP resource guessing on topics which they do not understand.

What About SAP’s Page on HANA?

How much accuracy can be found on SAP’s own web page on HANA?

Let’s take a look.

Quotes from SAP’s Main HANA Page

What is HANA?

Deployable on premise or in the cloud, SAP HANA is an in-memory data platform that lets you accelerate business processes, deliver more business intelligence, and simplify your IT environment. By providing the foundation for all your data needs, SAP HANA removes the burden of maintaining separate legacy systems and siloed data, so you can run live and make better business decisions in the new digital economy.

Almost all HANA instances are on premises. So while SAP often presents the idea of broad deployment choice among its offerings, customers are not choosing the cloud option for SAP’s non-acquired offerings.

It is not at all clear that HANA simplifies the IT environment. The proposal is that all applications can reside on a single HANA instance. However, even if this were desirable and feasible, HANA is too expensive to manage this way. HANA is the most expensive database on the market. (At Brightwork, we maintain the ability to price HANA accurately, unlike SAP, which will price HANA using inaccurate assumptions. Part of this is covered in the article The Secret to Not Talk About HANA Pricing.) But even if HANA were lower priced, it still would not necessarily make the IT environment simpler. In fact, the change-over from the databases currently in use adds complexity to the IT environment. And information from HANA implementations is that HANA has far higher maintenance costs than competing databases.

SAP said this about its ERP system in the 1980s as well, and none of those predictions came true. The concept was that SAP’s ERP system would eliminate the need for all legacy systems, as is covered in the article How SAP Used and Abused the Term Legacy.

Key Benefits of HANA

Reduce Complexity

Simplify IT with ONE platform for trans-analytic applications.  Use SAP HANA to analyze live data to support real-time business, while reducing data redundancy, footprint, hardware, and IT operations.

No, there is no evidence of this occurring. There are no case studies where customers have moved most of their applications to HANA. Secondly, HANA is a single-purpose database; it is designed for analytics. AWS, a PaaS that provides many databases, does not offer a HANA-type database for all applications. Rather, it offers specific databases for specialized applications. Transactions, analytics, and Big Data are not optimally served by the same database type.
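
The reason different workloads favor different database types can be sketched with simplified in-Python layouts. This is an illustrative sketch, not how any particular database implements storage: a "row store" keeps each record together, while a "column store" keeps each attribute together.

```python
# Illustrative sketch: why analytics favors columnar storage while
# transactions favor row storage (simplified in-Python layouts).
rows = [  # row store: each record kept together, cheap to insert/read one order
    {"id": 1, "customer": "a", "amount": 10.0},
    {"id": 2, "customer": "b", "amount": 20.0},
    {"id": 3, "customer": "a", "amount": 30.0},
]
cols = {  # column store: each attribute kept together, cheap to scan one column
    "id": [1, 2, 3],
    "customer": ["a", "b", "a"],
    "amount": [10.0, 20.0, 30.0],
}

# Analytic query (SUM over one attribute): the column store reads one array,
# while the row store must touch every record.
total_columnar = sum(cols["amount"])
total_row = sum(r["amount"] for r in rows)
assert total_columnar == total_row == 60.0

# Transactional insert: one append in the row store, but one append per
# column in the column store (plus compression bookkeeping in a real DB).
rows.append({"id": 4, "customer": "c", "amount": 40.0})
for key, value in zip(cols, [4, "c", 40.0]):
    cols[key].append(value)
```

Neither layout is "better" in the abstract; each is optimized for a different access pattern, which is why specialized databases coexist.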

Run Anywhere

Modernize your data center with flexible SAP HANA deployment options – public or private cloud, tailored data center, or 1000+ certified appliance configurations from 13 leading vendors.

HANA is in almost all instances deployed on-premises. SAP has a lot of deployment options, but those options are not particularly relevant because they are not exercised by customers.

Real Results

Achieve better business outcomes with SAP HANA.  Learn how companies are seeing 575% five-year ROI by using SAP HANA to increase innovation, while decreasing data management costs.

SAP has never provided actual evidence for any ROI claims. HANA is the most expensive database that can be purchased, so having this type of eye-popping ROI on such an expensive database would be peculiar.

Key Capabilities of SAP HANA

SAP HANA transforms database management. It processes transactions and analytics in-memory on a single data copy – to deliver real-time insights from live data. And simplify operations with modern tools and a secure, rock-solid foundation.

Actually, evidence from the field indicates that HANA has performance issues with transactions, which is most of what any ERP database does. SAP stopped performing the transaction-processing benchmark, as is covered in the article What is the Actual Performance of HANA?

SAP HANA transforms data management. Access quality data wherever it best resides using data virtualization, integration or replication.  Manage data across multi-tier storage to achieve best performance and total cost of ownership.

SAP has no studies showing that HANA provides a better TCO. Forrester produced a study, sponsored by SAP, which stated that HANA may reduce TCO, but it was not based on any evidence from actual HANA implementations. It is considered by Brightwork to be an illegitimate study.

SAP HANA transforms analytic intelligence. Use advanced data processing for business, text, spatial, graph, and series data in one system to gain unprecedented insight. And deliver deeper insights using powerful machine learning and predictive analytics capabilities.

This is again unproven by SAP. “Transforms” is a significant word that SAP uses quite frequently, but for which it does not provide evidence. SAP very frequently uses the term digital transformation, which is illogical for most IT implementations, as is covered in the article The Problem with Using the Term Digital Transformation for Modern IT Projects.

Simplify application development with in-memory computing

IT and software development professionals view in-memory computing as a viable option to increase simplicity and real-time performance. Discover how in-memory computing can spur the creation of custom-built applications to meet distinctive needs and enable integration across the enterprise.

SAP seems to be commingling a number of concepts here. These include the following:

  • Simplicity
  • Real-time performance
  • In-memory computing
  • Custom-built applications
  • Integration across the enterprise

However, the only terms that have anything to do with HANA are real-time performance and in-memory computing. The rest are simply added willy-nilly to the sentence. This is a common tactic of Hasso Plattner’s: throw so many terms at the reader that they become overwhelmed.

SAP S/4HANA

Built on SAP HANA, SAP S/4HANA is a next-generation business suite designed to help customers thrive in the digital era. Digitize and simplify your processes – and provide a personalized user experience with the SAP Fiori UX.

SAP’s accuracy on S/4HANA is covered in the following article.

SAP Vora

SAP Vora is an in-memory, massively distributed data processing engine that allows you to analyze Big Data stored in Hadoop. Use it to gain real-time, relevant insights that support faster and more informed decisions. 

  • Process and analyze Big Data in Hadoop
  • Correlate Hadoop and SAP HANA data
  • Organize massive volumes of unstructured data

This will be covered in a future article dedicated to Vora.

SAP BW/4HANA

SAP BW/4HANA is an integrated data warehouse solution optimized to fully leverage the SAP HANA in-memory platform. SAP BW/4HANA dramatically simplifies development, administration and user interface of your data warehouse resulting in enhanced business agility.  

  • Simplify your data warehouse architecture
  • Integrate SAP and non-SAP apps and data into one logical data warehouse
  • Built for on-premise and the cloud 

This will be covered in a future article dedicated to SAP BW/4HANA.

This SAP article has a Brightwork Accuracy Score of 2 out of 10.

Conclusion

In order to get value from HANA, companies must move past the generalities that are often applied to HANA and get to the specifics. The specifics include the price of HANA for each area where it is applied. Not all applications are going to be moved to HANA, certainly not in the next few years. Which applications should be moved, and what the cost-benefit analysis is for doing so, are important questions to ask. For most companies, the first place to implement HANA will be in their analytics.

Secondly, statements that generalize the benefits of HANA to every aspect of the application should be disregarded, and emphasis given to statements that show specifically both how and why HANA will improve particular areas of the application, which should then be estimated in terms of business benefits. Being able to improve query performance by a factor of 1000 sounds great, but it may not be the primary problem a company has, and other shortcomings, such as the configuration of the system or the understanding of the functionality within the application, may be far more pressing concerns.

Technology improvements are much easier to estimate than business improvements, but technological improvements do not necessarily lead to proportional business improvements. As an example, increasing the horsepower of a car from 150 to 300 has little effect on the average speed at which the car is driven, because other constraints, ranging from speed limits to traffic to the physical laws governing how a car can be driven without crashing into something, still exist after the engine has been replaced.

By purchasing HANA and investing more resources, queries can no doubt be performed more quickly. However, there are many things that an implementing company wants to improve, and this is only one area of focus. Investments in HANA must be traded off against other investments that improve other areas of the technology landscape that are also important. This logic applies not only to HANA but to any improvement that a company would make to its systems.

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

This video is filled with extensive falsehoods. We will address them in the sequence they are stated in this video.

SAP Video Accuracy Measurement

Each entry lists SAP's statement, its accuracy rating, and the Brightwork fact check, with the linked analysis article in parentheses.

  • “HANA is a Platform.” – Accuracy: 0%. HANA is not a platform; it is a database. (How to Deflect You Were Wrong About HANA)
  • “HANA runs more ‘in-memory’ than other databases.” – Accuracy: 10%. HANA uses a lot of memory, but the entire database is not loaded into memory. (How to Understand the In-Memory Myth)
  • “S/4HANA simplifies the data model.” – Accuracy: 0%. HANA does not simplify the data model from ECC. There are significant questions as to the benefit of the S/4HANA data model over ECC. (Does HANA Have a Simplified Data Model?)
  • “Databases that are not HANA are legacy.” – Accuracy: 0%. There is zero basis for SAP to call all databases that are not HANA legacy. (SAP Calling All Non-HANA DBs Legacy)
  • “Aggregates should be removed and replaced with real-time recalculation.” – Accuracy: 0%. Aggregates are very valuable, all RDBMSs have them (including HANA), and they should not be removed or minimized in importance. (Is Hasso Plattner Correct on Database Aggregates?)
  • “Reducing the number of tables reduces database complexity.” – Accuracy: 0%. Reducing the number of tables does not necessarily decrease the complexity of a database. The fewer tables in HANA are more complicated than the larger number of tables pre-HANA. (Why Pressure SAP to Port S/4HANA to AnyDB?)
  • “HANA is 100% columnar tables.” – Accuracy: 0%. HANA does not run entirely with columnar tables. HANA has many row-oriented tables, as much as one-third of the database. (Why Pressure SAP to Port S/4HANA to AnyDB?)
  • “S/4HANA eliminates reconciliation.” – Accuracy: 0%. S/4HANA does not eliminate reconciliation or reduce the time to perform reconciliation to any significant degree. (Does HANA Have a Simplified Data Model and Faster Reconciliation?)
  • “HANA outperforms all other databases.” – Accuracy: 0%. Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. (How to Understand the Mismatch Between HANA and S/4HANA and ECC)

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts that provide realistic information around HANA — and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for a database which is exorbitantly priced, exorbitantly expensive to implement and exorbitantly expensive to maintain. When SAP or their consulting firm are asked to explain these discrepancies, we have found that they further lie to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

The major problem with companies that bought HANA is that they made the investment without consulting any entity independent of SAP. SAP does not pay Gartner and Forrester the amounts of money that it does so that these entities can be independent, as we covered in the article How Accurate Was The Forrester HANA TCO Study?


Inaccurate Messaging on HANA as Communicated in SAP Consulting Firm Videos

For those interested in the accuracy level of information communicated by consulting firms on HANA, see our analysis of the following video by IBM. SAP consulting firms are unreliable sources of information about SAP and primarily serve to simply repeat what SAP says, without any concern for accuracy. The lying in this video is brazen and shows that as a matter of normal course, the consulting firms are happy to provide false information around SAP.

SAP Video Accuracy Measurement

Each entry lists the statement made in the video, its accuracy rating, and the Brightwork fact check, with the linked analysis article in parentheses where one exists.

  • “HANA runs more ‘in-memory’ than other databases.” – Accuracy: 10%. HANA uses a lot of memory, but the entire database is not loaded into memory. (How to Understand the In-Memory Myth)
  • “HANA is orders of magnitude faster than other databases.” – Accuracy: 0%. Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. (How to Understand the Mismatch Between HANA and S/4HANA and ECC)
  • “HANA runs faster because it does not use disks like other databases.” – Accuracy: 0%. Other databases also use SSDs in addition to disks. (Why Did SAP Pivot the Explanation of HANA In Memory?)
  • “HANA holds ‘business data’ and ‘UX data’ and ‘mobile data’ and ‘machine learning data’ and ‘IoT data.’” – Accuracy: 0%. HANA is not a unifying database. HANA is only a database that supports a particular application; it is not for supporting data lakes.
  • “SRM and CRM are part of S/4HANA.” – Accuracy: 0%. SRM and CRM are not part of S/4HANA. They are separate and separately sold applications. SAP C/4HANA is not yet ready for sale. (How Accurate Was Bluefin Solutions on C-4HANA?)
  • “NetWeaver is critical as a platform and is related to HANA.” – Accuracy: 0%. NetWeaver is not relevant to this discussion. Secondly, NetWeaver is not an efficient environment from which to develop.
  • “HANA works with Business Objects.” – Accuracy: 10%. It is very rare to even hear about HANA and Business Objects together. There are few Business Objects implementations that use HANA. (SAP Business Objects Rating)
  • “Leonardo is an important application on SAP accounts.” – Accuracy: 0%. Leonardo is dead; therefore its discussion here is both misleading and irrelevant. (Our 2019 Observation: SAP Leonardo is Dead)
  • “IBM Watson is an important application on SAP accounts.” – Accuracy: 0%. Watson is dead; therefore its discussion here is both misleading and irrelevant. (How IBM is Distracting from the Watson Failure to Sell More AI and Machine Learning)
  • “Digital Boardroom is an important application on SAP accounts.” – Accuracy: 0%. SAP Digital Boardroom is another SAP item that has rarely been implemented anywhere.

Financial Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.

References

https://blogs.saphana.com/2019/04/17/sap-hana-native-storage-extension-a-cost-effective-simplified-architecture-for-enhanced-scalability/

https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.04/en-US/c71469e026c94cb59003b20ef3e93f03.html

Is SAP’s Warm Data Tiering for HANA New?

Executive Summary

  • SAP is telling customers that data tiering in HANA is a new innovation in database design.
  • How accurate is the claim that SAP’s data tiering is new?

Introduction

SAP has communicated a new item for SAP HANA. It is an amazing technological breakthrough where apparently the data is processed in the computer’s memory, but then written to the disk for something called “persistence.”

We cover this amazing development in this article.

Our Analysis

Let us begin by reviewing the quotes from SAP.

“SAP HANA native storage extension is a general-purpose, built-in warm data store in SAP HANA that lets you manage less-frequently accessed data without fully loading it into memory. It integrates disk-based or flash-drive based database technology with the SAP HANA in-memory database for an improved price-performance ratio.”

This is amazing.

Translated for the layman, it means that SAP’s super-advanced technology enables companies to use both memory and disks and even flash drives in something called a “server.”

We have a top secret image shown below. These photos of a prototype of the device used by HANA were smuggled out of SAP at great risk to our agents.

Apparently, the server is a piece of hardware that has the memory, the disk drives, and the flash drives inside of a metal box.

But the story gets even better. Apparently, some of the data does not need to be accessed all of the time. This is what SAP calls “warm data.” Let us review SAP’s explanation of this new data classification.

“Between hot and cold is “warm” data — which is less frequently accessed than hot data and has relaxed performance constraints. Warm data need not reside continuously in SAP HANA memory, but is still managed as a unified part of the SAP HANA database — transactionally consistent with hot data, and participating in SAP HANA backup and system replication operations, and is stored in lower cost disk-backed columnar stores within SAP HANA. Warm data is primarily used to store mostly read-only data that need not be accessed frequently.”

While certainly impressive, this is different from SAP’s earlier statements about HANA, where HANA was to be better than all other databases because all of the data was to be loaded into memory. And through shrinking the database size, you could…

“Run an ERP system off of a smartphone.”

In an article published in 2013, John Appleby stated:

“HANA was the only in-memory database with full HA.”

The SAP HANA Native Storage Extension?

SAP calls this SAP HANA Native Storage Extension. However, it is actually nothing new; it is merely a restatement of what all the competing databases have been doing for decades.

It’s called memory optimization.

Let us look at SAP’s graphic where they describe their contraption.

We congratulate SAP on this observation.

What is described above is a standard memory-optimized database: the same design used by other database vendors decades ago and still used to this day. Without being able to load data into memory, it would be impossible to perform a calculation or indeed any type of processing. This can be tested by taking any computer or server, removing the memory modules, and then rebooting it. But according to SAP, this design is perfect for replacing “legacy databases,” as the following quotation attests.
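The design described above is the buffer-cache pattern that memory-optimized databases have used for decades: hot pages stay resident in memory, less-used pages are read from disk on demand and evicted when the cache is full. A toy sketch of the idea (all names and numbers are hypothetical, not any vendor’s actual implementation):

```python
from collections import OrderedDict

# Toy sketch of the decades-old buffer-cache design: hot pages stay in
# memory, "warm" pages are fetched from disk on demand and evicted when
# the cache fills. All names are hypothetical.
class BufferCache:
    def __init__(self, capacity_pages, disk):
        self.capacity = capacity_pages
        self.disk = disk            # simulated disk: page_id -> bytes
        self.cache = OrderedDict()  # in-memory pages, LRU order

    def read(self, page_id):
        if page_id in self.cache:            # hot: already in memory
            self.cache.move_to_end(page_id)
            return self.cache[page_id]
        page = self.disk[page_id]            # warm: fetch from disk
        self.cache[page_id] = page
        if len(self.cache) > self.capacity:  # evict least-recently-used
            self.cache.popitem(last=False)
        return page

disk = {i: f"page-{i}".encode() for i in range(100)}
cache = BufferCache(capacity_pages=10, disk=disk)
for i in range(100):
    cache.read(i)
assert len(cache.cache) == 10  # only 10 of 100 pages resident in memory
```

The point is only that data is paged between disk and memory as needed; nothing about the pattern is new or vendor-specific.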

The Perfect Architecture to Replace Legacy DBMS Technologies

“SAP has a general sizing guidance for HOT vs. WARM data ratio and Buffer Cache size. However, it is ultimately the application performance SLA that drives the decisions on HOT vs. WARM data ratio and Native Storage Extension Buffer Cache size.

For the hardware to implement Native Storage Extension, simply add necessary disk capacity to accommodate WARM data and memory to accommodate Buffer Cache requirements as per the SAP sizing guidance.

The combination of full in-memory HOT data for mission-critical operations complemented by less frequently accessed WARM data is the perfect and simple architecture to replace legacy DBMS technologies. This also eliminates the need for legacy DBMS add-on in-memory buffer accelerators for OLAP read-only operations requiring painful configuration and management.”
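The sizing guidance in the quote above reduces to simple arithmetic: memory must hold the hot data plus the buffer cache, while disk holds the warm data. A sketch with illustrative numbers (the ratios are assumptions for demonstration, not SAP’s official guidance):

```python
# Illustrative hot/warm sizing sketch. The ratios below are assumptions
# for demonstration only, not SAP's official sizing guidance.
total_data_gb = 4000          # total database footprint
hot_fraction = 0.25           # share of data kept fully in memory
buffer_cache_ratio = 0.125    # buffer cache sized relative to warm data

hot_gb = total_data_gb * hot_fraction          # 1000 GB in memory
warm_gb = total_data_gb - hot_gb               # 3000 GB on disk
buffer_cache_gb = warm_gb * buffer_cache_ratio # 375 GB cache for warm pages

memory_needed_gb = hot_gb + buffer_cache_gb  # RAM: hot data + buffer cache
disk_needed_gb = warm_gb                     # disk: warm data

print(memory_needed_gb, disk_needed_gb)  # 1375.0 3000.0
```

Note that this is the same capacity planning any disk-backed database has always required; the in-memory share is simply one more parameter.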

But now the question arises,

“If SAP has such a similar design to other databases, why are those databases legacy?”

To this, we have the answer. But before we get to that, we would like to take this opportunity to introduce our innovation.

Introducing the Brightwork Piston Gas/Diesel System

This diagram describes something we have named a “piston.” It comes in two types: an energy source we call “gas” and a second which uses something called “diesel.” Our proprietary design deploys a four-stroke design, which is as follows: Intake, Compression, Combustion, and Exhaust. (Patent pending, so please respect our IP.)

We are thinking of deploying this technology in some type of propulsion system.

Stay tuned for more details!

Conclusion

SAP’s warm data tiering is nothing new. Furthermore, it contradicts everything SAP said about HANA years ago. SAP is now increasingly backtracking on earlier pronouncements that were designed to make HANA appear different from other databases.

From the beginning of HANA’s introduction as a product, SAP was told that you cannot place everything in memory. SAP told everyone that they did not “understand,” and that with its proprietary technology it had achieved zero latency because of its special ability to manage memory in a way that Oracle, Teradata, and others could not. This meant loading all data into memory with zero swapping. We had to investigate the HANA documentation to find that it was actually memory optimized, and that SAP lied about this.

HANA is not an all-in-memory database, nor has it ever been.

SAP receives a Golden Pinocchio Award for making something that all databases have sound innovative.

Financial Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.


References

https://blogs.saphana.com/2019/04/17/sap-hana-native-storage-extension-a-cost-effective-simplified-architecture-for-enhanced-scalability/

How Accurate Was John Appleby on What In-Memory Database for SAP BW?

https://help.sap.com/viewer/42668af650f84f9384a3337bcd373692/2.0.04/en-US/c71469e026c94cb59003b20ef3e93f03.html

The Risk Estimation Book

 

Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is a risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project’s risk is a poor way to make decisions about either projects or software. But that is the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies commonly used by most companies for mitigating software project risk, learn why these plans don’t work, and then acquire practical and realistic strategies that will help you maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model

Is SAP Correct that Customers Should Use BW4HANA Instead of Other BI Tools for Integration to HANA?

Executive Summary

  • SAP has told customers that they must use BW as a BI solution for HANA.
  • How accurate is SAP on this topic?

Introduction

SAP has been communicating to customers that they should buy BW4HANA for BI under the premise that HANA is not designed to integrate well with third-party solutions.

Our Analysis

  • Using BW means very serious productivity issues long term.
  • BW is not effective where it is implemented.
  • The idea that an RDBMS can’t connect effectively to anything but a specific BI tool is very odd; it raises the question of what an RDBMS even is. Overall, this topic needs to be analyzed. This is not to say integration is not more difficult, but adapters can be written for any database.

This is pointed out in the following quote.

“HANA is widely SQL92 compliant, CDS views are an extension of SQL92. It may be an issue that third party BI vendors do not yet invest in HANA extensions and optimizations. Anyway, there is no CDS view that can only exist in S/4 or BW/4HANA. In the end it are native database artifacts that can be created via DDL statement.” – Dr. Rolf Paulsen
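The point about SQL compliance is that any BI tool speaking standard SQL can query such a database through a generic driver. A minimal sketch using Python’s DB-API, with SQLite standing in for any SQL92-compliant database (the table and view names are hypothetical; a real BI tool would swap in the vendor’s own driver):

```python
import sqlite3

# Sketch: a third-party BI tool needs only a standard SQL driver, not a
# vendor-specific BI stack. SQLite stands in for any SQL92-compliant
# database here.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("EMEA", 120.0), ("APJ", 80.0), ("EMEA", 40.0)])

# A plain SQL view plays the role the quote assigns to CDS views: a
# native database artifact created via a DDL statement.
conn.execute("CREATE VIEW sales_by_region AS "
             "SELECT region, SUM(amount) AS total FROM sales GROUP BY region")

rows = conn.execute("SELECT region, total FROM sales_by_region "
                    "ORDER BY region").fetchall()
print(rows)  # [('APJ', 80.0), ('EMEA', 160.0)]
```

Any tool that can issue the final SELECT can consume the view; nothing in the pattern requires a particular BI product.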

Could SAP DP or BW Survive as Independent Products from SAP?

From a fair competition perspective, the basic concept of an efficient software market is that a company makes a product which it then sells to other companies that find the product of value. The enterprise software market is different from most other markets, including the consumer software market in that the capabilities and performance of the product are not immediately apparent at the time of purchase.

In fact, companies that buy enterprise software typically don’t fully understand what they have until long after the time of purchase, past the design phase of the implementation and close to, and sometimes after, go-live. The enterprise software market is marked by a great deal of lock-in. That is, once a purchase decision has been made, the company will typically not move away from the system for five or six years. The actual price paid for the license and support is a small component of the overall cost of implementing and supporting enterprise software (called total cost of ownership), which is enumerated in this post.

What The Successful Sales of SAP DP and SAP BW or SAP BI Prove in the Larger Context

Both SAP DP and SAP BW or SAP BI cause many issues for the companies that buy them. These products exist and thrive because they are tied to other products within a software conglomerate called SAP. These are not the only SAP products that have these issues: SAP Solution Manager, XI, SPP; the list is quite lengthy.

Both SAP DP and SAP BW or SAP BI use the same Data Workbench. However, the efficiency of these twin products is extremely poor. Both SAP DP and SAP BW or SAP BI have tremendous overhead, and when one looks at what is done in each application, the same activities can be performed in other applications much more easily.

  • SAP DP is covered at Brightwork at this link.
  • SAP BW or SAP BI is covered at Brightwork at this link. 

Both of these applications are rated as worst in class, yet they are the top applications recommended by the major IT consulting companies. Does this have anything to do with SAP DP and SAP BW or SAP BI taking the longest to implement, and therefore driving the most consulting dollars to the largest consulting companies? TCO calculators for SAP DP and SAP BW or SAP BI are also available at the Brightwork site, and once again, SAP DP and SAP BW or SAP BI have the highest TCO in each of their software categories.

Integration Based Competition in the Consumer Software Market

This is very similar to Microsoft gaining market share for its Internet Explorer (IE) browser, which was not a competitive product. IE at one time had over 90% market share because Microsoft had a monopoly in operating systems, bundled the browser with the operating system, and made it impossible to uninstall. For the longest time, Microsoft’s IE held a very high share of the browser market while Microsoft argued to the Department of Justice that it was not engaging in monopoly behavior. However, one of the greatest proofs is the success of a product once restrictions are removed or reduced. People have come to know there are other and better browsers and have responded by moving away from IE.

I would be curious to hear what Microsoft’s argument would be now that it never behaved like a monopoly software vendor. Microsoft has repeatedly used its monopoly in a bad operating system to bring out other software, such as SharePoint and SQL Server, which would never have obtained the success it did without being connected to Microsoft.

Conclusion

SAP’s arguments are based upon integration and the logic that customers must use what they provide. That says nothing about what is actually effective or good.

Financial Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.



How to Understand AWS’s Multibase Versus SAP’s Singbase Approach

Executive Summary

  • SAP has been proposing that all companies should use a single database type and that they should buy HANA.
  • AWS’s CTO explains the benefits of the multi-base approach.

Introduction to Multibase

What is often left out of analyses of database advice from commercial software vendors is how biased and self-centered it is. Commercial database vendors don’t provide any information to a customer that is not in some way designed to get the customer to invest more deeply in the vendor’s commercial products. As bad as Oracle’s “advice” to companies has been, Oracle at least has respected, although highly self-centered, knowledge of databases. SAP’s rather insane advice to its customers has been far worse, and far more self-centered.

For years, SAP has been telling customers that they need to perform multiple types of database processing from a single database. This is wholly false, but that has not stopped either SAP or their partner network from saying it’s true. We have covered in detail how SAP’s proposals about HANA have been proven incorrect in articles ranging from What is HANA’s Actual Performance? and A Study into HANA’s TCO to How Accurate Was Bloor on Oracle In-Memory.

In this article, we will expand into a topic that shows how wrong SAP is. The perspective we will address is brought forward not by SAP, Oracle, IBM, or Microsoft, but by the entity providing thought leadership on the future of how databases are used: AWS.

Werner Vogels on Multiple Database Types

Werner Vogels, the CTO of AWS, addressed this in an excellent article. Let us begin with how he starts the article.

“A common question that I get is why do we offer so many database products? The answer for me is simple: Developers want their applications to be well architected and scale effectively. To do this, they need to be able to use multiple databases and data models within the same application.”

Notice the last part of this paragraph, where Werner describes using “multiple databases and data models within the same application.” Wait, what was that? We all know that applications have a single database, right? How does a single application use multiple databases? What is Werner talking about?

Well, it turns out Werner is describing software development that is different from the monolithic model. Werner goes on to say:

“developers are now building highly distributed applications using a multitude of purpose-built databases.”

That is, the monolithic application we are used to is one way of developing, but it is giving way to distributed applications that can access multiple databases. It is an unusual way of thinking about applications for those of us who came up under the monolithic model.

The Limitations of the Relational Database

Werner goes on to describe the limitations of the relational database.

“For decades because the only database choice was a relational database, no matter the shape or function of the data in the application, the data was modeled as relational. Is a relational database purpose-built for a denormalized schema and to enforce referential integrity in the database? Absolutely, but the key point here is that not all application data models or use cases match the relational model.”

This we have seen in the rapid growth of databases like MongoDB and Hadoop, which specialize in either unstructured data or data with lower levels of normalization. Werner describes how Amazon ran into the limitations of using the relational database.

“We found that about 70 percent of our operations were key-value lookups, where only a primary key was used, and a single row would be returned. With no need for referential integrity and transactions, we realized these access patterns could be better served by a different type of database (emphasis added). This ultimately led to DynamoDB, a nonrelational database service built to scale out beyond the limits of relational databases.”

Let us remember, AWS has a very fast-growing relational database service in RDS. However, it also has fast-growing non-relational databases like DynamoDB.
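The access pattern in the quote, a primary key in and a single item out, needs none of a relational engine’s machinery. A minimal sketch of a key-value lookup (store and key names are hypothetical):

```python
# Sketch of the key-value access pattern Amazon describes: a primary-key
# lookup returning a single item. A plain hash map serves it; no joins,
# no referential integrity, no SQL parser required. Names hypothetical.
orders = {}  # key-value store stand-in (DynamoDB-style table)

def put_item(key, item):
    orders[key] = item

def get_item(key):
    return orders.get(key)  # single key in, single item out

put_item("order-1001", {"customer": "c-42", "total": 59.90})
put_item("order-1002", {"customer": "c-7", "total": 12.50})

assert get_item("order-1001")["customer"] == "c-42"
assert get_item("order-9999") is None  # missing key, no error
```

When roughly 70 percent of operations look like this, a purpose-built key-value store can serve them with far less overhead than a relational engine.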

The Different Database Types According to Werner

Below we have provided Werner’s synopsis of the different database types, their intended usage, and the AWS database that reflects each.

  • Relational: Web and Mobile Applications, Enterprise Applications, Online Gaming (e.g., MySQL)
  • Key Value: Gaming, Ad Tech, IoT (DynamoDB)
  • Document: When data is to be presented as a JSON document (DynamoDB)
  • Graph: For applications that work with highly connected datasets (Amazon Neptune)
  • In Memory: Financial Services, Ecommerce, Web, Mobile Applications (ElastiCache)
  • Search: Real-time visualizations and analytics generated by indexing, aggregating, and searching semi-structured logs and metrics (Amazon Elasticsearch Service)

And actually, it is a bit more complex than even Werner lets on. This is because some databases that AWS releases, or releases access to, end up being used differently than first intended. This is described in a comment on Werner’s article.

“It turns out that your products are so good that people do end up using them for a different purpose. Take Amazon Redshift. I remember when Amazon Redshift was launched, a question came from the audience if you can use Redshift as an OLTP database, even though it’s OLAP. Turns out using Redshift in an OLTP scenario is one of the major use cases, to build analytical applications. We are one of those use cases, we’ve built an analytical app on top of Redshift. The OLTP use case stretches Redshift once you start putting a serious number of users on it. Even with the best WLM configuration.

To solve for that, we’ve used a combination of Amazon RDS, Amazon Redshift and dblink plus Lambda and Elasticsearch. Detailed write-up on how we did it here:”

The Multi-Application Nature of Solutions Distributed by AWS

The multi-application nature of solutions is explained as follows by Werner.

“Though to a customer, the Expedia website looks like a single application, behind the scenes Expedia.com is composed of many components, each with a specific function. By breaking an application such as Expedia.com into multiple components that have specific jobs (such as microservices, containers, and AWS Lambda functions), developers can be more productive by increasing scale and performance, reducing operations, increasing deployment agility, and enabling different components to evolve independently. When building applications, developers can pair each use case with the database that best suits the need.”

But what are packaged solutions offering? Monolithic applications that are the exact opposite of this. And as SAP is a perfect example of a monolithic application provider, SAP wants customers to use a single database, and further, it wants customers to use “their” single database, HANA, which according to SAP can handle all the processing of all the different database types described by Werner above. The one problem being: HANA can’t.
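The contrast can be sketched as a single application whose components each talk to a purpose-built store. Here every backend is simulated with an in-memory stand-in (all names are hypothetical):

```python
# Sketch of a "multibase" application: each component uses the store
# suited to its access pattern. All backends are in-memory stand-ins.
profile_store = {}   # durable key-value store (DynamoDB-style)
session_cache = {}   # fast ephemeral cache (ElastiCache-style)
search_index = {}    # inverted index (Elasticsearch-style)

def save_profile(user_id, profile):
    profile_store[user_id] = profile     # durable key-value write
    for word in profile["bio"].split():  # index bio words for search
        search_index.setdefault(word.lower(), set()).add(user_id)

def login(user_id, token):
    session_cache[token] = user_id       # low-latency session lookup

def search_users(word):
    return search_index.get(word.lower(), set())

save_profile("u1", {"name": "Ada", "bio": "Database engineer"})
save_profile("u2", {"name": "Lin", "bio": "Search engineer"})
login("u1", "tok-abc")

assert session_cache["tok-abc"] == "u1"
assert search_users("engineer") == {"u1", "u2"}
assert search_users("database") == {"u1"}
```

To the user this looks like one application; behind the scenes, three different stores each do the job they are built for, which is exactly the Expedia pattern Werner describes.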

The AWS Customers Using Multibase Offerings

  • Airbnb: DynamoDB, ElastiCache, MySQL
  • Capital One: RDS, Redshift, DynamoDB
  • Expedia: Aurora, Redshift, ElastiCache, Aurora MySQL
  • Zynga: DynamoDB, ElastiCache, Aurora
  • Johnson and Johnson: RDS, DynamoDB, Redshift

Werner goes on to say:

“purpose-built databases for key-value, document, graph, in-memory, and search use cases can help you optimize for functionality, performance, and scale and—more importantly—your customers’ experience. Build on.”

The Problem with SAP and Oracle Cloud and Leveraging the Multibase Approach

SAP and Oracle have been touting their clouds. However, with SAP and Oracle, the cloud is only a pathway to SAP’s and Oracle’s products. This is just as true of databases. SAP and Oracle are closed systems. They dabble in connecting to non-SAP and non-Oracle products, but only to co-opt an area so they can access markets. AWS and Google Cloud are quite different. Notice the variety of databases available at Google Cloud.

There are over 94 databases available at Google Cloud, and far more at AWS. These databases can be brought up and tested very quickly; selecting one of them brings up the configuration screen. Furthermore, the number of databases and database types is only increasing with AWS and Google Cloud.

Right after one database is launched, one can bring up a different database type (say NoSQL, or graph) and immediately begin testing. Under the on-premises model, this would not be possible. Instead of testing, one would go through a sales process, a commitment would be made, and the customer would be stuck with (and feel the need to defend) whatever purchase had been made. We have entered a period of multibase capabilities, and AWS and Google Cloud are the leaders in offering these options. This will transform, or is transforming, how databases are utilized. And the more open source databases are accessed, the worse commercial databases look by contrast.

Conclusion

Packaged solutions ruled the day for decades. After the 1980s, custom-coded solutions were for losers. They were to be replaced by “fantastic ERP” systems that would make your dreams come true. And who agreed to this? Vendors and consulting companies with packaged software and packaged services to sell. Consulting companies became partners with packaged software companies, parroting everything they said, without evidence. Even to the point where almost no one in IT is aware that packaged ERP systems have a negative ROI, as we cover in the book The Real Story on ERP. ERP proved to be a false god, delivering a negative ROI for customers (but a positive ROI for vendors and consulting firms) while saddling companies with systems that put the ERP vendors in the driver’s seat of account control, able to extract more and more out of their “customers.”

Now, as I read about distributed applications accessing multiple databases, are we entering a period where the pendulum swings back to custom coding? Under the SAP or Oracle paradigm, you accepted the databases that were “approved” by SAP and Oracle. All competition was driven out of the process. Oracle applications worked with the Oracle database. SAP finally decided to introduce HANA to push the Oracle DB out of its accounts. SAP now thinks that all SAP applications should sit on a SAP HANA database.

Werner is describing a combination of components that are selected and stitched together. Most of these databases are open source, and one can choose from a wide variety offered by AWS. This is inherently contradictory to packaged applications, because a packaged application uses one DB and works in a particular and defined way.

While this is little discussed, AWS and GCP can be viewed as opposed to packaged applications. Sure, leveraging AWS/GCP will start with the migration of packaged applications, but once companies get a taste of freedom, it will begin breaking down the rules enforced by the packaged software vendors. And who will tell this story? Will it be Gartner? No. Gartner receives one-third of its multi-billion-dollar annual revenues from packaged software vendors, and it is doubtful that AWS or GCP will pay Gartner to sing their praises the way the packaged software vendors have. Gartner presents SAP Cloud, Oracle Cloud, AWS, and GCP as if they offer basically the same thing, with AWS simply “ahead” of SAP Cloud and Oracle Cloud. Gartner has no interest in educating its customers as to the reality of AWS and Google Cloud, as doing so cuts against its own corrupt revenue model.

Financial Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.


References

https://www.allthingsdistributed.com/2018/06/purpose-built-databases-in-aws.html

AWS and Google Cloud Book

How to Leverage AWS and Google Cloud for SAP and Oracle Environments

Interested in how to use AWS and Google Cloud for on-premises environments, and why this is one of the primary ways to obtain more value from SAP and Oracle? See the link for an explanation of the book. This is a book that provides an overview that no one interested in the cloud for SAP and Oracle should go without reading.

How True is SAP’s Motion to Dismiss Teradata’s Complaint?

Executive Summary

  • SAP filed a motion to dismiss Teradata’s complaint.
  • How accurate are the statements proposed in the motion to dismiss?

Introduction to the SAP vs Teradata Lawsuit

Teradata filed a complaint against SAP in June of 2018 asserting many things that Brightwork Research & Analysis has been saying for several years (although our research does not agree with all of Teradata’s allegations).

Naturally, SAP said it did nothing wrong and filed a motion to have the complaint dismissed. The reporting of the contents of the motion is from The Register; we looked for, but were not able to find, the actual motion to dismiss. We evaluate SAP’s statements against our own research.

Our Disclosure

We do not have any financial or non-financial relationship with either SAP or Teradata.

Now let us get to the quotes from the motion.

SAP Changing the Topic on Teradata’s Complaint

“It (the complaint) also made antitrust allegations claiming SAP had attempted to edge Teradata out of the market by locking customers into its tech, noting the German giant’s ERP suite S/4HANA can only run on HANA.

However, SAP slammed these claims in a motion to have the case dismissed for once and for all, which was filed with the District Court of Northern California at the end of last month.

It argued the joint venture, known as the Bridge Project, started because Teradata “had a limited customer base” and wanted to appeal to SAP’s users – but SAP painted Teradata’s push as wildly unsuccessful, saying that just one customer signed up.”

This is not related to at least the heart of Teradata’s complaint. When SAP partners with a vendor, it is never (from SAP’s perspective) to improve that vendor’s ability to sell into SAP’s customers. SAP uses partnerships to neuter competitors and to copy intellectual property, and it normally works against the competing vendor’s interests. As we covered in the article How SAP’s Partnership Agreement Blocks Vendors from Fighting Indirect Access, partnerships with SAP have helped keep competing vendors from publicly complaining about indirect access.

This is of course not to deny that Teradata wanted to appeal to SAP’s users/customers. It certainly did. That is always the motivation for vendors to engage in partnerships with SAP. However, Teradata had been doing this for decades, that is, before SAP introduced HANA and began deliberately blocking out other database vendors. Teradata’s complaint is that SAP effectively blocked it out of accounts that they shared, using HANA and the restrictions around HANA to do so. We covered this topic in the article The HANA Police.

Therefore, in this argument, SAP is attempting misdirection.

HANA is Innovative?

“SAP had been working on its own database product for years before that deal, it said, and branded “the assertion that HANA is the result of anything but SAP’s technological innovation, investment, and development is factually groundless”.

Teradata was only bringing the lawsuit because it has “fallen behind” the competition, SAP claimed.”

SAP did not “work on its own database product for years before the deal.” SAP had several databases for years, and it also acquired Sybase, but those are not related to this topic. What ended up becoming HANA were two small acquisitions made roughly a year before HANA was released. We covered in the article Did Hasso Plattner and His Ph.D. Students Invent HANA? that while SAP fabricated a story about Hasso Plattner and his students creating HANA from scratch, the supporting technologies for HANA were purchased with the intent of making them into HANA. SAP’s big addition to the design was to remove aggregates and indexes. Neither Hasso Plattner, nor Vishal Sikka, nor Hasso Plattner’s Ph.D. students ever contributed anything that could be called intellectual property to the exercise.

How do we know?

We analyzed what was claimed by Hasso Plattner in his books and in the SAP marketing/sales material where these contentions were made.

Hasso Plattner’s books aren’t so much books as marketing pitches. Riddled with exaggerations and inaccuracies, part of what they do is create a narrative in which Hasso and SAP produced some superlative innovation in column-oriented databases. None of Hasso’s claims regarding innovation hold up to scrutiny. All of Hasso’s books (four in total) have one purpose: not to inform, but to sell HANA.

There is little doubt that Teradata had superior database knowledge and that SAP did seek to learn from Teradata and to use the partnership to do so. Furthermore, SAP has a history of doing exactly that with other software vendors. SAP’s xApp program was really an extensive competitive intelligence gathering operation designed to extract IP from vendors so that it could be placed into SAP’s products. We covered the xApp program in 2010 in the article It’s Time for the xApp Program to End.

HANA’s design is highly problematic, and HANA cannot meet SAP’s statements about it, except in analytics, where it is only better than older versions of competitive databases, and only when using far larger hardware footprints, as covered in How Much of HANA’s Performance is Hardware? SAP’s statements about HANA’s superiority are false.

The Mystery of HANA’s Lack of Use Outside of SAP Accounts

HANA is not purchased outside of SAP accounts; it is only purchased in accounts controlled by SAP, where the IT customers failed to research SAP’s claims. If the outlandish claims around HANA were true, why aren’t non-SAP customers using it? No other database fits this profile.

  • Oracle sells databases to everyone, not just to customers that buy Oracle’s applications and where it has account control.
  • SQL Server is found everywhere, not only on accounts where Microsoft sells their ERP system.

Bill McDermott stated that HANA works “100,000 times faster than any competing technology.” If that is true, why do only SAP customers buy it?

Teradata’s IP Puffery

Through the original complaint, Teradata overstates its intellectual property, implying that it has some secret sauce no one else has. Designs similar to HANA are all over the place: AWS has Redshift, which is similar in design to HANA, and both Google Cloud and AWS offer Redis, which is also similar to HANA (although along a different dimension). Reading Teradata’s complaint is symptomatic of commercial software companies perpetually overstating how unique their software is. The motivation is clear: declarations of uniqueness and innovation are known to correlate positively with commercial software sales. Teradata’s complaint also exclaims how employees are made to sign NDAs so that Teradata’s technology secrets are not distributed outside of Teradata, but neglects to mention how much Teradata benefits from what those same employees add to Teradata’s IP. Apparently, by inference, all of Teradata’s IP was created by executives, and not employees. And where did Teradata originally develop its database from? That’s right: from database concepts that were in the public domain.

As with pharmaceutical companies that commercialize research performed by universities and funded by taxpayers through the National Institutes of Health, as soon as a software vendor wants to sell software, the public domain very conveniently recedes into the background, and the narrative of “their IP” is wheeled out front and center.

Big Money Equals More IP Protection?

As readers can tell, we find the IP theft argument made by Teradata to be the least persuasive part of their complaint. Other vendors have far greater claims regarding SAP stealing their IP than does Teradata. But Teradata is a rich software vendor and has the money to bring a case like this. Therefore, their IP concerns are considered relevant, whereas a smaller software vendor’s IP concerns are considered less relevant (perhaps irrelevant?).

Teradata Has Fallen Behind… SAP’s Marketing Department?

SAP states that Teradata has “fallen behind” SAP. However, in technical circles, SAP is still not a respected database vendor. Teradata, although known to charge far too much for what it offers and to overpromise, is technically respected. The only place Teradata has fallen behind SAP in databases is in marketing.

Teradata Cannot Compete with S/4HANA?

SAP goes on to make an assertion so absurd that SAP must believe the judge will make zero effort to fact-check the statement.

“Teradata has not been able to compete effectively with S/4HANA because it only focuses on its flagship analytical database and has failed to offer innovative and relevant compelling products,” the filing stated.

Teradata does not compete with S/4HANA. They compete with HANA.

The reason Teradata has not been able to compete with S/4HANA in SAP customers is that SAP made it a requirement that HANA copy data only to a second instance of HANA. This made Teradata uncompetitive, as it would massively increase the cost (HANA is an exorbitant database in its TCO, which we estimated in the Brightwork Study into SAP HANA’s TCO). This is not merely a Teradata issue; SAP is using these rules against all of its database competitors and against SAP customers. Reports of these abuses come to us from different places around the world.

Therefore, SAP’s statement about failing to offer innovative and “relevant competing products” rings hollow. This is particularly true since HANA is not an innovative product, as we covered in Did SAP Simply Reinvent the Wheel with HANA?

SAP reverse engineered other databases and combined the results with its acquisitions of database components. To hide this reverse engineering, and to seem innovative, SAP renamed items that already had generally accepted names. For instance, what SAP calls “code pushdown” is simply the same old stored procedure, as we covered in How Accurate are SAP’s Arguments on Code Pushdown and CDSs.

Teradata Must Develop an ERP System to Compete?

SAP’s sentence about Teradata focusing only on its “flagship analytical database” contains an important assumption that the judge should flesh out during the case. The assumption made clear by this statement is that Teradata cannot compete with SAP by offering only analytical/database products; it needs to develop its own ERP system.

This fits within a construct that SAP finds appealing: that the ERP vendor should control the entire account. It is an inherently anti-competitive assumption. What is most curious is that SAP does not even appear to realize how this exposes the monopolistic nature of its thinking. That is not supposed to be the premise of ERP systems. ERP vendors are entitled to offer the customer more products, but selling a customer an ERP system does not entitle the vendor to all of that company’s IT spend.

SAP Lacks Power in Its Own Customers?

One has to stand in awe of SAP’s next proposal to the judge. SAP would like the judge to think that SAP lacks influence in… SAP accounts.

“SAP said Teradata’s allegations that it was monopolising the enterprise data analytics and warehousing market also fell flat, arguing it had failed to even identify SAP’s power in that market.

“The [complaint] alleges nothing more than that Teradata now has to compete in its favored marketplace,” SAP said.”

Here SAP’s attorneys try another sleight of hand. Rather than addressing anything true, SAP’s (what must be very highly paid) attorneys prefer to change the subject to see if the judge will notice.

Can judges be hypnotized? If so, SAP has a chance with this argument. 

Teradata’s allegation is that SAP is blocking it out of SAP accounts, and that this is anti-competitive. SAP has created false technical proposals, including the incredibly bizarre restriction that HANA data must be copied only to HANA and not to Teradata. I have discussed these limitations with people with decades of database experience, and none of us can make any sense of the restrictions. They are unprecedented and designed merely to capture market share. Those are real impediments to Teradata, and they are meant to be.

Furthermore, these restrictions are costing SAP customers in a major way. SAP wants customers to upgrade to S/4HANA, which comes with HANA, and as soon as they do, they find themselves subject to all manner of restrictions that did not exist with the previous database they were using (Oracle, DB2, SQL Server). SAP plans to use these restrictions to push out from ERP, making HANA mandatory and “making the customer’s choice for them.”

Teradata need not identify SAP’s “power in the analytics market,” as SAP has enormous and undisputed power over its clients. Anyone who has worked in SAP consulting knows this. Those clients were previously happy to use Teradata and SAP side by side, and did so for many years. But SAP, through these restrictions, made it difficult for Teradata to continue doing business in SAP accounts. In fact, according to the Teradata complaint, many of the customers Teradata shared with SAP gave Teradata ultimatums: either the previous levels of interoperability with SAP would be restored, or the customers would leave. This is quite believable, as SAP greatly reduced the value of Teradata in SAP accounts by making the integration to Teradata so much more expensive.

The entirety of SAP’s restrictive policies is designed to injure competitors and to absorb more income from customers. SAP is in a particularly weak position here now that all of its claims regarding HANA’s superiority have been pierced, as we covered in Articles that Exaggerate HANA’s Benefits and How to Deflect That You Were Wrong About HANA.

S/4HANA and HANA are the Same Product?

“Regarding antitrust claims, SAP said Teradata “does not plausibly allege that SAP coerces its customers into purchasing HANA”. It added assertions that S/4HANA unlawfully ties HANA to ERP software are misguided, as they aren’t separate products.

Rather, it is one integrated product sold to customers as so, compared to separate ERP and database wares.”

SAP’s attorneys should have checked this with the technical resources at SAP, because these two paragraphs are unsupportable and make it plain that the attorneys mean to trick the judge.

First, S/4HANA is unlawfully tied to HANA because…

  • a) There is no technical reason to restrict S/4HANA to HANA. The evidence is that HANA underperforms the competing database alternatives, as we covered in What is the Actual Performance of HANA? And…
  • b) Products that are tied together in order to block out competitors are illegal under the tying-arrangement clause of US antitrust law. This is the exact clause of antitrust law the DOJ used to win a judgment against Microsoft back in the 1990s.

Something else will be difficult for SAP to explain: how are an application like S/4HANA and a database like HANA a single integrated product? Can SAP name another ERP system that is “integrated as a product with its database”? Here is another question: if S/4HANA and HANA are the same product, why are they priced separately and listed as different products in the SAP price list? A third question: is HANA now integrated with BW also? As BW can be deployed on HANA, they must also be a single fused product!

Teradata’s Real Complaint is SAP Would Not Integrate with Teradata’s DB?

“Teradata’s real complaint is that SAP chose to offer this integrated system with HANA, rather than integrating with Teradata’s database; the antitrust laws, however, are designed to prevent injury to competition, rather than injury to competitors,” SAP said.”

This is very strange wording by SAP. The issue that SAP hopes will confuse the judge is that the restrictions are not technical. Teradata has been integrating with databases (often Oracle) at customers with SAP applications for decades. The issue is not technical; it is how SAP set up the charges and used indirect access to cut off its database from being accessed by Teradata. Indirect access is a violation of the tying arrangement discussed previously and covered in the article SAP’s Indirect Access Violates US Anti Trust Law. Notice Teradata’s use of the specific term tying arrangement in this quote from the complaint.

“On information and belief, SAP has also begun significantly restricting Teradata’s ability to access customers’ SAP ERP data stored in HANA (which is necessary for the functional use of Teradata’s EDAW products), thereby ensuring the success of its tying arrangement in coercing customers to adopt HANA.”

The second part of the SAP quotation, regarding being “designed to prevent injury to competition, rather than injury to competitors,” seems to be some type of wordplay. This would be like saying laws against murder are designed to protect society in general but not to prevent the murder of any one particular person.

Teradata is being blocked because of SAP’s unwarranted tying arrangement between S/4HANA and HANA. Teradata is a competitor, and SAP is not competing with it by offering customers a choice between HANA and Teradata. SAP is instead using its ERP system, previous versions of which did not carry these restrictions, to impose them.

This is stated in the Teradata complaint.

“Moreover, and on information and belief, SAP has begun significantly restricting Teradata’s ability to access customers’ SAP-derived data. Through this conduct, SAP has deliberately sought to exploit its large, existing ERP customer base to the detriment of Teradata and its customers. Given the extremely high costs of switching ERP providers, SAP’s ERP customers are effectively locked-in to using SAP’s ERP Applications, and SAP is now attempting to lock them into using only HANA in the EDAW market as well.”

This is the exact reason we have argued against ERP systems; they are continually used to take control of the customer’s IT spend through account control, as covered in the article How ERP Systems Were a Trojan Horse. 

The strategy by SAP’s attorneys here is called “muddying the water.”

SAP Requires More Explanation as to Inefficiencies?

“For instance, SAP said the US-based Teradata was vague about the “inefficiencies” it claims to have identified in SAP’s systems it offered; failed to precisely identify what trade secrets were stolen; and failed to allege that SAP breached the contracts drawn up for the Bridge Project.

“To the contrary, much of what the [complaint] alleges as purported misconduct (which SAP denies) is expressly permitted by the relevant provisions of the Bridge Project Agreements,” the ERP giant said.”

Here we agree with SAP on the trade secret allegation.

Conclusion

What can be taken from this motion to dismiss? The arguments related to trade secrets seem correct. SAP most likely did benefit from Teradata’s advice and expertise on how to improve HANA. SAP would naturally have tried to learn things from Teradata. SAP was very unsophisticated regarding databases, particularly back when it was cobbling together HANA from acquisitions and from ideas gleaned from other database vendors. But this reverse engineering was not restricted to Teradata. And we have yet to see evidence that Teradata contributed a substantial portion of the IP that eventually became HANA.

Furthermore, HANA is not a competitive product. Therefore, whatever SAP may have taken from Teradata was either not particularly good, or SAP screwed up the implementation of the concept. HANA’s power comes from its association with SAP, not from HANA’s capabilities as a product.

SAP’s arguments against Teradata’s claims regarding anti-competitive behavior go beyond anything reasonable, verge on insulting, and make one wonder about the attorneys SAP used. Anyone who made these arguments to me would so ruin their credibility that I would never listen to them again.

The impression given is that SAP hopes to find a weak judge who will believe such arguments. A motion to dismiss is automatic, but if this is what SAP came up with (assuming there was no miscommunication between SAP and its attorneys), this case appears to be a substantial risk for SAP. Teradata is essentially asking SAP to change the way it does business. Teradata’s request is entirely consistent with demanding that SAP follow the normal rules of competition. SAP is asking the US courts to allow it to use tricks and deception to push vendors out of “its” customers; SAP claims to own these customers because it has sold them an ERP system. Teradata is asking the courts to bar some of SAP’s behaviors, as covered in the following quotation from the Teradata complaint.

“Teradata therefore is entitled to an injunction barring SAP’s illegal conduct, monetary damages, and all other legal and equitable relief available under law and which the court may deem proper.”

US courts are not the best venue for antitrust enforcement. One question might be: why is the FTC not investigating SAP? The exact issues listed by Teradata in its complaint have been reported to us for years. But as the FTC is no longer interested in enforcing antitrust law, the courts are Teradata’s only option. The US economy is increasingly dominated by larger and larger entities, which reduces competition and depresses wages.

Other vendors should take an interest in this case, because SAP is claiming that the vendor selling the ERP system has the right to push other vendors out of the account. If the US courts allow SAP to do this to Teradata, a vendor with large amounts of resources, SAP can do it to anyone.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.


References

https://www.theregister.co.uk/2018/09/03/sap_response_teradata_lawsuit/

https://www.businesstoday.in/current/corporate/day-after-teradata-filed-ip-theft-suit-against-sap-vishal-sikka-terms-charges-baseless-outrageous/story/279442.html

https://assets.teradata.com/News/2018/2018-06-19-Complaint.pdf

The Risk Estimation Book

 

Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is a risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project’s risk is a poor way to make decisions about either projects or software. Yet that is the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies commonly used by most companies for mitigating software project risk–and learn why these plans don’t work–and then acquire practical and realistic strategies that will help you to maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model

How HANA Takes 30 to 40 Times the Memory of Other Databases

Executive Summary

  • HANA takes enormous levels of memory compared to competing databases.
  • HANA has continual timeout issues that are in part due to HANA’s problem managing memory.

Introduction to HANA’s Problems with Managing Memory

SAP’s database competitors, such as Oracle, IBM, and Microsoft, have internal groups that focus on memory optimization, which in databases governs how tables are moved into and out of memory. SAP tries to push more tables into memory than are necessary (though not as many as it claims; it is not “all the tables”), yet SAP does not have the memory optimization capabilities of the other database vendors.

High Memory Consumption with HANA

HANA’s high memory consumption is acknowledged in SAP’s own SAP HANA Troubleshooting and Performance Analysis Guide, which states the following.

“You observe that the amount of memory allocated by the SAP HANA database is higher than expected. The following alerts indicate issues with high memory usage.”

And…

“Issues with overall system performance can be caused by a number of very different root causes. Typical reasons for a slow system are resource shortages of CPU, memory, disk I/O and, for distributed systems, network performance.”

It is odd for SAP to observe resource shortages, because HANA carries the highest hardware specification of any competing database, and the comparison is not even close. SAP makes this point again regarding memory.

“If a detailed analysis of the SAP HANA memory consumption didn’t reveal any root cause of increased memory requirements it is possible that the available memory is not sufficient for the current utilization of the SAP HANA database.”

Conclusion

The same question arises: with so much memory usually included in the initial sizing, why is undersized memory such a persistent issue with HANA?



How to Understand HANA’s High CPU Consumption

Executive Summary

  • HANA has high CPU consumption due to HANA’s design.
  • The CPU consumption is explained by SAP, but we review whether the explanation makes sense.

Introduction to HANA CPU Consumption

At Brightwork, we have covered the real issues with HANA that are censored by SAP, SAP consulting firms, and the IT media. Because this information is suppressed, many SAP customers have been hit with surprises when implementing HANA.

In this article, we will address HANA’s CPU consumption or overconsumption.

HANA CPU Overconsumption

A second major issue in addition to memory overconsumption with HANA is CPU consumption.

HANA’s design is to load data into memory that is not actually planned for use in the immediate future. SAP has walked back its rather unrealistic position on “loading everything into memory,” but HANA still loads quite a lot into memory.

The CPU overconsumption follows from how a server reacts when so much data is loaded into memory: the CPU spikes. This is also why CPU monitoring, along with memory monitoring, is considered so necessary for running HANA effectively, while this is not normally an issue with competing databases from Oracle, IBM, or others.

SAP’s Explanation for Excessive CPU Utilization by HANA

SAP offers a peculiar explanation for CPU utilization.

“Note that a proper CPU utilization is actually desired behavior for SAP HANA, so this should be nothing to worry about unless the CPU becomes the bottleneck. SAP HANA is optimized to consume all memory and CPU available. More concretely, the software will parallelize queries as much as possible to provide optimal performance. So if the CPU usage is near 100% for query execution, it does not always mean there is an issue. It also does not automatically indicate a performance issue”

Why Does SAP’s Statement Not Make Sense?

This entire statement is unusual, and it does not explain several issues.

Why HANA Times Out if the CPU is Actually Being Optimized?

Timing out is an example of HANA being unable to manage and throttle its resource usage. A timeout requires manual intervention to reset the server. If a human is required to intervene, or the database ceases to function, then resources are obviously not being optimized. We commonly receive reports of companies needing to reset their HANA server multiple times per week.

Managing resources is supposed to be a standard capability that comes with any database one purchases. Free open-source databases do not have the problem that HANA has.

If an application or database continually consumes all resources, the likelihood of timeouts increases. SAP’s statement also presents a false construct that HANA has optimized the usage of the CPU, which is not correct. It seeks to present what is a bug in HANA as a design feature.

Why HANA Leaves A High Percentage of Hardware Unaddressed

Benchmarking investigations into HANA’s utilization of hardware indicate clearly that HANA does not address all of the hardware it runs on. While we do not have both related items evaluated on the same benchmark, it seems very probable that HANA is timing out without addressing all of the hardware. Customers that purchase the hardware specification recommended by SAP often do not know that HANA leaves so much hardware unaddressed.

The Overall Misdirection of SAP’s Explanation

This paragraph attempts to explain away HANA’s consumption of hardware resources, which in fact should concern administrators. The statement is also inconsistent with other explanations of HANA’s use of memory, as can be seen from the SAP graphic below.

Notice the pool of free memory.
Once again, notice the free memory in the graphic.

This is contradicted by the following statement as well.

“As mentioned, SAP HANA pre-allocates and manages its own memory pool, used for storing in-memory tables, for thread stacks, and for temporary results and other system data structures. When more memory is required for table growth or temporary computations, the SAP HANA memory manager obtains it from the pool. When the pool cannot satisfy the request, the memory manager will increase the pool size by requesting more memory from the operating system, up to a predefined Allocation Limit. By default, the allocation limit is set to 90% of the first 64 GB of physical memory on the host plus 97% of each further GB. You can see the allocation limit on the Overview tab of the Administration perspective of the SAP HANA studio, or view it with SQL. This can be reviewed by the following SQL statement:

select HOST, round(ALLOCATION_LIMIT/(1024*1024*1024), 2)
  as "Allocation Limit GB"
from PUBLIC.M_HOST_RESOURCE_UTILIZATION”
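The default allocation-limit rule SAP quotes (90% of the first 64 GB of physical memory, plus 97% of each further GB) can be sketched as a small calculation. This is an illustrative sketch only; on a real system, the authoritative value comes from the M_HOST_RESOURCE_UTILIZATION view queried above.

```python
# Sketch of HANA's *default* global allocation limit, per the rule quoted
# above: 90% of the first 64 GB of physical memory plus 97% of each
# further GB. Illustrative only; query M_HOST_RESOURCE_UTILIZATION on a
# live system for the real value.

def default_allocation_limit_gb(physical_memory_gb: float) -> float:
    first_64 = min(physical_memory_gb, 64.0)      # portion held to the 90% rule
    remainder = max(physical_memory_gb - 64.0, 0.0)  # portion held to the 97% rule
    return round(0.90 * first_64 + 0.97 * remainder, 2)

# On a large host, almost all physical memory is claimed by HANA by
# default, since only the first 64 GB is held back at the 90% rate.
print(default_allocation_limit_gb(64))   # 57.6
print(default_allocation_limit_gb(512))  # 492.16
```

Note how little headroom the default rule leaves the operating system on large hosts, which is consistent with the resource-shortage symptoms SAP’s own guide describes.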

Introduction to HANA Development and Test Environments

Hasso Plattner has routinely claimed that HANA simplifies environments. However, the hardware complexity imposed by HANA is overwhelming to many IT departments. To get the same performance as other databases, HANA requires not only far more hardware but far more hardware maintenance.

Development and testing environments are always critical, but with HANA the issue is of particular importance. The following quotation explains the frequency of changes to HANA.

“Taking into account, that SAP HANA is a central infrastructure component, which faced dozens of Support Packages, Patches & Builds within the last two years, and is expected to receive more updates with the same frequency on a mid-term perspective, there is a strong need to ensure testing of SAP HANA after each change.”

The Complications of So Many Clients and Servers

HANA uses a combination of clients and servers that must work in unison for HANA to function. This brings complications, i.e., a higher testing overhead when validating any change to any one component. SAP has a process of test cases to account for these servers.

“A key challenge with SAP HANA is the fact, which the SAP HANA Server, user clients and application servers are a complex construct of different engines that work in concert. Therefore, SAP HANA Server, corresponding user clients and applications servers need to be tested after each technical upgrade of any of those entities. To do so in an efficient manner, we propose the steps outlined in the subsequent chapters.”
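The testing overhead described above can be illustrated with a short sketch: if the HANA server, user clients, and application servers must all be verified in concert, the number of combinations to retest grows multiplicatively with each component’s versions. The component names and version numbers below are hypothetical, chosen only to show the combinatorics.

```python
# Hypothetical component versions; the point is the multiplicative growth
# of combinations needing a regression pass after any single upgrade.
from itertools import product

components = {
    "hana_server": ["SPS05", "SPS06"],
    "user_client": ["2.12", "2.13"],
    "app_server": ["7.52", "7.53"],
}

# Every combination of server, client, and app server must work in concert.
test_matrix = list(product(*components.values()))
print(len(test_matrix))  # 8 combinations, even with only two versions each
```

Add a fourth component or a third version of any component and the matrix grows again, which is why frequent HANA Support Packages and Patches translate directly into testing cost.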

Leveraging AWS

HANA development environments can be acquired through AWS at reasonable rates. This has the added advantage that, because of AWS’s elastic offering, volume testing can be performed on AWS hardware without purchasing the hardware. The HANA AWS instance can then be downscaled as soon as the testing is complete. The HANA One offering allows HANA licenses to be used on demand, which is a significant value-add, as sizing HANA has proven extremely tricky.

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

This video is filled with extensive falsehoods. We address them in the sequence in which they are stated in the video.

SAP Video Accuracy Measurement

| SAP's Statement | Accuracy | Brightwork Fact Check | Link to Analysis Article |
|---|---|---|---|
| HANA is a Platform | 0% | HANA is not a platform; it is a database. | How to Deflect You Were Wrong About HANA |
| HANA runs more "in-memory" than other databases. | 10% | HANA uses a lot of memory, but the entire database is not loaded into memory. | How to Understand the In-Memory Myth |
| S/4HANA Simplifies the Data Model | 0% | HANA does not simplify the data model from ECC. There are significant questions as to the benefit of the S/4HANA data model over ECC. | Does HANA Have a Simplified Data Model? |
| Databases that are not HANA are legacy. | 0% | There is zero basis for SAP to call all databases that are not HANA legacy. | SAP Calling All Non-HANA DBs Legacy |
| Aggregates should be removed and replaced with real-time recalculation. | 0% | Aggregates are very valuable, all RDBMSs have them (including HANA), and they should not be removed or minimized in importance. | Is Hasso Plattner Correct on Database Aggregates? |
| Reducing the number of tables reduces database complexity. | 0% | Reducing the number of tables does not necessarily decrease the complexity of a database. The fewer tables in HANA are more complicated than the larger number of tables pre-HANA. | Why Pressure SAP to Port S/4HANA to AnyDB? |
| HANA is 100% columnar tables. | 0% | HANA does not run entirely with columnar tables. HANA has many row-oriented tables, as much as 1/3 of the database. | Why Pressure SAP to Port S/4HANA to AnyDB? |
| S/4HANA eliminates reconciliation. | 0% | S/4HANA does not eliminate reconciliation or reduce the time to perform reconciliation to any significant degree. | Does HANA Have a Simplified Data Model and Faster Reconciliation? |
| HANA outperforms all other databases. | 0% | Our research shows that not only can competing databases do more than HANA, they are also a better fit for ERP systems. | How to Understand the Mismatch Between HANA and S/4HANA and ECC |

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

This is an example of a statement by SAP designed to turn a bug and design issue into a positive feature; the issue occurred very early in HANA’s development, and SAP never addressed it (or thought it could be addressed by simply adding more memory). Other databases do not have this problem; we have never heard of an open-source database with it. It is just a very odd problem. HANA projects and implementations run into issues for which they cannot get straight answers from SAP or from their SAP consulting firm.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts that provide realistic information around HANA — and this reduces the dependence on biased entities like SAP and all of the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for a database which is exorbitantly priced, exorbitantly expensive to implement and exorbitantly expensive to maintain. When SAP or their consulting firm are asked to explain these discrepancies, we have found that they further lie to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice completely outside of SAP or your SAP consulting firm, reach out to us at the form below.

Inaccurate Messaging on HANA as Communicated in SAP Consulting Firm Videos

For those interested in the accuracy of information communicated by consulting firms on HANA, see our analysis of the following video by IBM. SAP consulting firms are unreliable sources of information about SAP; they primarily repeat what SAP says without any concern for accuracy. The lying in this video is brazen and shows that, as a matter of course, consulting firms are happy to provide false information about SAP.

SAP Video Accuracy Measurement

| SAP's Statement | Accuracy | Brightwork Fact Check | Link to Analysis Article |
|---|---|---|---|
| HANA runs more "in-memory" than other databases. | 10% | HANA uses a lot of memory, but the entire database is not loaded into memory. | How to Understand the In-Memory Myth |
| HANA is orders of magnitude faster than other databases. | 0% | Our research shows that not only can competing databases do more than HANA, they are also a better fit for ERP systems. | How to Understand the Mismatch Between HANA and S/4HANA and ECC |
| HANA runs faster because it does not use disks like other databases. | 0% | Other databases also use SSDs in addition to disk. | Why Did SAP Pivot the Explanation of HANA In Memory? |
| HANA holds "business data," "UX data," "mobile data," "machine learning data," and "IoT data." | 0% | HANA is not a unifying database. HANA is only a database that supports a particular application; it is not for supporting data lakes. | |
| SRM and CRM are part of S/4HANA. | 0% | SRM and CRM are not part of S/4HANA. They are separate and separately sold applications. SAP C/4HANA is not yet ready for sale. | How Accurate Was Bluefin Solutions on C-4HANA? |
| Netweaver is critical as a platform and is related to HANA. | 0% | Netweaver is not relevant to this discussion. Secondly, Netweaver is not an efficient environment from which to develop. | |
| HANA works with Business Objects. | 10% | It is very rare to even hear about HANA and Business Objects. There are few Business Objects implementations that use HANA. | SAP Business Objects Rating |
| Leonardo is an important application on SAP accounts. | 0% | Leonardo is dead; its discussion here is both misleading and irrelevant. | Our 2019 Observation: SAP Leonardo is Dead |
| IBM Watson is an important application on SAP accounts. | 0% | Watson is dead; its discussion here is both misleading and irrelevant. | How IBM is Distracting from the Watson Failure to Sell More AI and Machine Learning |
| Digital Boardroom is an important application on SAP accounts. | 0% | SAP Digital Boardroom is another SAP item that has rarely been implemented. | |

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.


References

The Risk Estimation Book

 

Software RiskRethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is a risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project's risk is a poor way to make decisions about either projects or software. But that's the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies commonly used by most companies for mitigating software project risk–and learn why these plans don’t work–and then acquire practical and realistic strategies that will help you to maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model

Risk Estimation and Calculation

See our free project risk estimators that are available per application. They provide a method of risk analysis that is not available from other sources.

HANA’s Time in the Sun Has Finally Come to an End

Executive Summary

  • SAP has been forced to move HANA into the background of its marketing focus for various reasons.
  • Trends shifted away from proprietary databases towards open source databases.
  • Marketing claims about HANA being groundbreaking turned out to be false. HANA had no offerings that gave it an advantage over competing solutions and it proved to have the highest TCO among its competitors.

Introduction: The Real Story on SAP’s HANA Focus

In this article, we will cover how SAP has finally moved HANA into the background of its marketing focus.

SAP’s Marketing Transition Away from HANA

In 2011, HANA became the primary marketing tentpole for SAP, replacing Netweaver, which had been the primary focus until then.

The official date when HANA was replaced in this position in SAP's marketing orbit can be marked as June 5th, 2018, the first day of SAPPHIRE 2018. This is because HANA was noticeably less prominent at SAPPHIRE 2018 than it had been at any point since its introduction.

SAP Thought it Had Cracked Oracle’s Code

SAP's obsession with HANA reached fever pitch in the 2011 to 2018 timespan. SAP had (I believe) actually convinced itself that it had done something it had not come close to doing: come up with the "killer app" of the database market. SAP thought that its combination of a column-oriented database (only partially column-oriented, it later turned out) with large amounts of memory had never been executed before. As is well known, SAP acquired all of the technology for this design, as my analysis, partially documented in the article Did Hasso Plattner and His Ph.D. Students Invent HANA?, shows.

SAP’s contribution to the combined analytics processing and transaction processing database was to market it. This powerful marketing by SAP caused Oracle and IBM to move resources into developing such functionality in their databases — a move which I think was a misallocation of resources. Research by Bloor Research which I analyzed in the article How Accurate Was Bloor on Oracle In Memory, covered the extra overhead of Oracle’s in-memory offering.

SAP Pays Forrester to Make a New Database Category

SAP paid good money to try to make mixed OLTP and OLAP processing from one database "a thing," going so far as to pay Forrester to create a new faux database category called the "transanalytical database."

And surprise, surprise, HANA was declared a leader in this new database category! We covered this in the article What is a Transanalytical Database? (It is a new database category, specifically for those that don't know much about databases.)

This is something that Bill McDermott crowed about on the Q4 2017 earnings call, but he failed to point out that SAP had paid Forrester for this study.

One wonders how much market cap was added because of this report, and how much that added to the value of the stock options exercised by SAP's top executives. Even if it were a very small number of percentage points, it would still make whatever SAP paid Forrester an absolute steal.

The Trend Away from HANA

Databases have become increasingly diversified since HANA was first introduced, and because of IaaS providers like AWS and Azure, it is now increasingly easy to spin up multiple database types and test them. Moreover, the biggest trend is not proprietary databases but open source databases, and since HANA's initial introduction that trend has only grown, offering more database types than ever before.

There is now even a database, CockroachDB, focused on horizontal scalability and disaster recovery. There is more opportunity than ever before to access specialized databases with different characteristics, and due to IaaS providers, these databases are far simpler to provision and test than in the past. Open source databases can be spun up, tested, and distributed like never before. Yet SAP presents to customers the idea that there are only two database processing types (analytics and transactions) they need to worry about, and that HANA covers both bases. SAP could not have picked a more incongruous message and strategy around databases if it had set out to do so from the beginning of HANA's development.

The best transaction processing database is a row-oriented database. The best analytic database is a database like Redis or Exadata. If one tries to get both out of a single database, compromises quickly ensue, and maintenance costs go up.
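The tradeoff described above can be sketched with a toy example: the same records stored row-wise and column-wise. This is illustrative Python only, with hypothetical data, not code from any of the databases discussed.

```python
# Illustrative sketch (not vendor code): the same records stored row-wise
# and column-wise, showing why each layout favors a different workload.

orders = [
    {"id": 1, "customer": "A", "qty": 5, "price": 10.0},
    {"id": 2, "customer": "B", "qty": 3, "price": 20.0},
    {"id": 3, "customer": "A", "qty": 7, "price": 15.0},
]

# Row store: each record is contiguous -- suits OLTP ("fetch order 2").
row_store = {row["id"]: row for row in orders}

# Column store: each attribute is contiguous -- suits OLAP aggregates.
col_store = {
    "id": [r["id"] for r in orders],
    "qty": [r["qty"] for r in orders],
    "price": [r["price"] for r in orders],
}

# OLTP access: one key lookup returns the complete row.
order_2 = row_store[2]

# OLAP access: the revenue scan touches only the two columns it needs,
# skipping "id" and "customer" entirely.
revenue = sum(q * p for q, p in zip(col_store["qty"], col_store["price"]))

print(order_2["customer"], revenue)
```

A row store answers "fetch order 2" with one lookup, while the column store computes the aggregate by scanning only the columns it needs; serving both workloads well from a single physical layout is where the compromises mentioned above come in.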

What the HANA Experiment Illustrated

For many years after HANA was introduced, quite a few people with little experience with databases told me (and anyone who would listen) how earth-shattering HANA would be. It became evident to me through many conversations that these people were often simply repeating what they had heard from SAP. But the problem was that SAP made statements about HANA, and about databases in general, that were in error.

What this taught me was that a sizable component of the enterprise software population is willing not only to discuss things but to present, with high confidence, claims they have no way of knowing are correct. It means that a large component of those who work in SAP are faking knowledge.

The proposals I was subjected to were so off the wall that they needed their own laugh track. I had partners from Deloitte, Capgemini, and several other firms, people who would not know the definition of a database index and who have not worked in anything but the Microsoft Office suite in several decades, telling me with great certitude how HANA would change everything.

“You see Shaun, once columnar in-memory databases are used for transaction systems, the entire BI system goes away.”

Many of the statements by SAP executives or by SAP consulting partners seem very much like the Jimmy Kimmel Live segment called Lie Witness News. I could describe it but see for yourself.

These people have an opinion on the US invasion of a fictional country, without asking “where is Wakanda?” If you ask these same people “do you think SAP’s HANA database is very good and better than all other databases,” what answer would we get? 

Time to Admit You Don’t Know That Wakanda Does Not Exist (and That HANA Is Not Groundbreaking)?

The issue is a combination of dishonesty and the assumption that something must be true because it is presented and proposed by a large entity (in this case, the US military being in Wakanda). This happens all the time, and the people who propose things without checking are often quite experienced. This is doubly a problem when consulting companies see jumping on whatever new marketing freight train SAP has as critical to meeting a quota.

The upshot is that a very large number of people who repeated things about HANA, without either the domain expertise to know better or the willingness to check, should be highly embarrassed at this point. And these are people in a position to advise companies, which is where the term "the blind leading the blind" seems most appropriate. It is important for SAP customers to know that if Capgemini, Deloitte, Accenture, Infosys, etc. were trying to get you to purchase and implement HANA, they had no idea what they were talking about, or did not care what was true. They were repeating what SAP told them, as required by the SAP consulting partnership agreement and encouraged by their sales quota incentives. And for nearly everything SAP proposed about HANA, they simply made it up. SAP not only made up the benefits offered by HANA, but it also made up a fictitious backstory of how HANA was "invented," as we covered in the article Did Hasso Plattner and Ph.D.'s Invent HANA?

The rise of HANA led to many people with only a cursory understanding of databases talking a lot about databases. Naturally, their statements, promoted by SAP, had very low accuracy. With HANA moving toward the back of SAP's enormous deck of products, the topic of databases can now shrink back closer to the group of people who know something about them.

That is a positive development.

HANA Was Going to Change the World?

HANA was supposed to change the world. However, what did it change?

To take just one example, consider the idea of loading the entire database into memory. If one looks at the vendors with far more experience in databases than SAP, no one does this. The reason is that it is wasteful when only a small percentage of the tables are involved in the activity at any given time. This is why each database vendor has a group that focuses on memory optimization. And it was eventually determined that although SAP says it loads the entire database into memory, it does not. Memory optimization still rules the day. This is just one example, and I could list more, but outside of marketing, HANA did not change much, and all or nearly all of its projections turned out not to be true.
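The memory optimization described above can be sketched as a minimal buffer cache that keeps only recently used tables in RAM rather than loading the whole database. This is an illustrative Python sketch with hypothetical table names, not any vendor's actual implementation.

```python
from collections import OrderedDict

class BufferCache:
    """Toy LRU buffer cache: only hot tables stay in memory."""

    def __init__(self, capacity: int):
        self.capacity = capacity    # max number of tables held in memory
        self.cache = OrderedDict()  # table name -> data, in LRU order
        self.disk_reads = 0

    def read(self, table: str) -> str:
        if table in self.cache:
            # Hot table: served from memory, refreshed in LRU order.
            self.cache.move_to_end(table)
        else:
            # Cold table: fetched from disk, cached, and the least
            # recently used table is evicted if capacity is exceeded.
            self.disk_reads += 1
            self.cache[table] = f"<data of {table}>"
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)
        return self.cache[table]

cache = BufferCache(capacity=2)
for t in ["orders", "orders", "customers", "orders", "audit_log"]:
    cache.read(t)

# Five accesses cost only three disk reads: the hot "orders" table
# stayed in memory while cold tables were cycled through.
print(cache.disk_reads, list(cache.cache))
```

The point of the sketch is the one the text makes: when only a small fraction of tables is active, caching the hot ones achieves most of the benefit of "in-memory" without holding the entire database in RAM.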

The Final Outcome of HANA

Customers that implemented HANA now have a higher TCO database, a far buggier database, and they have had to run more databases in parallel. After years of analyzing this topic, I can find no argument for replacing existing databases with HANA, or for beginning new database investments by selecting HANA.

Let us walk through the logic, because it seems to be tricky.

  1. Does HANA perform better in analytics processing than previous versions of Oracle or DB2 that did not have column capabilities and ran on older hardware? Yes.*
  2. Does HANA outperform, or have any other capability that gives it an advantage against, the major competing offerings? No.
  3. Does HANA have the most bugs and the highest TCO of any of the offerings it competes against? Yes.
  4. Is HANA the most expensive of all the offerings it competes against? Yes.

*(SAP has routinely tried to get clients to compare HANA on new hardware against Oracle and DB2 on old and far less expensive hardware; this topic is covered in the article How Much Performance is HANA?)

Where are HANA’s Sales Outside of Companies that Already Run SAP Applications?

Some explanation is required for why HANA is not purchased outside of SAP accounts. The sales pitch was that HANA was so much more advanced than competing offerings that not only S/4HANA but other SAP applications could not work to their full extent without it. It was going to be so easy to develop on the HANA Cloud Platform (now SAP Cloud Platform) that developers outside of SAP were going to flock to it because of its amazing capabilities. Right? SAP said all these things and many more.

However, if all of this was true, shouldn’t SAP be able to sell HANA to customers that don’t use SAP applications? Vishal Sikka stated that HANA was instrumental to a wide variety of startups.

Where is that market?

The answer is nowhere. That should give us pause regarding SAP’s claims.

Is SAP Dedicated to Breaking the “Dependency” on Oracle?

SAP justified all of the exaggerations in part by convincing themselves they were going to help their customers “break” their dependency on Oracle (and to a lesser degree DB2). However, one has to question how dedicated one is to “breaking a dependency” when the desired outcome is not to switch dependency to SAP. When a customer buys from a competitor, that supposedly is a “dependency.” However, when a customer buys the same item from you, that is a “relationship.” This sounds a bit like the saying that a person can either be a “terrorist” or a “freedom fighter,” depending upon one’s vantage point.

The Logic for the Transition

If we look at the outcome, HANA is not a growing database.

HANA has not grown in popularity since Feb 2017. Moreover, it has not increased significantly since November of 2015. And let us recall, this is a database with a huge marketing push.

It cost SAP significantly to redirect its marketing budget to emphasize HANA over other things it could be emphasizing. In fact, I have concluded that most of HANA's growth was simply due to its connection to SAP. If HANA had had to acquire customers as a startup, it would have ceased to exist as a product a long time ago. The product itself is just not that good.

A second point is that HANA was enormously exaggerated in terms of its capabilities.

Point three is that some HANA purchases were made to satisfy indirect access claims; that is, they were coerced purchases. I am still waiting for a Wall Street analyst to ask the question:

“How much of the S/4HANA and HANA licenses are related to indirect access claims?”

Apparently, Wall Street analysts have a process where they keep away from actually interesting questions.

Diminishing Returns for Focusing on HANA

SAP was not getting a return from allocating so many of its promotional resources to showcasing HANA. Furthermore, customers were becoming "HANA resistant." The over-the-top HANA emphasis by SAP had become a point of contention and often ridicule at customers (which I learned through my client interactions).

All of this would not have happened without Hasso Plattner. Hasso bet big on HANA, and Hasso was wrong. HANA was allowed to be promoted on tenuous grounds because its champion was Hasso Plattner.

In Hasso Plattner’s book The In-Memory Revolution, he stated the following:

“At SAP, ideas such as zero response time database would not have been widely accepted. At a university, you can dream, at least for a while. As long as you can produce meaningful papers, things are basically alright.”

The problem? Zero latency has not been achieved by HANA to this day. It was never a reasonable goal. And Hasso illuminated something else with this quote: university students are even less willing to push back on Hasso than career database professionals. A fundamental reason is that they have much less experience.

The True Outcome of HANA

SAP is now stuck with a buggy database that cannot come close to its performance promises and that has influenced SAP's development in other areas in a wasteful manner. For example, many of the changes that were made to S/4HANA to accommodate HANA turned out not to have been necessary and have extended the S/4HANA development timeline. Secondly, the requirement that S/4HANA run only on HANA has restricted the uptake of S/4HANA and will continue to do so. S/4HANA could be much farther along overall if it were just "S/4" and ran on AnyDB.

Now companies that purchased HANA will try to justify the purchase (no one likes to admit they got bamboozled) because BW runs faster. However, these same executive decision makers entirely leave out the impact of HANA's far more expensive hardware in the performance analysis. Companies that purchase HANA for BW don't run or commission tests to determine how much of the performance benefit is due to hardware: that is, how well a modern version of Oracle or DB2 would perform on the same hardware.

No, instead they tell me that performance for BW improved over an eight-year-old version of Oracle or DB2 running on eight-year-old hardware that has much more disk than memory and cost a small fraction of the HANA hardware. So with BW, companies can hide HANA's performance limitations (although not its maintenance overhead). However, HANA has many problems meeting customer expectations for the things SAP said would see skyrocketing performance, which include… well, everything but short SQL queries.

This means that SAP will be fighting fires on HANA performance for S/4HANA transaction processing and MRP processing for years. SAP would have none of these problems if it had simply done what it had always done: allow the companies that really know databases to provide the database. As pointed out by a colleague:

“The far bigger threat and loss of income and account control to SAP is from IaaS/PaaS providers, not database vendors. “

This was not at all SAP's vision for the outcome of HANA.

Conclusion

SAP was never a database company, and now it looks like it won't be one (of any significance) in the future. And there is nothing wrong with that. In retrospect, what SAP spent on the Sybase acquisition and the other purchases (made very quietly) that ended up becoming HANA could have been invested to much better effect by merely fixing issues in ECC and upgrading product support.

HANA will still be there, but SAP’s marketing focus is moving on to other things. Right now it’s unclear exactly which it will choose. C/4HANA is the new kid in town. S/4HANA is still a centerpiece. SAP has so many products, toolkits, announcements, and concepts to promote that SAP marketing is a beehive of activity. However, the overhyping of HANA to promote S/4HANA has subsided. S/4HANA will now be more sold on its functionality (as ECC always was).

What the Future Holds for HANA

SAP has shifted to making the compatibility argument to customers, but not in public. Publicly, SAP says that the only application for which HANA is a requirement is S/4HANA. However, through its sales reps, SAP repeatedly makes the argument that its applications can only work as intended with HANA. We evaluate these compatibility arguments for clients and are always surprised by the new explanations that SAP comes up with to drive customers to HANA. The accuracy of these private statements to customers should never be taken at face value; they need to be evaluated on a case-by-case basis.

SAP’s Inaccurate Messaging on HANA as Communicated in SAP Videos

Fact-Checking SAP’s HANA Information

This video is filled with extensive falsehoods. We will address them in the sequence in which they are stated in the video.

SAP Video Accuracy Measurement

SAP's Statement: HANA is a platform.
Accuracy: 0%
Brightwork Fact Check: HANA is not a platform; it is a database. (Analysis: How to Deflect You Were Wrong About HANA)

SAP's Statement: HANA runs more "in-memory" than other databases.
Accuracy: 10%
Brightwork Fact Check: HANA uses a lot of memory, but the entire database is not loaded into memory. (Analysis: How to Understand the In-Memory Myth)

SAP's Statement: S/4HANA simplifies the data model.
Accuracy: 0%
Brightwork Fact Check: HANA does not simplify the data model from ECC. There are significant questions as to the benefit of the S/4HANA data model over ECC. (Analysis: Does HANA Have a Simplified Data Model?)

SAP's Statement: Databases that are not HANA are legacy.
Accuracy: 0%
Brightwork Fact Check: There is zero basis for SAP to call all databases that are not HANA legacy. (Analysis: SAP Calling All Non-HANA DBs Legacy)

SAP's Statement: Aggregates should be removed and replaced with real-time recalculation.
Accuracy: 0%
Brightwork Fact Check: Aggregates are very valuable, and all RDBMSs have them (including HANA); they should not be removed or minimized in importance. (Analysis: Is Hasso Plattner Correct on Database Aggregates?)

SAP's Statement: Reducing the number of tables reduces database complexity.
Accuracy: 0%
Brightwork Fact Check: Reducing the number of tables does not necessarily decrease the complexity of a database. The fewer tables in HANA are more complicated than the larger number of tables pre-HANA. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)

SAP's Statement: HANA is 100% columnar tables.
Accuracy: 0%
Brightwork Fact Check: HANA does not run entirely with columnar tables. HANA has many row-oriented tables, as much as 1/3 of the database. (Analysis: Why Pressure SAP to Port S/4HANA to AnyDB?)

SAP's Statement: S/4HANA eliminates reconciliation.
Accuracy: 0%
Brightwork Fact Check: S/4HANA does not eliminate reconciliation or reduce the time to perform reconciliation to any significant degree. (Analysis: Does HANA Have a Simplified Data Model and Faster Reconciliation?)

SAP's Statement: HANA outperforms all other databases.
Accuracy: 0%
Brightwork Fact Check: Our research shows that not only can competing databases do more than HANA, but they are also a better fit for ERP systems. (Analysis: How to Understand the Mismatch Between HANA and S/4HANA and ECC)

The Problem: A Lack of Fact-Checking of HANA

There are two fundamental problems around HANA. The first is the exaggeration of HANA, which means that companies that purchased HANA end up getting far less than they were promised. The second is that the SAP consulting companies simply repeat whatever SAP says. This means that on virtually all accounts there is no independent entity that can contradict statements by SAP.

Being Part of the Solution: What to Do About HANA

We can provide feedback from multiple HANA accounts with realistic information about HANA, which reduces the dependence on biased entities like SAP and the large SAP consulting firms that parrot what SAP says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. SAP and the consulting firms rely on providing information without any fact-checking entity to contradict it. This is how companies end up paying for a database that is exorbitantly priced, exorbitantly expensive to implement, and exorbitantly expensive to maintain. When SAP or its consulting firm is asked to explain these discrepancies, we have found that they lie further to the customer/client and often turn the issue around on the account, as we covered in the article How SAP Will Gaslight You When Their Software Does Not Work as Promised.

If you need independent advice and fact-checking that is outside of the SAP and SAP consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

The major problem with companies that bought HANA is that they made the investment without consulting any entity independent of SAP. SAP does not pay Gartner and Forrester the amount of money that it does so that these entities can be independent, as we covered in the article How Accurate Was The Forrester HANA TCO Study?





How Real is The Oracle Automated Database?

Executive Summary

  • Oracle made many ridiculous claims about the autonomous or self-driving database.
  • The reason for the creation of the autonomous database is that Oracle is losing business to the AWS RDS managed database service.
  • Oracle makes it sound as if the AI in Oracle's Automated Database is ARIIA from Eagle Eye.
  • Oracle upgrades are not free, and upgrades have many complications.

Introduction

Oracle has been making great claims related to automation. They introduced something called the “Autonomous Database.” In this article, we will review the claims for the autonomous or automated database.

The Autonomous Database?

Let us begin by analyzing the claim made in the name Oracle has given here. Autonomous means something that runs itself. It would mean that no human intervention is required to manage the automated database. That is, not only does it not require an on-premises DBA, but it also does not require management by Oracle or any other entity.

We have been working with databases for years, and we have yet to run into such a database. So it should first be established that this is an enormous claim that Oracle is making.

The Self Driving Database?

Another term used by Oracle in their literature is the term “self-driving.” This seems to imply the same thing as the term automated.

Larry Ellison delivers some preposterous quotes in his explanation of the Oracle Automated Database.

“For a long time people really looked at the promise of AI but it never quite delivered to its promise until very recently. With the advent of the latest version of AI, neural networks with machine learning, we are doing things that hitherto have been considered unimaginable by computers.”

Of Oracle, which is known as the most expensive database to maintain short of SAP HANA (which has enormous maintenance overhead), Ellison has this to say.

“On an Oracle database running at Amazon, will cost you 5 times what it costs you to run in the Oracle Cloud because it will take you 5 times the amount of computer to do the exact same thing. A Redshift database will cost 10 times more to do the same thing at Oracle Cloud.  And that is not counting the automation of the database function. That is not counting the downtime as Oracle Cloud has virtually no downtime.” 

The Popularity of the Term Automated Database

The following shows Google Trend’s measurement of the popularity of the search term “autonomous database.” Notice the spike in October of 2017. This was when Oracle began a marketing offensive around its autonomous database.

Let us review some of Oracle’s claims.

“Examples of automation Oracle said it would offer are automated data lake and data prep pipeline creation for data integration; automated data discovery and preparation, with automated analysis for key findings; and automation of identification and remediation of security issues in a developer’s code during application development.”

At OpenWorld in 2017, Larry Ellison claimed

“The new database uses artificial intelligence (AI) and machine learning. It’s fully autonomous, and it’s way better than AWS’s database, Ellison said.”

Understanding What AWS Is

Here it needs to be clarified that while AWS has introduced databases like Aurora and DynamoDB, AWS is primarily a PaaS/IaaS vendor. As such, much of AWS’s database revenue comes from managing databases it did not develop: everything from Oracle to SQL Server to open source databases.

So when Ellison says that their new database is “way better than AWS’s database” what database is Larry referring to?

And better than which database? Remember that AWS offers managed Oracle. AWS’s RDS service offers a fully managed service for Amazon Aurora, Oracle, Microsoft SQL Server, PostgreSQL, MySQL, and MariaDB.

For Oracle, AWS’s managed database service offers the following:

  • Pre-configured Parameters
  • Monitoring and Metrics
  • DB Event Notifications
  • Software Patching
  • Provisioned IOPS (SSD)
  • Automated Backups
  • DB Snapshots
  • DB Instance Class
  • Storage and IOPS
  • Automatic Host Replacement
  • Multi-AZ Deployments
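
As a concrete illustration, several of the managed features listed above map directly onto parameters of the RDS `CreateDBInstance` API. The sketch below builds such a request in Python; the instance identifier and sizing values are hypothetical examples, and in a real environment the dict would be submitted via `boto3.client("rds").create_db_instance(**params)`.

```python
# Sketch: how the managed-Oracle features above map to RDS CreateDBInstance
# parameters. All values are hypothetical examples, not a recommendation.
params = {
    "DBInstanceIdentifier": "example-oracle-db",  # hypothetical name
    "Engine": "oracle-ee",                        # Oracle Enterprise Edition
    "DBInstanceClass": "db.m5.large",             # "DB Instance Class"
    "AllocatedStorage": 100,                      # "Storage and IOPS" (GiB)
    "StorageType": "io1",
    "Iops": 1000,                                 # "Provisioned IOPS (SSD)"
    "MultiAZ": True,                              # "Multi-AZ Deployments"
    "BackupRetentionPeriod": 7,                   # "Automated Backups" (days)
    "AutoMinorVersionUpgrade": True,              # "Software Patching"
    "LicenseModel": "bring-your-own-license",     # BYOL licensing
}

# With AWS credentials configured, this would be submitted as:
#   import boto3
#   boto3.client("rds").create_db_instance(**params)
print(params["MultiAZ"], params["BackupRetentionPeriod"])
```

Note that the customer specifies policy (backup retention, Multi-AZ, patching) and AWS operates it, which is what "managed" means here.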

So here Larry seems to be saying that Oracle DB in the Oracle Cloud is better than Oracle DB hosted at AWS. The real distinction between Oracle and AWS is between Oracle Cloud and AWS as hosting environments, not between Oracle DB and AWS’s databases.

This perplexing claim is repeated by Steve Daheb of Oracle. In this video, Steve Daheb claims that Amazon databases like Redshift and Aurora are not open and cannot be ported to other IaaS providers. Steve Daheb seems to miss the fact that Redshift and Aurora also cannot be hosted on the Oracle Cloud. Secondly, for a company with very little cloud business as a percentage of revenues (roughly 16%), the discussion of cloud is out of proportion to Oracle’s actual business. Thirdly, the Oracle Automated Database only works (for some strange reason) if it is managed by Oracle. Oracle 18c is not autonomous if installed on premises, which is where the vast majority of Oracle databases reside.

Furthermore, AWS offers a BYOL, or bring your own license, model. This means that the licenses a company purchases from Oracle can be run on AWS.

“Bring Your Own License (BYOL): In this licensing model, you can use your existing Oracle Database licenses to run Oracle deployments on Amazon RDS. To run a DB Instance under the BYOL model, you must have the appropriate Oracle Database license (with Software Update License & Support) for the DB instance class and Oracle Database edition you wish to run. You must also follow Oracle’s policies for licensing Oracle Database software in the cloud computing environment. DB instances reside in the Amazon EC2 environment, and Oracle’s licensing policy for Amazon EC2 is located here.”

Things Oracle Claims AWS Managed DBs Can’t Do?

Ellison went on to say.

“This level of reliability will require Oracle to automatically tune, patch, and upgrade itself while the system is running, Ellison said, adding: “AWS can’t do any of this stuff.”

Again, the only reason that AWS could not do whatever Oracle DB can do is if Oracle does not release its newest DB (Oracle 18) to AWS.

But secondly, AWS already has a managed database service that is considered superior to the Oracle Cloud. So in fact, AWS has been “doing this stuff” for quite some time; it has simply been doing it with a managed DB offering.

Ellison is not merely being somewhat inaccurate here or engaging in normal puffery; he is misrepresenting both what AWS offers and what AWS does.

Mark Hurd Doubles Down on the Automated Database Inaccuracy

Now let us check Mark Hurd’s comment in the same vein.

“Oracle CEO Mark Hurd said his company’s database costs less because it automates more. He described AWS’ MySQL-based Aurora database and its open source version, Redshift, as “old fashion technologies.” Oracle’s new database, on the other hand, allows users to “push a button and load your data and you’re done.”

Are Aurora and Redshift really old-fashioned technologies? Aurora was developed only in the past few years. But we don’t need to settle that issue; it is beside the point.

Is Mark Hurd aware that AWS provides managed Oracle? This is known to everyone, and so it should have fallen into both his and Ellison’s frame of reference at this point. Why do Hurd and Ellison repeatedly speak as if AWS is primarily a database vendor rather than an IaaS/PaaS vendor? When one software vendor misrepresents the offering from another software vendor, there has to be a specific reason why.

To reiterate, AWS has offered a managed DB service for quite some time.

If Oracle’s new database allows users to push a button, load their data, and be done, why is the earth populated with so many Oracle DBAs? There is no database that cannot load data with the push of a button; the question is the maintenance after the data is loaded. Does Mark Hurd work with databases? How much are the technical people at Oracle sharing with Mark Hurd?

When Did Oracle Begin Emphasizing Automation?

It is also curious that Oracle only began talking about automation after they began losing business to AWS. Is that a coincidence or is there perhaps a deeper meaning there?

We think there might be. In fact, the entire automated database narrative seems to be a reaction to something very specific we will address further on.

AWS’s Automation Versus Oracle’s Explanation of the Automated Database

Oracle is ignoring that AWS also has automated features. See the following quotation from the AWS website describing AWS Systems Manager.

“Systems Manager Automation is an AWS-hosted service that simplifies common instance and system maintenance and deployment tasks. For example, you can use Automation as part of your change management process to keep your Amazon Machine Images (AMIs) up-to-date with the latest application build. Or, let’s say you want to create a backup of a database and upload it nightly to Amazon S3. With Automation, you can avoid deploying scripts and scheduling logic directly to the instance. Instead, you can run maintenance activities through Systems Manager Run Command and AWS Lambda steps orchestrated by the Automation service.”
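
To make the quotation concrete, a Systems Manager Automation runbook is a small document (schema version 0.3) whose `mainSteps` call built-in actions such as `aws:createImage`. The sketch below builds one for the AMI-refresh case the quote mentions; the document and step names are illustrative, and registering it would require AWS credentials.

```python
import json

# Sketch of an SSM Automation runbook of the kind the quotation describes
# (keeping an AMI up to date). Names here are illustrative; the document
# shape (schemaVersion "0.3", mainSteps, aws:createImage) is the standard
# Automation format.
runbook = {
    "schemaVersion": "0.3",
    "description": "Create a fresh AMI from a running instance (example).",
    "parameters": {
        "InstanceId": {"type": "String", "description": "Source instance."}
    },
    "mainSteps": [
        {
            "name": "createImage",
            "action": "aws:createImage",  # built-in Automation action
            "inputs": {
                "InstanceId": "{{ InstanceId }}",
                "ImageName": "nightly-backup-{{ global:DATE_TIME }}",
            },
        }
    ],
}

content = json.dumps(runbook)
# Registration would be done once via boto3 (requires AWS credentials):
#   boto3.client("ssm").create_document(
#       Content=content, Name="ExampleNightlyAmi", DocumentType="Automation")
print(len(runbook["mainSteps"]))
```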

But AWS’s claims are far more reasonable than Oracle’s. Yet according to Ellison and Hurd, these automated features don’t seem to exist.

The Validity of Oracle’s Claims on AI & ML

Interwoven within the claims around the automated database are AI and ML. At this point, a great swath of vendors claims AI and ML capabilities. However, AI is still quite limited in actual usage. Let’s take the first, AI.

To begin, AI is an enormous claim; it proposes that the software is so close to consciousness that it is nearly undifferentiated from an adult human brain.

Safra Catz discusses the topic as if it is old news. She states…

“We have no AI project; we have AI in every project,”

Having been on quite a lot of projects involving the Oracle DB (though not Oracle applications), we have seen no evidence of any AI whatsoever. Nor is Oracle known for ML. Where is all that Oracle AI hiding?

It’s on every project according to Safra, but we just can’t see it. A very large number of Oracle customers are running old versions of the Oracle DB and may have minimal Oracle apps. Are these customers also using AI?

AI of a sort is contained in Alexa or Google Home, and it does not take very long asking Alexa or Google Home questions to determine that neither is anywhere close to being conscious.

AI is mostly a buzzword which works best for people with less technical backgrounds.

Now let us discuss ML or machine learning.

Oracle and Machine Learning’s Input to the Automated Database

Machine learning is a broad category of predictive algorithms that are not particularly new. The great thing about ML for marketers is that a vendor can add ML functionality without the ML being useful to any customer. That is, old algorithms can be added without necessarily working in practice, and there are plenty of public domain ML algorithms that can be bolted onto any application quickly, enabling the vendor to state that it does ML.

Here is Google Trends on the interest in ML since 2015. Interest has increased.

Notice the change just in 2017 in the interest in ML. Is this really due to the increases in ML capabilities, or because vendor marketing departments figured out they need to jump on that bandwagon? 

Understanding the Method of Applying ML

The idea is that the ML analysis produces insight that did not exist before the ML was performed. However, contrary to what Oracle states about its autonomous database, ML is not “self-driving.”

An ML approach or algorithm must first be selected. The most common ML algorithm is linear regression, something with which many people are familiar; it is still the most popular form of ML used by data scientists. The algorithm must then be set against a dataset (which must itself be carefully developed and curated by a human), and the interpretation of the results must also be performed by a human.
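
The steps just described can be sketched in a few lines. This is a minimal ordinary least-squares linear regression in plain Python; the dataset is made up for illustration, and the point is that algorithm choice, data curation, and interpretation are all human steps, with the actual fit being the trivial part.

```python
# A minimal sketch of the human steps described above: (1) the algorithm
# (ordinary least-squares linear regression) is chosen by a person, (2) the
# dataset is assembled and curated by a person, (3) the output still has to
# be interpreted by a person. The data is invented for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]    # curated input feature
ys = [3.1, 4.9, 7.2, 8.8, 11.1]   # observed outcomes (roughly y = 2x + 1)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form least-squares fit: slope = cov(x, y) / var(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))  # → 1.99 1.05
```

The fit itself is one formula; deciding that a linear model is appropriate, and what a slope of roughly 2 means for the business, is not something the algorithm does.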

ML = Eagle Eye’s ARIIA?

Vendors propose that ML is similar to the computer named ARIIA in the movie Eagle Eye, which eventually coordinates humans from across the US to assassinate the US President because its analytics observed a violation of the US Constitution. It makes for great fiction, but nothing like the computer in Eagle Eye has ever existed.

The actual ML processing step is the shortest part of the process, which is why ML capabilities do not scale directly with processing capabilities. Vendors imply a revolution in ML that has never occurred in reality, one that mostly occurs on vendor web pages and in PowerPoint presentations.

Oracle’s Automated Database as ARIIA for Data?

Now let us look at how Ellison plays directly into this movie-script view of AI and ML.

“Based on machine learning, this new version of Oracle is a totally automated, self-driving system that does not require a human being either to manage the database or tune the database,” Ellison said, according to a Seeking Alpha transcript of the conference call with investors.

Larry Ellison’s statement is also found in this video by Oracle. It states clearly that the Oracle Automated Database requires no human labor.

That is curious, because ML alone will not enable the automation of a database. Secondly, where have these capabilities been hiding, only to spring forth just as every other vendor is also proposing AI and ML capabilities? Oracle addresses this by saying.

“We’ve been developing this for decades,” Loaiza said

If that is true, these capabilities have come forward all of a sudden, with peculiar timing. This is addressed by a commenter on an article in The Register.

“They have sure hidden it well then. Oracle DB patches are some of the most painful and complex such exercises I have ever encountered. Versus say SQL Server where it’s click and go! Not to mention having to allow Java to run the installer for Oracle!” – The Register Commenter

After decades of selling the highest-maintenance database in the industry, Oracle now claims autonomy runs all the way through its offering. Notice how Oracle combines automation with the cloud.

“The future of tomorrow’s successful enterprise IT organization is in full end-to-end automation,” said Zavery. “We are weaving autonomous capabilities into the fabric of our cloud to help customers safeguard their systems, drive innovation, and deliver the ultimate competitive advantage.”

But According to Oracle None of This Will Impact Jobs?

Notice how Oracle walks back the implications of these supposed changes when it comes to jobs.

“However, the biz has repeatedly emphasized that increased automation will not mean the end of people’s jobs – instead saying it will simply cut out the monotonous yet time consuming day-to-day tasks.”

“This allows administrators to free up their time… do things they were not able to do before,” said Zavery. “They will have to learn some new things beyond what they were doing before.” – The Register

This is a curious position to take, and it implies omniscience and a lack of bias on the part of Oracle. Oracle developed a video designed to make DBAs feel better about this potential loss of jobs.

Maria Colgan makes the statement that Oracle DBAs will leverage the cloud. However, there is little evidence of Oracle having much cloud business. 

If what Oracle said about their autonomous DB was true, it would allow companies to use fewer DB resources. How does Oracle know how each company would decide to respond to these changes?

*Note to Oracle: companies do like cutting costs.

A More Likely Prediction (If Oracle’s Claims for its Automated Database were True)

It is quite reasonable to expect the work taken over by a hypothetical autonomous database to be realized as cost savings, that is, for database resources to lose their jobs. Oracle simply does not know how each company would respond.

All of this seems to be a way for Oracle to avoid perturbing the DBAs whose endorsement of the autonomous DB concept it would like to have.

But there is an extra problem. Ellison contradicted this storyline in a different quotation.

“If you eliminate all human labor, you eliminate human error,” Oracle cofounder and CTO Larry Ellison said during his keynote address today.

So, Ellison seems to be proposing eliminating all human labor related to the Oracle database.

So which is it?

Do Oracle’s automated databases now mean that DBAs will not be performing backups and patches (lower level database functions), but also not focusing on analytics (higher level database functions)?

Ellison appears to be speaking categorically about eliminating labor in the database function. This means that if Oracle customers purchase the automated database, the last task for the innumerable Oracle DBAs will be to perform the upgrade to this database and then transition to new careers as the database becomes fully automated. But at the same time, “eliminating all human labor” apparently won’t cost jobs.

Ok Larry.

We gave The Register a low accuracy score for the article these quotations are taken from, as it provided zero pushback on the extravagant claims in the quotes from Oracle. Yet, in a different article, The Register did push back on Oracle’s claims.

Oracle’s Explanation for the Sudden Appearance of Automation

Here is how Oracle explains this sudden appearance of such extreme levels of automation in their database.

“We’ve seen lots of mention of machine learning this week. But how much of that is new and amazing as opposed to vanilla automation you’ve been working on for a long time, is not clear. There’s an important distinction to be made between a database that has a number of automated processes and one that is fully autonomous. Customers can choose to just use automation, or to take the plunge and hand over all their management to Oracle’s cloud operations for the autonomous option.” – The Register

  • Here The Register pointed out to its readers a potential blurring of the lines between the low-level automation and the new claims that Oracle has made.
  • But at the same time, The Register blurs the definition between the automated database and a managed database.

Look at this bizarre sentence from The Register:

“Customers can choose to just use automation, or to take the plunge and hand over all their management to Oracle’s cloud operations for the autonomous option.”

Is that a well thought out sentence? Let us think about this for a second.

If a database is autonomous, why would it need to be managed?

It seems like if you spend enough time talking to top executives at Oracle, pretty soon you can’t figure out which way is up.

This presentation is titled the Autonomous Data Warehouse Cloud, but it does not show any autonomous activities. Rather, George Lumpkin simply shows analytics that are available within the offering. The things George Lumpkin demonstrates should not have to be performed the way he is performing them if the database were autonomous. Larry Ellison and the Oracle documentation say one thing, but the demo shows something different.

Automating Lower Level or Higher Level Database Activities?

But in this article, Oracle made a mistake. In previous articles and materials, they have proposed that even analytics would be automated. Then in this quote, they state something very different.

“Less time on infrastructure, patching, upgrades, ensuring availability, tuning. More time on database design, data analytics, data policies, securing data.”

That is, the more basic items are automated, which in this telling leaves more time for things like analytics. Yet in other Oracle quotations, they state that both lower level and higher level database activities will be automated.

Oracle cannot keep a consistent storyline as to how much is automated; it changes depending on which Oracle source is speaking.

Inconsistencies like this occur when something is not real, that is, when things are being made up.

Secondly, Oracle assumes that the customer always wants the database upgraded. Let us get into some important reasons why automation for things like upgrades is not as straightforward as Oracle is letting on.

Oracle Upgrades are Not Free

Version 12.1.0.2 of the Oracle database, which brought in-memory capability, cost an estimated $23,000 per processor.

This is explained by The Register:

“This means that once the release – which has a naming scheme that is typically associated with straightforward patch and performance distributions – has been downloaded by IT and the internal database systems have been updated, a less careful database administrator could create an in-memory database table with a single command, thereby sticking their organization with a hefty bill next time Oracle chooses to carry out a license fee audit.”

Therefore, there are implications to upgrades; they can’t necessarily be “autonomously upgraded.” Most of the Oracle instances in the world are on version 11, not even on 12, much less the most recent release of 12. How will the autonomous database work for these customers? Remember, they don’t want to be upgraded.

“As a recent Rimini Street survey showed, as much as 74 per cent of Oracle customers are running unsupported, with half of Oracle’s customers not sure what they’re paying for. These customers are likely paying full-fat maintenance fees for no-fat support (meaning they get no updates, fixes, or security alerts for that money).” – NZReseller.

There are reasons these companies are not upgrading to the latest version. A major one is that many customers do not feel the new features are worth the time, effort, or money.

Quite obviously, if Oracle could upgrade all customers instantaneously to 18, it would; doing so would deliver a significant revenue increase.

Upgrade Complications

What if the automatic upgrade interferes with something that the customer has set up in the database?

This hands Oracle control that the customer does not necessarily want to give up.

  • AWS is offering a fully managed database, which means they are taking full responsibility for the database.
  • Oracle, on the other hand, is offering (with the automated database) some lower level tasks to be controlled by a machine, but this should not be taken to be the same thing that AWS is offering.

Oracle’s Support Quality and IaaS Success?

Furthermore, Oracle has had significant problems with its support quality, choosing to perform cost-cutting rather than maintaining quality. So if Oracle has such an issue with support, then why would they be able to provide high-quality IaaS support with Oracle Cloud? Being a successful IaaS/PaaS provider means being focused on service. Since when in the past 15 years has this been Oracle’s reputation?

The Loss of Control to Automation

Getting back to the loss of control, this is addressed in the following quotation.

“There’s a lot of concern about giving up control,” said Baer. “The initial uptake will be modest, and a lot will just be getting their feet wet …Organisations like banks, which are highly regulated, will be the last to surrender control. Oracle’s Daheb conceded customers might still want to manage something themselves. “They might say, this is dev/test, go ahead, automate that bad boy… this is core, customer-facing – maybe we don’t want to do that anytime soon.” – The Register

This is an inconsistency. Is everything going to be fully automated, as Oracle’s messaging says, or are there examples, perhaps many, where things won’t be automated?

“But, he argued, “the big thing” about the autonomous database is that Oracle is offering customers the choice and ability to “get to it at whatever pace makes sense for them”. – The Register

This is a textbook pivot.

Pivoting Away from Automation When Challenged

Reality is conflicting with Oracle’s messaging. The reason is that Oracle is overstating the degree to which customers will be able to automate Oracle databases. When faced with questions about this reality, the response is that the customer now has the “choice.”

But that is not the marketing pitch. The marketing pitch is things are about to become incredibly automated with Oracle DBs.

“If Oracle’s customers’ enthusiasm for that change is anything to go by, we will be waiting some time before its autonomous database is the norm.” – The Register

We congratulate The Register for pushing back on Oracle here.

Even without deep technical knowledge of their own, they were able to find out what customers thought and include that in the article.

Why Oracle is Selling This Automation Story

This is telling execs exactly what they want to hear. There is no nuance in the explanation of the Oracle automated database, such as noting that AWS obtains economies by managing large numbers of DBs, and that its web-based DB maintenance and elastic offering reduce maintenance overhead. Instead, Oracle’s pitch is that a magic box called Oracle automation will automate everything.

AWS is making real change happen with its approach, and Oracle is off talking about cutting slices of cheese off the moon.

The Automated Database for Selling Oracle’s Growth Story to Wall Street

Our analysis is that there is very little to the autonomous database. However, Oracle is using the autonomous database as a selling point to Wall Street.

“Under Mark Hurd and Safra Catz, who share the chief executive officer title, Oracle has bet its future on a new version of its database software that automates more functions and a growing suite of cloud-based applications. Last quarter’s results were a reminder that the company still faces stiff competition from cloud vendors including Amazon.com Inc., Microsoft Corp. and Salesforce.com Inc.” – Bloomberg

So the autonomous database story is inaccurate, and Oracle’s cloud story is inaccurate as well. Oracle is seeing very little growth in its cloud business. (Financial analysts have picked up on this second story.)

And in our review of several analysts’ comments on the autonomous database, the analysts seem to lack the understanding of how Oracle databases work in practice that would be needed to validate Oracle’s claims. They simply assume the Oracle autonomous database will become successful. Secondly, the observation we have made, that the autonomous database is the opposite of what Oracle has been about, is also absent.

Oracle wins our Golden Pinocchio Award for its claims about the Oracle Automated Database.

Conclusion

Oracle’s claims around the autonomous database don’t hold up to scrutiny; indeed, those claims are what earned Oracle our Golden Pinocchio Award.

Secondly, Oracle is a curious source for the autonomous database, as Oracle has throughout its history had what is widely considered the highest-overhead, most complex, and most difficult to manage database. The argument was always that the Oracle DB could do things that other databases could not do. However, part of this was based on the fact that Oracle made such exaggerated claims for its database. And the distinction in upper-end capabilities between Oracle and other databases that are far less expensive to purchase and maintain has declined.

Now that this is becoming a more broadly understood concept, Oracle is marketing against its traditional messaging (and the reality of its database product).

In this way, the automated database marketing strategy looks identical to SAP’s Run Simple marketing program, which attempted to counter SAP’s image of being complicated to run and use, when in fact SAP is without question the most complicated set of applications to run. However, Oracle has not been able to push the claims of the automated database as effectively as SAP pushed the claims of Run Simple, because Oracle does not have SAP’s partner ecosystem or its degree of control over the IT media.

Finally, an article on AWS from SiliconANGLE, published several months after this article, had the following to say.

“But Oracle’s push doesn’t appear to have had much impact on AWS, whose revenue rose 49 percent in the latest quarter, to $5.4 billion — even faster than the previous quarter. Moreover, Vogels noted that AWS has seen 75,000 migrations from other databases to its own in the cloud since the migration service launched in early 2016, up from 20,000 in early 2017.”

A Review of Sources Provided by Oracle in Response to this Article

As a response to this article, a representative from Oracle provided the following documents.

Article 1: Automated vs. Autonomous (By Oracle)

https://blogs.oracle.com/database/autonomous-vs-automated?

This article is by someone in product marketing at Oracle and merely serves to repeat the claims made about the autonomous database, comparing it to inventions like the telephone. This is consistent with Larry Ellison’s claim that the Oracle autonomous database will be an innovation comparable to the Internet. Here is the exact quote:

“The Oracle Autonomous Database is based on technology as revolutionary as the internet.” – Larry Ellison

So the author of the Oracle article compared the autonomous database to…

  • The Invention of the Telephone
  • The Dawn of the Personal Computer
  • The Internet
  • The iPhone
  • The Self Driving Car

Some comparisons were left out, for instance the internal combustion engine, the discovery of DNA, and the light bulb. But it is not clear why Oracle restricted its comparisons to only some of the most important discoveries in human history.

Our Conclusion from the Oracle Article

There was nothing for us to comment on anew, as the claims were already evaluated earlier in this article. The Oracle article is targeted at people who do not think very deeply about topics.

Article 2: Oracle’s Autonomous Cloud Portfolio Drives Greater Developer Productivity and Operational Efficiency

The second article was from Ovum. We are not familiar with Ovum as a source, but they were introduced to us by the Oracle representative as independent of Oracle.

https://www.oracle.com/us/corporate/analystreports/ovum-autonomous-cloud-4417640.pdf

First, the location of this report is a problem: it is on Oracle’s website. Consumer Reports does not allow companies it rates to place the results on their websites or in any printed material. Gartner, by contrast, allows exactly this, which is one of many reasons Gartner cannot be considered a true research firm, as covered in the following article: How Gartner Research Compares to Real Research Entities.

This alone should cause one to question Ovum’s true independence from Oracle. You will not find any Brightwork Research & Analysis report on any vendor’s website. Why would you? We receive no income from any vendor. Something to understand: as soon as an entity accepts money from a vendor, the study converts from research to marketing propaganda. All of the vendors that have reached out to Brightwork Research & Analysis asking for research to be performed began with the research conclusion they wanted the study to reach; the idea was that we would then assemble the information to support that conclusion.

If we take this quotation, it is instructive of the overall approach of the article.

“While, at the top level, the concept of a fully packaged and managed PaaS should ideally include the provisions for the automation of tuning, patching, upgrade, and maintenance tasks, it is the capabilities driving developer productivity and faster time to value that deliver greater value to users. In this context, Oracle has an early-mover advantage and offers a clear differentiation in comparison to its nearest PaaS competitors. This is in line with Oracle’s strategy to embed artificial intelligence (AI) and machine learning (ML) capabilities as a feature to improve the ease-of-use and time-to-value aspects of its software products, and not just focus on directly monetizing a dedicated, extensive AI platform.”

The claims made by Oracle are not so much analyzed in this report as they are assumed to be true. The article does not question how it is that Oracle has appeared with such capabilities so recently after offering such a high maintenance database for decades. The article does not read so much as independent research as “dropped in” Oracle marketing material.

The following is an example of this.

“On the data management side, Oracle offers the ability to rapidly provision a data warehouse, and automated, elastic scaling, with customers paying only for the capacity they use. In the context of security and management, Oracle offers ML-driven analytics on user and entity behavior to automatically isolate and eliminate suspicious users.”

Can it Be Detected if the Sentences are From Ovum or From Oracle?

If this were merely a quotation from Oracle that the Ovum author then analyzed, it would be fine. But it isn’t. This is Ovum’s own statement regarding the autonomous database.

Notice that this paragraph uses the same superlatives Oracle would have used to describe the benefits. There is no outside voice in these explanations. If the source attribution were removed, it would be impossible to guess whether this passage was written by Oracle or by someone friendly to Oracle.

Our Conclusion from the Ovum Article

Overall, while we had never read a report from Ovum previously, this report damaged our view of the entity; on the evidence of this report at least, Ovum cannot be said to have performed any research at all. Ovum merely repeated marketing statements made by Oracle.

It is difficult to see how this report would differ had Oracle written it itself.

Article 3: Oracle’s Autonomous Database: AI-Based Automation for Database Management and Operations

https://idcdocserv.com/US43571317

The third report sent to us by the Oracle representative is from IDC. IDG owns IDC. It breaks down thusly.

  • IDG is the overall conglomerate that runs many IT media websites and takes money from any vendor of any reasonable size to parrot their marketing literature.
  • IDC is the faux research arm of IDG. IDC claims to perform actual research, but we dispute this claim; IDC is quite obviously tightly controlled by the entities that pay either IDG or IDC, which gives it major conflicts of interest. IDC may have been paid directly by Oracle for this article, or may have written it because Oracle is such a large customer of IDC.

IDG is a media conglomerate that is neither a journalistic entity nor a research entity that operates to maximize profits in the current media climate where virtually no income comes from readers, and the media entity must fund itself from industry sources. Neither IDG nor IDC ever disclose this ocean of funding that operates in the background. Masses of IDG ad sales reps are in constant contact with vendors and consulting companies negotiating fees and discussing what industry-friendly article will appear where and at what price.

We have extensive experience analyzing IDG produced material. IDG owns eight of the 20 largest IT media outlets including names like ComputerWeekly and CIO. IDG accepts paid placements and produces large quantities of vendor friendly and inaccurate information and is paid by Oracle both for placements and for advertisements. We covered IDG in the following article.

Normally when we review an IDG article which covers SAP, that article will score between a 1 and 3 out of 10 for accuracy.

Now that we have reviewed the conflicts of interest and credibility problems with IDC/IDG let us move to analyzing the content of the article:

The first quote to catch our attention was the following:

“Databases and other types of enterprise software have had heuristics for years that provide various levels of operations automations. Oracle is no exception to this. What is new is the use of machine learning algorithms that replace the heuristics. There are numerous reasons for this — lack of sufficient amounts of data needed to train an ML model, lack of compute power to train the model effectively and in near real time, and lack of a sufficient variety of data coming from different types of users and use cases that helps to broaden the applicability of the algorithms.”

As with the Ovum study, this appears to be a copy and paste from Oracle’s provided information.

Secondly, its foundational assumption, that ML is always superior to heuristics, is untrue. The book Rationality for Mortals outlines how heuristics can often defeat more complicated models that analyze more observations.

In this quote, the article repeats Oracle claims whose benefits we disputed earlier in this article.

“In addition to providing all tuning and maintenance functions, most of which are automated, this service also provides regular software patching and upgrading, so the user is always running on the latest software, and knows that, for instance, the most recent security patches have been applied.”

As we stated, most customers are not even on Oracle 12. They are running older versions of the Oracle DB. Many companies have dropped Oracle support entirely because it is considered such a poor value.

Moreover, upgrading a database has a number of implications, and it is not a simple matter of upgrading automatically. The authors of this report do not account for or even mention any of this.

In this quotation later in the report, IDC makes a false claim about ML.

“Although machine learning libraries have been around for decades and have been offered as part of many of the world’s statistical packages, including IBM’s SPSS, SAS, and so forth, the use of machine learning by enterprises hasn’t been widespread until recently because these algorithms require a lot of data and a lot of compute power.”

That is inaccurate. Let us look into why.

ML Has Risen Due to Recent Advancements in Hardware?

Computers have been fast enough to run ML algorithms for many years now. The majority of the time in an ML project is spent on data collection, data munging and then analysis. The actual processing time is normally short unless a very large number of variables is used (and using so many variables, while now popular, raises the question of overfitting).

When I run ML routines, the results are returned in less than 10 minutes, and I am using a seven-year-old laptop. We have had gobs and gobs of processing power for many years now.
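The point about processing power can be illustrated with a small sketch. The example below is our own illustration, not from any of the reports: it fits a simple linear model to 50,000 synthetic data points using batch gradient descent, in pure Python with no ML libraries at all, and times the run. On any ordinary machine this completes in seconds. All names and parameters here are hypothetical choices for the demonstration.

```python
import random
import time

# Illustrative sketch (hypothetical data and parameters): fit y = w*x + b
# to 50,000 synthetic points with plain batch gradient descent.
random.seed(0)
n = 50_000
xs = [random.uniform(-1, 1) for _ in range(n)]
# True relationship: slope 3.0, intercept 0.5, plus small noise.
ys = [3.0 * x + 0.5 + random.gauss(0, 0.1) for x in xs]

w, b = 0.0, 0.0
lr = 0.5
start = time.time()
for _ in range(40):  # 40 full passes over all 50,000 points
    grad_w = sum((w * x + b - y) * x for x, y in zip(xs, ys)) / n
    grad_b = sum((w * x + b - y) for x, y in zip(xs, ys)) / n
    w -= lr * grad_w
    b -= lr * grad_b
elapsed = time.time() - start

# The recovered parameters are close to the true slope and intercept,
# and the whole fit runs in a matter of seconds on commodity hardware.
print(round(w, 2), round(b, 2))
```

Two million point-updates of classic statistical learning, in interpreted Python, with no special hardware. Whatever has driven the recent surge of ML marketing, it is not that computers only recently became capable of this.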

The reason for the rise in the discussion of ML has not been computer hardware related, but due to marketing departments latching onto ML to help market their products. How can this be proven? Because, according to Google Trends, the most significant rise in the interest in ML was from 2014 to the present. How much did computer hardware increase in speed from 2014 to 2017? Furthermore, the interest in ML was greater in 2004 than in 2014. Were computers faster in 2004 than 2014?

ML/AI is very effective at illustrating value to people without a mathematical background.

Our Conclusion of the IDG Report

Overall, the IDG report is a restatement of Oracle’s claims around the autonomous database without any analysis. There is no explanation for the sudden appearance of AI/ML in Oracle’s database, and no questioning of Oracle’s explanations regarding AI/ML.

Like the Ovum report, this is not research.

Overall Report Conclusions

None of the sources provided demonstrate any independent thinking; they only serve to demonstrate that Oracle has a lot of money to spend on media entities and faux research entities that will take money to repeat whatever Oracle’s marketing team tells them to write.

The Problem: A Lack of Fact-Checking of Oracle

There are two fundamental problems around Oracle. The first is Oracle’s exaggeration, which means that companies that purchase from Oracle end up getting far less than they were promised. The second is that the Oracle consulting companies simply repeat whatever Oracle says. This means that on virtually all accounts there is no independent entity that can contradict statements made by Oracle.

Being Part of the Solution: What to Do About Oracle

We can provide feedback from multiple Oracle accounts that provide realistic information around a variety of Oracle products — and this reduces the dependence on biased entities like Oracle and all of the large Oracle consulting firms that parrot what Oracle says. We offer fact-checking services that are entirely research-based and that can stop inaccurate information dead in its tracks. Oracle and the consulting firms rely on providing information without any fact-checking entity to contradict the information they provide. This is how companies end up paying for items which are exorbitantly priced, exorbitantly expensive to implement and exorbitantly expensive to maintain.

If you need independent advice and fact-checking that is outside of the Oracle and Oracle consulting system, reach out to us with the form below or with the messenger to the bottom right of the page.

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions or incentives of any nature are allowed.


References

https://www.theregister.co.uk/2017/09/08/ellison_and_cos_equity_now_relies_on_a_80_oracle_stock_price_and_cloud_success/

Unrelated article that shows Oracle’s focus on executive compensation. Executive compensation (overcompensation) driven off of stock prices is a primary reason for the release of false information to the public. Lying in public announcements can also be seen as a way of communicating loyalty to the company.

https://aws.amazon.com/rds/oracle/

https://siliconangle.com/blog/2018/06/21/amazon-cto-cloud-offers-database-need/

https://www.theregister.co.uk/2017/10/08/oracle_openworld_2017_analysus/
In this article The Register pushes back on Oracle’s claims for the automated database.

https://www.forbes.com/sites/oracle/2018/03/30/larry-ellison-oracle-is-revolutionizing-the-database-and-it-service-delivery/#4813b9e87a4d

The Chinese-owned publication Forbes has this article by Jeff Erickson, who is listed as an “Editor at Large for Forbes.” Its title is Larry Ellison is Revolutionizing Database and IT Service Delivery. One wonders whether the author has any conflicts of interest in declaring this. Did Forbes consider this potential conflict? Or did Oracle paying to publish the article at Forbes assuage these concerns?

This article repeats outlandish claims by Larry Ellison, ensuring Jeff Erickson a good annual review it would seem. Claims include:

“This technology changes everything,” he said. “The Oracle Autonomous Database is based on technology as revolutionary as the internet.”

“To set up, provision, and use Oracle Autonomous Data Warehouse Cloud, a user simply answers a few short questions to determine how many CPUs and how much storage the data warehouse needs. Then the service configures itself typically in less than a minute and is ready to load data.

Once the data warehouse is up and running, its operation also is autonomous, delivering all of the analytic capabilities, security features, and high availability of Oracle Database without any of the complexities of configuration, tuning, and administration—even as warehousing workloads and data volumes change.”

This article written by Oracle comes to a surprising conclusion about AWS. Can you guess what it is before reading it?

“AWS Comes Up Short – At the launch event at company headquarters in Redwood City, California, Ellison showed how Oracle Autonomous Data Warehouse Cloud can run faster than comparable database offerings from Amazon Web Services, while being more scalable, and costing less.”

That is curious. I would have expected an article written by Oracle to praise AWS. How odd.

“In addition to running faster and thus costing less, Oracle Autonomous Data Warehouse Cloud is truly elastic, Ellison said, while the Amazon Elastic Compute Cloud, ironically, is not. With the AWS service, “you pay for a fixed configuration” and when you want to add CPUs, you have to take the database down and wait, he said.”

Well, AWS’s service sounds truly useless. Probably no purpose in investigating it now, is there?

In the following article also by Jeff Erickson…

https://www.forbes.com/sites/oracle/2018/03/27/how-oracles-new-autonomous-data-warehouse-works/#7cf519de5c7f

Titled How Oracle’s New Autonomous Data Warehouse Works

Oracle claims that the Autonomous Data Warehouse Cloud allows a data warehouse to be set up in less than a minute.

“set up a high-powered data warehouse in less than a minute by answering just five questions:

How many CPUs do you want?
How much storage do you need?
What’s your password?
What’s the database name?
What’s a brief description?

“And that’s it,” says Keith Laker, an Oracle lead product manager for the company’s autonomous data warehouse. “Twenty-five seconds and you’ve got a high-performance data warehouse that’s ready to go.”

And once the data warehouse is running, its operation also is autonomous, using the world’s most advanced database platform and machine learning to operate without human intervention, tuning and optimizing itself for top performance and patching itself without taking the system offline.”

Truly amazing. If Oracle has not yet been, they should be recommended to the Nobel Committee for consideration for a Nobel Prize.

Finally, after decades, people now have a place to put their data, as evidenced by the quotation above.

And without a hint of the potential for overstatement, Jeff Erickson finishes off the article thusly.

“Autonomous Data Warehouse Cloud Service is the next-generation cloud service for the whole organization, with high performance and reliability and vastly reduced labor costs because it’s autonomous. The service runs as little as $1.68 per CPU hour, with storage as low as $148 per terabyte per month. Oracle customers can also bring their existing on-premises licenses to take advantage of Oracle’s BYOL program for PaaS services. Get details on the pricing page.”

https://www.oracle.com/database/autonomous-database/feature.html
Very little information is provided about the autonomous or automated database at the Oracle website.

https://www.sdxcentral.com/articles/news/oracles-ellison-touts-totally-automated-self-driving-oracle-database/2017/09/
SDX Central simply repeats Oracle’s claims verbatim in this article.

https://read.acloud.guru/why-amazon-dynamodb-isnt-for-everyone-and-how-to-decide-when-it-s-for-you-aefc52ea9476

https://www.amazon.com/Rationality-Mortals-Uncertainty-Evolution-Cognition/dp/0199747091

Oracle’s new database uses machine learning to automate administration

VentureBeat simply repeats Oracle’s claims for the autonomous or automated database verbatim in this article.

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html

https://forums.theregister.co.uk/forum/1/2017/10/08/oracle_openworld_2017_analysus/
Good comments on Oracle’s claims.

https://www.theregister.co.uk/2017/08/11/number_off_oracle_rounds_up_major_database_release_cycle_numbers/
Explains the jump from Oracle 12 to Oracle 18

Not on the automated database but on the new versioning. Included to explain to readers confused about the jump from 12 to 18.

“So what would have been Oracle Database 12.2.0.2 will now be Oracle Database 18; 12.2.0.3 will come out a year later, and be Oracle Database 19.

The approach puts Oracle only about 20 years behind Microsoft in adopting a year-based naming convention (Microsoft still uses years to number Windows Server, even though it stopped for desktop versions when it released XP).”

https://www.theregister.co.uk/2014/07/24/oracle_in_memory_database_feature/
Describing costs of upgrading to Oracle In Memory

The Risk Estimation Book

 

Rethinking Enterprise Software Risk: Controlling the Main Risk Factors on IT Projects

Better Managing Software Risk

Software implementation is risky business, and success is not a certainty. But you can reduce risk with the strategies in this book. Undertaking software selection and implementation without approximating the project’s risk is a poor way to make decisions about either projects or software. Yet that is the way many companies do business, even though 50 percent of IT implementations are deemed failures.

Finding What Works and What Doesn’t

In this book, you will review the strategies commonly used by most companies for mitigating software project risk, learn why these plans don’t work, and then acquire practical and realistic strategies that will help you maximize success on your software implementation.

Chapters

Chapter 1: Introduction
Chapter 2: Enterprise Software Risk Management
Chapter 3: The Basics of Enterprise Software Risk Management
Chapter 4: Understanding the Enterprise Software Market
Chapter 5: Software Sell-ability versus Implementability
Chapter 6: Selecting the Right IT Consultant
Chapter 7: How to Use the Reports of Analysts Like Gartner
Chapter 8: How to Interpret Vendor-Provided Information to Reduce Project Risk
Chapter 9: Evaluating Implementation Preparedness
Chapter 10: Using TCO for Decision Making
Chapter 11: The Software Decisions’ Risk Component Model

Did SAP Simply Reinvent the Wheel with HANA?

Executive Summary

  • SAP made many claims about HANA, but upon analysis, what SAP actually did with HANA is reinvent the wheel.
  • We cover SAP proposals ranging from code pushdown to in-memory computing to the fictitious backstory for HANA.

Introduction to HANA as a Derivative Product

Innovation has been a critical selling point for HANA. You will learn how the claims around HANA’s innovation check out.

Understanding SAP’s History with HANA

When SAP first introduced HANA, with enormous fanfare, the idea was that SAP had created a whole new database. In fact, as recently as the Q4 2017 analyst call, Bill McDermott stated the following:

“Back in 2010, we set bold ambitions for SAP. We focused on our customers to be a truly global business software market leader. We set out to reinvent the database industry.

Forrester has now defined the new market for translytical data platforms, and of course, they ranked SAP HANA as the clear number one. We led the market with intelligent ERP, built on an in-memory architecture.”

In this article, we will analyze how much of what SAP created with HANA is new and how much is simply copied from other database vendors and then claimed as innovation.

Important Information About HANA

We have performed extensive detailed analysis of HANA. The more we look, the less we can find that is unique or innovative.

Let us review some of the major points of inconsistencies with the innovation story around HANA.

The Performance of HANA

SAP has vociferously proposed that HANA is faster than any other database. However, they have provided no evidence that this is true. We performed the necessary research into this topic and concluded that SAP’s claims of superiority versus the competing offerings of Oracle, IBM and Microsoft are untrue. We have explained this in the article What is the Actual Performance of S/4HANA? 

The Column Oriented Design of HANA

SAP has proposed that they essentially invented column-oriented databases. Column-oriented databases go back as far as row-oriented databases (often referred to as relational), and SAP acquired Sybase in 2010, before HANA was introduced. Sybase already had IQ, now SAP IQ, which was a column-oriented database.

Furthermore, SAP made other acquisitions very quietly, to give the impression that they “invented” the technologies that underpin HANA. At the same time, SAP’s marketing documentation was intended to give prospects the impression that SAP had invented a new category of databases. Notice Bill McDermott’s statement about “reinventing the database industry.”

That is, reinventing it with a database design that had been around for decades? Furthermore, this comment was not made in 2011 or 2013, when HANA had yet to be challenged. It was made in 2018, after plenty of time had passed to verify the marketing statements about HANA against real implementations, benchmarking and the HANA technical documentation.

SAP has a long history of faking innovation. Faking innovation is a major strategy in both the software and patent drug industries; it covers for a process whereby innovations are taken from the public domain (or from competitors) and repackaged as something developed internally.

Calling All New HANA Development “Innovations”

SAP has claimed that because HANA is being actively developed, each development is an innovation.

Yet, the items that HANA is developing already exist in other databases. The definition of innovation is that the item needs to be new. Not new to the software vendor, but new to the world.

While not often discussed, an innovation should also be beneficial. Areas where SAP has done things that are new, such as reducing aggregates, have not been demonstrated to be beneficial. Reducing database size only really matters if a company is somehow constrained in the size of its databases, and with the very low cost of modern storage, this is not the case.

SAP has proposed through surrogates like John Appleby of Bluefin Solutions that hard disks “take up a lot of space,” and that companies cannot afford the storage space to house disks, which is either absurd or insulting depending upon your perspective. One has to question the innovation of any company with spokesmen like John Appleby (whom we cover in the article Why John Appleby Was so Wrong About His HANA Predictions), Vishal Sikka or Hasso Plattner, who are repeatedly found in hindsight to have knowingly released false information into the marketplace. Truly innovative entities normally do not find it necessary to lie about their innovations.

Introduction

SAP uses terms for propagandistic effect more than any vendor that we study at Brightwork Research & Analysis. Using terms for propaganda means that the term has no other purpose other than to mislead the audience.

In this article, we will explore the specific issue of how SAP uses the term innovation as propaganda in order to convert a weakness into a strength, in particular with respect to the SAP HANA product. We make the following observation about innovation and HANA that we have not seen articulated elsewhere.

HANA and Innovation?

In the article How HANA 2 Was a Cover Story for the Real Story, we covered how SAP repeatedly used the term innovation as cover for the fact that something very undesirable occurred with HANA 2, something that customers of earlier versions of HANA would never have imagined they would have to deal with: HANA 2 was not backwards compatible with HANA 1 SPS 9 and below.

“What is really changing is SAP enabling customers to choose how they manage their update cycles. Currently, SAP releases a new SPS pack for HANA every 6 months and expect customers to be within 2 SPS levels as part of the terms of their maintenance contract. For some organisations this rate of change is just too much but something they have to manage to retain support, meaning those organisations often have to live with the constant change – this constant change of their core landscape can often kill innovation.” – AgilityWorks

Research at Brightwork into HANA indicates that SAP and their surrogates use the term “innovation” to simply mean development. HANA is behind its competitors and is developing more rapidly in order to catch up. But the correct word for this is not innovation.


Innovation Used to Excuse Database Instability

Another major use of the term innovation by SAP has been to serve as a cover for product immaturity. SAP has repeatedly talked about how customers would receive “rapid innovations.” However, again, these areas SAP is developing are not new. They are new to HANA, but that does not make them new to databases in general.

Because HANA is not stable, SAP again brought out the trope of innovation: the fact that SAP was “innovating” was why customers should not expect stability. The obvious fact that the same functionality is available, with stability, from far more established database vendors is not observed by SAP or by the consulting companies that advise companies on SAP.

The Code PushDown of HANA (Innovation or Innovative Terminology?)

A stored procedure is the established term for code that is moved from the application layer to the database layer, normally for performance reasons. However, SAP decided to come up with a new term: code pushdown.

Why?

Well as our colleague points out.

“But by using the code pushdown term and not “stored procedures + DB views”, they not only have an innovative term for real “stored procedures” but also obscure that classic ABAP views are extremely far behind REAL Views that exist for decades and that this is one reason why the database is kept so stupid in classical “ERP on AnyDB”.”

This is why analyzing the terminology that SAP uses is so important. SAP uses specialized and often inaccurate terminology in order to lie to customers. This is found in the way that SAP called HANA “in memory,” which we cover in the next section. When a false term is used, it is just the starting point; it can be considered the sound of a gate opening for what will be a torrent of false information.

SAP’s presented logic for code pushdown is performance, but when SAP had no database to sell, they were in favor of performing processing in the application layer. Code pushdown is what has served as the justification for SAP to keep S/4HANA exclusive to HANA. That is curious: an ERP system, which has relatively low performance requirements, must have its code pushed down to the HANA database, while other applications that SAP offers, over which SAP has less account control, still work on AnyDB.

Here the obvious factor determining which applications are exclusive to HANA and which are not is leverage, not technical requirements.

Oracle on SAP’s Code Push Down

In Oracle’s August 2019 paper Oracle for SAP Database Update, Oracle has the following to say on SAP’s innovation claim regarding code pushdown.

SAP used to think of a database as a dumb data store. Whenever a user wants to do something useful with the data, it must be transferred, because the intelligence sits in the SAP Application Server. The disadvantages of this approach are obvious: If the sum of 1 million values needs to be calculated and if those values represent money in different currencies, 1 million individual values are transferred from the database server to the application server – only to be thrown away after the calculation has been done. As a response to this insight, SAP developed the..

“Push down” strategy: push down code that requires data-intensive computations from the application layer to the database layer. They developed a completely new programming model that allows ABAP code to (implicitly or explicitly) call procedures stored in the database. And they defined a library of standard procedures, called SAP NetWeaver Core Data Services (CDS).

20 years earlier, Oracle had already had the same idea and made the same decision. Since version 7 Oracle Database allows developers to create procedures and functions that can be stored and run within the database.(emphasis added) It was, therefore, possible to make CDS available for Oracle Database as well, and today SAP application developers can make use of it.

How can code pushdown be innovative if Oracle had been doing it 20 years before SAP?
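The difference Oracle describes, transferring every value to the application layer versus letting the database do the work, can be sketched in a few lines. The example below is our own illustration, not SAP or Oracle code; it uses SQLite for portability (SQLite has no stored procedures, so a server-side SUM() stands in for one), and the table and column names are hypothetical.

```python
import sqlite3

# Hypothetical table of payment amounts in an in-memory SQLite database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payments (amount REAL)")
conn.executemany("INSERT INTO payments VALUES (?)",
                 [(float(i),) for i in range(1, 1001)])

# Application-layer approach: transfer every row, then compute the sum
# in the application. 1,000 values cross the database boundary only to
# be thrown away after the calculation, as the Oracle paper describes.
rows = conn.execute("SELECT amount FROM payments").fetchall()
app_sum = sum(amount for (amount,) in rows)

# "Pushed-down" approach: the database computes the aggregate itself and
# returns a single row, which is the stored-procedure/aggregate idea
# databases have offered for decades.
(db_sum,) = conn.execute("SELECT SUM(amount) FROM payments").fetchone()

print(len(rows), app_sum, db_sum)
```

Both approaches produce the same total; the difference is that the pushed-down version moves one value across the boundary instead of a thousand. This is the entire substance of “code pushdown,” and it long predates HANA.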

The CDSs of HANA

Core Data Services (CDS) are a form of code pushdown implemented as database views. SAP has introduced CDS as something new, when in fact it copies the idea of the dictionaries and views that have been available in AnyDB databases for decades.

SAP has stated that AnyDB can also “use” CDS, but the question is why anyone would want to do so. SAP is giving the impression that what is really just catching up with other database vendors is actually something new that does not already exist for AnyDB databases. Here again, SAP’s innovation claims do not pass the smell test.
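For readers unfamiliar with how old the underlying mechanism is, here is a plain SQL view of the kind CDS views resemble, sketched in SQLite. This is our own illustration with a hypothetical schema, not a CDS definition: a view exposes a filtered, joined projection that consumers query like a table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Hypothetical schema for illustration only.
CREATE TABLE orders    (id INTEGER, customer_id INTEGER, total REAL);
CREATE TABLE customers (id INTEGER, name TEXT);
INSERT INTO customers VALUES (1, 'Acme'), (2, 'Globex');
INSERT INTO orders    VALUES (10, 1, 250.0), (11, 2, 75.0), (12, 1, 40.0);

-- An ordinary database view: a named, reusable join-and-filter that
-- clients query like a table. This mechanism is decades old.
CREATE VIEW large_orders AS
SELECT o.id, c.name, o.total
FROM orders o JOIN customers c ON c.id = o.customer_id
WHERE o.total >= 100.0;
""")

result = conn.execute("SELECT name, total FROM large_orders").fetchall()
print(result)
```

Only the order above the threshold comes back, with the customer name already joined in. Renaming this long-established construct does not make it an innovation.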

Caching of Queries

In the document Boost Performance for CDS Views in SAP HANA, SAP states that it needs to cache queries for performance.

It further states:

“Keep CDS views simple (in particular serviceQuality A and B = #BASIC views)
In transactional processing, only use simple CDS views accessed via CDS key
Expose only required fields define associations to reach additional fields when requested”

This is odd. For an in-memory zero latency database like HANA, why would these limitations need to be put into place?

“Perform expensive operations (e.g. calculated fields) after data reduction (filtering, aggregation)
Avoid joins and filters on calculated fields
Test performance of CDS views. Test with reasonable (= realistic) test data”

This speaks to the need to limit the consumption of computing resources. Again, it should not apply to HANA.

“Stay tuned on caching possibilities of SAP HANA and Fiori apps.”
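The query-result caching SAP alludes to here is itself a standard, decades-old technique. The sketch below is our own minimal illustration of the idea, not SAP's implementation; the class and variable names are hypothetical. Results are memoized by query text and parameters and reused until invalidated, trading freshness for repeated-read speed, which is precisely the kind of latency-hiding an allegedly zero-latency database should not need.

```python
import sqlite3

class CachingConnection:
    """Minimal query-result cache (illustrative, not SAP's design)."""

    def __init__(self, conn):
        self.conn = conn
        self.cache = {}   # (sql, params) -> cached result rows
        self.hits = 0

    def query(self, sql, params=()):
        key = (sql, params)
        if key in self.cache:
            self.hits += 1            # served from cache, no database work
            return self.cache[key]
        result = self.conn.execute(sql, params).fetchall()
        self.cache[key] = result
        return result

    def invalidate(self):
        self.cache.clear()            # required after any write

conn = CachingConnection(sqlite3.connect(":memory:"))
conn.conn.execute("CREATE TABLE t (v INTEGER)")
conn.conn.execute("INSERT INTO t VALUES (1), (2), (3)")

first = conn.query("SELECT SUM(v) FROM t")
second = conn.query("SELECT SUM(v) FROM t")   # identical query: cache hit
print(first, second, conn.hits)
```

The second identical query never touches the database. Useful, certainly, but it is a workaround for query cost, which is exactly what the in-memory marketing claimed had been eliminated.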

Caching for Both HANA and Fiori?

Caching for both HANA and Fiori? Impossible! A foundational proposal of SAP since HANA was first introduced was that there should be no caching.

Everything, literally everything, is supposed to be in memory. Caching makes no sense within the presented HANA design. The people at SAP working on HANA who presented this at TechEd 2017 clearly do not understand Hasso’s vision.

According to Hasso Plattner, HANA is and forever will be zero latency. But the techniques described in the actual HANA technical documentation show a much more complicated picture, with SAP performing caching in several locations.

Not only can HANA not provide zero latency (surprise, surprise), but testing even optimized demo boxes shows that Fiori running on HANA underperforms open source databases and server technologies like MySQL and Apache, as explained in the article Why is the Fiori Cloud so Slow?

Furthermore, the hardware specs that SAP has for HANA are extremely large. The column-oriented store combined with the large quantities of RAM is supposed to be so incredible, that these types of techniques should not be necessary. But HANA underperforms other databases even though it has far more hardware. The Oracle benchmark shows that HANA was only able to come close to Oracle 12c performance with far more hardware. This is, of course, a benchmark produced by Oracle. However, other private benchmarks that have been made available to Brightwork show the same thing.

Everything In Memory and In-Memory Computing?

When HANA was first introduced, SAP stated that the entirety of the database would have to be loaded into memory. However, the technical documentation on HANA shows clearly that only some tables are loaded into memory. Neither the large tables nor the column-oriented tables are immediately loaded into memory. This is peculiar, as it was supposed to be the combination of column-oriented tables and high-speed memory that would provide HANA with its analytical advantage. Either way, HANA uses memory optimization…surprise, just as all of the other database vendors that SAP is copying its solution from. As we covered in the article How to Understand Why In-Memory Computing is a Myth, all databases have their tables placed into memory.

However, a database is much more than the percentage of its tables placed into memory or whether it uses a column store for more of its tables. This is, by the way, another detail that has come to light as time has passed. Originally, SAP stated the entire database was a column store (which would not have made any sense, by the way); then it was determined that many of the tables in HANA are in fact row-oriented.

Here again, one thing is stated about how HANA works, implying that all other database vendors are backward for using memory optimization, and then once the technical details are read, HANA just does the same thing other databases do. This gets back to the central point that almost nothing that a salesperson or HANA marketing literature says about HANA can be trusted.
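The row-store versus column-store distinction at the heart of this marketing can be shown schematically. The snippet below is our own sketch, not HANA internals: the same three hypothetical records stored both ways, with an analytic scan of a single attribute. In the columnar layout the scan touches only that one column, which is the real (and decades-old) reason column stores suit analytics.

```python
# Schematic sketch (hypothetical data, not HANA internals): the same
# records in a row-oriented and a column-oriented layout.
rows = [
    {"id": 1, "region": "EU", "revenue": 100.0},
    {"id": 2, "region": "US", "revenue": 250.0},
    {"id": 3, "region": "EU", "revenue": 40.0},
]

columns = {
    "id":      [1, 2, 3],
    "region":  ["EU", "US", "EU"],
    "revenue": [100.0, 250.0, 40.0],
}

# Row store: every whole record is visited just to read one field.
row_total = sum(r["revenue"] for r in rows)

# Column store: the revenue column is scanned directly, ignoring the
# other attributes entirely.
col_total = sum(columns["revenue"])

print(row_total, col_total)
```

Both layouts yield the same total; the difference is how much data the scan must touch. Neither layout is an SAP invention, and a real database mixes both, which is exactly what the HANA documentation quietly admits.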

This article was published in March of 2018. However, in August of 2019, Oracle published a document called Oracle for SAP Database Update.

In this document, Oracle made the following statement about HANA versus Oracle.

Oracle Database 12c comes with a Database In-Memory option, however it is not an in-memory database. Supporters of the in-memory database approach believe that a database should not be stored on disk, but (completely) in memory, and that all data should be stored in columnar format. It is easy to see that for several reasons (among them data persistency and data manipulation via OLTP applications) a pure in-memory database in this sense is not possible. Therefore, components and features not compatible with the original concept have silently been added to in-memory databases such as HANA.

Here Oracle is calling out SAP for lying, and we agree. SAP’s proposal about placing all data into memory was always based upon ignorance, primarily on the part of Hasso Plattner.

If SAP had followed Oracle’s design approach, companies would not have to perform extensive code remediation — as we covered in the article SAP’s Advice on S/4HANA Code Remediation.

The Obvious Conclusion

Increasingly it simply appears that SAP purchased some database products and then reverse engineered existing databases, while putting extra emphasis on placing more of the database in memory.

Innovation or Copying while Throwing in Confusing Terminology?

When discussing this topic with several other people investigating HANA, the following insight was given to us.

“They recreate the wheel as an octagon because anyWheel is round and then sell you a cycloid molded road to drive smoothly – but only on their roads.”

What is Truly New in HANA?

It seems like a lot of this is just recreating the wheel, but with the blue SAP bow on top.

This is how Hasso Plattner, SAP and SAP partners would like to present the genesis of HANA. Not as a series of technologies that SAP purchased more than a year before HANA’s development, not as based upon databases that had been developed more than a decade before HANA, but as divine inspiration by Hasso and his brilliant PhDs. Hasso has repeatedly been referred to as a genius. A genius who “discovered” something that he directed SAP to purchase and then, after purchasing, immediately “invented.” This false storyline is laid out very carefully in the book The In Memory Revolution.

HANA, The Only Database With a Purpose Built Fictitious Backstory

In the article Did Hasso Plattner and His Ph.D. Students Invent HANA?, we uncovered (with some helpful hints from someone who reached out to us) that, unlike what was stated by SAP and Hasso Plattner, and unlike what has been repeated ad nauseam by compliant IT media entities and SAP consulting partners, the underlying technology for HANA was purchased, not invented, by SAP. Furthermore, Hasso Plattner and his Ph.D. students added nothing to these technologies except developing rather impractical ideas, such as a database having no aggregates.

SAP did not innovate with HANA. Their primary contribution was to promote the idea of the dual-purpose database, that is, a database that can equally well perform transaction processing and analytics. Yet there is no evidence that this strategy is worthwhile. While doing this, SAP has massively overstated the benefits of such a design while at the same time glossing over all of its downsides, one of which is higher overhead. Furthermore, as we covered in the article HANA as a Mismatch for S/4HANA and ERP, it is clear that SAP has not mastered the ability to perform both OLTP and OLAP equally well from a single database.
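The overhead argument can be made concrete with a toy sketch. This is our own illustration, not SAP's actual architecture: a store that wants to serve both point lookups and column aggregations well ends up maintaining two representations, so every write pays twice.

```python
# Toy illustration (not SAP's actual architecture) of why a
# "dual-purpose" store carries extra write overhead: to serve OLTP
# point lookups AND OLAP column scans, it maintains two
# representations, and every insert must update both.

class DualPurposeStore:
    def __init__(self, column_names):
        self.rows = {}                                # OLTP: key -> record
        self.columns = {c: [] for c in column_names}  # OLAP: column vectors
        self.writes = 0

    def insert(self, key, record):
        self.rows[key] = record            # write #1: row representation
        for col, value in record.items():  # write #2: column vectors
            self.columns[col].append(value)
        self.writes += 2                   # every logical insert pays twice

    def lookup(self, key):                 # OLTP-style point read
        return self.rows[key]

    def total(self, col):                  # OLAP-style aggregation
        return sum(self.columns[col])

store = DualPurposeStore(["customer", "amount"])
store.insert(1, {"customer": "A", "amount": 100})
store.insert(2, {"customer": "B", "amount": 250})

print(store.lookup(1)["amount"])   # 100
print(store.total("amount"))       # 350
print(store.writes)                # 4 -- double the logical inserts
```

Real systems amortize this cost with delta stores, compression and background merges, but the fundamental trade-off the sketch shows does not disappear: write-side work is added in exchange for read-side analytics speed.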

Through four books “written” by Hasso Plattner, which are littered with falsehoods and serve more as marketing collateral for HANA than as books in the traditional sense, SAP meant to storm the consciousness of prospects with how amazing HANA would be. These are among the first books ever written that describe the invention of something that had already been invented.

Ding Ding Ding!

SAP receives our Golden Pinocchio Award for first purchasing the technologies that underpin HANA, then reverse engineering other databases, and calling it innovation. HANA should be considered a case study in innovation fakery. Why is this not publicly known? The partnership agreements that SAP maintains with other vendors have prevented SAP from being called out for its innovation fakery by vendors that know the truth but are censored by those agreements. The only entity that could cover this story would have to have complete independence from SAP, which also rules out the IT media entities that cover SAP.

Conclusion

HANA is consistent with what is becoming an established history at SAP of exaggerating its innovations and making it appear that ideas and techniques it took from other places were developed inside of SAP. HANA does not run 100,000 times faster than all competitive offerings, as Bill McDermott claimed. It is not an innovative database.

The primary thing that is innovative about HANA is that SAP tells customers that it is innovative. Once you look under the hood, what you have is a far less mature database than other offerings, and a desire by SAP to push competitors out of “its” accounts by using a falsified storyline about how innovative HANA is, aimed at customers that are soft targets. That is, the less database knowledge prospects have, the more traction SAP can gain in those accounts, propagating very large disruptions to its customers while greatly increasing the TCO of the databases they use.

Financial Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions or incentives of any nature are allowed.

References

https://www.oracle.com/a/ocom/docs/ora4sap-dbupdate-5093030.pdf

https://seekingalpha.com/article/4141369-saps-sap-ceo-william-mcdermott-q4-2017-results-earnings-call-transcript

https://www.sap.com/documents/2018/02/2e6393af-f47c-0010-82c7-eda71af511fa.html

https://www.zdnet.com/article/sap-acquires-sybase-for-5-8-billion-but-why/