- John Appleby made bold predictions on HANA.
- How accurate were his predictions for HANA in 2015?
John Appleby’s article on the SAP HANA blog was titled 10 SAP HANA Predictions for 2015 and was published on Dec 15, 2014.
Our References for This Article
If you want to see our references for this article and other related Brightwork articles, see this link.
Lack of Financial Bias Notice: We have no financial ties to SAP or any other entity mentioned in this article.
Prediction #1: Large-Scale Suite on HANA adoption?
“Hasso described very nicely in a comment to the linked blog how he believes he will see large-scale customers adopt Suite on HANA.
A company with a 100 tera byte ERP system on AnyDB will need
– 20 tera bytes using ERP on HANA,
– 10 tera bytes using sERP (no aggregates, no redundant tables, less db indices),
– 2-4 tera for actual data and 8-6 tera for historical data with the business logic based partitioning.
I believe that we will see such systems in 2015, and this will give the typical SAP customer with 5-10TB of ERP the confidence that HANA works for them too.”
This did not happen in 2015. SAP stopped reporting HANA numbers after 2015 because growth leveled out, as we covered in Why Did SAP Stop Reporting HANA Numbers After 2015?
Suite on HANA was only moderately implemented.
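The size reductions in Hasso's quote can be expressed as simple ratios. The following sketch (our arithmetic, using the figures from the quote above; the variable names are our own labels, not SAP terminology) shows the implied claims:

```python
# Hasso Plattner's claimed size reductions for a 100 TB AnyDB ERP system,
# taken from the quote above.
anydb_tb = 100          # starting size on AnyDB
erp_on_hana_tb = 20     # claimed size after migration to ERP on HANA
serp_tb = 10            # claimed size on sERP (no aggregates, fewer indices)
actual_tb = (2, 4)      # claimed range of "actual" data after partitioning
historical_tb = (6, 8)  # claimed range of historical data

compression_factor = anydb_tb / erp_on_hana_tb   # 5x claimed from compression alone
serp_factor = anydb_tb / serp_tb                 # 10x after removing redundancy

print(f"Claimed HANA compression: {compression_factor:.0f}x")
print(f"Claimed total sERP reduction: {serp_factor:.0f}x")
```

In other words, the prediction rests on a claimed 5x compression from the database alone and a 10x total reduction once redundant structures are removed, which is the claim the rest of this section evaluates.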
Prediction #2: HANA for Consolidated Workloads?
“Most HANA deployments today are a single database on a single appliance. Because HANA is so high-value, this has been acceptable, but this will change in 2015.
In HANA SPS09 we have the capability to consolidate databases into a single appliance using containers, which are logically separated and quite secure. Less well-known is that the inter-database security mechanisms have already been built. There are two main scenarios for this:
1) Regional systems with consolidated reporting. You can have regional BW databases on one appliance, and do group reporting on top of the regional systems, with no duplication of data.
2) Consolidation of Enterprise Systems. You can put ERP, CRM, SCM, BW all on one HANA appliance but in separate databases, to retain segregation for security purposes. Already you don’t need the BW PSA, because that is duplication. You can remove some BW data for operational reporting, and federate directly into the other databases.
Neither of these are supported configurations yet, but we will see them in 2015.”
This did not change in 2015 as predicted. Moreover, most companies stopped implementing BW.
Prediction #3: HANA Has Done an Incredible Job Winning Over the SAP Installed Base and Acquired Tons of Good IP from Sybase?
“SAP HANA has done an incredible job of winning over the SAP install base, with over half of the 3000+ customers, running SAP BW (that’s over 20% of the BW install-base, in 2 years). There have been big wins in startups too, with the Startup Focus program attracting around 1500 startups.
However, the focus on the Business Suite has meant that the tools to develop standalone applications haven’t had as much attention. SAP acquired a ton of great IP for this from Sybase and this hasn’t been integrated into SAP HANA yet, including database tooling like bulk loads, and stored procedure languages.”
SAP exaggerated its customer uptake, particularly up to 2015, as we covered in Why Did SAP Stop Reporting HANA Numbers After 2015?
Most HANA implementations have indeed been for BW, which has continued until the present day (2019).
The Sybase acquisition was a failure, as we covered in How Accurate Was SAP on the Sybase Acquisition?
Transaction Processing is 2x Competing DBs — Out of the Box?
“Progress has been made in the Business Suite around optimization – we find that out the box, response times are 2x faster on HANA than other databases for transactional tests.”
No. Even in 2019, our data points still contradict this.
Only a minority of respondents to our HANA survey reported transaction processing as faster than on competing databases.
HANA is the Most Comprehensive Data Platform in the World?
“HANA is the most comprehensive data platform in the world by now, and it requires a lot of documentation. When each SPS is released, I make a point to download it all and consume it, so I have all of the documentation for each release. In SPS06, it was 27 documents and 120MB in size. In SPS09 this has grown to an incredible 89 documents and 375MB. The biggest PDFs are 896, 884 and 828 pages long.
In particular, it can be tricky to figure out the details of what is new in the details of each component. Part of the reason for this is because HANA does so much, but there does need to be a move to simplify the documentation so it is easier to consume.”
HANA is not a data platform. It is a database.
One of the major complaints around HANA is its lack of documentation. So while Appleby may be pointing to voluminous documentation, the commentary is that many things in HANA are not documented.
HANA does not do more than other databases. The issue is that the documentation is confusing and that HANA, particularly in 2014 but even recently, keeps changing, with new components switching in and out.
Prediction #4: BI Suite Integration?
“In 2014, we saw integration of the BI Suite into HANA, Lumira Server, Design Studio for HANA. In 2015 I believe that BI and HANA will become synonymous. There was a HANA version of Explorer in Lumira Server 1.14, but it was removed, and I think we will see something like this resurrected.
We will also see the maturity of EIM, Master Data Management and a full version of the BI Suite embedded into HANA – codenamed Project WHO. For Business Suite on HANA customers, this will mean that BI is embedded into the HANA platform.”
BI Suite on HANA is BW. Lumira is a discontinued product, as we covered in Why Does S/4HANA Need B4HANA for Analytics?.
Design Studio is now barely used. BI is not synonymous with HANA, as most BI/BW implementations are still on Oracle. Master Data Management is a dead product. The last parts of the quote are challenging to validate.
Prediction #5: Overhaul of the HANA Modeler?
“The HANA Modeler has been more-or-less static since SPS04 – Attribute Views, Analytic Views and Calculation Views. It has had enhancements, especially to performance, but it hasn’t had a major upgrade. We will see this in 2015, I believe with a single unified modeler based on the existing Calculation View designer. Analytic and Attribute views will be deprecated.
This will come with the deprecation of CE Functions, and ways to call out to SQLScript, integration direct with Predictive Analysis functions and Business Functions and much more advanced OLAP features. It will provide functionality equivalent to Business Objects universes.”
Views did increase, but the predictive analytics of HANA were either not delivered or never became a relevant factor on HANA projects.
Prediction #6: The End of the Row Store?
“HANA SPS08 quietly introduced a row buffer to the column store, which is one of the reasons why HANA has great OLTP performance. High-frequency updates are inserted into a row buffer, which are then inserted into the delta store in bulk. There are currently very few reasons to use the row store, and SAP often hides it in marketing literature. The two main scenarios are for queues, and configuration tables.
In 2015, we will see the end of the row store, with a unified table construct that will choose the right table format based on data. With this, the CREATE COLUMN TABLE syntax will be deprecated.”
HANA has never had “great OLTP performance.” Earlier, Appleby stated that HANA was not developed to improve OLTP; now, according to Appleby, it was. Moreover, Hasso Plattner's original proposal was that the entire database would be column-oriented. Why, then, was a row buffer added in SPS08? It means the design evolved, and it contradicts the initial claims about HANA. Naturally, without any row-oriented design, HANA would be extremely poor at transaction processing/OLTP.
The row store did not end. HANA still has row-oriented tables.
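The design issue here is worth spelling out. A pure column store must touch every column vector on each single-row insert, which is why a write-optimized buffer helps OLTP workloads. The following is a minimal sketch of that idea (our illustration of the general technique, not SAP code or HANA's actual internals):

```python
# Minimal sketch of a column table with a write-optimized buffer:
# single-row inserts land in a row-ordered buffer and are merged into
# the column store in bulk, amortizing the cost of rewriting columns.

class ColumnTable:
    def __init__(self, columns):
        self.columns = {c: [] for c in columns}  # read-optimized main store
        self.buffer = []                         # write-optimized row buffer
        self.merges = 0                          # count of bulk merges

    def insert(self, row):
        # OLTP-style insert: append to the row buffer, not the columns
        self.buffer.append(row)

    def merge(self):
        # Bulk merge: rewrite the column vectors once for many rows
        for row in self.buffer:
            for col, value in zip(self.columns, row):
                self.columns[col].append(value)
        self.buffer.clear()
        self.merges += 1

t = ColumnTable(["id", "amount"])
for i in range(1000):
    t.insert((i, i * 10))
t.merge()  # one column rewrite for 1000 inserts instead of 1000 rewrites
```

The point is that adding such a buffer is an acknowledgment that a purely column-oriented design is a poor fit for transactional writes, which is the contradiction noted above.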
Multiple Predictions: SAP HANA Predictions for 2015
If you look at the HANA roadmap slides, the “Future Themes” aspect is very telling.
“Continued Big Data & IoT platform enhancements, integration, capabilities and optimization
Continued cloud deployment capabilities & options
New and enhanced advanced analytic capabilities, features, & application services
Continued third-party tool & application support
Continued transformation of SAP Business Suite processes
It’s also telling if you read my SAP HANA Predictions for 2014 post – last year, I predicted a ton of new things that didn’t exist. This year, I predict a refactor of existing functionality, and doing existing things better.
I also believe that 2015 will be a watershed year for SAP HANA. In the last month, I did a pilot of the Business Suite on HANA in a customer. In one week, we migrated a 7TB MSSQL database onto HANA (1.2TB of data in HANA). In another week, we transformed finance and payroll business process. There was no muss, no fuss and no problems. Transactions were 2x faster and analytics were 100x faster and it lays the foundation a transformational change program.
Once customers realize that this is the case, and that HANA just works, and it provides consolidation, TCO reduction and all the new functionality in their Business Suite applications, they will move. I believe the number of customers will double in 2015. Let’s wait and see!”
Appleby was in constant communication with SAP and would have been apprised of new things coming down the pike. So many of these things are not really predictions; they are just information about what SAP was working on at the time.
Appleby and Bluefin Solutions did not migrate a 7TB DB in one week or transform finance and payroll in a second week. The actual implementation times for doing these things are much longer than this. This matches the types of sales claims made by people who don’t spend time on projects and are in sales mode.
Transactions may have been faster, but Appleby leaves out any hardware effect, as we covered in How Much of Performance is HANA? The hardware sized for the HANA database would have been much larger than that used for SQL Server.
Analytics were listed as 100x faster. But this is inconsistent with the messaging from Bill McDermott, who promised a 100,000x performance increase, as we covered in How Accurate Was SAP About HANA Being 100,000x Faster Than Any Other Technology in the World? Even the reported 100x number is likely inaccurate, as the same hardware issue applies and goes unmentioned by Appleby.
Appleby's figures imply a database size reduction of roughly 83% (7 TB shrinking to 1.2 TB). However, information from the field is that the reduction is closer to 30%. Secondly, to get a size reduction of that magnitude, one would need to perform extensive archiving, which would be difficult to do in a week.
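The gap between the claim and the field reports is easy to quantify. The following is our arithmetic, using the 7 TB and 1.2 TB figures from Appleby's quote and the ~30% field estimate cited above:

```python
# Checking the compression claim: 7 TB on MSSQL reduced to 1.2 TB on HANA.
source_tb = 7.0   # size of the MSSQL database in Appleby's account
hana_tb = 1.2     # claimed size of the same data in HANA

reduction = 1 - hana_tb / source_tb            # fraction of data eliminated
print(f"Claimed size reduction: {reduction:.0%}")

# Field reports put the reduction closer to 30%, which would leave the
# same 7 TB database at roughly this size on HANA:
field_estimate_tb = source_tb * (1 - 0.30)
print(f"7 TB at ~30% reduction: {field_estimate_tb:.1f} TB")
```

A ~30% reduction leaves roughly 4.9 TB, not 1.2 TB, so the claimed figure only works if a large share of the data was archived rather than compressed.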
It is also unlikely that there was “no muss and no fuss.” All HANA implementations have both technical issues and long-term high maintenance overhead.
This article scores a 3 out of 10 for accuracy.