A Comparison of SAP HEC with Virtustream Versus AWS

Executive Summary

  • SAP is aggressively pushing HEC (HANA Enterprise Cloud) and other private cloud providers.
  • We compare HEC, delivered through Virtustream, versus AWS.

Introduction

Virtustream and AWS are two differently positioned cloud service providers. AWS, by far the largest cloud provider in the world, was the early pioneer of public cloud. Virtustream is a niche private cloud provider. AWS supports a wide variety of applications, whereas Virtustream is focused on SAP.

The Comparison

AWS supports some SAP applications. However, there are extra complexities to running SAP on AWS. One of the easiest SAP items to run on AWS is HANA. SAP offers images of HANA on AWS, GCP and Azure under a free development license to entice people to use HANA. This does not extend to SAP’s applications.

Virtustream bristles at the comparison to AWS as a direct competitor. This is explained in the following quotation from the CEO of Virtustream.

“While AWS focused on serving scale-out cloud native applications, we architected our cloud platform to solve a different engineering problem: to run scale-up, I/O-centric enterprise applications that require higher availability and security.” – Diginomica

Part of this quote is more marketing positioning than a reflection of reality. Here are some interesting points.

  • Scalability: It is unlikely that Virtustream can scale up more than AWS, or that it has higher availability than AWS.
  • I/O: The points around I/O might be correct.

  • Security: Private cloud/hosting is considered a good fit for customers with the highest security requirements. But it is difficult to make the argument of higher security than AWS, considering all of the security-clearance-focused government entities that use AWS.

The previous points, combined with the fact that Virtustream is a private cloud, mean that Virtustream is a premium-priced offering over AWS. Virtustream uses miniaturized virtual machines, a product it calls μVM, which it claims improves resource utilization. Virtustream claims 250 production instances of HANA and S/4HANA (that is, 1,000 total SAP instances, of which 250 are in production). This seems relatively small for a company that makes a big point about how it specializes in SAP.
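
The mechanism behind the μVM utilization claim can be illustrated with a rough sketch. The numbers below are hypothetical, not Virtustream figures; the point is only that finer-grained VM sizing strands less capacity, because each workload must round up to the smallest VM size that covers it.

```python
import math

def provisioned_gb(workloads_gb, granularity_gb):
    """Total memory provisioned when each workload is rounded up to a
    multiple of the VM size granularity (coarser sizes strand capacity)."""
    return sum(math.ceil(w / granularity_gb) * granularity_gb for w in workloads_gb)

workloads = [3, 7, 12, 5, 9]            # hypothetical workload memory needs, GB
coarse = provisioned_gb(workloads, 8)   # conventional 8 GB VM granularity -> 56 GB
fine = provisioned_gb(workloads, 1)     # 1 GB micro-VM granularity -> 36 GB
print(coarse, fine)
```

Whether this translates into an actual price advantage over AWS depends on pricing, which is the subject of the sections below.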

The Comparison Matrix Provided

We found this comparison matrix sent to a client of ours.

Something of note is that no “cons” are listed for using HEC and Virtustream. However, there are in fact cons to using them. The only compared item with any cons is AWS. The message of this comparison matrix is that AWS is the clear laggard among the compared providers.

This matrix was received from the firm contracted to perform the analysis for the procurement.

This matrix does not do a good job of drawing distinctions between the various cloud providers listed. First, it appears designed to dilute the differences. Second, some of the differences listed in the matrix are inaccurate. The first impression is that this matrix was rigged to push the client down a predetermined pathway. That is the only way the matrix could be so specifically inaccurate.

Let us review each of the line items for accuracy.

  • Instance Size?: According to the matrix, AWS only supports “medium instances.” First, AWS offers very large instances, and these are priced right on its website. We will show the large instance sizes available from the AWS website further on in this document. Second, AWS has very large customers running large instances. All of this is extremely well known among those that work in or research cloud service providers.
  • All Cloud Providers Lead AWS in Automation?: According to the matrix, AWS only has planned automation, while the other providers have automation currently. What services are considered automation in HEC, HPE, and Azure? It is difficult to understand what is meant here without more specifics. However, there are many automated services in AWS. Something to consider is that AWS is the leading innovator among the cloud service providers. There is virtually nothing that other cloud service providers have that AWS does not have. Examples of automation in AWS include Chef for infrastructure automation, AWS Systems Manager Automation, and Ansible for AWS. Overall, Brightwork ranks AWS in the leader category in automation.
  • All SAP Products Work on All Service Providers?: This matrix proposes that all SAP products can be run on all the cloud service providers. That is incorrect. Not all of SAP’s products will run in the cloud. In fact, SAP has very few production instances in the cloud, particularly as a percentage of its on-premises numbers. SAP’s applications have specific limitations in this regard, which need to be overcome during the migration/implementation. Overall, there is a low volume of SAP’s internally developed applications currently residing in the public cloud or even hosted (private cloud). The vast majority of internally developed SAP applications are still delivered on premises. Products like SuccessFactors or Ariba, primarily acquisitions that were in the cloud before SAP acquired them, are designed to run natively in the cloud. The majority of S/4HANA installations are the on-premises version, running on premises rather than in a private cloud/hosted environment.
  • Greenfield Implementations?: Why would AWS only support greenfield implementations? If the issue is one of hybrid cloud, both Cisco and VMware have introduced hybrid cloud extensions to AWS. This matrix may be dated, as both the VMware and Cisco hybrid extensions are recent developments.
  • All of the Cloud Service Providers But AWS Support EMEA and APAC?: Which three cloud service providers have the highest CAPEX in the world? The answer is AWS, Google Cloud and Azure.

Where are the (supposed) missing data centers for AWS in EMEA and Asia?

The following few pages provide coverage of latency testing, which contradicts the statement regarding AWS’s limitations internationally. For those interested in the main topic of the overall comparison, just scroll past this section.

International Latency Testing

However, of the three hyperscale cloud service providers, AWS does lag both Azure and Google Cloud in network connectivity. This topic is covered in research by ThousandEyes, a company that works in networking and network analysis.

“Why AWS chooses to route its traffic through the Internet while the other two big players use their internal backbone might have to do with how each of these service providers has evolved. Google and Microsoft have the historical advantage of building and maintaining a vast backbone network. AWS, the current market leader in public cloud offerings, focused initially on rapid delivery of services to the market, rather than building out a massive backbone network. Given their current position, increasing profitability and recent investments in undersea cables, it is likely that their connectivity architecture will change over time. Enterprises considering a move to the public cloud should consider connectivity architectures to evaluate their appetite for risk while striking a balance with features and functionality. Enterprises should also be aware that even though public cloud backbones are each maintained by a single vendor, they are still multi-tenant service infrastructures that typically don’t offer SLAs. Furthermore, public cloud connectivity architectures continuously evolve and can be subject to precipitous changes at the discretion of the provider.” – ThousandEyes

However, even though there are differences that give Azure and GCP an advantage over AWS, in testing the bi-directional latency was quite similar.

From the US to other regions, the results are quite close, with AWS lagging somewhat in 4 of the 5 comparisons. But the differences are small.

From Singapore, there is no discernible pattern of advantage among the three providers.

Azure appears to lag both other providers. Each provider only has one location in Central and South America.

Therefore, while AWS lags the other service providers in the physical infrastructure sense, the outcome is that AWS records a very similar performance to Azure and GCP.
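
The kind of summary behind this conclusion can be sketched as follows. The latencies below are placeholder numbers for illustration, not the ThousandEyes measurements; the point is how per-provider averages over region pairs end up clustering closely.

```python
from statistics import mean

def avg_latency_ms(pair_latencies):
    """Average bi-directional latency (ms) across region pairs for one provider."""
    return round(mean(pair_latencies.values()), 1)

# Placeholder region-pair latencies in ms -- illustrative only, NOT measured data.
providers = {
    "AWS":   {"US-EU": 80.0, "US-SG": 180.0, "US-SA": 120.0},
    "Azure": {"US-EU": 78.0, "US-SG": 175.0, "US-SA": 130.0},
    "GCP":   {"US-EU": 79.0, "US-SG": 178.0, "US-SA": 122.0},
}
summary = {name: avg_latency_ms(pairs) for name, pairs in providers.items()}
print(summary)  # the averages cluster closely, mirroring the "quite similar" finding
```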

There is no possible scenario where Virtustream would compare favorably against AWS or the other two hyperscale cloud service providers listed in the ThousandEyes study, as Virtustream has far fewer data centers than any of these providers. Even Oracle has only four regions and has historically had a small cloud CAPEX versus the hyperscale providers.

Why, then, is Virtustream listed in the matrix as having a larger geographic scope than AWS? It certainly appears nonsensical.

Conclusion on the Matrix

The matrix provided, which is designed to compare HPE, Virtustream, AWS and Azure, has low accuracy. It is inaccurate to a degree that invites natural curiosity as to why the matrix was developed as it was. Is this a matrix designed to show the technical capabilities of the cloud service providers, or is it intended to highlight those companies that have the most robust partnership with SAP? Finally, why are so many critical technical details missing from this matrix?

Without any partnership or financial relationship with SAP or any of the cloud service providers, the Brightwork matrix of comparison looks quite a bit different.

These criteria are far more relevant for the client when choosing a cloud provider, and this matrix also illuminates the distinctions between a private cloud offering and a public cloud offering.

What is Private Cloud (in Real Terms) Again?

A final problem with the matrix is that it treats private cloud (HPE & Virtustream) and public cloud (AWS & Azure) as if they are almost the same thing. However, they are entirely different.

Private cloud is merely a new term for what has historically been known as “hosted.” For example, IBM and CSC were dominant in this market for decades before public cloud ever existed. However, IBM and CSC have very small revenues in the public cloud. Private cloud/hosted and public cloud providers tend not to play in each other’s spaces. However, nearly all the growth in the cloud is public cloud, not private cloud/hosted. Private cloud/hosted does not scale well and is often described as just “moving the location of the hardware.”

Technology providers know they have to break into the public cloud market, as that is where the overall market is headed (hence the IBM acquisition of Red Hat/OpenShift).

Private cloud is not cloud. The term “private cloud” was created by entities that offered hosting as a way to rebrand their offering into something “cool,” and to piggyback on the innovation and growth that reside primarily in the public cloud.

One cannot get any sense of this from reviewing the matrix above.

The Issue of Private Pricing

One of the requirements of public cloud (sometimes referred to as just “cloud”) is transparent pricing. Through this public pricing, it can be determined that SAP has been marking up AWS cloud services, in some cases that we checked, by a factor of 10x. This client was aware of the price difference between Virtustream direct and Virtustream through SAP, but only because Virtustream had provided a quote on Virtustream services separate from SAP. Ordinarily, SAP would prefer that Virtustream not give a quote to this client. SAP would consider a referral back to SAP, if this client were to reach out to a private cloud/hosting provider, to be “good partnership behavior.” If Virtustream repeatedly circumvented SAP to sell directly to SAP customers rather than giving SAP a markup, Virtustream would immediately disappear from SAP’s recommended providers listing.

The benefit of the public cloud options is that, as the pricing is easily determined (online, from within the cloud service provider consoles), the markup from SAP is a known quantity.
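
Because the list price is published, the markup calculation is trivial. The prices below are hypothetical placeholders, not figures from any actual quote; the sketch only shows how a customer can compute the multiple for themselves once both numbers are known.

```python
def markup_factor(price_via_intermediary, provider_list_price):
    """Multiple of the provider's published list price the customer pays."""
    return price_via_intermediary / provider_list_price

aws_list_monthly = 10_000   # hypothetical published AWS price, USD/month
quote_via_sap = 100_000     # hypothetical quote for the same services via SAP

print(markup_factor(quote_via_sap, aws_list_monthly))  # 10.0 -> a 10x markup
```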

This slide from a presentation by AWS shows a complementary relationship between AWS, SAP and a number of partners. However, the reality is quite a bit different. Not only SAP but many of the partners on this slide intend to significantly intermediate between AWS and the final customer.

This slide (by AWS) shows what is feasible, but not necessarily what is desirable for customers. Under the scenario where the SAP Cloud is the origination, the prices are dramatically higher. The table to the right shows that AWS claims a far more complete offering than GCP or Azure.

All of SAP’s recommendations for cloud come with a markup over the cloud provider, which SAP does not discuss and pretends does not exist.

Why SAP should be marking up cloud services at all is an interesting topic, as SAP will not be doing the work. For example, SAP does not make a margin on the IT department for an on-premises implementation (that is, SAP cannot say, “pay us 2x your internal cost to host our applications”), so it is odd that SAP would ask for compensation for something it has nothing to do with.

The Issue of Dedicated Servers and the Public Cloud

As stated, a primary issue is not explained in the matrix: SAP’s internally developed products (not acquired products like Ariba or SuccessFactors) have traditionally run on dedicated servers. This should not be all that surprising. ECC and BW were developed before the public cloud existed as a delivery mechanism, or what we describe as a “hardware modality.” Therefore, there are several challenges entailed in moving these applications to the cloud.

How SAP Uses a Dedicated Server

A primary reason SAP applications need a dedicated server is that they use a dedicated IP address. Private clouds primarily use dedicated servers. However, AWS now also offers dedicated servers.

Amazon/AWS offers dedicated instances, which means the instance runs on dedicated hardware. One pays a price premium for this, but dedicated hardware is appropriate in some circumstances. It is desirable to choose a cloud service provider that has both cloud/shared and dedicated capabilities. Some people will comment that AWS only offers shared, but that is no longer true, although there is little doubt that the vast majority of AWS’s revenues come from shared services.

Notice the instances recommended by AWS for SAP. They lean towards “Memory Optimized” or “EC2 Bare Metal” or dedicated.

Bare metal is a recent addition for AWS (notice 2H of 2018 in the slide). However, bare metal is far easier to manage than shared resources; it is more a matter of installing the bare metal infrastructure.

The dedicated instance pricing with AWS is also transparent.

There are S/4HANA instances with transparent pricing that follow the BYOL, or “Bring Your Own License,” model. A license purchased from SAP can be run on AWS under BYOL, which provides pricing transparency. Under this approach, there is no intermediary.

The price paid by the client would be the AWS price, which is far lower than if SAP plays the middleman.

The screenshot above shows the largest S/4HANA instance available in AWS before one moves to bare metal (which of course costs more). This configuration has 4 TB of memory and 128 virtual CPUs.
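
A simple sizing check shows how a customer could verify whether such an instance covers their system. The instance figure (4 TB) is from the screenshot above; the workload figure and the 20% headroom factor are hypothetical assumptions for illustration, not SAP sizing guidance.

```python
def fits(instance_mem_gb, required_mem_gb, headroom=0.2):
    """True if the instance covers the database size plus growth headroom.
    The 20% default headroom is an assumed figure, not an SAP rule."""
    return instance_mem_gb >= required_mem_gb * (1 + headroom)

instance_mem_gb = 4 * 1024  # the 4 TB instance described above
print(fits(instance_mem_gb, 3000))  # a hypothetical 3 TB HANA DB + 20% headroom
```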

AWS Dedicated Instances

Dedicated instances are available from AWS. As for the need for dedicated instances, we will address this issue from two directions.

  • Experience from the company AutoDeploy, which migrates customers from on-premises to the public cloud, is that this issue of SAP requiring a static IP is overstated; there are some straightforward things that can be done to work around this restriction.

This need for a dedicated server is waning. The evidence for this is that SAP instances can be brought up directly onto cloud/shared instances.

Even though AWS is the largest cloud services provider in the world, they do not have very many images for S/4HANA. And the ones they do have are previous versions rather than the latest.

This paucity of images extends to BW as well.

Some of the images for SAP aren’t much more than Linux images that have been tuned for SAP. In discussions with companies that move applications to AWS, we find they would likely not use these images, as they are restrictive. Instead, they would just build their own.

  • Something interesting to note is that there are very few images for SAP on AWS outside of HANA and Adaptive Server. This brings up the question of “why.”
  • SAP databases aren’t used outside of supporting SAP applications and SAP reporting, so this shows a large discrepancy between the SAP databases and the SAP applications available on AWS. (Notably, GCP has almost no SAP application images.)

_______________________________________

*Document Note

The coverage on Containers and Firecracker is of a more technical nature than may be digestible for some readers. It is informational for those with the right background. For those less interested in how to optimize bringing up SAP on the cloud, just scroll through this section.

_______________________________________

Using Containers & Firecracker

To place an SAP application on AWS, one would most likely use containers to reduce the number of VMs (virtual machines). Reducing VMs normally has the effect of reducing the price. This is something that could be tested before or during the implementation.
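
The consolidation logic can be sketched with a standard bin-packing heuristic. The container sizes below are hypothetical; the sketch shows why packing containers onto shared VMs reduces the VM count, which is what normally reduces the price.

```python
def vms_needed(container_mem_gb, vm_capacity_gb):
    """First-fit-decreasing bin packing: count the VMs needed to host
    the given containers, each VM having vm_capacity_gb of memory."""
    remaining = []  # free memory left on each VM already opened
    for size in sorted(container_mem_gb, reverse=True):
        for i, free in enumerate(remaining):
            if free >= size:
                remaining[i] -= size  # place container on an existing VM
                break
        else:
            remaining.append(vm_capacity_gb - size)  # open a new VM
    return len(remaining)

containers = [4, 2, 7, 3, 5, 1]   # hypothetical container memory needs, GB
print(vms_needed(containers, 8))  # 3 VMs instead of one VM per application (6)
```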

  • Firecracker is a very lightweight virtualization technology, or virtual machine manager, that is new but already very highly regarded.
  • Firecracker was recently open-sourced by AWS and could also be tested for improving the cost and performance of placing SAP on AWS.

Firecracker combines the virtues of both virtual machines and serverless (or autoconfigured servers).

Firecracker creates extremely lightweight VMs (called microVMs) for the Linux Kernel-based Virtual Machine (KVM).

“The number of Firecracker microVMs running simultaneously on a host is limited only by the availability of hardware resources.” – GitHub
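
That claim can be made concrete with a back-of-the-envelope density estimate. The per-VM overhead figure below is an assumption for illustration (Firecracker's overhead is known to be small, but the exact value varies), and the host and guest sizes are hypothetical.

```python
def max_microvms(host_mem_mib, guest_mem_mib, per_vm_overhead_mib=5):
    """Estimate how many microVMs fit in host memory, given the guest
    size plus a small ASSUMED per-VM overhead (not a Firecracker spec)."""
    return host_mem_mib // (guest_mem_mib + per_vm_overhead_mib)

# A hypothetical 256 GiB host running 128 MiB guests.
print(max_microvms(host_mem_mib=256 * 1024, guest_mem_mib=128))
```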

However, from within SAP Cloud it is feasible to bring up a variety of SAP applications and HANA, including BPC 11, where one has a choice of AWS, GCP and Azure.

  • We brought up S/4HANA 1809. It took 1.5 hours to become available. Some login information was missing, which restricted our ability to access the application.
  • This trial was brought up for testing purposes, but Brightwork recommends against going through the SAP Cloud for moving SAP applications to AWS, GCP or Azure.

Virtustream’s Internal Changes as a Company

The cloud area is growing very rapidly, but Virtustream is not growing its employment in the US and, in fact, appears to be shrinking. Virtustream is moving to a lower-cost staffing model, with less experienced employees and more offshore resources. Overall, Dell’s acquisition of Virtustream (which came through the EMC acquisition) has been a negative for the company and for Virtustream customers.

The following comment, found on Glassdoor, is unheard of in the public cloud space.

“No one is sure of the direction Dell is looking to take this company in, seems they are trying to dissolve. Extremely cut throat atmosphere, major layoffs happening (I heard in the ballpark of 20%). When the most common word that pops up in the company reviews is ‘circus’ that should probably tell you something…”

This quotation also speaks to the reputation of Virtustream as a sales-oriented organization that significantly overpromises its ability to deliver.

  • Customers should expect to have lower performance in operations than in the past, as Virtustream is turning over its more experienced technical resources.
  • What Virtustream was before, as they readied for acquisition, is not the company that they are presently, or will be in the future.
  • This will be an issue for the client as delivery will lag the promises of Virtustream.

Company Profiles

In this section, we will provide a profile of each company.

AWS

“AWS continues to invest and innovate in the cloud services that it offers. It has evolved to include sophisticated tools for development including machine learning capabilities, a wide range of storage options, IoT and mobile platforms and others. AWS has taken a very proactive approach to compliance with GDPR. AWS global footprint continues to expand to satisfy the needs of its expanding customer base and services offered. It now has: fifty-three availability zones across 18 geographic regions, one local region, and has announced plans for 12 new availability zones and four more regions: Bahrain, Sweden, Hong Kong, and a second US GovCloud region.

The AWS Migration Acceleration Program (MAP) is designed to help enterprises migrating existing workloads to AWS. MAP provides consulting support, training and services credits to reduce risk, to build a strong foundation and to help offset the initial costs. It includes a methodology as well as a set of tools to automate and accelerate common migration scenarios.

AWS has a clear and open approach to security and compliance. It has a very wide range of independent certifications for compliance. AWS has led the CISPE code of conduct to provide clarity to cloud customers around the shared responsibilities for compliance with GDPR and to confirm the steps they are taking to support this.

AWS remains the leading IaaS Global service provider, offering the widest range of services across the greatest number of geographies.” – Ahmed Azmi

AWS Strengths

Strong basic IaaS platform

Rich DevOps capabilities

Speed of innovation of new services

Global footprint for availability and compliance

Hybrid / Private deployment support to cloud enable existing workloads

Independent certifications for a wide range of compliance

Strong security – Ahmed Azmi

AWS Challenges

While AWS has made significant progress in attracting enterprise customers, to retain this leadership position, it must continue to enhance its attractiveness to these customers

Competition from other CSPs that are evolving to challenge AWS position. – Ahmed Azmi

Summary

There is a large amount of public information available regarding AWS. This, combined with the ability to directly test AWS (an advantage of all public clouds), provides a low-risk option for the client.

Virtustream

Virtustream, a U.S.-based subsidiary of Dell Technologies, is focused solely on cloud services and software. Virtustream was founded in 2008. It was acquired by EMC in July 2015, and EMC’s managed services and some cloud-related assets were moved into Virtustream before EMC was acquired by Dell in September 2016.

Virtustream’s xStream cloud management platform and Infrastructure-as-a-Service (IaaS) are intended to meet the requirements of complex production applications in the private, public and hybrid cloud. Virtustream is headquartered in New York, NY with major operations in 10 countries.

Virtustream Enterprise Cloud uses patented xStream cloud resource management technology (μVM) to create secure, multi-tenant cloud environments that deliver assured SLA levels for business-critical applications and services. Virtustream provides managed services to help organizations migrate legacy applications to its cloud platform. It also enables production and mission-critical applications to take advantage of Big Data analytics technologies such as SAP HANA and Hadoop, as well as advantages like agility, backup, and disaster recovery offered by cloud computing.

Virtustream Enterprise Cloud offers assured application-level SLAs with up to 99.999% availability. High levels of security are provided as standard, including 2-factor authentication, Intel TXT trusted computing, separate application zones, integrated GRC, and continuous compliance monitoring. Flexible deployment options range from private cloud (on-premises), virtual private cloud, public cloud, and public plus private cloud (hybrid) to trusted federated cloud exchange. The Virtustream offering is SAP certified and is independently certified as compliant with a wide range of regulations and laws. – Ahmed Azmi

Virtustream Strengths

Innovative platform for migration and deployment of complex applications

Managed services available to support this migration and deployment

Strong security and compliance characteristics

Backing from Dell Technologies – Ahmed Azmi

Virtustream Challenges

Focus on enterprise workload migration rather than DevOps.

Differentiating their offering against the major CSPs’ evolution towards enterprise solutions. – Ahmed Azmi

Summary

Virtustream has a limited footprint and inadequate DevOps and automation capabilities compared to AWS. Its rate of innovation is low, and internally the company is facing sustainability issues due to its management.

The key to Virtustream’s differentiation is bundling services like migration, maintenance, and upgrades with hosting, so the customer needs to deal only with Virtustream. AWS has far superior IaaS, global presence, and automation capabilities. Also, AWS offers a much fuller range of services in analytics, databases, and more, so the customer gets access to, and can grow on, AWS’s integrated products. However, AWS provides services via partners, so customers will have to work with two providers rather than one.

Offerings

Virtustream Enterprise Cloud is hypervisor-neutral but typically supports VMware and KVM. It is offered in both single-tenant and multitenant variants; furthermore, it can support single-tenant compute with a multitenant back end, as well as bare metal. VMs are available by the hour, bare metal is available by the month, and both paid-by-the-VM and SRP models are available. The offering embeds a tool for governance, risk management and compliance (GRC) leveraging capabilities from Virtustream’s Viewtrust software. A similar offering, Virtustream Federal Cloud, targets U.S. federal government customers. The Virtustream Storage Cloud offers S3-compatible object storage that can integrate with some EMC storage products. Managed services are optional. Virtustream also offers its CMP, xStream, as software. – Ahmed Azmi

Locations

Virtustream has multiple data centers in the eastern and western U.S., the U.K., France, Germany, the Netherlands, Australia, and Japan. It has a sales presence in the U.S., the U.K., Ireland, Germany, Lithuania, Australia, India, and Japan. Virtustream’s service portal is provided in English, German, Japanese, Lithuanian, Portuguese and Spanish. Documentation and support are provided in English only. – Ahmed Azmi

Cautions

Virtustream’s roadmap is inextricably tied into other Dell entities, such as VMware, EMC, and Pivotal, which each have their own sets of differing, and possibly competing, priorities. Customers should treat Virtustream as a specialized provider for the workloads that suit the strengths and weaknesses of its technology platform.

Although Virtustream supports self-service capabilities, it primarily targets complex, mission-critical applications where it is likely that the customer will purchase professional services assistance for implementation, and managed services on an ongoing basis.

Virtustream is a compelling and unique provider for particular enterprise application use cases, but it is better suited to implementations where an environment will be carefully and consultatively tuned for the needs of particular applications, rather than general-purpose environments where workloads are deployed without oversight. Prospective customers should ensure that they have a clear understanding of roles and responsibilities and that their expectations match what is actually written in the contract. – Ahmed Azmi

Gartner’s View of Virtustream and AWS

Observe the discrepancy between AWS and Virtustream. In many Gartner Magic Quadrants, the fees paid to Gartner are instrumental in determining the ranking; however, these MQs match our viewpoints, and neither AWS nor Google are significant contributors to Gartner. The funds paid by Microsoft do show their impact: we do not see Microsoft as anywhere near AWS, and they are significantly behind Google/GCP in their offering, although Microsoft/Azure is larger due to Microsoft’s ability to cross-sell existing customers into Azure.

Conclusion

Whichever way the client decides to go, Brightwork thinks it is critical that the client has an accurate representation of these two providers.

A large part of the decision is also related to how much the client wants to take control of its cloud services, versus having everything managed for it. Virtustream, by the nature of its business model, will provide less transparency into the cloud services than AWS, as AWS is entirely open to customers. With Virtustream, the infrastructure and the services are combined in one company. With AWS, one can select from many partners, ranging from “white-glove” firms to smaller partners that tend to provide more specific technical support. Virtustream and AWS appeal to very different types of customers.

The Problem: Bad Advice on the Cloud

It is not only Virtustream; all of the private cloud providers are extremely expensive compared to the public cloud. This is explained in the following quotation.

“For this very reason, ‘Cloud Only’ is a predicament: There are just too many questions left unanswered. What is more, HEC, HCP and SCP are much more expensive than clouds from AWS, Microsoft and Google, but do not have more or even any consulting services that would justify this difference.

Some SAP partners have been noticing the first customers turning away from SAP clouds and to AWS, Azure and Google with their help.”

The public clouds are entirely competitive, and private cloud makes the customer dependent on the private cloud provider. This is the problem with taking cloud advice from SAP or any SAP consulting partner. The result will be a terrible outcome for the customer and a high degree of profit and account control for SAP and the SAP consulting partner.

Being Part of the Solution: Getting Independent Advice on SAP Cloud Proposals

SAP only recommends cloud options that SAP can mark up. And SAP consulting companies simply repeat whatever SAP says to customers, no matter how bad the value is, as they are obligated to as SAP partners.

This means there is no independent advice-giving entity that will contradict what SAP says on the vast majority of SAP sales proposals. Gartner only provides generalist advice and stays out of negotiation support, as it is paid by SAP. We offer completely independent research and advice on SAP and have the largest SAP research database anywhere. This advice has been our clients’ secret weapon when evaluating SAP’s cloud options.

Financial Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP, or their competitors. As part of our commitment to publishing independent, unbiased research, no paid media placements, commissions, or incentives of any nature are allowed.

References

https://marketo-web.thousandeyes.com/rs/thousandeyes/images/ThousandEyes-2018-Public-Cloud-Performance-Benchmark-Report.pdf

https://e3zine.com/2018/12/07/sap-erp-cloud-computing/

https://en.wikipedia.org/wiki/Virtustream

https://www.virtustream.com/solutions/solutions-for-sap

https://diginomica.com/2018/05/08/virtustream-dells-forgotten-cloud-designed-legacy-enterprise-applications/

https://www.crn.com/news/cloud/300084167/googles-new-sap-agreement-puts-heat-on-growth-hungry-virtustream.htm

https://www.virtustream.com/blog/five-gotchas-when-migrating-to-hana

https://autodeploy.net/

https://www.chef.io/partners/aws/

https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-automation.html

https://aws.amazon.com/blogs/opensource/firecracker-open-source-secure-fast-microvm-serverless/

https://github.com/firecracker-microvm/firecracker/blob/master/docs/design.md

https://serverless.com/blog/firecracker-what-means-serverless/

https://www.ansible.com/integrations/cloud/amazon-web-services

https://www.slideshare.net/AmazonWebServices/track-1session-2sap-on-aws-running-your-critical-workloadspdf

For example, Brightwork estimates that roughly 10% of S/4HANA implementations are S/4HANA Cloud.

AWS and Google Cloud Book

How to Leverage AWS and Google Cloud for SAP and Oracle Environments

Interested in how to use AWS and Google Cloud for on-premises environments, and why this is one of the primary ways to obtain more value from SAP and Oracle? See the link for an explanation of the book. This is a book that provides an overview that no one interested in the cloud for SAP and Oracle should go without reading.

How to Understand Thought Leadership Transition from SAP and Oracle to AWS and Google Cloud

Executive Summary

  • AWS and GCP have totally taken over thought leadership from SAP and Oracle.
  • What this transition means for the cloud.

Introduction

For a time Oracle provided a distinctly differentiated product to the market in their Oracle database. For a time SAP provided a distinctly differentiated ERP system. These two developments gave those companies great power. However, at this point, those products are nowhere near as distinct as when they were first introduced. Also, SAP and Oracle have grown into difficult vendors to manage, with enormous senses of entitlement over the IT budget, and with both vendors pushing the envelope of what is legal to achieve their all-consuming revenue objectives.

Furthermore, if you implement SAP and Oracle’s products as they and their consulting partners stipulate, the result is the highest TCO in the industry. SAP and Oracle want this TCO hidden, and IT analysts and SAP and Oracle consulting partners are only too happy to help SAP and Oracle keep this information quiet. SAP and Oracle want pricing kept secret, so that pricing can never be determined without a lengthy interaction with their sales representatives. AWS and Google Cloud offer price transparency because they aren’t software vendors, but service providers.

A Menu of Options

AWS and Google Cloud offer a menu of options, with prices communicated to customers in real time for various service configurations. The customer chooses, and AWS and Google Cloud are happy to make money from any of them. We spin up AWS and Google Cloud services without ever talking to an AWS or Google Cloud sales rep, and you know what?

We don’t miss them. In fact, if we never interact with an Oracle or SAP sales rep again, that would be a good thing.

The following quote from Denis Myagkov further illuminates this.

“I think that SAP’s and Oracle’s myth department is propelling they database solutions without any context. It’s pretty weird to compare one database with another and not mention of its application. Any database is only a way to store some data somewhere and somehow and here we have the huge gap – what system will be consumer of they databases?

AWS and Google act like good merchants, they simply propose an assortment of different databases for developers. Maybe I’m wrong, but I prefer to choose tools for the task, but not vice versa.”

And this is the issue. When we debate SAP, and even more so Oracle, database resources, what we get back is how deep the Oracle database is in this or that, or how it is used by the World Data Center for Weather or some other upper-tier case study (with all of the upper-tier case studies of open source databases ignored). However, the database runs on the IaaS, and the IaaS enables the database to do things, or it sets the boundaries for what is possible (for example horizontal scalability, which is multi-location and based upon the IaaS).

Should anyone be surprised?

Because the database is what SAP and Oracle have to sell, as they have not figured out IaaS beyond having offerings that function more as propaganda (making Wall Street think there is something there, making them seem hip and cool, etc.) and that lead the industry in licenses purchased as shelfware.

Loving Bare Metal

Oracle loves promoting bare metal. Unsurprisingly, bare metal is what Oracle is offering, as they cannot do the sophisticated things with multitenancy, etc. that AWS or Google Cloud can do. This is equally true of SAP, whose internally developed products are designed to work on dedicated rather than virtualized servers. Bare metal is hosting; it is not cloud. If hosting were the answer, IBM and CSC would be rising, instead of being companies that barely anyone talks about with respect to the cloud.

Let us say that a salesperson wants to sell you an engine out of the context of the value it provides to you. They could discuss its technical specifications. For example, it could be a very powerful engine (a selling point the salesperson chooses to emphasize). It may produce 1,000 horsepower. It may have a fantastic compression ratio, and so on. However, what about how it fits within the car and the daily use of that car? The salesperson can go into a lengthy soliloquy discussing very narrow characteristics of the engine. Pretty soon, if you listen to that salesperson, you will put that engine in your economy car. After all, it’s a great engine! Sports car advertisements are similar in that they sell a dream of a car out on an open country road, a car that provides a very different experience in traffic, where you might prefer more leg room and an automatic. The car may go 200 mph, but by the way, the speed limit is 65 mph, and 45 to 55 mph with traffic. That is the danger in listening to a salesperson who has something to sell and only that particular thing to sell.

Hasso Plattner’s Context Free Selling

Hasso Plattner engaged in this type of context-free selling when SAP introduced its HANA database. First, nearly everything he said about HANA was untrue, as we covered in the article When Articles Exaggerate SAP S/4HANA Benefits. However, let’s say for a moment it was all true. Even if true, it would not improve the condition of the user as Hasso and SAP have proposed. And it certainly would not be worth the price, maintenance overhead, and indirect access implications.

SAP and Oracle both like to pretend the car/road or the IaaS is immaterial to the discussion, and that the primary focus should be what they have to offer, which are applications and databases. And only commercial databases and applications, of course, no open source databases or applications are to be considered.

The Move Away from Proprietary Hardware

A critical component of AWS and Google Cloud is the ability to move away from proprietary hardware. AWS and Google Cloud have amazing economies of scale in hardware and data center technology and management. How could a company put together a hardware setup that is competitive on price or flexibility with AWS/Google Cloud? Those data centers have untold economies of scale. It’s like mass production, versus a job shop for an IT department. If we look at a big company, say Chevron, they are still not going to have the scale or competence of AWS/GCP. Does anyone look to Chevron for technology? Of course not.


Our Experience with Migrating the Brightwork Explorer Application to AWS

Executive Summary

  • We migrated our application Brightwork Explorer to AWS.
  • We describe our experiences with AWS and what they mean for future application migrations.

Introduction

Brightwork Research & Analysis recently engaged in a development effort to create an application that could help companies monetize forecast error (rather than using the typical forecast error measurements) and which calculated parameters for MRP systems.

The software addresses two basic problems.

  1. Forecast accuracy measurements are often not performed in companies in a way that enables the company to focus on what to improve, and the forecast error measurements used are usually so abstract and divorced from the financial impact that the wrong items can end up receiving the emphasis for forecasting improvement.
  2. In all the cases that we have seen over decades of working with MRP systems, they have had poorly optimized parameters (things like safety stock, reorder point, and economic order quantity). MRP systems (usually executed within ERP systems) do not have good ways of managing these parameters.
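The MRP parameters named above have standard textbook formulas. The sketch below (with hypothetical demand and cost figures, not Brightwork’s actual method) shows how economic order quantity, safety stock, and reorder point are typically calculated:

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Classic economic order quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

def safety_stock(z, demand_std_per_period, lead_time_periods):
    """Safety stock under normally distributed demand over the lead time."""
    return z * demand_std_per_period * math.sqrt(lead_time_periods)

def reorder_point(avg_demand_per_period, lead_time_periods, safety):
    """Expected demand during lead time plus the safety buffer."""
    return avg_demand_per_period * lead_time_periods + safety

# Hypothetical inputs: 12,000 units/year, $50 per order, $2/unit/year holding.
q = eoq(annual_demand=12000, order_cost=50.0, holding_cost_per_unit=2.0)
# z = 1.65 corresponds to roughly a 95% service level.
ss = safety_stock(z=1.65, demand_std_per_period=40, lead_time_periods=4)
rop = reorder_point(avg_demand_per_period=100, lead_time_periods=4, safety=ss)
print(round(q), round(ss), round(rop))
```

None of these values are hard to compute; the problem described above is that MRP systems rarely recalculate them against current demand and cost data.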

We consider the application that was developed to be a “Jiffy Lube” for forecasting and supply planning systems, allowing for an external analysis in a flexible tool that can then provide values that help those supply planning and demand planning systems work better.

Because of our background in SAP, the Brightwork Explorer was first targeted towards companies that ran SAP ERP or SAP advanced planning tools. But the application is vendor-neutral, as all systems, regardless of vendor, use similar inputs to those calculated by the Brightwork Explorer. At this point, we were relying on CSV file import and export and were considering an interface to a particular application as a future development.

We named this application the Brightwork Explorer, and we thought it would be instructive to describe our experience in migrating our application to AWS.

What the application does is less important in this story than how we decided to leverage AWS.

The Development of the Brightwork Explorer

We began the development of the Brightwork Explorer locally. The development occurred over video conferences, with the coding performed in real time: the developer logged into the designer and business process owner’s computer and coded there, with some of the work performed by the developer on his own computer. After the application was ready to be shared with users, the development was migrated to AWS. We chose PostgreSQL as our database and S3 to store uploaded files. We also started with a small server configuration, as we would be breaking the application in with a small number of customers and beginning with smaller test files.

Important Benefits from Using AWS for Our Application

One of the critical business model decisions we made was to offer a cloud application.

This had three important benefits.

  1. Lower Barrier to Implementation: We would never have to worry about getting a customer to implement our software on their server. They would not have to buy the software, but could first test it to see if it was a fit for their needs. This allowed us to make the software available to far more customers.
  2. Upgradeability and Reduced Maintenance Overhead: We planned to make many upgrades to the software, particularly in the first year, and would never have to worry about previous versions of our software “floating around” eating up maintenance efforts and we could make our changes directly to our multi-tenant application.
  3. Accessibility: We could directly access any client’s data so that we could provide support. This significantly reduced our overhead. This was also true for companies that used the software. The Brightwork Explorer creates simulation versions, and those simulation versions can be shared among various users within one company.

Each time a combination of settings is saved in the Brightwork Explorer, it is saved as a simulation. This is a combination of settings that results in an aggregate number of values for inventory, costs, profits, pallet spot consumption, and several other critical items. The application allows simulations to be saved so others can review them.
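A simulation of this kind can be thought of as a saved settings object with its resulting aggregates attached. The sketch below is purely illustrative; the field names are assumptions, not the actual Brightwork Explorer schema:

```python
from dataclasses import dataclass, field

@dataclass
class Simulation:
    """One saved combination of planning settings and the aggregate
    results it produced (illustrative fields only)."""
    name: str
    settings: dict                # e.g. service level, lead times
    inventory_value: float = 0.0
    total_cost: float = 0.0
    profit: float = 0.0
    pallet_spots: int = 0
    shared_with: list = field(default_factory=list)

    def share(self, user):
        # Simulations can be reviewed by other users within one company.
        self.shared_with.append(user)

sim = Simulation("baseline", {"service_level": 0.95},
                 inventory_value=1.2e6, total_cost=3.4e6,
                 profit=0.9e6, pallet_spots=410)
sim.share("planner@example.com")
```

The point of saving the whole settings-plus-results bundle is that two planners can compare complete scenarios rather than individual parameter values.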

After we decided on the cloud, AWS was the natural choice for us, and it is the configuration we leveraged for the Brightwork Explorer. Those were the initial reasons for moving the Brightwork Explorer to AWS. For our purposes, we could also have used Google Cloud, but our developer for the Brightwork Explorer was more familiar with AWS, having used AWS for several applications in the past.

How Did the Brightwork Explorer on AWS Work in Practice?

All of these initial reasons turned out to be true. Once we deployed the application, we also received benefits beyond those listed above.

Easy Collaboration With Our Developer

We are not in the same country as our developer, so AWS became a shared space for us to collaborate.

  • As it is our application, we controlled our AWS account and paid the AWS billing, but provided our developer with an account.
  • If we needed another developer to add something, we could create a new account for them to allow them access.
  • We had several types of collaboration. One was the collaboration with our customers. This was feedback provided by using the Brightwork Explorer application.
  • From this feedback, we determined what changes needed to be made.
  • We were testing a new application and “guinea pigging” our first customers (and telling them this, and not charging them for accessing the application in its early state).

One of the changes that we needed was the ability to delete simulations. We asked the developer to make this change at 12:00 PM, and by 2:02 PM it had been applied, along with a second change related to date management. We refreshed the screen and were able to access the changes. There was no downtime.

Scalability

The second area of benefit from using AWS was scalability. We were going to be initially testing the software with just a few companies. Therefore, there was little need to pay for a higher-performance, higher-volume configuration on AWS, given the limited use.

  • So we kept our costs low in the beginning.
  • We will be able to scale the Brightwork Explorer to any size as new customers come on board.
  • This also allowed us to test the response from the market with minimal financial investment.

Conclusion on This AWS Case Study Experience

From multiple dimensions, we were quite pleased with our experience in setting up our application on AWS.

  • Previously the Brightwork Explorer had existed as a series of R and Python scripts that we ran on projects. Without a reasonably easy way to deliver the business logic through something like AWS or Google Cloud, we most likely would never have developed the application.
  • The effort involved in maintaining such an application would have been overwhelming, and the Brightwork Explorer is only one of the things we work on, so we could not have it consume that much of our time. Indeed, if we had to distribute on-premises versions of the Brightwork Explorer, we would have lost interest in investing in the application and commercializing what we think is an essential calculation for forecasting and supply planning systems for broader distribution.

Overall, we are quite pleased with our AWS experience. It is a small application and certainly not an example of anything that leveraged advanced features of AWS, but it allowed us to get our requirements met quickly and to begin deploying anywhere. While reviewing an AWS document on migration, we found the following quotations.

“Democratize advanced technologies: Technologies that are difficult to implement can become easier to consume by pushing that knowledge and complexity into the cloud vendor’s domain. Rather than having your IT team learn how to host and run a new technology, they can simply consume it as a service. For example, NoSQL databases, media transcoding, and machine learning are all technologies that require expertise that is not evenly dispersed across the technical community. In the cloud, these technologies become services that your team can consume while focusing on product development rather than resource provisioning and management.

Go global in minutes: Easily deploy your system in multiple Regions around the world with just a few clicks. This allows you to provide lower latency and a better experience for your customers at minimal cost.”

We found both of these proposals to be true of our implementation on AWS.


References

[i] This quotation provides an important distinction on virtualization. “It’s important to note that virtualization environments typically lack key capabilities of cloud systems – such as self-service, multi-tenancy governance, and standardized instances.” – https://assets.rightscale.com/uploads/pdfs/Designing-Private-and-Hybrid-Clouds-White-Paper-by-RightScale.pdf

[ii] https://aws.amazon.com/storagegateway/


How to Understand AWS Services for On Premises with VMware Cloud for AWS for Hybrid Cloud

Executive Summary

  • AWS is being combined with VMware Cloud for AWS to provide hybrid cloud capabilities.
  • What this means for on-premises environments.

Introduction

AWS is becoming increasingly popular, but until very recently there was an extra hurdle if you wanted to introduce AWS into your on-premises environment. And most of the infrastructure globally is still, by far, on-premises. There are an enormous number of virtual machine (VMware) instances in corporate data centers, and on-premises virtualization is one of the paths that allows companies to leverage the cloud.

The advantages of using VMware include the following:

“1. Cost Reduction: While most, if not all, of the remaining advantages, also contain the potential for cost savings, this deserves a mention of its own. Direct cost reductions come in the form of server consolidation, and operational efficiencies that are not available otherwise.
2. Optimize Licenses: Oracle and SQL Server are two prime examples of RDBMS products that license according to underlying hardware specifications. By optimizing the hardware, we can reduce the license footprint, while still gaining the other advantages listed here.
3. High Availability: HA solutions for Oracle and SQL Server databases are complex, costly, difficult to maintain, or altogether non-existent. VMware provides a lower cost, yet highly effective HA solution that is optimal for many workloads.
4. Recoverability: If your DR solution is for the database only, you are missing out on DR coverage for the rest of the application stack required to make that database work. VMware encapsulates the application stack and, when coupled with proper configuration and testing of replication, provides as close to perfect recoverability as any solution can get.
5. Development Process: Developers need systems and data that are like production in every way possible. When they get quick access to production-like development environments, development time is reduced, and features are more robust on release.
6. Test/QA: The optimum environment to test on is production itself. That can be risky, however, since the production system needs to keep your organization running. VMware allows you to provide an exact copy of the production environment to the Test/QA team without impacting your critical systems.
7. Deployment Flexibility: Do you want to move your database workload from one licensed server to another? Do you want the advantages of a private, hybrid, or public cloud environment? Do you want to create live datacenters that are thousands of miles apart, and be able to swing back and forth between them? VMware virtualization, coupled with stretch storage solutions from EMC can facilitate these options.
8. Security: Not only does VMware create a minimalized attack profile with its small footprint hypervisor, but also advanced security features like Trusted Execution Technology from Intel allow virtual machines to validate security down to the chip level.”
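The license-optimization point (item 2 in the quote above) is ultimately simple arithmetic: fewer licensed cores means fewer licenses. Here is a hedged sketch, with hypothetical host counts, an assumed 0.5 core factor, and an assumed per-license price; actual core factors and prices vary by contract and vendor policy:

```python
def license_cost(physical_cores, core_factor, price_per_license):
    """Per-core licensing: licensable cores times the list price."""
    licenses = physical_cores * core_factor
    return licenses * price_per_license

# Hypothetical scenario: ten underutilized 16-core hosts consolidated
# onto three well-utilized 16-core virtualized hosts.
before = license_cost(10 * 16, core_factor=0.5, price_per_license=47_500)
after = license_cost(3 * 16, core_factor=0.5, price_per_license=47_500)
print(before - after)  # license footprint reduction from consolidation
```

Note that some database vendors restrict how virtualized cores may be counted, so the achievable savings depend on the specific license agreement.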

Changes Afoot on the On-Premises VMs

As we discussed in the book, there are many changes afoot with these on-premises VMs.

  1. The number of VMs is being reduced as containers are becoming more prevalent, and containers reduce the need for as many VMs. This reduces “VM bloat,” and has the positive consequence of increasing server capacity due to the lower hardware overhead of containers.
  2. VMs are increasingly ported between on-premises and cloud, whereas in the past they stayed on premises. This trend intersects with containers, as containers are more portable than VMs.

AWS, Google Cloud and Azure are based on Linux LXC, which is operating-system-based containerization. A major contributor of containerization code to the Linux kernel was Google.

Yes, you heard it right.

Azure on Linux?

Microsoft Azure works on Linux! All clouds are based on virtualization, and every single server works as an application server, be it a SQL database, a NoSQL database, a video broadcaster, or anything else. Cloud providers assign specific instances to servers in proportion to the resources they demand.

And here is the problem. Virtualization or containerization is not applicable to SAP. Most of SAP’s products require their own server. The only choice is the physical location of such a server (cloud or on-premises).

This is why SAP’s inclusion of VMs and containers in the SAP Cloud is more brochureware than anything serious that customers will end up leveraging.

And again, those VMs and containers are actually on AWS, Google Cloud or Azure, with SAP not offering a PaaS and not doing much more than marking up AWS, Google Cloud and Azure services, in many cases by a factor of ten (as we covered earlier in the book).

Infrastructure Efficiency

Curiously, while VMware created great opportunities for infrastructure efficiency, SAP environments suffer from deep inefficiencies because of the adoption of far lower quality infrastructure tools that are proposed to IT departments as “standard.” On SAP projects the client always tells us to make changes in their development system or their quality system. However, once we begin using the development or quality system, we find that the systems are useless because they are not maintained.

On SAP projects it is common for all of the boxes to be years out of sync with each other. If VMware were being used, and used correctly, these old instances would be blown away and replaced by recent copies of production. However, only some SAP components, like the SAP APO optimizer, can be placed on virtualization. This means that on SAP projects it is commonly thought that a full system refresh is not possible; instead, these SAP environments use an SAP tool to do a “system copy.”

The system copy tool that is used is the one recommended by SAP. And the outcome of this system copy tool? All of the boxes on SAP accounts are far out of date with production, and asking for a recent copy is like asking for the moon. As we have been told on SAP projects…

“It would be nice to make a complete duplicate of an SAP box, but it is not possible.”

This is odd because, on projects where SAP infrastructure tools are not used, this is commonplace. A new image can be made in less than an hour, yet for SAP customers controlled by the SAP paradigm, doing it is something that is planned for weeks. For some strange reason, within SAP environments only the SAP System Copy tool is used.

Why would that make any sense?

VMware Versus SAP’s Horrid Infrastructure Tools

VMware copies the entire stack. SAP pushes customers to use SAP software over all non-SAP software because the SAP software is “standard.” VMware is not standard. If SAP System Copy is the official tool for SAP, then according to SAP consulting firms, customers should implement it. After decades, these SAP consulting companies and SAP “Platinum Consultants” will attest that the best course of action is to go 100% SAP. Anything non-SAP can be critiqued by stating “it is not standard SAP.”

VMware benefits from being used across all environments, and VMware the company is the premier virtualization vendor in the world, and is thus capable of all manner of virtualization that SAP is not capable of performing. This gets into an infrastructure efficiency question around what is called SAP Basis. With Basis on SAP projects, everything seems to take so long and is so hidden. However, when one pops around AWS or Google Cloud, everything is right there, and one has transparency. The fact is that SAP infrastructure efficiency is very low. It is controlled by Basis resources who are accustomed to the SAP ways of doing things.

There is a growing chorus of companies with on-premises environments that would like to leverage AWS to at least some degree. In response to this, AWS introduced VMware Cloud on AWS.

VMware Cloud on AWS is a full VMware vSphere cloud that is deployed and runs on AWS.

vSphere is VMware’s environment for hybrid clouds. vSphere supports many different types of workloads (3D graphics, big data, HPC, machine learning, in-memory, and cloud-native). vSphere has been around for years and is widely used. VMware Cloud for AWS is an extension of vSphere for AWS.

Using VMware Cloud for AWS allows on-premises companies to leverage AWS more easily from a privately hosted VMware deployment in a data center. VMware Cloud for AWS brings AWS’s Relational Database Service (RDS) to these corporate data center customers, which is designed to make the transition to the cloud more accessible. All the standard databases apply. It also allows access to AWS Lambda, Simple Queue Service, S3, Elastic Load Balancing, DynamoDB, Redshift, and much more.

This is a major development, as it increases the speed and simplicity of managing a migration to AWS.

How AWS Supports the Hybrid Cloud

AWS began in the cloud, and for the longest time the argument was between cloud providers like AWS and Google Cloud versus on-premises. However, AWS’s value proposition has begun to extend to on-premises environments, and the natural connection is through the on-premises virtual machine.[i] This is a massive change in the value offered by AWS. When AWS extends its services into on-premises environments, it is leveraging its software rather than offering its hardware. Hybrid cloud is when workloads and data reside both on-premises and in the cloud.

AWS has clearly targeted growth into the on-premises environment. The following are a few more examples:

  1. AWS Firehose
  2. AWS EC2 or Elastic Compute Cloud
  3. AWS CodeDeploy
  4. AWS Storage Gateway
  5. CockroachDB/AWS RDS

Let us briefly get into each one.

AWS Firehose

AWS Firehose is used in hybrid environments to move the actual data to AWS. VMware Cloud on AWS provides a high-performance network through the Amazon Virtual Private Cloud (VPC).

AWS EC2 or Elastic Compute Cloud

The EC2 (Elastic Compute Cloud) console can be used to manage on-premises instances. This means that a single console can provide a company with a full picture of what it is managing. It costs AWS very little to do this, but it makes it even more likely that the customer will enable more AWS services.

AWS Storage Gateway

AWS Storage Gateway can be used for backup and archiving, disaster recovery, cloud data processing, storage tiering, and migration.

This is covered in the following quotation from AWS.

“The gateway connects to AWS storage services, such as Amazon S3, Amazon Glacier, and Amazon EBS, providing storage for files, volumes, and virtual tapes in AWS. The service includes a highly-optimized data transfer mechanism, with bandwidth management, automated network resilience, and efficient data transfer, along with a local cache for low-latency on-premises access to your most active data.”[ii]

The AWS Storage Gateway works in conjunction with (as one example) VMware on the on-premises server. It then connects to the AWS Storage Gateway service, which enables the services on AWS.

AWS CodeDeploy

AWS CodeDeploy is a deployment service that deploys not only to compute services such as Amazon EC2 and AWS Lambda but also to on-premises servers.

This is covered in the following quotation.

“AWS CodeDeploy makes it easier for you to rapidly release new features, helps you avoid downtime during application deployment, and handles the complexity of updating your applications.”

CockroachDB/AWS RDS

With multiple databases or database copies being deployed in various regions in the cloud and on-premises, several approaches have come forward to manage this issue.

For instance, both CockroachDB and Spanner are sold on the concept of “horizontal scalability.” This is very different from vertical scalability, which is more commonly discussed and is how large a database can become without losing its initial performance characteristics.

How Cloud Spanner compares to other database categories (according to Google).

Both AWS RDS and CockroachDB allow one copy of the database (called a node) to be on-premises, with other nodes of the database residing in the cloud, with the nodes kept in sync, therefore allowing full distribution of the database. A combination of nodes makes up a cluster. AWS RDS and CockroachDB can be used on the public cloud or the private cloud, and hence are presented as also supporting hybrid clouds. The difference between the two approaches is that AWS RDS employs a master-slave model, where one database is the master and copies changes to the slave databases, while CockroachDB operates under a design where all the nodes are equal.

(Initial image from CockroachDB)

This places the node closest to the customer. For a company with an on-premises node, having a multi-node database like CockroachDB in the cloud can serve users in regions of the world where the company has no data center. This has a meaningful impact on speed and is a perfect example of one of the benefits of a hybrid cloud, even for companies that plan to keep their on-premises location or locations.

This “need for speed” is covered in the following quotation.

“This problem is compounded by the fact that latencies quickly become cumulative. If your SLAs allow for a 300ms round trip between an app and a database, that’s great––but if the app needs to make multiple requests that cannot be run in parallel, it pays that 300ms latency for each request. Even if that math doesn’t dominate your application’s response times, you should account for customers who aren’t near fiber connections or who live across an ocean: those 300ms could easily be 3000ms, causing requests to become agonizingly slow.

If you need a gentle reminder as to why this matters for your business, Google and Amazon both have oft-cited studies showing the financial implications of latency. If your site or service is slow, people will take their attention and wallets elsewhere.”
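The arithmetic in this quotation can be sketched in a few lines. This is an illustration only; the 300 ms and 3000 ms round-trip times come from the quotation above, and the five-request workload is a hypothetical example.

```python
# Sketch of how per-request latency compounds when an application must make
# several sequential (non-parallelizable) round trips to a remote database.

def total_latency_ms(round_trip_ms: int, sequential_requests: int) -> int:
    """Latency paid when requests cannot be run in parallel."""
    return round_trip_ms * sequential_requests

# A page that needs 5 dependent queries against a database 300 ms away:
print(total_latency_ms(300, 5))   # 1500 ms before any server-side work
# The same page for a user across an ocean (3000 ms round trips):
print(total_latency_ms(3000, 5))  # 15000 ms, "agonizingly slow"
```

The point of the quotation is precisely this multiplication: the SLA allows for one round trip, but a chatty application pays the round trip many times over.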

The big selling point of multi-node databases is the ability both to keep an application up during the failure of one of the nodes and to recover automatically afterward: when the failed node is brought back up, it is automatically synchronized with the other good nodes.

The example from the CockroachDB website shows a node as “suspect” in the upper right-hand corner of the user interface. Of course, with CockroachDB, one (or more) of the nodes can be on premises.

In addition, CockroachDB will allocate the traffic that would ordinarily be directed to Availability Zone 1 to Availability Zone 2. Client A is not directed back to Availability Zone 1 until the changed data from Node 2 and Node 3 (which would be kept in sync) is updated to Node 1. This means the database is “inherently high availability.” That is, no special configuration is necessary to make it high availability.
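The failover behavior described above can be sketched as a small simulation. This is not CockroachDB's actual implementation, only an illustration of the routing logic: writes replicate to the surviving nodes, clients are served by a healthy replica, and the failed node rejoins once it has been resynchronized.

```python
# Minimal sketch (illustrative only) of failover and resynchronization in a
# multi-node database. Node and key names are hypothetical.

class Node:
    def __init__(self, name):
        self.name = name
        self.healthy = True
        self.data = {}

class Cluster:
    def __init__(self, nodes):
        self.nodes = nodes

    def write(self, key, value):
        # Replicate the write to every healthy node.
        for node in self.nodes:
            if node.healthy:
                node.data[key] = value

    def route(self, preferred):
        # Serve from the preferred (closest) node if healthy,
        # otherwise fail over to any healthy replica.
        if preferred.healthy:
            return preferred
        return next(n for n in self.nodes if n.healthy)

    def recover(self, node):
        # Bring the node back and resynchronize it from a healthy replica.
        source = next(n for n in self.nodes if n.healthy)
        node.data = dict(source.data)
        node.healthy = True

node1, node2, node3 = Node("az1"), Node("az2"), Node("az3")
cluster = Cluster([node1, node2, node3])

node1.healthy = False               # Availability Zone 1 fails
cluster.write("order-42", "paid")   # the write still succeeds on nodes 2 and 3
assert cluster.route(node1) is not node1   # Client A is served elsewhere

cluster.recover(node1)              # node 1 rejoins, fully synchronized
assert node1.data["order-42"] == "paid"
assert cluster.route(node1) is node1       # traffic returns to AZ 1
```

This is what “inherently high availability” means in practice: the rerouting and resynchronization are part of the database's normal operation rather than a special configuration.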

This is a simulation which shows how multiple nodes interact with one another.

We have been discussing this in the context of database failure, but it applies to the overall availability zone as is covered in the following quotation.

“Individual datacenters and entire regions also fail, and many teams are caught by surprise when they do. To survive these kinds of failures, you should employ a strategy similar to the one you adapted for networking and availability zone failures: spreading your deployment to more geographies.”

In addition to covering unplanned downtime, this works the same way for planned downtime. It improves the ability to bring down a node for maintenance and bring it back up, with CockroachDB taking care of all of this. For those who use Dropbox and sync across multiple computers, the principle is similar. All of the copying and synchronizing takes place without the user worrying about how it is done. All the user sees is that (after a few minutes, if the computer was recently turned on and connected to WiFi) the files on another computer are the same as the files on the presently used computer.

CockroachDB/Spanner is one approach; AWS RDS takes a different one, which is explained in the following quotation.

“DB instances using Multi-AZ deployments may have increased write and commit latency compared to a Single-AZ deployment, due to the synchronous data replication that occurs. You may have a change in latency if your deployment fails over to the standby replica, although AWS is engineered with low-latency network connectivity between Availability Zones. For production workloads, we recommend that you use Provisioned IOPS and DB instance classes (m1.large and larger) that are optimized for Provisioned IOPS for fast, consistent performance.”

SQL presents a fundamental problem for horizontally scaled databases: the data in the database has to be available on a single server, because without this it is not possible to perform JOIN requests. However, with the cloud, whether standard or hybrid, there are multiple instances or nodes holding duplicates of the database. This is explained in the following quotation.

“Relational databases scale well, but usually only when that scaling happens on a single server node. When the capacity of that single node is reached, you need to scale out and distribute that load across multiple server nodes. This is when the complexity of relational databases starts to rub against their potential to scale. Try scaling to hundreds or thousands of nodes, rather than a few, and the complexities become overwhelming, and the characteristics that make RDBMS so appealing drastically reduce their viability as platforms for large distributed systems.”
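A toy example illustrates the JOIN problem the quotation describes. The table names and data below are hypothetical; the point is that once two tables live on different nodes, one side of the join must cross the network before any matching can happen.

```python
# Sketch of why JOINs become awkward once tables are spread across nodes:
# rows that must be matched no longer live on one server, so one side of
# the join has to be shipped over the network before matching can happen.

node_a = {"orders": [(1, "cust-1"), (2, "cust-2")]}        # orders live here
node_b = {"customers": {"cust-1": "Ada", "cust-2": "Bo"}}  # customers live here

# On a single server this JOIN is a local lookup; distributed, we must first
# move node_b's customers table (or the matching keys) across the network:
shipped_customers = dict(node_b["customers"])  # simulated network transfer

joined = [(order_id, shipped_customers[cust_id])
          for order_id, cust_id in node_a["orders"]]
print(joined)  # [(1, 'Ada'), (2, 'Bo')]
```

At a few nodes this data shipping is manageable; at hundreds or thousands of nodes, as the quotation notes, the coordination overhead overwhelms the benefits of the relational model.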

AWS RDS uses a master-slave copy scenario, where one master is copied to the other slave nodes. CockroachDB, the open source analog of Google Cloud Spanner, in theory treats all of the nodes as if they are equal. AWS describes its own approach in the following quotation.

“In a Multi-AZ deployment, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. The primary DB instance is synchronously replicated across Availability Zones to a standby replica to provide data redundancy, eliminate I/O freezes, and minimize latency spikes during system backups. Running a DB instance with high availability can enhance availability during planned system maintenance, and help protect your databases against DB instance failure and Availability Zone disruption.”

The reason for this is that if there are multiple master servers, collisions of data will be unavoidable.

Also, performance on read/write will depend on the number of servers. With the master-slave design, that problem is eliminated. However, this means that the databases are not genuinely independent: they rely upon the master database, and the slave databases only provide read access, not write access.

In practice, a typical database workload is roughly 95% read requests and only about 5% write requests. So, with a single master, you provide a full server only for write requests. When a change is made, that change is sent to the master directly (that is, bypassing the slaves). This is considered far more robust than the CockroachDB approach of synchronizing multiple SQL databases.
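The read/write routing described above can be sketched as follows. This is an illustrative sketch only, not any vendor's implementation; the node names and the roughly 95/5 read/write split are assumptions taken from the text.

```python
# Sketch of master-slave request routing: all writes go to the single
# master, while reads are load-balanced across the slave replicas.

import random

class MasterSlavePool:
    def __init__(self, master, slaves):
        self.master = master
        self.slaves = slaves

    def route(self, is_write):
        if is_write:
            return self.master             # writes bypass the slaves entirely
        return random.choice(self.slaves)  # reads spread across replicas

pool = MasterSlavePool("master-db", ["slave-1", "slave-2", "slave-3"])

assert pool.route(is_write=True) == "master-db"
assert pool.route(is_write=False) in pool.slaves

# With ~95% reads, the master only ever sees the ~5% write traffic:
is_write_flags = [random.random() < 0.05 for _ in range(10_000)]
writes = sum(is_write_flags)
print(f"{writes} of {len(is_write_flags)} requests were routed to the master")
```

The design choice this illustrates is that the master's capacity is reserved for the small write fraction, which is why a single write server can front a much larger read workload.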

Notice that this graphic displays the standard master-slave database model. The public cloud contains the slave database, while the master stays in the private cloud, which is normally on premises.

Both approaches support a hybrid cloud. Many people think the AWS RDS/Cloud SQL approach is more practical, but this may change as technology continues to advance. It should also be considered that this problem goes away if a NoSQL database is used, which is why NoSQL databases are faster and have fewer problems with replication.

We know of many projects where a NoSQL database serves as the real-time database for the application, while the data is transformed in parallel into a SQL database. For programmers, NoSQL is more beneficial at run-time.

AWS, OpenStack, and CloudStack

Up to this point, we have discussed using AWS services and software to control the on-premises environment. However, open source cloud management projects like OpenStack and CloudStack allow AWS to be accessed, as well as any on-premises resources. This ability to connect to any cloud and any on-premises resources from one UI is sometimes called a “single pane of glass.”

OpenStack is truly impressive. We recommend watching this specific video which shows how quickly things can be set up in OpenStack.

OpenStack can control AWS, Google Cloud and other cloud resources, and it also runs on VMware, as explained by VMware.

“VMware Integrated OpenStack is a VMware supported OpenStack distribution that makes it easy to run an enterprise grade OpenStack cloud on top of VMware virtualization technologies. VMware Integrated OpenStack is ideal for many different use cases, including building a IaaS platform, providing standard, OpenStack API access to developers, leveraging edge computing and deploying NFV services on OpenStack.”

What AWS’s Hybrid Strategy is Pointed Towards

With AWS pushing into on-premises, AWS is not only offering services that run on its own infrastructure but has extended its services to run on customers’ infrastructure. Some have argued that IBM and Oracle have the advantage when it comes to offering hybrid cloud, due to IBM and Oracle already being on premises in their customers’ accounts. However, neither of these companies demonstrates much of an intuitive understanding of the cloud. They do not offer the ease of use that AWS offers, and once you use AWS for on-premises, you can, of course, use AWS’s cloud services.

The Oracle Cloud appears to have similar issues to the SAP Cloud, as it shows low usability, indicating that like SAP, Oracle is getting little feedback as to what should be improved.

Oracle has had a long time to figure out the cloud, and they don’t appear to be making progress. Oracle resources have continually told us to “try out the Oracle Cloud.” We have, and we do not want to use it. We suggest that Oracle resources who recommend the Oracle Cloud try it for themselves first, before being frustrated when told it is not usable. One has to have the software to be part of the hybrid cloud future. AWS and Google Cloud do.

AWS’s offering for on premises is an attempt to put a “straw” into on-premises environments.

That is, they will entice on-premises customers with free items (that cost AWS little to offer) that make those customers increasingly comfortable with the cloud. As time passes and more AWS services are used, it will become increasingly difficult for these on-premises customers to justify investments in more on-premises overhead.

(Graphic from the book Hybrid Cloud for Architects: Build Hybrid Cloud Solutions Using AWS and OpenStack)

This is the future desired state for IaaS providers. They see the hybrid cloud as an intermediate state to taking over the infrastructure of customers.

This is proposed by Elias Khnaser in the following quotation.

“The public cloud will host most workloads, and you will have just the most important things in the private cloud. So you are going to justify putting something in the private cloud versus putting something in the public cloud, as the prices fall as well.”

The VMware and AWS roadmap shows a large amount of collaboration to make the VMware Cloud on AWS a success.

The caveat, explained by House of Brick, is that there may be licensing restrictions that require a legal review by each Oracle customer planning to leverage this new offering.

“From these indications, it seems like a strong possibility that the VMware Cloud on AWS is in compliance with the requirements of the Licensing Oracle Software in the Cloud Computing Environment document published by Oracle. We will be doing more internal analysis of this possibility, as well as consulting with our legal partners, and the legal teams of the clients we are working with. Once we feel confident in making a more definitive statement, we will do so at that time.” – House of Brick Technologies

Conclusion

VMware Cloud on AWS is a way of melding on-premises environments with AWS. This will improve SAP and Oracle environments, as VMware Cloud on AWS can serve as a “straw” into these on-premises environments that not only provides access to AWS services but allows AWS services to be used to manage on-premises resources. VMware Cloud on AWS was only recently announced as of this book’s publication, and its full effect has yet to be discovered.

VMware Cloud on AWS fits directly into the hybrid cloud, which is how different clouds and on-premises environments interact with one another. We also addressed OpenStack and CloudStack, which allow the control of not only AWS but provide a single pane of glass across all resources (cloud and on-premises). AWS’s clear intent is to manage more and more of customers’ overall stack. Currently, the bulk of AWS’s services sit toward the bottom of the stack (network, storage, compute, virtualization, and operating system), but it is clear AWS will increase its top-of-the-stack services in the future.

Financial Disclosure

Financial Bias Disclosure

Neither this article nor any other article on the Brightwork website is paid for by a software vendor, including Oracle, SAP or their competitors. As part of our commitment to publishing independent, unbiased research; no paid media placements, commissions or incentives of any nature are allowed.


References

[i] This quotation provides an important distinction on virtualization. “It’s important to note that virtualization environments typically lack key capabilities of cloud systems – such as self-service, multi-tenancy governance, and standardized instances.” – https://assets.rightscale.com/uploads/pdfs/Designing-Private-and-Hybrid-Clouds-White-Paper-by-RightScale.pdf

[ii] https://aws.amazon.com/storagegateway/

AWS and Google Cloud Book

How to Leverage AWS and Google Cloud for SAP and Oracle Environments

Interested in how to use AWS and Google Cloud for on-premises environments, and why this is one of the primary ways to obtain more value from SAP and Oracle? See the link for an explanation of the book. This is a book that provides an overview that no one interested in the cloud for SAP and Oracle should go without reading.

How to Respond to SAP’s Arguments Against AWS and GCP

Executive Summary

  • SAP makes continuous arguments against AWS and GCP.
  • How valid are these arguments against the cloud from SAP?

Introduction

SAP consulting partners usually will only know how to repeat the arguments that come from SAP. An SAP consulting partner intends both to denigrate the other systems used by a customer and to promote SAP Cloud. This is true even though very few SAP consultants use SAP Cloud. In the mind of the SAP consulting partner, SAP works best when 100% SAP is used. The use of SAP applications, SAP databases, SAP infrastructure, and SAP development tools is strictly supported, without any need to justify that usage.

Not Being Able to Meet Requirements Does Not Count Against SAP?

In the mind of SAP and SAP consultants, any lack of ability of SAP’s applications to meet requirements can be attributed to either a lack of business process re-engineering or to custom coding in SAP, using SAP development tools and SAP’s proprietary coding language. It is the preferred strategy to recode any “legacy” application used by the customer into SAP.

SAP’s Connection to IaaS Providers

SAP has made accommodations for non-SAP IaaS. One can use SAP on AWS, Google Cloud and Azure. We covered the topic in SAP’s Multicloud Announcement.

As of this book’s publication, there are 92 SAP-related items available on the AWS Marketplace. This is, however, misleading, as many are slightly different versions of the same basic thing. Most of the offerings are either HANA or Adaptive Server Enterprise. Adaptive Server Enterprise is one of the most confusingly named products; it is a database, renamed from the Sybase DB, which SAP acquired.

The Real Point of Offering HANA and Adaptive Server Enterprise on AWS

Looking at the HANA offerings, they appear to be on AWS more for marketing purposes, that is, to gain traction and exposure for these products, than for serious usage. The problem is that SAP’s databases are not competitive versus the other items AWS offers.

If we take the HANA database, it is a problematic database from the perspective of both cost and maintenance overhead. HANA comes with all manner of liabilities, both technical and license/legal, as we covered in the article The HANA Police and Indirect Access Charges. SAP offers license-free versions for developers to work with, but once a company tries to activate HANA for production, the costs will dramatically rise, and it will then run into other limitations, such as HANA’s limitations in clustering.

This is how SAP “booby traps” its AWS offerings. Furthermore, HANA is a way for SAP to take over the data layer at companies, as SAP proposes that every database that touches HANA must either be HANA or must pay extra fees to SAP.

It is true that there are limitations in running SAP on AWS and Google Cloud. However, the problem is that SAP resources are not a good source of information on this topic, as they are told by SAP that the objective is to get customers to adopt the SAP Cloud, as it is standard. This is particularly true if the SAP consultant works for a company that is a partner with SAP. As we cover in the article How to Best Understand the Pitfalls of Vendor Partnerships with SAP, consulting companies that are SAP partners lack any autonomy from SAP.

Conclusion

The best way to handle SAP’s objection that customers should use the SAP Cloud instead of alternatives is by having the real story about how SAP set up its cloud to work against its customers’ interests and to reinforce SAP’s account control.

The information provided to customers about SAP Cloud, AWS, and Google Cloud is unreliable. SAP allows customers to access AWS, Google Cloud and Azure, but that does not make using SAP Cloud any more sensible. We can see no justification for using SAP Cloud when customers can open AWS and Google Cloud accounts and access so many more options, without SAP’s exorbitant markup. If customers use SAP or SAP consultants to consult on the cloud, it will lead to inferior outcomes. First, SAP has no idea what it is talking about when it comes to cloud, and second, all advice will lead right back into the clutches of SAP. SAP is also struggling with how to jam as many of its on-premises investments as possible into the cloud context. ABAP is a perfect example of this, but there are many others. The SAP consulting firms are incentivized to redirect any customer questions about cloud back to SAP Cloud, which is a non-starter as far as all three of the authors of this book are concerned. Those customers looking for advice on how to leverage AWS or Google Cloud (that is, real cloud) for their SAP environment need to find advisors who are not SAP partners.


How to Respond to Oracle’s Argument on Whether Oracle Can Compete with AWS and Google Cloud With Far Fewer Data Centers?

Executive Summary

  • Oracle makes a curious argument that it can compete with AWS and Google Cloud with far fewer data centers.
  • How valid is this argument against the cloud from Oracle?

Introduction

Oracle has needed to cover up the inconsistency between their statements about their dedication to the cloud and their relatively small investment in data centers. For context, while the aggregate spending of AWS, Google Cloud and Azure was $31 billion in 2016, Oracle’s was only $1.7 billion.

Mark Hurd, CEO of Oracle, addressed this issue in the following quotation.

“We try not to get into this capital expenditure discussion. It’s an interesting thesis that whoever has the most capex wins,” Hurd said in response to a question from Fortune at a Boston event on Tuesday. “If I have two-times faster computers, I don’t need as many data centers. If I can speed up the database, maybe I need one fourth as may data centers. I can go on and on about how tech drives this.

Oracle has said it runs its data centers on Oracle Exadata servers, which are turbocharged machines that differ fundamentally from the bare-bones servers that other public cloud providers deploy by the hundreds of thousands in what is called a scale-out model. The idea is that when a server or two among the thousands fail—as they will—the jobs get routed to still-working machines. It’s about designing applications that are easily redeployed.”

Mark Hurd on the Missing Oracle CAPEX

Mark Hurd is not interested in getting into any discussion where the facts are obviously against his position. And why are discussions around CAPEX, or Oracle’s measured investment, off limits?

Is there some taboo about discussing questions of investment?

One has to wonder about the honesty of a person who, when confronted with a very reasonable question that goes directly to the storyline being proposed by Oracle (that they are competing on their data center investments), states that they are “not interested in getting in a discussion” around that topic.

The underinvestment on the part of Oracle contrasts with AWS, which places three locations, or data centers, into any region it enters, to allow for redundancy within the region. Using Exadata servers at one location does not resolve that issue. As per Oracle’s explanation of its cloud, posted as of October 19, 2018, Oracle has only 4 regions worldwide: Phoenix, Ashburn, London and Frankfurt. That small number of regions naturally increases the distance to consumers of the services and is far behind AWS and Google Cloud. That means more network latency, and it is not something that would be addressed by Mark Hurd’s statements about Exadata, even if they were true.

The truth is that Oracle isn’t investing much into the cloud. Oracle has leased data centers that rely on the Internet instead of dedicated fiber to communicate.

  • Oracle has followed monopolistic practices through continual acquisition. However, there is a problem with Oracle extending this strategy to the cloud.
  • The acquisition approach doesn’t work in the cloud. Thus, Oracle’s approach has been to play defense and delay cloud adoption in its install base as much as possible: for instance, by raising database license costs for running on AWS/Azure, pushing hard for on-premises deployments of its own hardware, and limiting choice by refusing to license the database to Google and IBM.

Hiding Cloud Revenue

Hiding their cloud revenue in June of 2018 and changing how they reported cloud revenue was a striking indicator for a company that had to cover for previous cloud projections that have not come true. It also triggered many analysts to question what Oracle was hiding. Last year, Google, Microsoft, and Amazon each spent more than 10 billion dollars on data centers. How did Oracle spend its money? On $12 billion buying back its own shares. A better translation of Mark Hurd’s comments is that he does not need as much investment in data centers when he can use that same money to buy back stock. Curiously, when discussing CAPEX, Mark Hurd left out how much stock Oracle repurchased that year. This highlights just one example of how Oracle is managed for the short-term financial benefit of its top executives. Much like SAP, Oracle prefers not to make the investments that it needs to make to match the claims made by its marketing department.

Furthermore, this statement was made by Mark Hurd in April of 2017. But notice the statement from February 2018:

“The Redwood City, Calif., company said in February it planned to quadruple the number of giant data-center complexes over the next two years, taking on the market’s biggest spenders: Amazon.com Inc., Microsoft Corp. and Alphabet Inc.’s Google.”

Switching Direction

Why would Oracle need to do this? Remember, according to Mark Hurd, Oracle’s faster servers and databases should allow it to compete with AWS, Google Cloud and Azure with a far smaller investment. Wasn’t that the story in April of 2017? Or perhaps using Exadata servers with the Oracle database does not overcome Oracle’s lack of cloud investment after all?

But even if Oracle’s investments were where they needed to be, there is no evidence that Oracle would be able to do what AWS and Google Cloud are able to do with their investments. That is, every dollar that Oracle spends on cloud infrastructure would not be as effective as a dollar spent on AWS or Google Cloud infrastructure. Therefore, the argument is the exact opposite of the one proposed by Mark Hurd regarding comparative CAPEX: to match AWS or Google Cloud, Oracle would need to significantly exceed their CAPEX.

This is explained in the following quotation.

“The exact number of servers in Google’s arsenal is “irrelevant,” Garfinkel says. “Anybody can buy a lot of servers. The real point is that they have developed software and management techniques for managing large numbers of commodity systems, as opposed to the fundamentally different route Microsoft and Yahoo went.

Of particular interest to CIOs is one widely cited estimate that Google enjoys a 3-to-1 price-performance advantage over its competitors—that is, that its competitors spend $3 for every $1 Google spends to deliver a comparable amount of computing power. This comes from a paper Google engineers published in 2003, comparing the cost of an eight-processor server with that of a rack of 176 two-processor servers that delivers 22 times more processor power and three times as much memory for a third of the cost.

But although Google executives often claim to enjoy a price-performance advantage over their competitors, the company doesn’t necessarily claim that it’s a 3-to-1 difference. The numbers in the 2003 paper were based on a hypothetical comparison, not actual benchmarks versus competitors, according to Google. Microsoft and Yahoo have also had a few years to react with their own cost-cutting moves.”

Scale Economies in the Cloud

AWS and Google Cloud can achieve scale economies with their investments that are difficult for other IT companies, even giants like SAP, Oracle, and Microsoft, to match; specifically, scale economies in running server farms. SAP and Oracle began their lives as vendors selling software, not running software. This gives AWS and Google Cloud a huge advantage over vendors like SAP and Oracle that are in business to sell software and outsource implementation to someone else.


How to Respond to Oracle’s Arguments on Virtual CPUs on AWS EC2 or RDS?

Executive Summary

  • Oracle makes a curious argument about virtual CPUs on AWS EC2 or RDS.
  • How valid is this argument against the cloud from Oracle?

Introduction

Oracle argues that if the number of vCPUs, or virtual CPUs, on Amazon EC2 or RDS is reduced, the licenses to be purchased should still cover the total number of CPUs that could have been used.

This is completely false.

The number of licenses to be purchased is the number of CPUs that are actually running, as observed by House of Brick. Oracle is well known for merely asserting that additional licenses are needed, and it loses nothing by making the assertion. It is the customer’s job to figure out whether Oracle is telling the truth.
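As an illustration of the difference between the two positions, the arithmetic can be sketched as below. This assumes the commonly cited rule from Oracle's cloud licensing policy document, that two vCPUs count as one Processor license when hyper-threading is enabled; the instance sizes are hypothetical, and any real calculation should be verified against Oracle's current policy document.

```python
# Illustrative arithmetic only; verify against Oracle's current cloud
# licensing policy before relying on any such calculation.

def oracle_processor_licenses(running_vcpus: int, hyperthreading: bool = True) -> int:
    """Licenses required for the vCPUs actually running (House of Brick's
    position), assuming 2 vCPUs = 1 Processor license with hyper-threading."""
    if hyperthreading:
        return -(-running_vcpus // 2)  # ceiling division: 2 vCPUs per license
    return running_vcpus               # without hyper-threading: 1 vCPU each

# An instance resized down to 8 running vCPUs:
print(oracle_processor_licenses(8))   # 4 licenses for what is actually running
# Oracle's asserted position would instead count the 64 vCPUs the
# instance family *could* have been scaled to:
print(oracle_processor_licenses(64))  # 32 licenses
```

The gap between those two numbers is exactly why Oracle's assertion is worth checking rather than accepting.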


How to Respond to Oracle’s Arguments that AWS is Only Migrating Non Critical Databases?

Executive Summary

  • Oracle makes the argument that AWS is only migrating noncritical databases.
  • How valid is this argument against the cloud from Oracle?

Introduction

Oracle has argued that the databases migrated to AWS aren’t mission critical.

“So, sure, you can do test/dev and run small non critical Databases as many have been doing for years on VMWare. This is what AWS continues to report in their “XXK Successful Database migrations to AWS”. But if you look at how many are true “Oracle Database” workloads, you will see that they are less than .1% of the total.”

AWS had $6.11 billion in revenue in the 2nd quarter of 2018 and is growing its revenues year over year at close to 50%. Does Oracle expect companies to believe that RDS is only for small and non-critical databases?

Customers that Run Oracle Database for EnterpriseOne on AWS

One customer we are aware of runs the Oracle Database for EnterpriseOne on AWS. They have almost a terabyte of production data. They moved off of Secure-24, a hosting provider, and are experiencing more uptime, better database response time, and better overall response time from WebLogic servers on AWS. This is not an isolated story; there are many like it.

Oracle has a long history of discounting every other offering in the market. No matter the situation, Oracle presents itself as the best possible option; the only problem, according to Oracle, is that people just don’t “understand” how great the Oracle option is.

There usually are two positions that you can hold in the eyes of someone from Oracle. One is a position that agrees with Oracle. The other is the position of idiots who question any part of Oracle’s superiority.


How to Respond to Oracle’s Arguments that AWS Does Not Provide Sufficient Uptime?

Executive Summary

  • Oracle makes the argument that AWS does not provide sufficient uptime.
  • How valid is this argument against the cloud from Oracle?

Introduction

Oracle has questioned AWS’s SLAs, as presented in the following quotation.

“AWS’s SLAs, like those of most competitors (except Oracle Cloud), only guarantee uptime, not performance. The upfront fees paid to reserve EC2 instances are not taken into account in the calculation of the service credits.
“Oracle Cloud SLAs cover these 3 key customer requirements:
Availability SLA
—————-
Compute and Block Volume storage have external connectivity and are available to run customer workloads >99.95% of the total customer provisioned time. Object Storage and FastConnect have external connectivity and are available to run customer workloads >99.9% of the total customer provisioned time
Manageability SLA
—————-
APIs provided to create and manage IaaS services are available >99.9% of the time
-not offered by any IaaS competitor today
Performance SLA
—————
Block storage, local NVMe storage, and cloud networks are delivering normally expected performance levels. SLA coverage for performance degradation provides service credits if disk or network performance drops below 99.9% of expected levels; not offered by any IaaS competitor today
Enterprise SLAs Information
https://cloud.oracle.com/en_US/iaas/sla
Enterprise SLA Press Release
https://www.oracle.com/corporate/pressrelease/oracle-iaas-sla-021218.html
Detailed Terms of Service
https://www.oracle.com/us/corporate/contracts/paas-iaas-pub-cld-srvs-pillar-1117-4021422.pdf
Expected levels is the level of performance that Oracle has documented. They all should be listed in here: https://www.oracle.com/us/corporate/contracts/paas-iaas-pub-cld-srvs-pillar-1117-4021422.pdf
Oracle is providing a true performance guarantee SLA which you cannot get with any other cloud vendor.”
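To put the quoted availability figures in concrete terms, the difference between a >99.95% and a >99.9% SLA can be expressed as allowed downtime per billing period. The sketch below is illustrative arithmetic only (a 30-day month is assumed; actual SLA measurement windows and exclusions vary by provider):

```python
def allowed_downtime_minutes(sla_percent, days=30):
    """Maximum downtime (in minutes) permitted by an availability SLA
    over a billing period of the given number of days.
    Illustrative only; real SLAs define their own measurement windows."""
    total_minutes = days * 24 * 60
    return round(total_minutes * (100.0 - sla_percent) / 100.0, 1)

# 99.95% (quoted for Compute/Block Volume) vs 99.9% (Object Storage/FastConnect)
print(allowed_downtime_minutes(99.95))  # 21.6 minutes per 30-day month
print(allowed_downtime_minutes(99.9))   # 43.2 minutes per 30-day month
```

In other words, the gap between the two quoted tiers amounts to roughly 22 minutes of additional permitted downtime per month.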

Response

There is some important context to provide to these comments.

  • Performance is adjusted elastically on AWS and Google Cloud. While there are frequent complaints about Oracle meeting its guarantees, this is not an issue with AWS and Google Cloud.
  • Performance is known to move in lockstep with the chosen configuration of the AWS or Google Cloud service. Furthermore, when and if Oracle does not meet its performance guarantee, the customer is forced into the support pathway, and Oracle does not offer customers well-regarded or responsive support.
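As the quotation notes, service credits are the typical remedy when an uptime SLA is missed, and credit schedules are tiered: the further measured availability falls below the guarantee, the larger the percentage of the monthly bill refunded. The tier boundaries below are illustrative assumptions, not any vendor's published schedule:

```python
def service_credit_percent(measured_availability):
    """Map measured monthly availability (%) to a service-credit percentage.
    The tier boundaries here are illustrative, not a real vendor schedule."""
    tiers = [          # (availability floor %, credit %)
        (99.95, 0),    # SLA met: no credit
        (99.0, 10),    # below guarantee but at/above 99.0%: 10% credit
        (95.0, 30),    # between 95.0% and 99.0%: 30% credit
    ]
    for floor, credit in tiers:
        if measured_availability >= floor:
            return credit
    return 100         # below 95.0%: full monthly credit

print(service_credit_percent(99.97))  # 0
print(service_credit_percent(99.5))   # 10
print(service_credit_percent(97.0))   # 30
```

Note that even a large credit only refunds part of one month's bill; it does not compensate for the business cost of the outage, which is why SLA fine print matters less than a provider's actual track record.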


How to Respond to Oracle’s Arguments that AWS Has a Problem with Scalability?

Executive Summary

  • Oracle makes the argument that AWS has a problem with scalability.
  • How valid is this argument against the cloud from Oracle?

Introduction

Oracle’s next argument concerns scalability.

“And finally, what happens when your AWS Database workload seriously gets big, as in TB’s or PB’s? Can you move your AWS Database on-premise? No, they are public-cloud only. What if you have governance requirements for the Database and can’t run it on AWS as they don’t have a location that’s in-country? Well, don’t have an option their either. So again, you become locked-in to AWS and as we know, the Database is probably the worst of lock-ins and where it resides is probably second. So choosing the right Database and location is probably the biggest IT decision being made.”

After spending a lot of time analyzing Oracle’s arguments, we find this to be a standard Oracle strategy: take one particular case study, try to generalize from it, and make it appear more significant than it is.

Public Cloud

Yes, AWS is offering public cloud: public cloud with security (as we covered in the VPC topic earlier in the book), but public cloud nonetheless. Private cloud or hosting loses most of the advantages of the cloud, and there is a great debate as to how much of an advantage a private cloud is over merely being on premises. Therefore, this model does not hold interest for AWS. AWS is interested in businesses, or sections of a business, that can be scaled, and that means public cloud. While Oracle is talking up its private cloud, AWS is building public cloud capabilities that neither SAP nor Oracle, with all of their resources, appears able to replicate.

Managed Scaled Multitenancy

The reason is not a lack of financial resources, but that neither SAP nor Oracle has experience managing scaled multitenancy (of course, as SAP and Oracle have ex-AWS and ex-Google Cloud employees, specific employees do have that experience, but it was attained at other companies). The experience of SAP and Oracle is in replicating the same items across hundreds of thousands of accounts: that is, implementing applications and databases in a job-shop manner, on premises, at hundreds of thousands of customers. Imagine factories where the factory workers (i.e., the consultants) were trying to make the production process as expensive as possible, and this would be a rough approximation of SAP and Oracle environments.

This is a primary reason why SAP and Oracle are so expensive: while both are immense companies, they follow a job-shop model rather than a mass-production model. Everything from how the software is sold to how it is installed repeats the same model, which is inefficient.

As for Oracle’s assertions of cloud lock-in, is this really an argument Oracle should be making?
