How to Respond to Oracle’s Arguments About Serious Databases Not Running on VMs


Executive Summary

  • Oracle argues that serious databases do not run on VMs.
  • How valid is this argument from Oracle against the cloud?

Introduction

Oracle proposes that AWS is not ready for enterprise workloads. This is a common argument from Oracle: that only its own offerings are truly “enterprise-ready.”

The following quote covers this argument.

“The problem for AWS, is that their architecture was not originally designed to run Database workloads. Back in 2002 when AWS was first architected ( I was at Sun when several of my friends/colleagues went to AWS so I know the history), the intent was to just sell what unused compute and storage capacity wasn’t being used by Amazon themselves. And over the last 15+ years, AWS just heavily invested in building out this, what I call generic/basic architecture worldwide-leveraging the Asian OEM/ODM builders to get cheapest costs.

But the problem & challenge for AWS is that it’s (still today) 100% fully virtualized (until their bare metal instances finally go live) and as we all know, serious Databases don’t run on virtualized environments and why AWS realizes they need bare metal to eliminate needing virtualization.”

The Bare Metal Argument

This is essentially the bare metal argument again. Bare metal does deliver better performance than virtualization, but running on bare metal puts one back into hosting rather than the cloud, and most of the cloud’s benefits are lost in the process. AWS’s bare metal instances were not yet generally available when that quote was written, but bare metal is not where AWS earns most of its revenue in any case. Bare metal has specific use cases; Oracle, however, is trying to make the entire discussion about raw performance. If what Oracle said were true, AWS, Google Cloud, and Azure could not have grown to their current scale, because they would offer no benefit over bare metal hosting. Secondly, the main performance problem in the cloud is the network, not database speed. All commercial databases deliver sub-second response times on many operations.

This is covered in the following quotation from AWS.

“Amazon RDS provides a fully managed relational database. With Amazon RDS, you can scale your database’s compute and storage resources, often with no downtime. Amazon DynamoDB is a fully managed NoSQL database that provides single-digit millisecond latency at any scale. Amazon Redshift is a managed petabyte-scale data warehouse that allows you to change the number or type of nodes as your performance or capacity needs change.”
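As a concrete illustration of the “scale your database’s compute and storage resources” point, here is a minimal sketch using boto3, the AWS SDK for Python. The instance identifier, instance class, and storage size are placeholder assumptions, not values taken from the quote.

    # Sketch: resizing an existing Amazon RDS instance's compute and storage
    # with boto3. Identifier, class, and storage values are hypothetical.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    response = rds.modify_db_instance(
        DBInstanceIdentifier="orders-db",   # placeholder instance name
        DBInstanceClass="db.r5.2xlarge",    # scale compute up
        AllocatedStorage=500,               # scale storage (GiB)
        ApplyImmediately=True,              # apply now, not at the next maintenance window
    )

    print(response["DBInstance"]["DBInstanceStatus"])

Depending on the change, RDS applies such modifications with little or no downtime, which is the point the quotation is making.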

Oracle Response Time?

The problem is not database response time. For decades, development has focused on making databases faster and on applying better hardware to them. However, the bottleneck is less and less often the database itself; increasingly, it is network latency when the data center is located 10,000 kilometers away. That is why AWS invests heavily in building data centers everywhere: the speed of light in fiber is still a constant that no database tuning can overcome. A back-of-the-envelope calculation, sketched below, shows how much latency distance alone adds. On top of that physical floor, AWS goes to considerable lengths to optimize network performance, as the quotation that follows the sketch explains.
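A minimal sketch of that calculation in Python, assuming light travels through fiber at roughly two thirds of its vacuum speed (about 200,000 km/s) and ignoring routing, queuing, and processing delays:

    # Rough propagation delay over fiber. The 10,000 km distance matches the
    # example above; routing and processing overhead are ignored.
    SPEED_IN_FIBER_KM_PER_S = 200_000      # ~2/3 of the speed of light in vacuum
    distance_km = 10_000

    one_way_ms = distance_km / SPEED_IN_FIBER_KM_PER_S * 1000
    round_trip_ms = 2 * one_way_ms

    print(f"one way: {one_way_ms:.0f} ms, round trip: {round_trip_ms:.0f} ms")
    # prints roughly: one way: 50 ms, round trip: 100 ms

A round trip of about 100 milliseconds dwarfs the single-digit millisecond latencies of the databases themselves, which is why data center proximity matters far more than raw database speed.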

“In AWS, networking is virtualized and is available in a number of different types and configurations. This makes it easier to match your networking methods more closely with your needs. AWS offers product features (for example, Enhanced Networking, Amazon EBS-optimized instances, Amazon S3 transfer acceleration, dynamic Amazon CloudFront) to optimize network traffic. AWS also offers networking features (for example, Amazon Route 53 latency routing, Amazon VPC endpoints, and AWS Direct Connect) to reduce network distance or jitter.”
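To make the “Amazon Route 53 latency routing” item concrete, the following sketch creates latency-based records for the same hostname in two regions using boto3; the hosted zone ID, hostname, and IP addresses are hypothetical placeholders.

    # Sketch: latency-based routing in Route 53 via boto3. Route 53 answers a
    # DNS query with the record whose Region has the lowest measured latency
    # to the client. Zone ID, name, and IPs are placeholders.
    import boto3

    route53 = boto3.client("route53")

    def upsert_latency_record(region, ip_address):
        route53.change_resource_record_sets(
            HostedZoneId="Z0000000EXAMPLE",       # placeholder hosted zone
            ChangeBatch={
                "Changes": [{
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "api.example.com",
                        "Type": "A",
                        "SetIdentifier": region,  # one record per region
                        "Region": region,         # enables latency routing
                        "TTL": 60,
                        "ResourceRecords": [{"Value": ip_address}],
                    },
                }]
            },
        )

    upsert_latency_record("us-east-1", "203.0.113.10")
    upsert_latency_record("eu-west-1", "203.0.113.20")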

Furthermore, AWS, Google Cloud, and Azure all support HTTP/2. The newer protocol (published in 2015) multiplexes requests across a single connection. Instead of buffering requests and responses, it handles them in a streaming fashion. This reduces latency and increases the perceived performance of your application.
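As a quick way to check whether a given endpoint actually negotiates HTTP/2, here is a minimal sketch using the third-party httpx library (installed with its HTTP/2 extra, e.g. pip install "httpx[http2]"); the URL is a placeholder.

    # Sketch: verifying HTTP/2 negotiation with httpx.
    # Requires: pip install "httpx[http2]"
    import httpx

    with httpx.Client(http2=True) as client:
        response = client.get("https://www.example.com/")  # placeholder URL
        # http_version is "HTTP/2" if the server negotiated it, otherwise "HTTP/1.1"
        print(response.http_version)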

HTTP/2’s multiplexing and concurrency, stream prioritization, header compression, and server push can substantially reduce web resource load times compared to HTTP/1.1.