How Accurate Is AWS Advice on Migrating From Mainframes to AWS Services?

Executive Summary

  • AWS advises companies to migrate from mainframes to AWS services.
  • How accurate is this advice?

Introduction

Many companies have copied Gartner’s mostly inaccurate arguments against mainframes. An excellent example of this is the information AWS provides about mainframe migration.

Our References for This Article

If you want to see our references for this article and other related Brightwork articles, see this link.

AWS Repeats the False Claims on Mainframes Made by Gartner

It was curious to come across the following quotations from an article in Data Center Knowledge regarding what AWS proposes for migrating workloads off mainframes.

AWS has announced a new managed service that enables businesses to migrate mainframe workloads to the cloud.

AWS Mainframe Modernization offers customers two options. Some might want to refactor their mainframe workloads to run on AWS by transforming legacy applications – likely written in COBOL – into modern Java-based cloud services. – Data Center Knowledge

Why a company would want to refactor applications written in COBOL should itself be questioned. This is a lot of work, and COBOL applications can be modified and kept where they are with far less effort. You will observe in the quotations that while many workloads run quite well on mainframes, AWS never discusses the long-term effectiveness of mainframes in running such workloads. AWS’s advice falls very neatly into the “mainframes old and bad” and “cloud new and good” logic that is copied directly from Gartner, although Gartner previously proposed “mainframes old and bad” and “client-server new and good.”

The quote continues…

Alternatively, customers can keep their applications as written and re-platform their workloads to AWS reusing existing code with minimal changes. – Data Center Knowledge

This was normally the argument presented by client-server companies. However, the “minimal changes” typically end up being far more extensive than initially anticipated, as the sketch below illustrates. Also, notice how AWS repeatedly uses the word “can.” Customers “can” migrate their workloads from mainframes to AWS cloud servers; however, the question of why they should is barely addressed.
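To make this concrete, consider one of the mundane issues replatformed code runs into: mainframe data is typically EBCDIC-encoded and often uses packed-decimal (COMP-3) fields, neither of which is handled natively off the mainframe. Below is a minimal, hypothetical Java sketch of what “reusing existing code with minimal changes” quietly requires; the record layout and byte values are invented for illustration and are not taken from any AWS tool.

```java
import java.nio.charset.Charset;

public class ReplatformPitfalls {

    // Packed decimal (COMP-3): two digits per byte; the final nibble is the sign.
    static long unpackComp3(byte[] field) {
        long value = 0;
        for (int i = 0; i < field.length; i++) {
            int hi = (field[i] >> 4) & 0x0F;
            int lo = field[i] & 0x0F;
            if (i < field.length - 1) {
                value = value * 100 + hi * 10 + lo;
            } else {
                value = value * 10 + hi;          // last byte holds one digit...
                if (lo == 0x0D) value = -value;   // ...plus the sign nibble
            }
        }
        return value;
    }

    public static void main(String[] args) {
        // EBCDIC is not ASCII: decoding these bytes with the default charset
        // silently produces garbage. The IBM1047 charset ships with full JDK
        // builds (the jdk.charsets module); its availability is an assumption.
        byte[] ebcdicName = { (byte) 0xE2, (byte) 0x94, (byte) 0x89,
                              (byte) 0xA3, (byte) 0x88 };        // "Smith"
        String name = new String(ebcdicName, Charset.forName("IBM1047"));

        // COMP-3 bytes for +12345.67 (digits 1234567, sign nibble 0xC).
        byte[] amount = { 0x12, 0x34, 0x56, 0x7C };

        System.out.println(name + ": " + unpackComp3(amount) / 100.0);
    }
}
```

Multiply this kind of shim across every file, job, and interface, and the “minimal changes” stop being minimal.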

The quote continues…

Mainframe Modernization promises a complete, end-to-end migration pipeline that includes development, testing, and deployment tools necessary to automate the process. – Data Center Knowledge

Again, this fits into a marketing construct rather than reality. Several terms used here have proved problematic in the past, among them “modernization,” “end-to-end,” and “automation tools.” Let us review these terms.

Deceptive Term #1: Modernization or Mainframe Modernization

Mainframes are modern. AWS does not actually mean modernization; it uses the term to disparage mainframes. What AWS is doing is eliminating the mainframe and moving the workload to its cloud facilities.

The mainframe modality of computing has been proven over many decades, and one can buy a new mainframe just as one can purchase new client-server hardware. There is nothing less “modern” about a mainframe than a client-server design. Mainframes predated client-server systems, but that does not make client-server systems better or more modern than mainframes. What is curious is that the cloud modality of computing is similar to mainframes: in each case, the processing and storage are centralized, and people log in to the system remotely. However, each has advantages and disadvantages. Mainframes have specialized processors, while cloud servers are non-specialized, as I cover in the article How to Understand the Design of Mainframes. AWS would like the following.

  1. On the Differences Between Mainframes and Cloud Servers: AWS would like customers not to understand the difference between cloud commodity servers and mainframes, imagining that they are essentially the same thing.
  2. Mainframes Are Completely Dated: The ultimate idea AWS is trying to present is that all companies that use mainframes are using the same mainframe from 1965, and therefore “modernization,” i.e., replacement, is required.

Thus the term is highly deceptive in multiple dimensions. No one can credibly argue that mainframes lack modernity, so vendors instead use misleading terminology like this without ever explaining why the assertion (the lack of modernity), stated or implied, is true.

Deceptive Term #2: End to End

This term suggests to executives that everything is taken care of. I have never seen a situation where it meant anything.

Deceptive Term #3: Automation Tools

Claims for automation tools are always overstated by the entity using them, to make the task seem more straightforward than it is. I have previously covered several automation tools from SAP in the article How Accurate Were SAP’s Rapid Deployment Solutions in Speeding Implementations; their only impact was to cause companies to underestimate the timeline and work effort for implementing SAP.

The quote continues…

“With today’s launch of AWS Mainframe Modernization, customers and systems integrators can now more quickly modernize their legacy mainframe workloads in a predictable way and get rid of much of the complexity and manual work involved in migrations,” said William Platt, GM of Migration Services at AWS.

The overreliance on the term “mainframe modernization” is borderline insulting to the reader. AWS is copying Gartner’s false claim that one must move workloads off of mainframes because they are dated, and it is deceptive. What Gartner never let on, and what has barely been covered, is that companies paid Gartner to take a position against mainframes.

The quote continues…

The service puts the cloud vendor at odds with mainframe manufacturers and specialist software vendors that include IBM, Unisys, Bull, NEC, and Fujitsu. – Data Center Knowledge

Yes, that is true.

However, AWS is doing this to pull business away from other companies, which is left out of the explanation by Data Center Knowledge. I find it curious that there is no analysis in this article by Data Center Knowledge, and it simply serves as a press release for AWS and repeats its claims.

The quote continues…

Shots fired
More than 50 years after the appearance of the first mainframe, the IBM System/360, these hulking machines are still a common sight in banking, insurance, and retail, thanks to their ability to efficiently process huge volumes of transactions, and their reputation for security and uptime.

However, these systems are incredibly expensive and difficult to maintain, and the pool of people qualified to deal with their legacy software is shrinking all the time. – Data Center Knowledge

This also frames mainframes as dated.

Data Center Knowledge Claim #1: The Size of Mainframes Is Unmanageable?

The size of the machines is not relevant to whether or not they are dated. A bunch of client-server machines in racks can take up significant space as well.

Are these machines too hulking to find a place for? What is the difference between this size and a rack at an AWS facility?

Physical size is not relevant to the discussion, and the cost of renting physical space to store a machine is a tiny fraction of the cost of servers.

Data Center Knowledge Claim #2: Mainframes Are Difficult and Expensive to Maintain?

It is also not clear why the machines are difficult to maintain. Some mainframes have been running essentially the same software for over 50 years. This was decried in a report by the Congressional Budget Office (CBO) about several government systems, such as those at the IRS and the Social Security Administration, without seeming to notice that systems that stay up for 50 years are a good thing, not a bad thing. The CBO does not appear to know much about how computing systems work.

These are custom-developed software systems running on mainframes, and they have been highly effective and low in cost versus alternatives throughout their lives, and their lives are not at an end. All that is necessary is to update the hardware and update the code for new requirements.

The problem is that the US government has not put effort into maintaining these systems. But that is not an argument for rewriting all of the code from COBOL into a new language and placing the loads onto the cloud. However, this is the simplistic platitude proposed by the CBO, AWS, and virtually any company that can make money from convincing companies to do this.

Data Center Knowledge Claim #3: The Pool of People Qualified to Deal With Their Legacy Software Is Shrinking All the Time?

This claim fits a pattern of quotes asserting that only older, soon-to-retire workers can maintain mainframes.

This would be the case only if the career had been made unappealing to the younger generation. The same issue applies to truck driving. Trucking companies in the US are fond of pointing out that their driving workforce is aging and that they face a “driver shortage.” As I cover in the article The Real Story on the Reduced Standards in Trucking, they leave out that since trucking deregulation, the job of a truck driver has declined significantly in appeal, both in pay and in working conditions, and new hires have roughly a 100% turnover rate. All of this is because the job standards have been lowered.

The same issue is common in maintaining mainframe systems. If the industry makes jobs unappealing versus other opportunities, it will soon find itself with a labor shortage in that area. In my research into government COBOL programmers, I found examples where these programmers had been mistreated by management within government departments. Unsurprisingly, upon being mistreated, many of these COBOL programmers decided to leave. Multiple US government departments now face acute COBOL labor shortages and have had to pay a premium to hire back the same programmers they prompted to retire early. All of this background is left out when government departments discuss their issues with COBOL worker shortages.

As for the software being “legacy,” this is another term of propaganda that makes the software sound as if it, too, is dated. But again, if it has fallen out of conformance with business requirements, it can be adjusted. The same applies when requirements change in the future: if a change in requirements requires another adjustment to the code, will the current code also be called legacy at that time? In the article How SAP Used and Abused the Term Legacy, I cover how the term legacy is generally used by a person with a sales quota who wants to replace an existing item with an item that helps meet that quota. That is, to a furniture salesman, all of the furniture in your house is legacy.

The quote continues…

AWS has been trying to get customers off mainframes and into its data centers for years. – Data Center Knowledge

Yes, but not because it is necessarily good for these prospects, but because it is profitable for AWS.

The quote continues…

This time, the company says it has built a runtime environment that provides all the necessary compute, memory, and storage to run both refactored and replatformed applications while automatically handling capacity provisioning, security, load balancing, scaling, and application health monitoring.

Since this is all done via public cloud, there are no upfront costs, and customers only pay for the amount of compute provisioned. – Data Center Knowledge

Yes, but is any of this true? AWS has already made false claims about mainframes in this article, so how much of what it says about this runtime environment is correct, and how much is marketing puffery? Furthermore, the idea that AWS does not have upfront costs is inaccurate. As soon as the company begins the process, it will incur both internal costs and, most likely, consulting costs. Months before it can use its new system in production, it will need to have the AWS instance brought up. What is more accurate to say is that AWS does not charge license costs, as the back-of-the-envelope sketch below illustrates.
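A minimal sketch of this cost argument follows. Every figure is invented purely for illustration; none of it comes from AWS pricing or from the article.

```java
// Hypothetical, illustrative numbers only: the point is the shape of the
// costs, not their size. "No upfront costs" describes license fees, not
// everything a company spends before go-live.
public class MigrationCostSketch {
    public static void main(String[] args) {
        double consulting    = 2_000_000; // assumed migration consulting fees
        double internalLabor = 1_500_000; // assumed internal staff time
        double parallelRun   =   300_000; // assumed AWS instances running for
                                          // months before production cutover
        double preProduction = consulting + internalLabor + parallelRun;

        double licenseFees   = 0;         // the one cost AWS genuinely omits

        System.out.printf("Pre-production spend: $%,.0f%n", preProduction);
        System.out.printf("License fees:         $%,.0f%n", licenseFees);
    }
}
```

Only the last line is what “no upfront costs” can accurately refer to.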

Is Data Center Knowledge a Passive Stooge for AWS?

Data Center Knowledge apparently has no interest other than repeating whatever AWS says to them. What is the value of Data Center Knowledge writing an article that amounts to an AWS press release? If they are not going to analyze the claims, it is worse than the same information appearing on the AWS website, because at least there it would be perceived as marketing information. There is no thinking provided by Data Center Knowledge in this article. I wrote a much lengthier article in which I had to actually think, something DCK was either not interested in doing or was paid by AWS not to do. The article qualifies as a press release run in DCK by AWS. That is, Data Center Knowledge pretended to write an article when, in effect, nearly the entire article was written by AWS.

Included Case Study?

In the following quote, AWS provides a customer that has not yet gone through their deceptive mainframe modernization process, but that is super excited nonetheless.

Brazil’s Banco Inter was among the first organizations to sign up for Mainframe Modernization with AWS.

“By using the AWS Mainframe Modernization managed runtime, we expect to simplify our card processor operations for enhanced resiliency and scalability,” said Guilherme Ximenes, CTO at Banco Inter. “We are also excited by the DevOps CI/CD pipeline for increasing the agility we need to more quickly deliver new credit and debit card transaction capabilities to our customers.” – Data Center Knowledge

It is straightforward to trot out a single customer, or a few customers, to support AWS’s claims. First, the customer wants press coverage for itself. The executive in question uses such coverage to negotiate salary increases, as their capability has been acknowledged in the press. Furthermore, I have been in many companies as a consultant where the reality on the ground was nothing like what had been written about them. That is, the existence of an article about how a project went is not evidence that the project went that way. For example, you might notice that the only time a problematic project is publicized is when the customer files a complaint in court against a vendor or consulting firm. That is, the negative truth is told only when the company seeks to claim damages.

But observe that this is not even a customer that has already gone through the process. The quote states that Banco Inter…

expect to simplify our card processor operations.

That is future tense, not past tense.

And observe this part of the quote.

we need to more quickly deliver new credit and debit card transaction capabilities to our customers.

It is unclear why this can’t be done on a mainframe. Why can one run new credit and debit card transaction capabilities on the AWS cloud but not on the mainframe environment? I use both a credit and a debit card, and I can say I don’t have any desire for “new transaction capabilities.” When I buy things, I observe the transaction on my credit or debit statement. What are these new capabilities?

Furthermore, credit and debit card transactions have been proven for decades on mainframes. Guilherme Ximenes uses the terms “simplify,” “enhanced resiliency,” and “scalability.” No doubt these are all things that AWS has told him, but it is not clear why any of these except scalability is gained by moving from a mainframe to AWS. AWS can provide improved elasticity or scalability; however, it comes at a cost. And I have to disagree that AWS is simpler: AWS is quite complex. AWS resources are also priced at the top of the market.

AWS and other cloud providers are perfect for spinning up test systems. For this use, they are hard to compete with, as they have the environments ready to spin up, and the environments can be quickly brought down. However, the logic of placing all of your production processing with them is not as strong. And as for AWS, its pricing has become quite high compared to competitors like Digital Ocean.

An Unconvincing (Future Tense) Case Study

Overall, it is unclear why these transactions would run more quickly on a client-server design than on a mainframe design. They are a pure transaction-processing load. AWS is recommending that companies move off of mainframes the very loads for which mainframes are the perfectly matched processing modality.

The quote continues…

AWS Mainframe Modernization is available in preview in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), EU (Frankfurt), and South America (São Paulo) regions, before being expanded to additional locations “in the coming months.”

Is this a paid placement into Data Center Knowledge by AWS?

The only thing left out of the article is the pricing.

Why doesn’t Data Center Knowledge put the signup form right on its own website if it is going to be this obvious about it?

The quote continues…

It would be useful to note that the death of the mainframe has been predicted for at least two decades, and yet these machines have managed to stay surprisingly relevant.

This is the only part of the article where the author exerted mental effort. However, the author does not delve into why mainframes have stayed surprisingly relevant. Does he not know?

Allow me to explain.

It has been “surprising” because Gartner and client-server vendors put out so much false information that people thought mainframes were irrelevant. That is why their staying power has been “surprising.” This is just like Y2K. As I covered in the article How ERP Vendors Deliberately Exaggerated the Y2K Issue, vendors put out so much exaggerated information about the negative consequences of Y2K, to create a “burning platform” for purchasing their products and services, that when Y2K came, it was a “surprise” how little disruption it caused.

Air Force and The New York Times AWS Case Studies

AWS’s Mainframe Modernization page has several actual case studies from companies that have gone through the process.

I found this quote interesting from the Air Force case study.

After we completed Phase 1, the resulting converted Java code contained COBOL paradigm design remnants, or COBOL overtones, that required personnel to have specialized skills to maintain the codebase. A plan was developed to identify and correct COBOL overtones with standard Java solutions. This part of the effort was considered low-risk because we used approaches, processes, and techniques proven in Phase 1.

Our refactoring approach used the TSRI JANUS Studio tool and a semi-automated refactoring method that performs further code optimization, naming changes, and other enhancements to improve architecture, design, security, sustainability and performance.

  • Refactoring the COBOL Memory Model to Java
  • Refactoring the COBOL data mapping layer to native Java and SQL
  • Removing COBOL-style control-flow/GOTO Logic
  • Identifying and removing redundant code sections

These techniques, along with the improved method synthesis algorithm, greatly improved the maintainability of the Java codebase. – AWS
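To make “COBOL overtones” concrete, here is a hypothetical Java sketch of the kind of code that mechanical COBOL-to-Java conversion tends to produce, alongside its idiomatic replacement. This is my own illustration, not output from the TSRI JANUS Studio tool; the names and logic are invented.

```java
// A "COBOL overtone": translators often map PERFORM/GO TO paragraph logic
// onto control-flow flags and shared mutable fields that mirror
// WORKING-STORAGE, rather than onto idiomatic Java.
public class InterestCalc {

    private boolean endOfProcessing;  // stands in for a GO TO target flag
    private double wsBalance;         // WORKING-STORAGE style shared state
    private double wsInterest;

    // Translator-style output: each COBOL paragraph becomes a method, and a
    // loop plus flag simulates the original control flow.
    public double computeTranslatedStyle(double balance, double rate) {
        wsBalance = balance;
        endOfProcessing = false;
        while (!endOfProcessing) {
            para1000ComputeInterest(rate);
            para2000Finalize();
        }
        return wsInterest;
    }

    private void para1000ComputeInterest(double rate) {
        wsInterest = wsBalance * rate;
    }

    private void para2000Finalize() {
        endOfProcessing = true;  // simulates "GO TO END-PROCESSING"
    }

    // The cleanup the Air Force quote describes: no flags, no shared state.
    public static double computeIdiomatic(double balance, double rate) {
        return balance * rate;
    }
}
```

Code like the first method is valid Java, but maintaining it requires exactly the specialized, COBOL-shaped knowledge the case study mentions, which is why a second remediation phase was needed.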

Is Java the best language for this refactoring? I did not find anything in the case study as to why Java was used.

I found a quote from the New York Times case study of interest.

An attempt to manually rewrite the home delivery application between 2006 and 2009 failed. In 2015, an evaluation of alternate approaches determined that a second attempt at redeveloping the application would have been much more expensive, and an alternative emulator re-hosting would have continued to lock-up data in proprietary technology.

In 2015, CIS had grown to more than two million lines of COBOL code, 600 batch jobs, and 3,500 files sent daily to downstream consumers and systems. It consumed around 3 TB of hot data made up of 2 TB of VSAM files, and 1 TB of QSAM sequential files. It used 20 TB of backup cold storage.

The New York Times selected an automated conversion approach, which retains functional equivalence and critical business logic while converting core legacy applications to maintainable, refactored object-oriented Java. – AWS

This is the second of the two case studies in which the COBOL was refactored into Java. I found other statements around AWS’s use of Java in its Mainframe Modernization material, including this quote from its website.

Modernizing with AWS allows mainframe customers to gain accessibility and scale by transitioning out of archaic components and technologies to more modern languages such as Java. Moving away from legacy proprietary mainframes allows access to a large pool of talented architects and specialists for design and operation. This resolves the mainframe retirement skill gap and attracts new talent to modernize core business workloads. – AWS

Is Java the language AWS directs customers to rewrite into? Notice again the use of “legacy,” “proprietary,” etc… What is amusing about this is that AWS has repeatedly taken open-source projects and made them proprietary. So AWS should probably not throw stones at other companies for offering proprietary technologies.

Observe also the use of the term “retirement skill gap,” which is an allusion to the idea that only older workers can program in COBOL.

The quote from the AWS website on the New York Times case study continues.

With the mainframe in operation for more than 35 years, the COBOL application had accumulated a fair amount of obsolete code due to a lack of adequate maintenance. It’s a best practice to identify this code and remove it, which reduces the amount of refactoring and testing work to do. – AWS

This comment does not have anything to do with using AWS services. If the code was not maintained on the mainframe, it is likely it won’t be adequately maintained in the future either.

Conclusion

Overall, AWS makes a number of false claims that fit a pattern of deceptive claims made by companies that have no interest in what is true and simply want to gain business for themselves, denigrating mainframes without any evidence, purely through the use of insulting terminology.

The only analysis of AWS’s statements was found not in the Data Center Knowledge article but in a comment on the article, quoted below.

While there are many workloads that should be moved from the mainframe to the cloud, there are also many misconceptions (several in this article) about the mainframe. First, it is not the same mainframe from the 1960s. It has been continually modernized with the latest technology, in some cases ahead of cloud vendors. It runs Java, supports Rest APIs, runs analytics and AI/ML, supports DevOps processes, and is extremely secure with pervasive encryption. The mainframe is no longer “hulking” as it fits in a standard data center rack.

Choosing to run workloads in the cloud should be a decision made on an individual business application basis. Some workloads should move to the cloud and others should stay on the mainframe and possibly converted from COBOL to Java to gain costs and skills benefits. The cloud vendors would paint this as a black and white situation when it is not.

Beware the vendor who claims to have the golden hammer as that is a well recognized anti-pattern. Cloud vendors who tell you everything should move to the cloud are making everything look like a nail since all they have is a hammer.

Precisely.

Article Rating

This article by Data Center Knowledge is either a paid placement or written one step above an AWS PR release in order to continue or promote some financial relationship with AWS, most likely advertising dollars. AWS’s statements are deceptive and rely upon derogatory language vis-a-vis mainframes and accusations against mainframes that are either dated or were never true. I did not notice some of the charged terms they were using until my second pass through this article. The terms are designed to roll into your subconscious without you stopping to think, “Wait, is that true?”

These combined charged terms are exactly how SAP and many other vendors market, but the diminution of alternatives is quite reminiscent of SAP in particular. Look at how short the article is, and then look at how much text it took me to address its points. That is the power of charged words.

This article by Data Center Knowledge (or by AWS, as it is not apparent where Data Center Knowledge wrote anything) receives a 1.5 out of 10 for accuracy.