AWS’ secret weapon is revolutionizing computing – SiliconANGLE


Amazon Web Services Inc. is pointing the way to a revolution in system architecture.

Much in the same way that AWS defined the cloud operating model last decade, we believe it is once again leading in future systems. The secret sauce underpinning these innovations is specialized designs that break the stranglehold of inefficient and bloated centralized processing architectures. We believe these moves position AWS to accommodate a diversity of workloads that span cloud, data center as well as the near and far edge.

In this Breaking Analysis, we’ll dig into the moves that AWS has been making, explain how they got here, why we think this is transformational for the industry and what this means for customers, partners and AWS’ many competitors.

AWS’ architectural journey: the path to Nitro and Graviton

The infrastructure-as-a-service revolution started by AWS gave easy access to virtual machines that could be deployed and decommissioned on-demand. Amazon used a highly customized version of Xen that allowed multiple VMs to run on one physical machine. The hypervisor functions were controlled by Intel Corp.’s x86 central processing unit chips.

According to Amazon.com Inc. Chief Technology Officer Werner Vogels, as much as 30% of the processing was wasted, meaning it was supporting hypervisor functions and managing other parts of the system, including the storage and networking. These overheads led to AWS developing custom application-specific integrated circuits that helped accelerate workloads.

In 2013, AWS began shipping custom chips in partnership with Advanced Micro Devices Inc. and announced EC2 C3 instances. But as the AWS cloud scaled up, Amazon wasn’t satisfied with the performance gains and it was seeing architectural limits down the road.

That prompted AWS to start a partnership with Annapurna Labs in 2014 and the cloud giant launched EC2 C4 instances in 2015. The ASIC in C4 optimized offload functions for storage and networking but still relied on Intel Xeon as the control point.

AWS shelled out a reported $350 million to acquire Annapurna in 2015 – a meager sum to acquire the secret sauce of its future system design. This acquisition led to a modern version of Project Nitro in 2017. [Nitro offload cards were first introduced in 2013].

At this time, AWS introduced C5 instances, replaced Xen with KVM and more tightly coupled the hypervisor with the ASIC. Last year, Vogels said that this milestone offloaded the remaining components, including the control plane and rest of the I/O, and enabled nearly 100% of the processing to support customer workloads. It also enabled a bare-metal version of compute that spawned the partnership with VMware Inc. to launch VMware Cloud on AWS.

Then in 2018, AWS took the next step and introduced Graviton, its custom designed Arm-based chip. This broke the dependency on x86 and launched a new era of architecture, which now supports a wide variety of configurations to support data-intensive workloads. These moves set the framework for other AWS innovations, including new chips optimized for machine learning and artificial intelligence, from training to inference.

The bottom line is that AWS has architected an approach that offloaded the work currently done by the central processor. It has set the stage for the future allowing shared memory, memory disaggregation and independent resources that can be configured to support workloads from the cloud to the edge – at much lower cost than can be achieved with general-purpose approaches.

Nitro is the key to this architecture. To summarize: AWS Nitro is a set of custom hardware and software that runs on Arm-based chips spawned from Annapurna. AWS has moved the hypervisor, network and storage virtualization to dedicated hardware that frees up the CPU to run more efficiently. The reason this is so compelling in our view is that AWS now has the architecture in place to compete at every level of the massive total addressable market, comprising public cloud, on-premises data centers and both the near and far edge.

Setting the direction for the entire industry

The chart below pulls data from the Enterprise Technology Research data set. It lays out key players competing for the future of cloud, data center and the edge. We’ve superimposed Nvidia Corp. and Intel. They don’t show up directly in the ETR survey, but they clearly are platform players in the mix.

The data shows Net Score on the vertical axis – that’s a measure of spending velocity. Market Share is on the horizontal axis, which is a measure of pervasiveness in the data set. We’re not going to dwell on the relative positions here, rather let’s comment on the players and start with AWS. We’ve laid out the path AWS took to get here and we believe it is setting the direction for the future.

AWS

AWS is really pushing hard on migration from x86 to its Arm-based platforms. Patrick Moorhead at the Six Five Summit spoke with David Brown, who heads EC2 at AWS. Brown talked extensively about migrating from x86 to AWS’ Arm-based Graviton2. And he announced a new developer challenge to accelerate migration to Arm.

The carrot Brown laid out for customers is 40% better price-performance. He gave the example of a customer running 100 server instances that can do the same work with 60 servers by migrating to Graviton2 instances. There’s some migration work involved for customers, but the payoff is large.
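To make Brown’s example concrete, here is a minimal back-of-the-envelope sketch of that fleet math. The hourly rate and hours-per-month figures are hypothetical placeholders for illustration, not actual EC2 prices; only the 100-to-60 instance reduction comes from the article.

```python
def monthly_cost(instances: int, hourly_rate: float, hours: int = 730) -> float:
    """Total monthly fleet cost for a given instance count and hourly rate."""
    return instances * hourly_rate * hours

# Hypothetical rate of $0.10/hour; the point is the ratio, not the price.
x86_fleet = monthly_cost(instances=100, hourly_rate=0.10)      # 100 x86 servers
graviton_fleet = monthly_cost(instances=60, hourly_rate=0.10)  # same work on 60 Graviton2 servers

savings = 1 - graviton_fleet / x86_fleet
print(f"x86 fleet:      ${x86_fleet:,.0f}/mo")
print(f"Graviton fleet: ${graviton_fleet:,.0f}/mo")
print(f"Savings:        {savings:.0%}")
```

With the same per-instance price, doing the work on 60 instances instead of 100 is a 40% reduction in spend – which is where the headline price-performance number comes from.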

Generally, we bristle at the thought of migrations. The business value of migrations is a function of the benefit achieved, less the cost of the migration, which must account for any business disruption, code freezes, retraining and time-to-value variables. But it seems in this case, AWS is minimizing the migration pain.

The benefit to customers, according to Brown, is that AWS currently offers something like 400 different EC2 instance types. As we reported earlier this year, nearly 50% of the new EC2 instances shipped last year were Arm-based. And AWS is working hard to accelerate the pace of migration away from x86 onto its own design.

Nothing could be clearer.

Intel

Intel is finally responding in earnest to the market forces. We believe Intel is essentially taking a page out of Arm’s playbook. We’ll dig into that a bit today. In 2015, Intel paid $16.7 billion for Altera, a maker of field-programmable gate arrays.

Also at the Six Five Summit, Navin Shenoy of Intel presented details of what Intel is calling an Infrastructure Processing Unit or IPU. This is a departure from Intel norms where everything is controlled by a central processing unit. IPUs are basically smart network interface cards, as are data processing units – don’t get caught up in the acronym soup. As we’ve reported, this is all about offloading work, disaggregating memory and evolving systems-on-chip or SoCs and systems on package or SoPs.

But let this sink in a bit. Intel’s moves this past week – it seems to us anyway – are clearly designed to create a platform that is Nitro-like. And the basis of that platform is a $16.7 billion acquisition. Compare that to AWS’ $350 million tuck-in of Annapurna. That’s incredible.

Now, Shenoy said in his presentation “We’ve already deployed IPUs using FPGAs in very high volume at Microsoft Azure and we’ve recently announced partnerships with Baidu, JD Cloud and VMware.”

Let’s look at VMware in particular.

VMware

VMware is the other really prominent platform player in this race. In 2020, VMware announced Project Monterey, which is based on those FPGAs from Intel. So VMware is in the mix and it chose to work with Intel most likely for a variety of reasons. One is that the software running on VMware has been built for x86 and there’s a huge installed base. The other is that new Intel Chief Executive Pat Gelsinger was heading VMware at the time that Project Monterey was conceived – and he is an Intel legend and sees the future clearly.

Regardless, VMware has a Nitro-like offering. In our view its optionality is limited by Intel, but at least it’s in the game and appears to be ahead of the competition in this space.

Other hyperscalers

What about Microsoft Corp., Google LLC and Alibaba Group Ltd.? Suffice it to say that despite the relationship between Intel and Microsoft, we strongly believe Microsoft and Google, as well as Alibaba, will follow AWS’ lead and develop an Arm-based platform like Nitro. They have to, in our opinion, to keep pace with AWS.

The rest of the data center pack – Dell, Cisco, HPE, IBM and Oracle

Dell Technologies Inc. has VMware. Check. Despite the impending split, we don’t expect any real change there. Dell will leverage whatever VMware does and do it better than anyone else.

Cisco Systems Inc. is interesting in that it just revamped its UCS but we don’t see any evidence that it has Nitro-like plans in its roadmap. Same with Hewlett Packard Enterprise Co. Both of these companies have history and capabilities around silicon: Cisco designs its own chips today for carrier-class use cases and HPE, as we’ve reported, probably has remnants of The Machine hanging around. But both companies are very likely to follow VMware’s lead and go with an Intel-based design.

What about IBM? Well, we don’t really know. We think the best thing IBM could do would be to move the IBM cloud to an Arm-based Nitro-like platform. And we think the mainframe should move to Arm as well. It’s just too expensive to build a specialized mainframe CPU these days.

And if we were in charge of Oracle Corp., we would build, or partner to build, an Arm-based, Nitro-like database cloud, where Oracle runs cheaper, faster and consumes less energy than any other platform running Oracle. And we’d go one step further and optimize for competitive databases in the Oracle Cloud – and just run the table on cloud database. Imagine Snowflake running in the Oracle Cloud!

A word on FPGAs

We’ve never been overly excited about the FPGA market. Admittedly, FPGAs are not this author’s wheelhouse, but we’ve never felt these mega-acquisitions were justified. Intel’s move with Altera and AMD acquiring Xilinx for $35 billion — both of these were inflated in our view, especially when we compare these with AWS’ Annapurna acquisition. We found a nice overview of the FPGA market from The Next Platform, which positions FPGAs as a declining market. We’re not surprised.

At least AMD is using its inflated stock price to do the deal, but we honestly think that the Arm ecosystem will obliterate the FPGA market by making it simpler and faster to move to SoC with far better performance, flexibility, integration and mobility. We see FPGAs as low-volume and not nearly as attractive as programmable innovations coming from the Arm ecosystem.

We reached out to Patrick Moorhead of Moor Insights & Strategy to get his perspective on the AMD Xilinx deal. Here are his thoughts:

OK, so that’s encouraging feedback. It looks financially viable given the inflated market conditions and the use of AMD’s stock. We feel that if AMD focuses on integrating Arm components into its designs, it could accelerate its business.

We still can’t let go of the brilliance of Amazon’s acquisition of Annapurna for $350 million. Amazing.

Intel’s vision for the data center of the future

Below is a chart that Shenoy showed depicting Intel’s vision of the future:

Let’s break this down. What you see above are the IPUs – intelligent NICs embedded in the four blocks shown and communicating across a fabric. General-purpose compute is in the upper left, machine intelligence is in the bottom left, storage services are in the top right, and the various alternative processors are in the bottom right.

This is Intel’s view of how to share resources and go from a world where everything is controlled by a central processing unit to a more independent set of resources that can work in parallel.

And Gelsinger has talked about all the cool tech this will allow Intel to incorporate, including PCIe Gen 5 and CXL memory interfaces that enable memory sharing and disaggregation, 5G and 6G connectivity, and so forth.

How Arm views the future

First, Arm marketing tends to be really techie. But there are definite similarities with Intel’s vision, as you can see below, especially on the right-hand side as highlighted in the red dotted area. You’ve got blocks of different processor types that are programmable. Notice the high-bandwidth memory (HBM3) + DDR5 on the two sides, bookending the blocks – that’s shared across the system. And it’s connected by PCIe Gen 5, CXL or CCIX, multi-die/socket.

OK, so you maybe are looking at this, saying two sets of block diagrams – big deal. Although there are similarities around disaggregation, implied shared memory and the use of advanced standards, there are also some notable differences.

In particular, Arm is at an SoC level, whereas Intel is talking FPGAs. Neoverse, Arm’s architecture, is shipping in test mode and will have product in end markets by late 2022. Intel is talking about 2025, or 2024 at best. Arm’s roadmap is much clearer. Now, Intel said it will release more details in October, so maybe we’ll recalibrate at that point but it’s clear to us that Arm is way further along.

The other major difference is volume. Intel is coming at this from the high-end data center and presumably plans to push down-market to the edge. Arm is coming at this from the edge — low cost, low power, superior price-performance. Arm is already winning at the edge and, based on the data we shared earlier from AWS, it’s clearly gaining ground in the enterprise.

History strongly suggests that the volume approach will win.

Implications for customers and the ecosystem

Let’s wrap by looking at what this means for customers and the partner ecosystem.

The first point we’d make is: follow the consumer apps. The capabilities in consumer apps like image processing, natural language processing, facial recognition and voice translation – these inference capabilities going on today in mobile will find their way into the enterprise ecosystem.

Ninety percent of costs associated with machine learning in the cloud are around inference. In the future, much of the AI in the enterprise, and most certainly at the edge, will be real-time inference. It’s not happening today in the enterprise because it’s too expensive and immature outside consumer use cases. This is why AWS is building custom chips for inferencing. It wants to drive costs down and increase adoption.

The second point is you should start experimenting and see what you can do with Arm-based platforms. Moore’s Law is accelerating and Arm is in the lead in terms of performance, price-performance, cost and energy consumption. By moving some workloads onto Graviton, for example, you’ll see what types of cost savings you can drive, and possibly new applications you can deliver to the business. Put a couple of engineers on the task and see what they can do in two or three weeks’ time. You might be surprised or you might say it’s too early for us – but find out. You may strike gold.
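For the engineers taking up that two-to-three-week experiment, a sensible first step on a trial Graviton instance is simply confirming that the interpreter and any native dependencies load cleanly on aarch64. Here is a minimal sketch; the module list is a placeholder – substitute your application’s own native-extension dependencies.

```python
import importlib
import platform

def arch_report(modules: list[str]) -> dict[str, bool]:
    """Return which of the given modules import cleanly on this machine."""
    results = {}
    for name in modules:
        try:
            importlib.import_module(name)
            results[name] = True
        except ImportError:
            results[name] = False
    return results

# 'aarch64' on Graviton, 'x86_64' on Intel/AMD instances.
print(f"Machine: {platform.machine()}")

# 'json' and 'sqlite3' are stand-in names; list your real dependencies here.
print(arch_report(["json", "sqlite3"]))
```

Anything that reports False is where the migration work lives – typically a native wheel or compiled extension that needs an Arm build.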

We would also suggest that you talk to your hybrid cloud provider and find out if they have a Nitro. We shared that VMware has a clear path. What about your other strategic suppliers? What’s their roadmap? What’s the timeframe to move from where they are today — faster boxes every two years with a professional services-led as-a-service pricing model — to something that resembles Nitro and a much more attractive software model? How are they thinking about reducing your costs and supporting new workloads at scale?

And for independent software vendors, consider those consumer capabilities we discussed earlier – all these mobile and automated systems in cars now and things like biometrics. These machine intelligence capabilities are going to find their way into your software. And your competitors are actively porting to Arm. They’re embedding these consumer-like capabilities into their apps. Are you? We would strongly recommend you take a look at that, talk to your cloud suppliers and see what they can do to help you innovate, run faster and cut costs.

Doing nothing and watching to see how the market evolves is a viable strategy sometimes. We don’t think this is one of those times.

Keep in touch

Remember these episodes are all available as podcasts wherever you listen. Email [email protected], DM @dvellante on Twitter and comment on our LinkedIn posts.

Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE.  If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at [email protected]

Watch the full video analysis:

Image: alepatika23

Show your support for our mission by joining our Cube Club and Cube Event Community of experts. Join the community that includes Amazon Web Services and soon to be Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger and many more luminaries and experts.

Join Our Community 

We are holding our second cloud startup showcase on June 16. Click here to join the free and open Startup Showcase event.

 

“TheCUBE is part of re:Invent, you know, you guys really are a part of the event and we really appreciate your coming here and I know people appreciate the content you create as well” – Andy Jassy

We really want to hear from you. Thanks for taking the time to read this post. Looking forward to seeing you at the event and in theCUBE Club.




