Azure now has two Arms: the familiar Azure Resource Manager infrastructure description language and tools, and a new family of Azure VMs running on Ampere's Arm-based processors. This new hardware option is a big change for Microsoft's cloud, aimed at catching up with AWS's custom Graviton systems.
The arrival of Arm hardware in Azure is as much an economic decision as a technology one. If you’ve ever visited one of its hyperscale data centers, you’ll have been taken around huge rooms in stadium-size buildings full of racks packed with servers, storage, and networking hardware. The newer buildings are full to the brim with racks upon racks of the latest hardware, but some older rooms are half empty.
Why use Arm in the cloud?
Those half-empty data centers were initially designed for older, larger, less-efficient servers, with power feeds sized for that hardware. Newer hardware packs the same power draw into much less space, and replacing the original feeds would mean completely demolishing and rebuilding the data center. So when the older hardware was retired and new systems came in, they rapidly pushed the existing space to its power limits.
What if we could use systems with lower power demands? Suddenly those empty halls would be full again, with much more compute at a higher density but with no need to replace the original power feeds. The resulting savings in power and infrastructure could be passed on to users. That’s the role of Arm in Azure, providing those lower-powered servers that take advantage of available power supplies while supporting the growing demands of an industry that’s still in the early days of a cloud-native transition.
Arm on Azure: right here, right now
Azure’s first Arm-based VMs are now generally available, running on Ampere Altra-based servers, with support for most common Linuxes, including Ubuntu, Red Hat, and Debian. Although Windows Server isn’t available yet, you do have the option of using Arm builds of Windows 11 Pro and Enterprise for application development and testing. This allows you to use cloud-hosted Windows systems to build Arm64 versions of your code as part of a CI/CD (continuous integration and continuous delivery) build pipeline.
Alternatively, if you’re using .NET 6, you can use Arm-based virtual machines to host ASP.NET and console applications, giving you a low-power option for hosting web front ends and business logic. Microsoft’s aim for Windows on Arm, as well as for .NET, is to have no difference in capabilities between x86/x64 and Arm64, with code built for both architectures and loaded as needed, and with emulation handling any x86 or x64 code that hasn’t been rebuilt for Arm.
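The loader and emulation behavior described above belongs to Windows and .NET, but the same question comes up in your own build and deployment scripts: which binary should this machine get? Here is a minimal Python sketch of that kind of architecture check; the helper name and the mapping of platform strings are purely illustrative and not part of any Microsoft tooling.

```python
import platform

def normalized_arch() -> str:
    """Map the platform's machine string to a simple architecture label."""
    machine = platform.machine().lower()
    if machine in ("arm64", "aarch64"):
        return "arm64"
    if machine in ("x86_64", "amd64"):
        return "x64"
    return machine  # fall through for anything unexpected

if __name__ == "__main__":
    # On an Ampere-based Azure VM or Windows 11 on Arm this should print "arm64".
    print(f"Running on {normalized_arch()}")
```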
The Ampere Altra servers in Azure deliver three different classes of VMs, with one physical core per virtual CPU. As they’re designed for high-density operations, you won’t find configurations that match some of the higher-end x64 systems in Azure, but they should cope with most common workloads.
The Epsv5 and Epdsv5 series of VMs offer up to 8GB of RAM per vCPU, run at 3GHz, and are designed to host enterprise workloads such as databases and in-memory caches. You can scale up to 32 vCPUs, though the Epsv5 has no directly attached SSD storage. If you want local storage for speed, you need the Epdsv5, which has up to 1,200GB of local SSD alongside Azure’s standard storage options.
The Dpsv5 and Dpdsv5 VMs are similar but are intended to host general-purpose workloads, so they offer 4GB of RAM per vCPU. That makes them ideal for basic servers, such as MySQL databases or .NET applications running on Kestrel. You can have as many as 64 vCPUs, and again, the Dpdsv5 option adds local storage, with support for up to 2,400GB of local SSD.
For smaller workloads, there are the Dplsv5 and Dpldsv5 VMs. Here you also get the choice of local or remote storage, but RAM per vCPU is limited to 2GB, with up to 64 vCPUs in one VM. That limited memory means some work to ensure your host OS runs only the services you need. The resulting platform is intended for scaling out microservices, where you may need to spin up new instances of a service quickly. One option for this SKU is hosting Kubernetes nodes, where you’re running many instances of the same service and need dense deployments to get the performance your application requires.
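To make the trade-offs between the three families concrete, here is a small Python sketch that picks a series from a memory-per-vCPU requirement. The series names and memory ratios come from the figures above; the selection logic and the fallback message are simply illustrative.

```python
# Memory per vCPU for Azure's Arm VM families, as described above (GB per vCPU).
ARM_VM_FAMILIES = {
    "Epsv5/Epdsv5": 8,    # memory-heavy: databases, in-memory caches
    "Dpsv5/Dpdsv5": 4,    # general purpose: MySQL, .NET on Kestrel
    "Dplsv5/Dpldsv5": 2,  # scale-out microservices, Kubernetes nodes
}

def suggest_family(gb_per_vcpu_needed: float) -> str:
    """Return the smallest family that meets the memory-per-vCPU requirement."""
    candidates = [(ratio, name) for name, ratio in ARM_VM_FAMILIES.items()
                  if ratio >= gb_per_vcpu_needed]
    if not candidates:
        return "No Arm family fits; consider a memory-optimized x64 SKU"
    return min(candidates)[1]

print(suggest_family(3))  # -> Dpsv5/Dpdsv5
```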
Pricing depends on where you’re located. In Azure’s East US region, a 2 vCPU Dpdsv5 system with 75GB of storage will vary from just over $25 per month for a three-year reserved instance to almost $66 per month using pay-as-you-go. The more vCPUs you have, the more expensive: A 64 vCPU system with 2,400GB of storage and 208GB of RAM is about $802 per month for a three-year reserved instance, and almost $2,112 per month for pay-as-you-go.
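A quick back-of-the-envelope comparison using the East US figures quoted above shows how much a three-year reservation saves over pay-as-you-go. Prices change by region and over time, so treat this as a sketch rather than a quote.

```python
# Approximate monthly prices from the figures above (East US, Dpdsv5 series).
prices = {
    "2 vCPU, 75GB storage":     {"reserved_3yr": 25.0,  "pay_as_you_go": 66.0},
    "64 vCPU, 2,400GB storage": {"reserved_3yr": 802.0, "pay_as_you_go": 2112.0},
}

for vm, p in prices.items():
    saving = 1 - p["reserved_3yr"] / p["pay_as_you_go"]
    print(f"{vm}: {saving:.0%} saved with a three-year reservation")

# Both configurations come out at roughly a 62% saving.
```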
Choosing an Arm VM
How can you find out if the new Arm VMs suit your workloads? With many different base server specifications in Azure, spanning AMD and Intel CPUs as well as Arm, Microsoft has introduced the Azure Compute Unit (ACU) to provide a baseline benchmark for comparing VM hosts. The ACU is standardized on a small VM, the Standard_A1 SKU, which is assigned a value of 100.
Other SKUs are benchmarked against that standard, so you can compare different CPU types and quickly see if your code will be able to take advantage of an alternate VM type. Unfortunately, Microsoft has yet to publish ACU values for its Arm SKUs, but you can make a reasonable guess by comparing them with other, similar VMs.
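Until Microsoft publishes ACU figures for the Arm SKUs, any comparison has to work from estimates. The sketch below shows the arithmetic: scores are normalized against the Standard_A1 baseline of 100, and the non-baseline values here are placeholders, not published numbers.

```python
# ACU is defined relative to the Standard_A1 SKU, which is pegged at 100.
BASELINE_ACU = 100

# Hypothetical per-vCPU scores -- placeholders only, since Microsoft has not
# published ACU values for its Arm SKUs.
acu_per_vcpu = {
    "Standard_A1 (baseline)": 100,
    "Comparable x64 SKU (placeholder)": 200,
    "Dpsv5 Arm SKU (placeholder)": 180,
}

for sku, acu in acu_per_vcpu.items():
    print(f"{sku}: {acu / BASELINE_ACU:.2f}x the baseline per vCPU")
```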
You'll find the most value by using Arm in applications that require a lot of dense compute. For now, that’s likely to be containers in Kubernetes, and Azure already supports Arm nodes in AKS. This feature is currently rolling out across the Azure cloud, but you should be able to find a region with access fairly easily. There are already Arm builds of Microsoft’s CBL-Mariner container host, and with Arm support for most Linuxes easy to find, you should be able to build, test, and deploy Arm binary-based containers relatively quickly.
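If you want to try Arm nodes in AKS, one route is the Azure SDK for Python, adding an Arm-based node pool alongside an existing cluster's x64 pool. This is a sketch under several assumptions: the azure-identity and azure-mgmt-containerservice packages are installed, the subscription, resource group, and cluster names are placeholders, and the Standard_D4pds_v5 size must be available in your region. The Azure CLI offers an equivalent, and arguably simpler, path.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

# Placeholders -- substitute your own subscription, resource group, and cluster.
subscription_id = "<subscription-id>"
resource_group = "my-resource-group"
cluster_name = "my-aks-cluster"

client = ContainerServiceClient(DefaultAzureCredential(), subscription_id)

# Add a user node pool backed by Arm-based Dpdsv5 VMs.
poller = client.agent_pools.begin_create_or_update(
    resource_group,
    cluster_name,
    "armpool",
    {
        "count": 3,
        "vm_size": "Standard_D4pds_v5",
        "os_type": "Linux",
        "mode": "User",  # keep system pods on the existing x64 pool
    },
)
print(poller.result().provisioning_state)
```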
Microsoft has long been rumored to be running some of its own services on Arm, so it’s good to see its Ampere hardware finally make a public appearance. Its commitment to the platform goes a lot further than cloud hardware: It has also been working to bring its OpenJDK build to Arm, with a port to the AArch64 architecture now part of the distribution. Java remains an important enterprise platform, so with both .NET and OpenJDK running on Azure’s Arm systems, you have a choice in how you build and deploy code.
With hyperscale data centers like Azure’s requiring significant amounts of power, a high-density, low-power alternative to Intel and AMD is an important input to any buying decision, especially when businesses complete their annual environmental impact assessments. It’ll be interesting to watch how Microsoft’s Azure Arm offering evolves as next-generation hardware based on Arm’s Neoverse platform architecture arrives and as Microsoft continues rolling out Arm versions of its supported operating systems. Could an Arm-powered Windows Server release be just around the corner?