CoreWeave to Offer New 4th Gen Intel Xeon Scalable Processors for Accelerated CPU Performance

On January 10, Intel launched its new 4th Gen Intel Xeon Scalable processors, which will soon be available on CoreWeave Cloud. Our co-founder and CTO Brian Venturo joined Intel’s live stream launch event to discuss how this impacts our clients and their ability to work more efficiently.


Formerly codenamed Sapphire Rapids, the 4th Gen Intel Xeon Scalable processors deliver greater system power efficiency and performance that can’t be achieved by simply adding more cores.

CoreWeave will include 4th Gen Intel Xeon Scalable processors in its NVIDIA HGX H100 server clusters, coming to CoreWeave clients in Q1 2023 and available to reserve today. The new clusters are purpose-built to support AI workloads, delivering up to 3.5X better efficiency than the NVIDIA HGX A100.

Intel’s 4th Gen Xeon Scalable processors unlock new levels of performance across a wide breadth of AI workloads, including inference, natural language processing (NLP), model training, and deep learning. Together, these processors and the NVIDIA HGX H100 servers will deliver unmatched speed and performance for supercomputer instances in the cloud.

As a long-time partner of Intel, we’re thrilled to work together to bring top-rated infrastructure solutions to innovative companies paving the way in their fields, including AI, video effects and animation, healthcare, and more. Our co-founder and CTO Brian Venturo joined Intel’s live stream launch event on January 10 to discuss how this impacts our clients and their ability to work more efficiently.

“The 4th Gen and the Max family deliver extraordinary performance gains, efficiency, security capabilities, breakthrough new capacities in AI, cloud, and networking, delivering the world’s most powerful supercomputers that have ever been built,” said Patrick Gelsinger, CEO of Intel, during the live stream event.

Boost efficiency, security, and performance with built-in accelerators

The 4th Gen Intel Xeon Scalable processors feature built-in accelerators designed to accelerate performance across the fastest-growing workloads, such as AI, analytics, networking, storage, and HPC. By making the best use of CPU core resources, built-in accelerators can result in more efficient utilization, helping businesses achieve their sustainability goals.

Take a look at key accelerators and extensions within the 4th Gen Intel Xeon Scalable processors and the benefits they offer (a short code sketch follows the list):

  • Intel Advanced Matrix Extensions (Intel AMX): delivers exceptional AI training and inference performance through accelerated matrix multiply operations.
  • Intel Data Streaming Accelerator (Intel DSA): speeds up streaming data movement and transformation operations for faster data analytics and networking.
  • Intel In-Memory Analytics Accelerator (Intel IAA): increases query throughput for more responsive analytics and decreases the memory footprint for increased database performance and efficiency.
  • Intel Dynamic Load Balancer (Intel DLB): optimizes queue scheduling and packet processing to dynamically balance loads across CPU cores.
  • Intel Software Guard Extensions (Intel SGX): isolates sensitive code and data in hardware-protected enclaves, helping bring a zero-trust security strategy to life and unlocking new opportunities for business collaboration and insights, even with sensitive or regulated data, in addition to the processors’ other security features.
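
To make the Intel AMX item above a bit more concrete, here is a minimal sketch of bfloat16 inference on the CPU with PyTorch. On 4th Gen Intel Xeon processors, PyTorch’s oneDNN backend can dispatch these bfloat16 matrix multiplies to AMX when the hardware and software stack support it; the model, shapes, and batch size below are placeholders rather than anything CoreWeave-specific.

```python
import torch
import torch.nn as nn

# Placeholder model: a small MLP standing in for a real inference workload.
model = nn.Sequential(
    nn.Linear(1024, 4096),
    nn.ReLU(),
    nn.Linear(4096, 1024),
).eval()

batch = torch.randn(64, 1024)

# Run inference under bfloat16 autocast on the CPU. On 4th Gen Xeon,
# oneDNN can route these matmuls to Intel AMX tiles when available;
# otherwise it falls back to other vector paths such as AVX-512.
with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    output = model(batch)

print(output.shape, output.dtype)
```

In practice you would benchmark with and without bfloat16 to confirm the accelerator is actually engaged and that accuracy holds for your model.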

Scale across hundreds of thousands of CPU cores in seconds

Your computing infrastructure shouldn’t be a bottleneck. That’s why CoreWeave offers on-demand access to a massive scale of CPU servers — including the latest generation Intel Xeon CPUs.

Whether you need support for a general-purpose workload or raw horsepower at scale, we can match your project with the best compute resource possible. CoreWeave’s CPU-only instances provide the scale, range, and flexibility you need for jobs like final-frame rendering, data analysis, inference and model training, and video transcoding.
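
If you want to verify which of the accelerator features listed above a given Linux host actually exposes before committing a workload to it, the CPU feature flags are a quick first check. The sketch below is a minimal, Linux-only example, and nothing in it is CoreWeave-specific; it covers AMX and SGX, which show up as CPU flags, while Intel DSA, Intel IAA, and Intel DLB are exposed as devices with their own kernel drivers and are not detected here.

```python
# Report which 4th Gen Xeon accelerator-related CPU flags this Linux host exposes.
# Flag names are the ones the Linux kernel uses in /proc/cpuinfo.
FEATURES = {
    "amx_tile": "Intel AMX (tile architecture)",
    "amx_bf16": "Intel AMX bfloat16",
    "amx_int8": "Intel AMX int8",
    "sgx": "Intel SGX",
}


def cpu_flags(path="/proc/cpuinfo"):
    """Return the set of feature flags reported for the first logical CPU."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()


if __name__ == "__main__":
    flags = cpu_flags()
    for flag, description in FEATURES.items():
        status = "available" if flag in flags else "not reported"
        print(f"{description:30s} {status}")
```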

Learn more about our fleet of CPU servers and the NVIDIA HGX H100 servers, or reach out to our team to get started and ask questions.

 
