NVIDIA Grace Hopper Superchips designed for accelerated generative AI enter full production


NVIDIA announced that the NVIDIA® GH200 Grace Hopper Superchip is in full production, set to power systems coming online worldwide to run complex AI and HPC workloads.

The GH200-powered systems join more than 400 system configurations powered by different combinations of NVIDIA’s latest CPU, GPU and DPU architectures — including NVIDIA Grace, NVIDIA Hopper, NVIDIA Ada Lovelace and NVIDIA BlueField® — created to help meet the surging demand for generative AI.

At COMPUTEX, NVIDIA founder and CEO Jensen Huang revealed new systems, partners and additional details surrounding the GH200 Grace Hopper Superchip, which brings together the Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink®-C2C interconnect technology. This delivers up to 900GB/s of total bandwidth — 7x higher than the standard PCIe Gen5 lanes found in traditional accelerated systems — providing the compute capability to address the most demanding generative AI and HPC applications.
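As a back-of-the-envelope sketch of where the "7x" figure comes from: assuming a standard PCIe Gen5 x16 link (32 GT/s per lane with 128b/130b encoding, roughly 63 GB/s per direction, so about 126 GB/s bidirectional — these PCIe parameters are background assumptions, not figures stated in the announcement), the 900GB/s NVLink-C2C total works out to roughly a 7x advantage:

```python
# Rough check of the NVLink-C2C vs. PCIe Gen5 bandwidth comparison.
# PCIe figures below are common Gen5 x16 assumptions, not from the announcement.
GT_PER_S = 32                # PCIe Gen5 raw transfer rate per lane (GT/s)
LANES = 16                   # a full x16 link
ENCODING = 128 / 130         # 128b/130b line-code efficiency

pcie_one_way = GT_PER_S * LANES * ENCODING / 8   # GB/s per direction
pcie_total = 2 * pcie_one_way                    # bidirectional GB/s, ~126

nvlink_c2c_total = 900       # GB/s total, per NVIDIA's GH200 figure

print(f"PCIe Gen5 x16 total: ~{pcie_total:.0f} GB/s")
print(f"NVLink-C2C advantage: ~{nvlink_c2c_total / pcie_total:.1f}x")
```

Under these assumptions the ratio lands at roughly 7.1x, consistent with the "7x higher" claim.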

“Generative AI is rapidly transforming businesses, unlocking new opportunities and accelerating discovery in healthcare, finance, business services and many more industries,” said Ian Buck, vice president of accelerated computing at NVIDIA. “With Grace Hopper Superchips in full production, manufacturers worldwide will soon provide the accelerated infrastructure enterprises need to build and deploy generative AI applications that leverage their unique proprietary data.”

Global hyperscalers and supercomputing centers in Europe and the U.S. are among several customers that will have access to GH200-powered systems.

Hundreds of Accelerated Systems and Cloud Instances
Taiwan manufacturers are among the many system manufacturers worldwide bringing to market a wide variety of systems powered by different combinations of NVIDIA accelerators and processors. These include AAEON, Advantech, Aetina, ASRock Rack, ASUS, GIGABYTE, Ingrasys, Inventec, Pegatron, QCT, Tyan, Wistron and Wiwynn — all featured in Huang’s COMPUTEX keynote address today as key partners.

Additionally, global server manufacturers Cisco, Dell Technologies, Hewlett Packard Enterprise, Lenovo, Supermicro and Eviden, an Atos company, offer a broad array of NVIDIA-accelerated systems.

Cloud partners for NVIDIA H100 include Amazon Web Services (AWS), Cirrascale, CoreWeave, Google Cloud, Lambda, Microsoft Azure, Oracle Cloud Infrastructure, Paperspace and Vultr.

NVIDIA L4 GPUs are generally available on Google Cloud.

Full-Stack Computing Across Accelerated Systems
The coming portfolio of systems accelerated by the NVIDIA Grace, Hopper and Ada Lovelace architectures provides broad support for the NVIDIA software stack, which includes NVIDIA AI, the NVIDIA Omniverse platform and NVIDIA RTX technology.

NVIDIA AI Enterprise, the software layer of the NVIDIA AI platform, offers over 100 frameworks, pretrained models and development tools to streamline development and deployment of production AI, including generative AI, computer vision and speech AI.

The NVIDIA Omniverse development platform for building and operating metaverse applications enables individuals and teams to work across multiple software suites and collaborate in real time in a shared environment. The platform is based on the Universal Scene Description framework, an open, extensible 3D language for virtual worlds.

The NVIDIA RTX platform fuses ray tracing, deep learning and rasterization to fundamentally transform the creative process for content creators and developers with support for industry-leading tools and APIs. Applications built on the RTX platform bring the power of real-time photorealistic rendering and AI-enhanced graphics, video and image processing to enable millions of designers and artists to create their best work.