NVIDIA Grace Hopper Superchip

The breakthrough accelerated CPU for giant-scale AI and HPC applications.

Higher Performance and Faster Memory—Massive Bandwidth for Compute Efficiency

The NVIDIA GH200 Grace Hopper Superchip is a breakthrough accelerated CPU designed from the ground up for giant-scale AI and high-performance computing (HPC) applications. The superchip delivers up to 10X higher performance for applications processing terabytes of data, enabling scientists and researchers to find unprecedented solutions to the world’s most complex problems.

Take a Closer Look at the Superchip

The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace™ and Hopper™ architectures using NVIDIA® NVLink®-C2C to deliver a CPU+GPU coherent memory model for accelerated AI and HPC applications.

  • CPU+GPU designed for giant-scale AI and HPC
  • New 900 gigabytes per second (GB/s) coherent interface, 7X faster than PCIe Gen5
  • Supercharges accelerated computing and generative AI with HBM3 and HBM3e GPU memory
  • Runs all NVIDIA software stacks and platforms, including NVIDIA AI Enterprise, HPC SDK, and Omniverse™

GH200 is currently available.
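The coherent CPU+GPU memory model above can be illustrated with a short CUDA sketch. This is a minimal, hypothetical example, assuming a GH200-class system where NVLink-C2C hardware coherence (with address translation services enabled) lets a GPU kernel dereference ordinary CPU-allocated memory; on conventional PCIe systems this pattern would require managed (`cudaMallocManaged`) or pinned memory instead.

```cuda
#include <cstdio>
#include <cstdlib>

// Sketch: on a coherent Grace Hopper system, a kernel can read and write
// plain malloc'd system memory directly, with no cudaMemcpy staging.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;

    // Ordinary CPU allocation: on GH200 this memory is directly
    // GPU-accessible over the NVLink-C2C coherent interface (assumption).
    float *data = (float *)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.0f;

    // Launch the kernel on the host pointer itself.
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);
    free(data);
    return 0;
}
```

The design point this illustrates is that coherence removes the explicit host-to-device copy step, so the GPU can work at the granularity of the CPU's own allocations rather than staged buffers.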

NVIDIA GH200 NVL32

One Giant Superchip for LLMs, Recommenders, and GNNs

The NVIDIA GH200 NVL32 connects 32 GH200 superchips over NVLink, and its fast CPU-GPU memory interconnect expands the memory available to applications. This scalable design for hyperscale data centers is backed by the full suite of NVIDIA software and libraries, which accelerate thousands of GPU applications. GH200 NVL32 is ideal for workloads such as large language model (LLM) training, recommender systems, and graph neural networks (GNNs), delivering significant performance gains for AI and computing applications.

Explore Grace Hopper Reference Design for Modern Data Center Workloads

NVIDIA MGX

For AI training, inference, and HPC

  • NVIDIA GH200 Superchip
  • NVIDIA BlueField®-3
  • OEM-defined input/output (IO) and fourth-generation NVLink

NVIDIA unveils the next-generation GH200 Grace Hopper Superchip platform for the era of accelerated computing and generative AI.