Massive memory supercomputing for emerging AI.
Open up enormous potential in the age of generative AI with a new class of AI supercomputers that interconnects NVIDIA Grace Hopper™ Superchips into a single GPU. The NVIDIA DGX™ GH200 is designed to handle terabyte-class models for massive recommender systems, generative AI, and graph analytics, offering a vast shared memory space with linear scalability for giant AI models.
The NVIDIA DGX GH200 is the only AI supercomputer that offers a massive shared memory space across interconnected NVIDIA Grace Hopper Superchips, providing developers with more memory to build giant models.
Grace Hopper Superchips eliminate the need for a traditional PCIe CPU-to-GPU connection by combining an NVIDIA Grace™ CPU with an NVIDIA Hopper™ GPU on the same package, increasing bandwidth by 7X and slashing interconnect power consumption by more than 5X.
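To illustrate what a coherent CPU–GPU memory space means for developers, here is a minimal CUDA sketch, not DGX GH200-specific code: a single managed allocation is read and written by both the CPU and the GPU, with no explicit staging copies across a PCIe link. The kernel name and buffer size are hypothetical, chosen only for illustration.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: scale every element of the buffer in place on the GPU.
__global__ void scale(float *data, size_t n, float factor) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 28;  // illustrative size; real models can span terabytes
    float *data = nullptr;

    // One allocation visible to both CPU and GPU -- no explicit cudaMemcpy staging.
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // CPU writes the buffer
    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0f);  // GPU updates the same buffer
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // CPU reads the GPU's result directly
    cudaFree(data);
    return 0;
}
```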
Build giant models in weeks instead of months with a turnkey DGX GH200 deployment. This full-stack, data-center-class solution includes integrated software and white-glove services from NVIDIA that span design to deployment, speeding the ROI of AI.
Take a deep dive into what makes DGX GH200 ideal for developing giant AI models.
The NVIDIA DGX GH200 connects Grace Hopper Superchips with the NVLink Switch System.
Built from the ground up for enterprise AI, the NVIDIA DGX platform incorporates the best of NVIDIA software, infrastructure, and expertise in a modern, unified AI development and training solution.