NVIDIA at Hot Chips 35

Join us online or in person at Stanford University, August 27–29, for this year’s Hot Chips to learn how the NVIDIA accelerated computing platform is reimagining the data center for the age of AI. Featuring three powerful architectures (GPU, DPU, and CPU) and a rich software stack, it’s built to take on the modern data center’s toughest challenges. Attend our featured sessions to learn more.

NVIDIA Takes Machine Learning to the Next Level

NVIDIA recaps Chief Scientist Bill Dally’s keynote from Hot Chips 35. Learn about the dramatic gains in hardware performance that spawned generative AI, and the ideas for future speedups that are driving machine learning to new heights.

Schedule at a Glance

Sun, Aug 27 | 9:00–9:30 a.m. PT
ML Inference Overview
Tutorial | Micah Villmow (NVIDIA)

Sun, Aug 27 | 3:00–4:00 p.m. PT
UCIe Protocol
Tutorial | Marvin Denman (NVIDIA), Swadesh Choudhary (Intel)

Mon, Aug 28 | 11:30 a.m.–12:00 p.m. PT
Arm’s Neoverse V2 platform: leadership performance and power efficiency for next-generation cloud computing, ML and HPC workloads
Session | Magnus Bruce (Arm)

Tues, Aug 29 | 9:00–10:00 a.m. PT
Hardware for Deep Learning
Keynote | Bill Dally (NVIDIA)

Tues, Aug 29 | 11:30 a.m.–12:00 p.m. PT
NVIDIA’s Resource Fungible Network Processing ASIC
Session | Kevin Deierling (NVIDIA)

Architectures for the Modern Data Center

Hopper GPU Architecture

The NVIDIA Hopper™ architecture is powering the next generation of accelerated computing with unprecedented performance, scalability, and security for every data center. With the ability to securely scale diverse workloads—from small enterprise to exascale HPC and trillion-parameter AI—Hopper enables brilliant innovators to fulfill their life's work at the fastest pace in human history.

Grace CPU Architecture

The NVIDIA Grace™ CPU delivers high performance, power efficiency, and high-bandwidth connectivity for HPC and AI applications. The NVIDIA Grace Hopper Superchip is a breakthrough integrated CPU+GPU for giant-scale AI and HPC applications, while for CPU-only HPC applications, the NVIDIA Grace CPU Superchip offers higher performance, memory bandwidth, and energy efficiency than today’s leading server chips.

BlueField DPU Architecture

The NVIDIA® BlueField® data processing unit (DPU) ignites unprecedented innovation for data centers and supercomputing infrastructures. By offloading, accelerating, and isolating a broad range of advanced networking, storage, and security services, BlueField DPUs provide a secure and accelerated infrastructure for any workload—in any environment, from the cloud to the data center to the edge.


Explore NVIDIA Solutions

NVIDIA DGX H100 Quick Tour

Explore DGX H100 with Jensen Huang as your guide. Check out the accelerated computing DGX engine behind the large language model (LLM) breakthrough, and learn why the NVIDIA DGX platform is the blueprint for half of the Fortune 100 companies building AI infrastructure worldwide.

NVIDIA DGX GH200 AI Supercomputer

Increasing compute demands in AI and HPC, including an emerging class of trillion-parameter models, are driving the need for multi-node, multi-GPU systems with seamless, high-speed communication between every GPU. The DGX GH200 computing platform uses a fast, scalable interconnect, the NVLink Switch System, to offer 144 terabytes (TB) of shared memory with linear scalability for giant AI models.
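To make the idea of GPUs reaching into each other’s memory concrete, here is a minimal single-node CUDA sketch (an illustration only, not the DGX GH200 programming model or NVIDIA sample code) in which a kernel running on one GPU reads a buffer that physically resides on another GPU through peer access; on NVLink-connected systems, this traffic travels over the NVLink fabric.

#include <cuda_runtime.h>
#include <cstdio>

// Doubles each element of src into dst.
__global__ void scale(const float* src, float* dst, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) dst[i] = 2.0f * src[i];
}

int main() {
    const int n = 1 << 20;

    // Check that GPU 0 can address GPU 1's memory directly (assumes two GPUs).
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (!canAccess) { printf("peer access not supported\n"); return 0; }

    // Allocate a source buffer on GPU 1.
    float* src1 = nullptr;
    cudaSetDevice(1);
    cudaMalloc(&src1, n * sizeof(float));
    cudaMemset(src1, 0, n * sizeof(float));

    // Allocate the destination on GPU 0 and enable access to GPU 1's memory.
    float* dst0 = nullptr;
    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);
    cudaMalloc(&dst0, n * sizeof(float));

    // The kernel runs on GPU 0 but reads src1, which lives on GPU 1,
    // over the GPU-to-GPU interconnect.
    scale<<<(n + 255) / 256, 256>>>(src1, dst0, n);
    cudaDeviceSynchronize();

    cudaFree(dst0);
    cudaSetDevice(1);
    cudaFree(src1);
    return 0;
}

In practice, libraries such as NCCL manage this kind of cross-GPU access for multi-GPU training; the snippet only shows the underlying capability.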

NVIDIA Hopper Architecture in Depth

The NVIDIA H100 Tensor Core GPU is our ninth-generation data center GPU designed to deliver an order-of-magnitude performance leap for large-scale AI and HPC over the prior-generation NVIDIA A100 Tensor Core GPU. Get an in-depth look at the new NVIDIA H100 GPU and the new features of the NVIDIA Hopper architecture.

NVIDIA GH200 Grace Hopper Architecture

The NVIDIA® GH200 Grace Hopper architecture brings together the groundbreaking performance of the NVIDIA Hopper GPU and the versatility of the NVIDIA Grace™ CPU in a single superchip, connected with the high-bandwidth, memory-coherent NVIDIA NVLink® Chip-2-Chip (C2C) interconnect, and adds support for the new NVLink Switch System. Learn more from this deep dive into the NVIDIA Grace Hopper Superchip Architecture.
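As a rough sketch of what a coherent CPU+GPU memory system means for a programmer, the following CUDA example (generic code, not specific to GH200) has the CPU and GPU operate on the same allocation through a single pointer; cudaMallocManaged stands in here for the shared, coherent memory that NVLink-C2C provides in hardware on the superchip.

#include <cuda_runtime.h>
#include <cstdio>

// Adds 1.0 to every element of a shared buffer.
__global__ void increment(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float* data = nullptr;

    // One pointer, visible to both the CPU and the GPU.
    cudaMallocManaged(&data, n * sizeof(float));

    for (int i = 0; i < n; ++i) data[i] = float(i);   // CPU writes
    increment<<<(n + 255) / 256, 256>>>(data, n);     // GPU updates in place
    cudaDeviceSynchronize();
    printf("data[42] = %f\n", data[42]);              // CPU reads the result

    cudaFree(data);
    return 0;
}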

NVIDIA Spectrum-X

The NVIDIA Spectrum™-X networking platform, featuring NVIDIA Spectrum-4 switches and BlueField®-3 data processing units (DPUs), is the world’s first Ethernet fabric built for AI, accelerating generative AI performance by 1.7X over traditional Ethernet fabrics. With Spectrum-X, cloud service providers can accelerate the development, deployment, and time to market of AI solutions while improving return on investment.

Access VMware vSphere on NVIDIA BlueField DPU with NVIDIA LaunchPad

Apply to try vSphere on BlueField with NVIDIA LaunchPad. You’ll get access to VMware ESXi-optimized software running on BlueField DPU-accelerated infrastructure for prototyping and testing next-generation applications and workloads.

Programs and Technical Training

NVIDIA Program for Startups

NVIDIA Inception provides more than 15,000 members worldwide with access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and available to tech startups of all stages.

NVIDIA Training

Our expert-led courses and workshops provide learners with the knowledge and hands-on experience necessary to unlock the full potential of NVIDIA solutions. Our customized training plans are designed to bridge technical skill gaps and provide relevant, timely, and cost-effective solutions for an organization's growth and development.

DGX Administrator Training

Learn how to administer the NVIDIA DGX platform for all clusters and systems. Unique courses for DGX H100 and A100, DGX BasePOD, DGX SuperPOD, and even DGX Cloud offer attendees the knowledge to administer and deploy the platform successfully.

Register now to join NVIDIA at Hot Chips 35.