NVIDIA at Hot Chips 2025

This year, NVIDIA joins Hot Chips 2025 at Stanford University to showcase how the NVIDIA Accelerated Computing platform is reimagining the data center for the age of AI. Featuring three powerful architectures—GPU, DPU, and CPU—and a rich software stack, it’s built to take on the modern data center’s toughest challenges. 

For more information on this year’s sessions, visit the Hot Chips webpage.

News at Hot Chips

Hot Topics at Hot Chips: Inference, Networking, AI Innovation at Every Scale—All Built on NVIDIA

NVIDIA experts detail how NVIDIA NVLink™, Spectrum-X™, NVIDIA Blackwell, and NVIDIA CUDA™ accelerate inference for millions of AI workflows across the globe.

NVIDIA Introduces Spectrum-XGS Ethernet to Connect Data Centers for Giga-Scale AI

NVIDIA Spectrum™-XGS Ethernet unifies distributed data centers into AI super-factories with 1.9x higher NVIDIA Collective Communications Library (NCCL) performance.
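Benchmark figures like the NCCL number above are typically reported as "bus bandwidth." As a hedged sketch, the function below implements the standard ring all-reduce scaling model documented by the nccl-tests benchmarks; the sizes and timings in the usage line are illustrative, not measured results from Spectrum-XGS.

```python
def allreduce_bus_bandwidth(bytes_per_rank: float, seconds: float, n_ranks: int) -> float:
    """Bus bandwidth (bytes/s) for a ring all-reduce.

    A ring all-reduce moves 2*(n-1)/n of the buffer over each link, so the
    commonly reported "bus bandwidth" scales the naive algorithm bandwidth
    (bytes / seconds) by that factor.
    """
    algo_bw = bytes_per_rank / seconds
    return algo_bw * 2 * (n_ranks - 1) / n_ranks

# Illustrative example: a 1 GiB all-reduce across 8 GPUs finishing in 10 ms.
bw = allreduce_bus_bandwidth(2**30, 0.010, 8)
print(f"{bw / 1e9:.1f} GB/s")  # 187.9 GB/s
```

This is why a headline multiplier such as "1.9x NCCL performance" is usually quoted on bus bandwidth: it normalizes away the rank count and lets fabrics of different scales be compared directly.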

Gearing Up for the Gigawatt Data Center Age

Take a look inside the AI factories powering the trillion‑parameter era—and why the network matters more than ever.

Scaling Up With NVLink Fusion for Greater AI Inference Performance and Flexibility

Learn how the performance and breadth of NVIDIA NVLink scale-up fabric technologies are made available through NVIDIA NVLink Fusion to address the growing demands of complex AI models.

NVIDIA Hardware Innovations and Open-Source Contributions Are Shaping AI

Our comprehensive approach to open source extends across the NVIDIA software stack—from fundamental data processing tools to development and deployment frameworks to models and datasets. Explore open-source projects on GitHub, access hundreds of models and datasets on Hugging Face, and dive deeper into NVIDIA’s open-source project catalog.

Scaling AI Factories With Co-Packaged Optics for Better Power Efficiency

AI factories demand speed, scale, and efficiency. Learn how NVIDIA’s networking innovations—powered by co-packaged optics—are cutting power consumption, boosting reliability, and enabling the next generation of AI-driven data centers.

2025 Schedule at a Glance

Sunday, August 24 | 8:30 a.m. - 12:30 p.m.

Data Center Racks

Tutorial | TBD

Monday, August 25 | 3:45 p.m.

RTX 5090: Designed for the Age of Neural Rendering

Talk | Marc Blackstein (NVIDIA)

Monday, August 25 | 6:15 p.m.

NVIDIA ConnectX-8 SuperNIC: A Programmable RoCE Architecture for AI Data Centers

Talk | Idan Burstein (NVIDIA)

Tuesday, August 26 | 10:00 a.m.

Co-Packaged Silicon Photonics Switches for Gigawatt AI Factories

Talk | Gilad Shainer (NVIDIA)

Tuesday, August 26 | 4:45 p.m.

NVIDIA’s GB10 SoC: AI Supercomputer on Your Desk

Talk | Andi Skende (NVIDIA)

Architectures for the Modern Data Center and AI Factory

NVIDIA Blackwell GPU Architecture

Explore the groundbreaking advancements the NVIDIA Blackwell architecture brings to generative AI and accelerated computing. Building upon generations of NVIDIA technologies, NVIDIA Blackwell defines the next chapter in generative AI with unparalleled performance, efficiency, and scale.

Grace CPU Architecture

The NVIDIA Grace™ CPU delivers high performance, power efficiency, and high-bandwidth connectivity for HPC and AI applications. The NVIDIA Grace Hopper Superchip is a breakthrough integrated CPU+GPU for giant-scale AI and HPC applications. For CPU-only HPC applications, the NVIDIA Grace CPU Superchip delivers higher performance, memory bandwidth, and energy efficiency than today’s leading server chips.

Spectrum-X Network Architecture

Modern AI workloads operate at data center scale, relying heavily on fast, efficient connectivity between GPU servers. The NVIDIA Spectrum-X™ Ethernet networking platform enables the high-performance infrastructure necessary for the next wave of accelerated computing and AI.

Explore NVIDIA Solutions

NVIDIA Agentic AI

Agentic AI uses sophisticated reasoning and planning to solve complex, multi-step problems. Agentic AI systems ingest vast amounts of data from multiple data sources to analyze challenges, develop strategies, and complete tasks independently.

NVIDIA Grace Blackwell

NVIDIA GB200 NVL72 connects 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs in a liquid-cooled, rack-scale design. Its 72-GPU NVIDIA NVLink™ domain acts as a single massive GPU, delivering 30x faster real-time trillion-parameter large language model inference.
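One way to ground the "single massive GPU" framing is simple bandwidth arithmetic. The sketch below assumes the publicly cited figure of 1.8 TB/s of fifth-generation NVLink bandwidth per Blackwell GPU; treat it as an illustrative back-of-envelope calculation rather than a spec sheet.

```python
# Back-of-envelope aggregate NVLink bandwidth for a GB200 NVL72 domain.
# NVLINK_TBPS_PER_GPU is the publicly cited per-GPU NVLink 5 figure,
# used here purely for illustration.
GPUS_PER_DOMAIN = 72
NVLINK_TBPS_PER_GPU = 1.8

aggregate_tbps = GPUS_PER_DOMAIN * NVLINK_TBPS_PER_GPU
print(f"Aggregate NVLink bandwidth: ~{aggregate_tbps:.0f} TB/s")  # ~130 TB/s
```

That aggregate is what lets collective operations across all 72 GPUs behave, from the model's point of view, like memory traffic inside one very large accelerator.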

AI Inference at Scale

Ever wonder how complex AI trade-offs translate into real-world outcomes? Explore different points along the performance curves to see firsthand how innovations in hardware and deployment configurations impact data center efficiency and user experience.
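Those inference trade-offs often come down to batching: larger batches raise throughput but also per-request latency. A minimal toy model, with all costs chosen as hypothetical illustrative numbers rather than measurements of any NVIDIA system:

```python
def serve_batch(batch_size: int, fixed_ms: float = 20.0, per_req_ms: float = 2.0):
    """Toy latency/throughput model: each batch pays a fixed cost plus a
    per-request cost, so larger batches improve throughput at the price
    of higher latency for every request in the batch."""
    latency_ms = fixed_ms + per_req_ms * batch_size
    throughput = batch_size / (latency_ms / 1000.0)  # requests per second
    return latency_ms, throughput

for b in (1, 8, 32):
    lat, tput = serve_batch(b)
    print(f"batch={b:3d}  latency={lat:6.1f} ms  throughput={tput:7.1f} req/s")
```

Sweeping the batch size traces out exactly the kind of latency-versus-throughput curve the interactive explorer illustrates: data center efficiency sits at one end, single-user responsiveness at the other.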

Programs and Technical Training

NVIDIA Program for Startups

NVIDIA Inception gives more than 25,000 members worldwide access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and open to tech startups at every stage.

Training for Organizational Growth

Our expert-led courses and workshops equip learners with the knowledge and hands-on experience to fully leverage NVIDIA solutions. Customized training plans close technical skill gaps with relevant, timely, and cost-effective programs that support organizational growth and development.

AI Infrastructure and Operations Expertise

NVIDIA AI Infrastructure and Operations courses and certifications provide the knowledge and hands-on practice you need to quickly get up to speed on installing, deploying, configuring, operating, and troubleshooting AI infrastructure.