NVIDIA At Hot Chips Conference

Join us online, August 21–23, 2022, for this year’s Hot Chips to learn how the data center is being reimagined for the age of AI with the NVIDIA accelerated computing platform. Featuring three powerful architectures—GPU, DPU, and CPU—and a rich software stack, it’s built to take on the modern data center’s toughest challenges. Attend our featured sessions on the agenda to learn more.

Tune in to Hot Chips to get the latest details on the NVIDIA Grace CPU, Hopper GPU, NVLink® Switch, and Jetson Orin module.

Schedule at a Glance

Monday, 8/22 – Tuesday, 8/23
NVIDIA Hopper GPU: Scaling Performance ›
9:00–9:30 a.m. PT | Session | Jack Choquette and Ronny Krashinsky, NVIDIA
NVLink Network Switch—NVIDIA’s Switch Chip for High-Communication-Bandwidth SuperPODs ›
12:30–1:00 p.m. PT | Session | Alexander Ishii and Ryan Wells, NVIDIA
NVIDIA Orin System on Chip ›
3:00–3:30 p.m. PT | Session | Michael Ditty, NVIDIA
NVIDIA Grace CPU ›
4:00–4:30 p.m. PT | Session | Jonathon Evans, NVIDIA

Technical Blogs

Inside NVIDIA Grace CPU: NVIDIA Amps Up Superchip Engineering for HPC and AI

Discover the key features and benefits of the NVIDIA Grace CPU, the first data center CPU developed by NVIDIA. It has been built from the ground up to create the world’s first superchips.

Upgrading Multi-GPU Interconnectivity with Third-Generation NVIDIA NVSwitch

Third-generation NVIDIA NVSwitch delivers the next big leap in high-bandwidth, low-latency communication between GPUs, both within a server and between server nodes, enabling all-to-all GPU communication at full NVLink speed.

Architectures for the Modern Data Center

NVIDIA Hopper GPU Architecture

Hopper GPU Architecture

The NVIDIA Hopper architecture is powering the next generation of accelerated computing with unprecedented performance, scalability, and security for every data center. With the ability to securely scale diverse workloads—from small enterprise to exascale HPC and trillion-parameter AI—Hopper enables brilliant innovators to fulfill their life's work at the fastest pace in human history.

NVIDIA Grace CPU Architecture

Grace CPU Architecture

The NVIDIA Grace architecture is designed for a new type of emerging data center—AI factories that process and refine mountains of data to produce intelligence. These data centers run a variety of workloads, from AI training and inference, to HPC, to data analytics, digital twins, cloud graphics and gaming, and thousands of hyperscale cloud applications.

NVIDIA BlueField Data Processing Units

BlueField DPU Architecture

The NVIDIA® BlueField® data processing unit (DPU) ignites unprecedented innovation for data centers and supercomputing infrastructures. By offloading, accelerating, and isolating a broad range of advanced networking, storage, and security services, BlueField DPUs provide a secure and accelerated infrastructure for any workload, in any environment, from cloud to data center to edge.

The Developer Conference
for the Era of AI and the Metaverse

Join us this September for a GTC that will inspire your next big idea. This is a don't-miss opportunity to hear from experts and leaders in their fields about how AI is transforming industries and profoundly impacting the world. It all happens online September 19–22.

Explore NVIDIA Solutions

NVIDIA Data Center

NVIDIA Data Center Tour

Tour one of NVIDIA’s data centers containing Selene, our top-ten supercomputer. Get a behind-the-scenes look at where we conduct some of the world’s most advanced AI research, and learn about all of the technologies that went into building this world-class supercomputer. We built Selene from our NVIDIA DGX SuperPOD™ reference architecture in just under one month.

NVIDIA NVLink and NVSwitch

NVIDIA NVLink and NVSwitch

Increasing compute demands in AI and HPC—including an emerging class of trillion-parameter models—are driving a need for multi-node, multi-GPU systems with seamless, high-speed communication between every GPU. To build the most powerful, end-to-end computing platform that can meet the speed of business, a fast, scalable interconnect is needed.

NVIDIA Hopper Architecture

NVIDIA Hopper Architecture in Depth

The NVIDIA H100 Tensor Core GPU is our ninth-generation data center GPU designed to deliver an order-of-magnitude performance leap for large-scale AI and HPC over the prior-generation NVIDIA A100 Tensor Core GPU. Get an in-depth look at the new NVIDIA H100 GPU and the new features of the NVIDIA Hopper architecture.

Project Monterey on NVIDIA LaunchPad

Access Project Monterey on NVIDIA LaunchPad

Apply for a free trial of Project Monterey on NVIDIA LaunchPad. You’ll get access to VMware ESXi-optimized software running on BlueField DPU-accelerated infrastructure for prototyping and testing next-generation applications and workloads.

Get Hands-On Training with Technical Workshops

The Deep Learning Institute is offering 20 full-day hands-on workshops at GTC, September 19–22. Workshops are available in multiple languages and time zones. Early bird pricing of $99 ends August 29.

  • Fundamentals of Deep Learning

    Learn how deep learning works through hands-on exercises in computer vision and natural language processing. This workshop will teach you the fundamental techniques and tools required to train a deep learning model.
  • Building Transformer-Based Natural Language Processing Applications

    Learn how to use natural language processing transformer-based models for text classification tasks, such as identifying specific types of articles within a large library of articles or abstracts.
  • Applications of AI for Anomaly Detection

    Learn about multiple AI-based solutions that solve important telecommunications problems by identifying network intrusions. See how to implement the following anomaly detection techniques: accelerated XGBoost, deep learning-based autoencoders, and generative adversarial networks (GANs).
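The anomaly detection workshop covers accelerated XGBoost, autoencoders, and GANs. As a minimal, framework-free illustration of the idea they share (scoring how far a data point deviates from learned normal behavior), here is a toy z-score detector in plain Python. The function names and the traffic figures are illustrative only, not taken from the workshop materials.

```python
import statistics

def fit(normal_samples):
    """Learn the 'normal' profile: mean and sample standard deviation."""
    return statistics.mean(normal_samples), statistics.stdev(normal_samples)

def is_anomaly(value, mean, stdev, threshold=3.0):
    """Flag a value whose z-score (distance from the mean, in
    standard deviations) exceeds the threshold."""
    return abs(value - mean) / stdev > threshold

# Fit on normal network traffic volumes (requests per second).
mean, stdev = fit([100, 102, 98, 101, 99, 103, 97, 100])

print(is_anomaly(100, mean, stdev))  # typical load -> False
print(is_anomaly(500, mean, stdev))  # intrusion-like spike -> True
```

The workshop's techniques replace this hand-built score with learned ones, e.g. an autoencoder flags inputs with high reconstruction error, but the detection step is the same thresholding shown here.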

Sign up to receive the latest news from NVIDIA.