NVIDIA at Hot Chips

Join us at this year’s online Hot Chips: A Symposium on High-Performance Chips to see how cutting-edge NVIDIA technologies are transforming industry and science. Explore featured sessions, the latest on the NVIDIA A100 Tensor Core GPU and NVIDIA DGX A100, and how NVIDIA DGX SuperPOD is powering the supercomputers of the future, fueling breakthroughs around the world.

NVIDIA Sessions

SUNDAY 8/16, 8:30am - 1:00pm (PDT)

Fundamentals of Scaling Out DL Training
Paulius Micikevicius, Architect

Scale-Out Systems—DGX A100 SuperPOD
Michael Houston, Chief Architect, AI Systems

Scale-Out Training Experiences—Megatron Language Model
Mohammad Shoeybi, Senior Research Manager

MONDAY 8/17, 5:30 - 6:30pm (PDT)

NVIDIA’s A100 GPU: Performance and Innovation for GPU Computing
Jack Choquette, Senior Distinguished Engineer
Wishwesh Gandhi, Senior Director of Architecture


Featured Demos

NVIDIA A100 Tensor Core GPU

Running Multiple Workloads on a Single A100 GPU

With the NVIDIA A100 Tensor Core GPU, researchers and developers can use a dedicated GPU to run their workload, even if that workload only uses a fraction of the GPU's compute power. The A100 includes a groundbreaking feature called Multi-Instance GPU (MIG), which partitions the GPU into as many as seven instances, each with dedicated compute, memory, and bandwidth. This allows multiple users to run their workloads on the same GPU, maximizing utilization and user productivity. This demo runs AI and high-performance computing (HPC) workloads simultaneously on the same A100 GPU.
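The MIG partitioning described above is driven from the command line with nvidia-smi. A minimal sketch, assuming an A100 on a Linux host with recent drivers; the profile ID used here (19, the 1g.5gb profile on a 40GB A100) varies by GPU, so check the output of the profile listing first:

```shell
# Enable MIG mode on GPU 0 (requires admin rights; a GPU reset may be needed)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports
sudo nvidia-smi mig -lgip

# Create seven 1g.5gb GPU instances (profile ID 19 on a 40GB A100),
# with a compute instance inside each (-C)
sudo nvidia-smi mig -cgi 19,19,19,19,19,19,19 -C

# Verify: each MIG device now appears with its own UUID
nvidia-smi -L

# Pin a workload to one instance by exposing only that MIG device:
# CUDA_VISIBLE_DEVICES=MIG-<uuid> python train.py
```

Because each instance gets its own slice of compute, memory, and bandwidth, one tenant's workload cannot starve another's, which is what makes the shared-GPU demo above practical.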

Multi-Instance GPU on the NVIDIA A100 Tensor Core GPU

Boosting Performance and Utilization with Multi-Instance GPU

MIG on the NVIDIA A100 Tensor Core GPU can guarantee performance for up to seven jobs running concurrently on the same GPU—and each instance is fully isolated with its own compute, memory, and bandwidth. This unique capability of the A100 offers the right-sized GPU for every job and maximizes data center utilization. This demo shows inference performance on a single MIG instance, then scaling linearly across the entire A100.

Get an Inside Look at the Ultimate AI Data Center

Tour the NVIDIA data center that houses Selene, our top-10 supercomputer. Built from the NVIDIA DGX SuperPOD reference architecture, which provides the blueprint for assembling an AI data center quickly and at a range of sizes, Selene was assembled in under a month and delivers record-breaking AI performance. From compute to networking to airflow, this video provides a behind-the-scenes look at the infrastructure NVIDIA uses to conduct leading-edge AI research.

Deep Dive

NVIDIA Ampere Architecture

Inside the NVIDIA Ampere Architecture

Learn what’s new with the NVIDIA Ampere architecture and its implementation in the NVIDIA A100 GPU.

NVIDIA DGX SuperPOD MLPerf

Setting a New Bar in MLPerf

NVIDIA training and inference solutions deliver record-setting performance in MLPerf, the leading industry benchmark for AI performance.

Scalable Infrastructure for AI Leadership

NVIDIA DGX SuperPOD, built with NVIDIA DGX A100 systems, is the next generation of AI supercomputing infrastructure, designed to solve the world's most challenging computational problems.

Sign up to receive the latest news from NVIDIA.