NVIDIA Webinar
AI factories are redefining data centers and enabling the next era of AI. Unlike traditional data centers built for general-purpose workloads, AI factories are purpose-built to extract value from AI. They orchestrate the full AI lifecycle, from data ingestion to training, fine-tuning, and high-volume inference at scale.
As AI workloads grow in complexity, they demand robust infrastructure across every layer, from power and cooling to high-performance networking and system orchestration.
In this webinar series, NVIDIA experts will walk through the essential building blocks behind modern AI factories, covering both physical infrastructure and reference network architecture - each critical to supporting large-scale, multi-GPU systems for AI training and inference.
Webinar 1: Enable AI Factories with Optimized Data Center Infrastructure
This webinar focuses on the physical infrastructure required to deploy rack-scale compute systems, including floor planning, liquid cooling design, power delivery, and integration with building management systems - using NVIDIA NVL72 as a reference.
Webinar 2: Build AI Factories with NVIDIA RA and High-Performance Networking
This webinar introduces NVIDIA’s Cloud Partner Reference Architecture (NCP RA) for scalable AI factories, covering key components such as east-west networking, converged Ethernet for storage I/O, and in-band management. We’ll also cover rail-optimized topologies, repeatable building blocks that scale to tens of thousands of nodes, and control-plane strategies that support multi-tenancy.
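The core idea behind a rail-optimized topology can be sketched briefly: GPU i (via its NIC) on every node attaches to the same leaf switch, called rail i, so same-index GPUs across nodes communicate in a single leaf hop. The snippet below is a minimal illustration of that mapping, not the NCP RA itself; the function name and the 8-GPU node size are assumptions for the sketch.

```python
# Illustrative sketch of rail-optimized wiring (assumed 8-GPU nodes).
# In this scheme, GPU i on every node connects to leaf switch i
# ("rail" i), so collective traffic between same-index GPUs stays
# within one leaf switch instead of crossing the spine.

GPUS_PER_NODE = 8  # assumption: one NIC per GPU, eight per node


def rail_for(node: int, gpu: int) -> int:
    """Return the rail (leaf-switch index) serving this GPU's NIC."""
    return gpu  # every node's GPU i lands on rail i


# GPU 3 on any node always reaches rail 3: same-index peers
# across the whole cluster share one leaf switch.
rails = {rail_for(node, 3) for node in range(1024)}
print(rails)  # a single rail serves GPU 3 cluster-wide
```

This single-hop property is why collectives such as all-reduce, which mostly pair same-index GPUs across nodes, benefit from rail-optimized fabrics.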
In webinar 1, you'll learn:
In webinar 2, you'll learn: