NVIDIA at OCP Global Summit 2025

San Jose McEnery Convention Center
October 13–16

Check out this year’s Open Compute Project (OCP) Global Summit to see all the latest advancements in the NVIDIA accelerated computing platform. We’ll feature a keynote, plus groundbreaking sessions from NVIDIA speakers on the latest in AI, AI infrastructure, networking, energy efficiency, security, and more.

News at OCP

NVIDIA Partners Drive Next-Gen Open, Efficient Gigawatt AI Factories

Partners are tapping NVIDIA Vera Rubin NVL144, NVIDIA MGX™, and NVIDIA Kyber for energy-efficient AI factories.

NVIDIA Spectrum-X™ Ethernet Switches Speed Up Networks for Meta and Oracle

Hyperscalers adopt NVIDIA open networking to drive gigascale AI data center performance.

Building the 800 VDC Ecosystem for Efficient, Scalable AI Factories

Generative AI has transformed traditional data centers into power-focused “AI factories,” requiring a complete redesign centered on efficiency, scalability, and high energy demands.
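The case for 800 VDC is largely Ohm's-law arithmetic: delivering the same power at a higher voltage draws proportionally less current, which shrinks busbar cross-sections and resistive losses. A back-of-the-envelope sketch (the 1 MW rack figure and the 54 V busbar baseline are illustrative assumptions, not OCP or NVIDIA specifications):

```python
def feed_current(power_w: float, voltage_v: float) -> float:
    """Current required to deliver power_w at voltage_v (I = P / V)."""
    return power_w / voltage_v

POWER_W = 1_000_000.0  # hypothetical 1 MW rack (illustrative)

i_54v = feed_current(POWER_W, 54.0)    # legacy ~54 V busbar baseline
i_800v = feed_current(POWER_W, 800.0)  # 800 VDC distribution

# For a fixed conductor resistance R, resistive loss is I^2 * R, so the
# 800 V feed dissipates (54/800)^2 of the 54 V feed's loss: about 0.5%.
loss_ratio = (i_800v / i_54v) ** 2

print(f"54 V feed current:  {i_54v:,.0f} A")   # ~18,519 A
print(f"800 V feed current: {i_800v:,.0f} A")  # 1,250 A
print(f"I^2R loss ratio (800 V vs. 54 V): {loss_ratio:.4f}")
```

The roughly 15x drop in current is what makes rack-scale power distribution and on-rack energy storage tractable at gigawatt AI-factory scale.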

Schedule at a Glance

OCP 2025 spotlights the latest innovations driving the future of open data center design and accelerated infrastructure. Explore the powerful lineup of NVIDIA sessions featured in this year’s program to see how we’re shaping the next era of scalable, sustainable computing.

Sessions run Monday 10/13 through Thursday 10/16:

- Revolutionizing Networking for the Era of Agentic AI (9:15–9:30 a.m., Concourse Level, 210CDGH)
- Technical Paths to the New Era of GPU-Initiated Storage (8:40–8:55 a.m., Concourse Level, 220C)
- Introducing…UQD v2.0! (9:45–10:00 a.m., Concourse Level, 211)
- Device Ownership Transfer and Cryptographic Binding for Devices Without Secure Storage (9:10–9:30 a.m., Concourse Level, 211)
- Methods to Support Higher-Power Rack Cooling (10:25–10:40 a.m., Concourse Level, 211)
- OCP Silent Data Corruption Workgroup Update (9:10–9:30 a.m., Concourse Level, 212)
- Cooling Environments Panel (1:00–1:30 p.m., Lower Level, LL20D)
- Panel: Progress in Optical Interconnects for AI Clusters (11:05–11:30 a.m., Concourse Level, 210CDGH)
- Climate-Based Optimization of Cooling Strategies for Sustainable Data Centers (10:05–10:20 a.m., Lower Level, LL20D)
- Guidelines for Pre-Commission Preparation of Technology Cooling System (TCS) Row Manifolds in Liquid-Cooled Data Centers (1:35–1:50 p.m., Lower Level, LL20D)
- 800 VDC MGX Accelerated Computing Rack & Energy Storage for Improved GPU Power Utilization (11:10–11:30 a.m., Lower Level, LL20D)
- Panel: Standardizing GPU Management: Redfish, Telemetry, and Firmware Update Protocols (12:30–1:00 p.m., Concourse Level, 212)
- OCP Cooling Environments—ASHRAE (2:15–2:45 p.m., Lower Level, LL20D)
- Data Center Facility Project Highlights (12:30–12:45 p.m., Concourse Level, 210CDGH)
- SONiC for AI Networking: Looking Back and Ahead (1:00–1:25 p.m., Concourse Level, 230A)
- Empowering AI at Scale With Lenovo Neptune® Liquid Cooling (2:45–3:10 p.m., Concourse Level, 210ABEF)
- Operating Temperatures to Support Performance, Energy Efficiency, and Heat Recovery (1:10–1:25 p.m., Concourse Level, 210CDGH)
- Revisiting Jim Gray’s 5-Minute Rule in the Storage Next Era: A First-Principles Perspective (1:55–2:10 p.m., Concourse Level, 210ABEF)
- The Value of Digital Twins: Real-World Case Studies From the OCP Community (3:35–3:50 p.m., Lower Level, LL21B)
- Rack Emulator Design for Multigenerational IT Racks (3:50–4:05 p.m., Concourse Level, 210CDGH)
- DC-Stack: Challenges and Lessons Learned (3:45–4:00 p.m., Lower Level, LL20BC)
- Ian Buck Keynote: Shaping the Future of Open Infrastructure for AI (4:35–4:50 p.m., Street Level, South Hall)
- Using MGX Racks and Digital Twins to Convert Any Data Center Into an AI Factory (3:55–4:10 p.m., Lower Level, LL21B)
- Panel: Dirty Insights Into Liquid Cooling—Cleaning & Commissioning the TCS (4:30–5:00 p.m., Concourse Level, 210CDGH)

Architectures for the Modern Data Center

NVIDIA Blackwell GPU Architecture

Explore the groundbreaking advancements the NVIDIA Blackwell architecture brings to generative AI and accelerated computing. Building upon generations of NVIDIA technologies, Blackwell defines the next chapter in generative AI with unparalleled performance, efficiency, and scale.

NVIDIA Grace CPU Architecture

The NVIDIA Grace™ CPU delivers exceptional performance, power efficiency, and high-bandwidth connectivity for HPC and AI applications. The NVIDIA Grace Hopper™ Superchip is a breakthrough integrated CPU+GPU for giant-scale AI and HPC applications. For CPU-only HPC applications, the NVIDIA Grace CPU Superchip provides superior performance, memory bandwidth, and energy efficiency.

NVIDIA MGX Architecture

NVIDIA MGX™ is a modular reference architecture for accelerated computing that supports hundreds of GPU, DPU, CPU, storage, and networking combinations for AI, high-performance computing (HPC), and NVIDIA Omniverse™ workloads.

NVIDIA Spectrum-X Ethernet Network Platform

NVIDIA Spectrum-X™ Ethernet delivers the highest performance for AI. It connects compute fabrics within the data center and scales across multiple AI data centers with Spectrum-XGS Ethernet technology to form massive AI super-factories capable of giga-scale intelligence.


Explore NVIDIA Solutions

Think SMART. Think NVIDIA Inference.

As reasoning models generate exponentially more AI tokens, demand for compute surges. Full-stack inference optimization is the key to scaling AI efficiently at factory scale.

NVIDIA AI Factories

Unlike traditional data centers, an AI factory is specifically designed to manufacture intelligence at scale and, with specialized AI infrastructure, support the intensive computational needs of AI workloads. These factories excel at AI reasoning, agentic AI, and physical AI, enabling faster, more accurate decision-making across industries.

NVIDIA GB300 NVL72

The NVIDIA GB300 NVL72 features a fully liquid-cooled, rack-scale design that unifies 72 NVIDIA Blackwell Ultra GPUs and 36 Arm®-based NVIDIA Grace CPUs in a single platform optimized for test-time scaling inference.

NVIDIA NVLink

NVIDIA NVLink™ and NVLink Switch Chip enable 130 TB/s of bandwidth in one 72-accelerator NVLink domain (NVL72). They also deliver 4x bandwidth efficiency with NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ FP8 support.
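The quoted 130 TB/s figure follows from simple per-GPU arithmetic: NVIDIA's published fifth-generation NVLink bandwidth is 1.8 TB/s per GPU, and an NVL72 domain spans 72 GPUs. A quick sanity check:

```python
# Sanity-check the NVL72 aggregate NVLink bandwidth from per-GPU figures.
# 1.8 TB/s per GPU is NVIDIA's published fifth-generation NVLink bandwidth.
GPUS_PER_DOMAIN = 72
PER_GPU_TBPS = 1.8

aggregate_tbps = GPUS_PER_DOMAIN * PER_GPU_TBPS
print(f"NVL72 aggregate NVLink bandwidth: {aggregate_tbps:.1f} TB/s")  # 129.6, quoted as 130
```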

Programs and Technical Training

NVIDIA Program for Startups

NVIDIA Inception gives its more than 15,000 members worldwide access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and available to tech startups of all stages.

DGX Administrator Training

Learn how to administer the NVIDIA DGX™ platform for all clusters and systems. Unique courses for DGX H100 and A100, DGX BasePOD, DGX SuperPOD, and even DGX Cloud offer attendees the knowledge to administer and deploy the platform successfully.

NVIDIA Training

Our expert-led courses and workshops provide learners with the knowledge and hands-on experience necessary to unlock the full potential of NVIDIA solutions. Our customized training plans are designed to bridge technical skill gaps and provide relevant, timely, and cost-effective solutions for an organization's growth and development.

Meet Our Partners

Register now to join NVIDIA at OCP.