NVIDIA at KubeCon and CloudNativeCon NA 2025

November 10–13
Georgia World Congress Center
Atlanta, Georgia

This year, NVIDIA is a proud sponsor of KubeCon and CloudNativeCon North America 2025 at the Georgia World Congress Center, November 10–13. The event will feature ten sessions from NVIDIA speakers highlighting our contributions to the cloud-native computing ecosystem.

Schedule at a Glance

Explore a wide range of innovative sessions and demos in AI, accelerated computing, networking, and more. Take a closer look at the NVIDIA sessions that are part of this year’s program.

Sun, 10:10–10:45 a.m. ET
Building B, Level 2, Room B206

Tending the Kubernetes Dependency Tree: Bonsai or Bonfire?

Davanum Srinivas, NVIDIA
Jordan Liggitt, Google


Mon, 9:40–9:50 a.m. ET
Building B, Level 4, B401–402

Lightning Talk: Mind the Topology: Smarter Scheduling for AI Workloads on Kubernetes

Roman Baron, NVIDIA


Mon, 1:35–2 p.m. ET
Building B, Level 2, B211–212

Blazing-Fast Container Deployments: Image Caching With Block Storage

Juana Nakfour, NVIDIA
Manu Bhadoria, NVIDIA


Mon, 5–5:10 p.m. ET
Building B, Level 4, B401–402

Lightning Talk: My Job Says ‘Running’ but Nothing’s Running: Kubernetes Status Reality Check

Ron Kahn, NVIDIA


Tues, 12–12:30 p.m. ET
Building C, Level 3, Georgia Ballroom 3

AdminNetworkPolicy: From Alpha to Beta and Beyond

Nadia Pinaeva, NVIDIA
Dan Winship, Red Hat
Surya Seetharaman, Red Hat
Bowei Du, Google


Tues, 3:15–3:45 p.m. ET
Building B, Level 3, B302–303

Hybrid-Confidential-Cloud: Democratize Secure AI With GPUs and Confidential Containers

Zvonko Kaiser, NVIDIA


Tues, 4:15–4:45 p.m. ET
Building C, Level 3, Georgia Ballroom 3

DRA Is GA! Kubernetes WG Device Management—GPUs, TPUs, NICs, and More With DRA

Kevin Klues, NVIDIA
Patrick Ohly, Intel


Tues, 4:15–5:30 p.m. ET
Building B, Level 5, Thomas Murphy Ballroom 2–3

Tutorial: A Cross-Industry Benchmarking Tutorial for Distributed LLM Inference on Kubernetes

Ganesh Kudleppanavar, NVIDIA
Jing Chen, IBM Research
Junchen Jiang, University of Chicago
Samuel Monson, Red Hat
Jason Kramberger, Google


Wed, 2:15–2:45 p.m. ET
Building B, Level 5, Thomas Murphy Ballroom 1

Tuning GenAI Workloads on Kubernetes: What Actually Works (and What Doesn’t)?

Brian Lockwood, NVIDIA
Ishaan Sehgal, Omnara


Wed, 4:45–5:15 p.m. ET
Building B, Level 4, B401–402

Partitionable Devices: Putting the “Dynamic” Back in Dynamic Resource Allocation

Jan-Philip Gehrcke, NVIDIA
Morten Jæger Torkildsen, Google


Wed, 6:15–8:15 p.m. ET
Top Draft Sports Lounge, 
190 Marietta St NW

Fireside Chat: The Future of AI and Kubernetes (RSVP)

Everett Lacey, NVIDIA
Joshua Bucknor, JPMorganChase
Lukas Gentele, vCluster


Wed, 6:30–9 p.m. ET
The Painted Duck,
976 Brady Avenue

Southern Sips and Source – An Open Source Social (RSVP)

Nathan Taber, NVIDIA
Jesse Butler, Amazon


Explore NVIDIA Solutions

NVIDIA Blueprints

NVIDIA Blueprints are reference workflows for canonical generative AI use cases. Enterprises can build and operationalize custom AI applications—creating data-driven AI flywheels—using blueprints along with NVIDIA NIM™ microservices and the NVIDIA NeMo™ framework, all part of the NVIDIA AI Enterprise platform.

NVIDIA Dynamo—Dynamically Scale and Serve AI With Distributed Inference

NVIDIA Dynamo is open-source inference software that accelerates AI model deployment at AI-factory scale. Using disaggregated serving, Dynamo breaks inference tasks into smaller components and dynamically routes and reroutes workloads to the best-suited compute resources available at that moment.

NVIDIA KAI Scheduler

NVIDIA KAI Scheduler is an open-source Kubernetes scheduler designed to manage large-scale GPU clusters and optimize resource allocation across workloads, from interactive jobs to large-scale training and inference. Built with advanced features like gang scheduling, hierarchical queues, and workload consolidation, it maximizes GPU utilization while maintaining fairness across teams.

NVIDIA Cloud-Native Technologies

From the data center and cloud to the desktop and edge, NVIDIA Cloud-Native Technologies make it possible to run deep learning, machine learning, and other GPU-accelerated workloads managed by Kubernetes on systems with NVIDIA GPUs. They also enable seamless development and deployment of containerized software on enterprise cloud-native management frameworks.

Programs and Technical Training

NVIDIA Program for Startups

NVIDIA Inception provides thousands of members worldwide with access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and available to tech startups of all stages.

NVIDIA Training and Certification

Develop the skills you and your team need to do your life’s work in AI, accelerated computing, data science, graphics and simulation, and more. Validate your skills with technical certification from NVIDIA.

Access Free Tools, Training, and an Expert Community

Join our free NVIDIA Developer Program to access training, resources, and tools that can accelerate your work and advance your skills. Get a free credit for one of our self-paced courses when you join.

Like No Place You’ve Ever Worked

Working at NVIDIA, you’ll solve some of the world’s hardest problems and discover never-before-seen ways to improve the quality of life for people everywhere. From healthcare to robots, self-driving cars to blockbuster movies, you’ll experience it all. Plus, there’s a growing list of new opportunities every single day. Explore all of our open roles, including internships and new college graduate positions.

Learn more about our current job openings, as well as university jobs.

Register now to join NVIDIA at KubeCon.