NVIDIA at KubeCon and CloudNativeCon Europe 2026

March 23–26
Amsterdam, Netherlands

This year, NVIDIA is a proud sponsor of KubeCon and CloudNativeCon Europe 2026, taking place at the RAI Amsterdam, March 23–26. The event will feature groundbreaking sessions from NVIDIA speakers highlighting our many contributions to the cloud-native computing ecosystem. Stop by booth #241 to chat with NVIDIA experts and tell us what you’re working on.

Schedule at a Glance

Explore a wide range of innovative sessions and demos spanning AI, accelerated computing, networking, and more. Take a closer look at the scheduled NVIDIA sessions that are part of this year’s program.

11:15-11:45 a.m. CET

Kevin Klues, NVIDIA
Marlow Warnicke, NVIDIA 
John Belamaric, Google
Janet Kuo, Google 
Patrick Ohly, Intel
Forum

2:05-2:45 p.m. CET

Sam Huang, NVIDIA
Thomas Chaton, Lightning.AI
Hall 7 | Room B

4:30-4:55 p.m. CET

11:00-11:30 a.m. CET

GPU Scheduler That Actually Works: KAI Scheduler + vCluster Live Demo

Hall 1, Booth 520

11:00-2:00 p.m. CET

From Pipeline to GPU: ZenML + KAI-Scheduler

Demo at 11 a.m. and 2 p.m.
Hall 2, Booth 1341

11:15-11:45 a.m. CET

Ryan Hallisey, NVIDIA
Natalie Bandel, NVIDIA
Forum

1:50-2:45 p.m. CET

Multi-node Inference with ComputeDomains on GKE and GB200

Kevin Klues, NVIDIA
John Belamaric, Google
Google Booth, #310

3:15-3:45 p.m. CET

Kevin Klues, NVIDIA
Rajas Kakodkar, VMware
Amine Hilaly, Amazon
Dawn Chen, Google
Zach Shepherd, VMware
Hall 7, Room C

4:15-4:45 p.m. CET

Sanjay Chatterjee, NVIDIA
Madhav Bhargava, SAP
Hall 7, Room C

4:15-4:45 p.m. CET

Marlow Warnicke, NVIDIA 
Praveen Krishna, Google
Auditorium

4:15-4:45 p.m. CET

5:00-5:30 p.m. CET

Nadia Pinaeva, NVIDIA 
Shane Utt, Red Hat
Haiyan Meng, Google 
David Martin, Red Hat
Etai Lev Ran, IBM
E103-105

7:00-9:15 p.m. CET

Everett Lacey, NVIDIA 
Lukas Gentele, vCluster
Felipe Del Piccolo, JPMorganChase
The Upside Down Amsterdam

11:00-2:00 p.m. CET

From Pipeline to GPU: ZenML + KAI-Scheduler

Demo at 11 a.m. and 2 p.m.
Hall 2, Booth 1341

12:15-12:30 p.m. CET

Lightning talk – “Slurm in Kubernetes: Orchestrating AI Inference Workloads with Slinky”

Tim Wickberg, NVIDIA
Google Booth, #310

2:15-2:45 p.m. CET

Ryan Hallisey, NVIDIA
Lucy Sweet, Uber
Filip Křepinský, Red Hat
F002-005

4:45-5:15 p.m. CET

Marlow Warnicke, NVIDIA
Yuan Tang, Red Hat
Kante Yin, HivergeAI
Stephen Rust, Akamai Cloud
F002-005

5:30-6:00 p.m. CET

Nadia Pinaeva, NVIDIA
Joel Takvorian, Red Hat
G102-103

11:00-2:00 p.m. CET

From Pipeline to GPU: ZenML + KAI-Scheduler

Demo at 11 a.m. and 2 p.m.
Hall 2, Booth 1341

11:45 a.m. - 12:20 p.m. CET

Kevin Klues, NVIDIA 
Fan Zhang, NVIDIA
Alay Patel, NVIDIA
Cloud Native Theater, Halls 1-5

12:55-1:05 p.m. CET

1:45-2:15 p.m. CET

3:15-3:45 p.m. CET

Eduardo Arango Gutierrez, NVIDIA 
Chaoyi Huang, Huawei
Hall 8, Room D

10:00 a.m. - 6:00 p.m. CET

Configure an AI cluster in minutes (AI Cluster Runtime)

NVIDIA Booth #241

10:00 a.m. - 6:00 p.m. CET

Deploy an AI blueprint with NVIDIA NIM Operator

NVIDIA Booth #241

10:00 a.m. - 6:00 p.m. CET

End-to-End Cluster Bring-Up on DGX Spark - Kubernetes on DGX Spark

NVIDIA Booth #241

10:00 a.m. - 6:00 p.m. CET

K8s cluster resilience (NVSentinel)

NVIDIA Booth #241

10:00 a.m. - 6:00 p.m. CET

Secure AI on Kubernetes with Confidential Containers

NVIDIA Booth #241

10:00 a.m. - 6:00 p.m. CET

Time-based fairshare with KAI Scheduler: Giving “memory” to your workload scheduler

NVIDIA Booth #241

10:00 a.m. - 6:00 p.m. CET

Using NVIDIA Grove to deploy NVIDIA Dynamo for distributed inferencing on Kubernetes

NVIDIA Booth #241

Explore NVIDIA Solutions

NVIDIA Donates Dynamic Resource Allocation Driver for GPUs to Kubernetes Community

NVIDIA is donating its Dynamic Resource Allocation Driver for GPUs to the Cloud Native Computing Foundation, enabling developers to manage high-performance AI workloads on Kubernetes with greater efficiency and scale. This open source contribution makes enterprise AI infrastructure more accessible through smarter GPU resource sharing, dynamic reconfiguration, and enhanced security for confidential computing.
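As a sketch of what DRA-based GPU allocation looks like in practice, the manifest below requests a GPU through a ResourceClaim instead of the classic device-plugin resource limit. Field names follow the upstream resource.k8s.io/v1beta1 API and assume the NVIDIA driver’s gpu.nvidia.com device class; verify both against the API version and driver release installed in your cluster, and treat the image as a placeholder.

```yaml
# A claim for one GPU, resolved by the NVIDIA DRA driver's device class.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com
---
# A pod that consumes the claim instead of setting an nvidia.com/gpu limit.
apiVersion: v1
kind: Pod
metadata:
  name: dra-gpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: ctr
    image: ubuntu:24.04            # placeholder image
    command: ["nvidia-smi"]
    resources:
      claims:
      - name: gpu                  # references the entry in resourceClaims below
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu
```

Because the claim is a first-class API object, it is what enables the sharing and dynamic reconfiguration described above, rather than a fixed per-container count.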

Deploying Aggregated and Disaggregated Inference on Kubernetes

Disaggregated serving for models like DeepSeek-R1 splits inference into specialized roles with different scaling needs, pushing beyond what Kubernetes primitives handle well. This guide compares deploying aggregated and disaggregated models using LeaderWorkerSet versus Grove, showing when to use each and how Grove eliminates orchestration complexity.
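For the LeaderWorkerSet side of that comparison, a disaggregated deployment might pair one leader pod with a group of worker pods per replica. The API group and fields below follow the upstream leaderworkerset.x-k8s.io/v1 CRD; the prefill/decode role split and image names are illustrative assumptions, not NVIDIA’s published configuration.

```yaml
apiVersion: leaderworkerset.x-k8s.io/v1
kind: LeaderWorkerSet
metadata:
  name: disagg-inference
spec:
  replicas: 2                # two independent serving groups
  leaderWorkerTemplate:
    size: 4                  # 1 leader + 3 workers per group
    leaderTemplate:
      spec:
        containers:
        - name: prefill      # illustrative role split
          image: example.com/inference:latest   # placeholder image
    workerTemplate:
      spec:
        containers:
        - name: decode
          image: example.com/inference:latest   # placeholder image
```

Each group scales as a unit, which is exactly the gap the guide explores: the roles inside a group have different scaling needs that a single template cannot express.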

Validate Kubernetes GPU Clusters with Layered, Reproducible Recipes

Discover NVIDIA’s open-source AI Cluster Runtime (AICR), which generates deterministic, version-locked Kubernetes recipes so you can standardize GPU clusters, streamline upgrades, and validate performance across any environment.

NVIDIA GTC 2026: Live Updates on What’s Next in AI

Rolling coverage from San Jose, including NVIDIA CEO Jensen Huang’s keynote, breakout highlights, live demos and on‑the‑ground color through March 19.

Building a Zero-Trust Architecture for Confidential AI Factories

The “Trust Gap” is holding back AI's move to production on shared infrastructure. Discover NVIDIA's Confidential Containers (CoCo) reference architecture, which uses hardware-backed TEEs to cryptographically protect proprietary Model IP and sensitive data during inference, maintaining Kubernetes agility.

NVIDIA Cloud-Native Technologies

From the data center and cloud to the desktop and edge, NVIDIA Cloud-Native Technologies provide the ability to run deep learning, machine learning, and other GPU-accelerated workloads managed by Kubernetes on systems with NVIDIA GPUs. They also allow the seamless deployment and development of containerized software on enterprise cloud-native management frameworks.
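For a concrete starting point, the standard way such a cluster hands a GPU to a containerized workload is the nvidia.com/gpu extended resource exposed by the NVIDIA device plugin (installed by the GPU Operator). Below is a minimal smoke-test pod; the CUDA image tag is an example and should be adjusted to a current release.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
  - name: cuda
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04  # example tag; pick a current one
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1    # request one whole GPU from the device plugin
```

If `nvidia-smi` lists the device in the pod logs, the driver, runtime, and device plugin are all wired correctly.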

NVIDIA Dynamo 1.0: Production‑Ready Inference at Scale

Discover how NVIDIA Dynamo 1.0 delivers low‑latency, high‑throughput, distributed inference for generative AI, making it easier to serve large models across multi‑GPU and multi‑node environments in production.

NVIDIA Dynamo—Dynamically Scale and Serve AI With Distributed Inference

NVIDIA Dynamo is an open-source inference software for accelerating AI model deployment at AI-factory scale. Using disaggregated serving, Dynamo breaks inference tasks into smaller components, dynamically routing and rerouting workloads to the optimal compute resources available at that moment.

KAI Scheduler

KAI Scheduler is the open-source Kubernetes scheduler designed to manage large-scale GPU clusters and optimize resource allocation from interactive jobs to large-scale training and inference workloads. Built with advanced features like gang scheduling, hierarchical queues, and workload consolidation, it maximizes GPU utilization while maintaining fairness across teams.
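As a minimal sketch of how a workload opts into KAI Scheduler, a pod names the scheduler and attaches itself to a queue via a label. The kai.scheduler/queue label key and kai-scheduler scheduler name follow the project’s public README; confirm them against the release you deploy. The team-a queue and the image are assumptions for illustration.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: train-job
  labels:
    kai.scheduler/queue: team-a   # assumed pre-created queue; label key per project docs
spec:
  schedulerName: kai-scheduler    # route past the default scheduler to KAI
  containers:
  - name: trainer
    image: example.com/train:latest   # placeholder image
    resources:
      limits:
        nvidia.com/gpu: 1
```

The queue is where the fairness and hierarchy features mentioned above apply: quotas and priorities are set per queue, not per pod.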

Open Source at NVIDIA

NVIDIA drives open source innovation by releasing its technologies to the community. This equips developers, from solo builders to scaling companies, with tools to create breakthrough applications using accelerated computing.

Programs and Technical Training

Accelerate Your Startup

NVIDIA Inception provides thousands of members worldwide with access to the latest developer resources, preferred pricing on NVIDIA software and hardware, and exposure to the venture capital community. The program is free and available to tech startups of all stages.

Grow Your Skills With NVIDIA Learning Paths

Build expertise in AI, accelerated computing, and graphics. Follow structured, role-based skill development, learn through hands-on labs and real-world projects, and earn industry-recognized certifications.

Get Certified. Get Ahead.

Get certified by NVIDIA and turn your skills into industry-recognized proof of expertise and commitment to continuous learning.

Like No Place You’ve Ever Worked

Working at NVIDIA, you’ll solve some of the world’s hardest problems and discover never-before-seen ways to improve the quality of life for people everywhere. From healthcare to robots, self-driving cars to blockbuster movies, you’ll experience it all. Plus, there’s a growing list of new opportunities every single day. Explore all of our open roles, including internships and new college graduate positions.

Learn more about our current job openings, as well as university jobs.

Register now to join NVIDIA at KubeCon.