NVIDIA AI Summit

More Hands-On Training After GTC Paris at VivaTech 2025

Stay ahead of the curve with full-day instructor-led workshops, self-paced courses, and opportunities to earn technical certifications

Training and Certification Catalog

Gain in-demand skills with NVIDIA’s extensive catalog of self-paced courses and instructor-led virtual workshops. Whether you're advancing in AI, deep learning, data science, HPC, or design and simulation, our expert-led training helps you stay ahead. Ready to validate your expertise? Earn an industry-recognized NVIDIA certification.

Tuesday, June 10
9:00 a.m. – 5:00 p.m. CEST

Deploying RAG Pipelines for Production at Scale

Learn how to move retrieval-augmented generation (RAG) systems from proof of concept to enterprise-grade deployments, focusing on scalability, auto-scaling, monitoring, and data management. The workshop will also feature NVIDIA NIM™ microservices for optimizing production-level performance.

Tuesday, June 10
9:00 a.m. – 5:00 p.m. CEST

NVIDIA Isaac for Accelerated Robotics

Accelerate your robotics innovation with this hands-on workshop focused on simulation-first development, AI-powered perception, and synthetic data generation using NVIDIA Isaac.

Newly offered at GTC Paris.

Tuesday, June 10
9:00 a.m. – 5:00 p.m. CEST

Fundamentals of Accelerated Computing With Modern CUDA C++

Discover how to write, compile, and run GPU-accelerated code, use NVIDIA® CUDA® core libraries to harness the power of massive parallelism provided by modern GPU accelerators, optimize memory migration between the CPU and GPU, and implement your own algorithms.

 

Tuesday, June 10
9:00 a.m. – 5:00 p.m. CEST

Building AI Agents With Multimodal Models

Learn how to build neural network agents that reason across multiple data types using advanced fusion techniques, optical character recognition (OCR), and NVIDIA AI Blueprints for real-world applications like robotics and healthcare.

Tuesday, June 10
9:00 a.m. – 5:00 p.m. CEST

Adding New Knowledge to LLMs

Learn to develop, deploy, and operate sovereign AI systems tailored to your specific requirements, from data preparation to production scaling.

Newly offered at GTC Paris.

Tuesday, June 10
9:00 a.m. – 5:00 p.m. CEST

Building Digital Twins for Physical AI With NVIDIA Omniverse

Learn to create simulation-ready digital twins from manufacturing data using OpenUSD and NVIDIA Omniverse™, simulate accurate sensor data, deploy NVIDIA Metropolis Video Search and Summarization (VSS), and train AI models for physical AI systems.

Join Two–Hour Training Labs

Complimentary training labs (scheduled on June 11 and 12) are included as part of your conference pass.

Attention: Please bring your laptop to participate in the labs.

NEW: Accelerating External Aerodynamics Simulations Using NVIDIA PhysicsNeMo™
An Introduction to NVIDIA Cosmos™ for Physical AI
NEW: Introduction to GPU Programming With CUDA Python
Learn to Build Agentic AI Workflows for Enterprise Applications
Accelerating Clustering Algorithms to Achieve the Highest Performance
Fundamentals of Working with OpenUSD in a 3D Pipeline

Learn to Build AI Chatbots Using Retrieval Augmented Generation With NVIDIA AI Enterprise
Developing Industrial Inspection Workflows With NVIDIA Metropolis for Factories
Toward Trustworthy Automated Clinical Q&A: Grounding LLMs in Evidence With Retrieval Augmented Generation and Uncertainty Quantification
Simulating Custom Robots: A Hands-On Lab Using NVIDIA Isaac Sim and ROS2

Get Certified at GTC

Learn the Strategies for Success with NVIDIA Certification

Missed the certification sessions at GTC Paris? Join our global webinar on June 26 (scheduled at times friendly to the Americas, EMEA, and APAC) to hear from NVIDIA experts on how to prepare for certification exams, understand the process, and set yourself or your team up for success. Register to attend and receive an exclusive promo code for your exam.

Thank you to our sponsors for supporting the workshops and training labs.

Explore More Training From NVIDIA

Take advantage of a comprehensive range of resources tailored to various learning requirements, including learning materials, self-paced and live instructor-led training, and programs for educators. These resources ensure that individuals, teams, organizations, educators, and students can access the necessary tools to elevate their expertise in AI, accelerated computing, data science, graphics and simulation, and beyond.

DLI Training

Workshop Cancellation Policy

If you can’t make the event, submit your cancellation request to GTC_registration@nvidia.com. The following processing fees will apply:

  • Before 11:59 p.m. PT, Monday, May 19, 2025:
    • A €25 cancellation fee will apply for conference passes.
    • A €50 cancellation fee will apply for full-day workshop and conference passes.
  • After 11:59 p.m. PT, Monday, May 19, 2025: No refunds will be granted.
  • No-shows are ineligible for workshop refunds.

Substitution requests will be granted but must be received by 11:59 p.m. PT, Friday, May 30, 2025. To transfer your registration to a colleague, or if you’re having difficulty registering online, please email GTC_registration@nvidia.com, and we'll be happy to assist you.

 

Building AI Agents With Multimodal Models

Just as humans use multiple senses to perceive the world around them, an ever-wider variety of computer sensors is being developed to capture different kinds of data. In the health industry, computed tomography (CT) scans provide a 3D representation of the body that can reveal potentially dangerous abnormalities. In the robotics industry, lidar helps robots perceive depth and navigate complex environments. In this course, learners will develop neural network agents that can reason over many different data types by exploring multiple fusion techniques.

You’ll learn about:

  • Different data types and how to make them ready for neural networks.
  • Model fusion, and the differences between early, late, and intermediate fusion.
  • Structure loss and how to avoid it.
  • The difference between modality and agent orchestration.

Upon completion, you'll be able to orchestrate several multimodal agents into an application.
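
For illustration, early fusion (concatenating per-modality features before a shared model) and late fusion (combining per-modality predictions) can be sketched with NumPy. All shapes and weights below are hypothetical stand-ins, not material from the course:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality features: an image embedding and a text embedding.
image_feat = rng.standard_normal(64)  # e.g. from a vision backbone
text_feat = rng.standard_normal(32)   # e.g. from a language encoder

# Early fusion: concatenate raw features, then apply one shared linear head.
fused = np.concatenate([image_feat, text_feat])  # shape (96,)
w_early = rng.standard_normal((3, 96))           # 3 output classes
early_logits = w_early @ fused

# Late fusion: score each modality separately, then average the predictions.
w_img = rng.standard_normal((3, 64))
w_txt = rng.standard_normal((3, 32))
late_logits = (w_img @ image_feat + w_txt @ text_feat) / 2

print(early_logits.shape, late_logits.shape)  # both (3,)
```

Intermediate fusion, covered in the course alongside these two, merges learned representations partway through the network rather than at the input or output.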

Prerequisite(s)

  • Basic understanding of Python, including classes, objects, and decorators.
  • Basic understanding of neural networks, such as image convolution and sequential models.

Deploying RAG Pipelines for Production at Scale

Retrieval-Augmented Generation (RAG) pipelines are revolutionizing enterprise operations. However, most existing tutorials stop at proof-of-concept implementations that falter when scaling. This workshop aims to bridge that gap, focusing on building scalable, production-ready RAG pipelines powered by NVIDIA NIM microservices and Kubernetes. Participants will gain hands-on experience deploying, monitoring, and scaling RAG pipelines with the NIM Operator and learn best practices for infrastructure optimization, performance monitoring, and handling high traffic volumes.

The workshop begins by building a simple RAG pipeline using the NVIDIA API catalog. Participants will deploy and test individual components in a local environment using Docker Compose. Once familiar with the basics, the focus will shift to deploying NIM microservices, such as the LLM, NeMo Retriever Text Embedding, and NeMo Retriever Text Reranking NIMs, in a Kubernetes cluster using the NIM Operator. This includes managing the deployment, monitoring, and scalability of NVIDIA NIM microservices. The workshop will focus on building a RAG pipeline that can be used in production, and it will also cover the NVIDIA AI Blueprint for PDF ingestion and how to use it within the RAG pipeline.

To ensure operational efficiency, the workshop will introduce Prometheus and Grafana for monitoring pipeline performance, cluster health, and resource utilization. Scalability will be addressed through the use of the Kubernetes Horizontal Pod Autoscaler (HPA) for dynamically scaling NIMs based on custom metrics in conjunction with the NIM Operator. Custom dashboards will be created to visualize key metrics and interpret performance insights.

You'll be able to:

  • Build a simple RAG pipeline using API endpoints, deployed locally with Docker Compose.
  • Deploy a variety of NVIDIA NIM microservices in a Kubernetes cluster using the NIM Operator.
  • Combine NIMs into a cohesive, production-grade RAG pipeline and integrate advanced data ingestion workflows.
  • Monitor RAG pipelines and Kubernetes clusters with Prometheus and Grafana.
  • Scale NIMs to handle high traffic using the NIM Operator.
  • Create, deploy, and scale RAG pipelines for a variety of agentic workflows, including PDF ingestion.
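
The retrieve-then-generate loop that these deployed microservices implement can be sketched, at toy scale, in plain Python. The character-frequency "embedding" and cosine-similarity retriever below are illustrative stand-ins for the embedding and reranking NIMs, not their actual APIs:

```python
import numpy as np

# Toy corpus (in production, documents are embedded by an embedding service).
docs = [
    "NIM Operator manages NIM microservice deployments on Kubernetes.",
    "Prometheus scrapes metrics; Grafana visualizes them.",
    "Docker Compose runs multi-container applications locally.",
]

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding: a normalized character-frequency vector,
    # standing in for a real embedding model.
    vec = np.zeros(26)
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

doc_vecs = np.stack([embed(d) for d in docs])

def retrieve(query: str, k: int = 1) -> list:
    # Cosine similarity between the query and every document embedding.
    sims = doc_vecs @ embed(query)
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

# The retrieved passage is prepended to the LLM prompt: the "augmented" in RAG.
context = retrieve("How do I monitor the cluster?")[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
print(context)
```

In the workshop's production setting, each stage of this loop becomes a separately deployed, monitored, and autoscaled microservice rather than a local function call.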

Prerequisite(s)

  • Familiarity working with LLM-based applications
  • Familiarity with RAG pipelines
  • Familiarity working with Kubernetes
  • Familiarity working with Helm

Fundamentals of Accelerated Computing With Modern CUDA C++

This workshop provides a comprehensive introduction to general-purpose GPU programming with NVIDIA® CUDA®. You'll learn how to write, compile, and run GPU-accelerated code, use CUDA core libraries to tap into the power of massive parallelism provided by modern GPU accelerators, optimize memory migration between CPU and GPU, and implement your own algorithms.

At the conclusion of the workshop, you'll have an understanding of the fundamental concepts and techniques for accelerating C++ code with CUDA and be able to:

  • Write and compile code that runs on the GPU
  • Optimize memory migration between CPU and GPU
  • Use powerful parallel algorithms that simplify adding GPU acceleration to your code
  • Implement your own parallel algorithms by directly programming GPUs with CUDA kernels
  • Use concurrent CUDA streams to overlap memory traffic with compute
  • Know where, when, and how best to add CUDA acceleration to existing CPU-only applications
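
As a mental model only (not CUDA code), the data-parallel kernel pattern at the heart of the workshop, where one logical thread computes one independent index, can be sketched in plain Python. The SAXPY example and the `launch` helper are illustrative:

```python
# Conceptual model of a CUDA kernel launch: the kernel body runs once per
# index, and because each index is independent, a GPU can execute all of
# them in parallel across thousands of threads.

def saxpy_kernel(i, a, x, y, out):
    # Body of one "thread": compute a single element of a*x + y.
    out[i] = a * x[i] + y[i]

def launch(kernel, n, *args):
    # On a GPU, this sequential loop is replaced by massively parallel threads.
    for i in range(n):
        kernel(i, *args)

n = 4
a = 2.0
x = [1.0, 2.0, 3.0, 4.0]
y = [10.0, 10.0, 10.0, 10.0]
out = [0.0] * n
launch(saxpy_kernel, n, a, x, y, out)
print(out)  # [12.0, 14.0, 16.0, 18.0]
```

In CUDA C++, the per-thread index comes from built-in variables such as `threadIdx` and `blockIdx`, and the workshop's CUDA core libraries let you express many such patterns without writing kernels by hand.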

Prerequisite(s)

  • Basic C++ competency, including familiarity with lambda expressions, loops, conditional statements, functions, standard algorithms, and containers.
  • No previous knowledge of CUDA programming is assumed.

Adding New Knowledge to LLMs

In today's AI landscape, even powerful Large Language Models (LLMs) face limitations when confronted with specialized business knowledge, technical domains, or cultural contexts absent from their training data. While retrieval-augmented generation can mitigate some gaps, true domain mastery requires a deeper level of model adaptation.

This comprehensive workshop equips developers with hands-on skills to transform open-source LLMs into domain-specialized AI assets. Through five interconnected modules, you'll master the complete lifecycle of model customization:

  • Systematic Evaluation and Dataset Creation: Build custom evaluation benchmarks using NeMo Evaluator to identify model limitations and track engineering progress. Learn metrics that matter for your specific use case.
  • Advanced Data Curation: Implement state-of-the-art data cleaning pipelines with NeMo Curator to assemble high-quality domain-specific datasets that address your business or cultural requirements.
  • Targeted Knowledge Injection: Master multiple adaptation techniques, including in-context learning, Parameter-Efficient Fine-Tuning (PEFT), Continued Pre-Training (CPT), Supervised Fine-Tuning (SFT), and Reinforcement Learning from Human Feedback (RLHF).
  • Model Optimization: Apply distillation, quantization, and pruning techniques with NeMo Model Optimizer and TensorRT-LLM to dramatically reduce inference costs without sacrificing performance.
  • Production Deployment: Learn to deploy, monitor, and scale your custom models within Kubernetes environments using NVIDIA Inference Microservices (NIMs).

By workshop completion, you'll possess the complete technical skill set to develop, deploy, and operate sovereign AI systems tailored to your specific requirements—from data preparation to production scaling.
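
To make "parameter-efficient" concrete: LoRA, one common PEFT technique, freezes the pretrained weight matrix W and learns only a low-rank update BA. A NumPy sketch of the arithmetic (shapes are illustrative; in the workshop, NeMo tooling handles this internally):

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8  # rank r much smaller than d: the bottleneck

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection (init 0)

# Effective weight after adaptation: W' = W + B @ A.
# With B initialized to zero, the adapted model starts identical to the base.
W_adapted = W + B @ A

full_params = W.size
lora_params = A.size + B.size
print(f"trainable fraction: {lora_params / full_params:.3%}")  # 3.125%
```

Only A and B are updated during fine-tuning, which is why PEFT can specialize a large model at a small fraction of the memory and compute cost of full fine-tuning.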

Prerequisite(s)

  • Intermediate Python programming skills
  • Previous work with LLM-based applications and understanding of prompt engineering principles
  • Experience with data processing pipelines and text preprocessing techniques
  • Understanding of fine-tuning, training/validation splits, and basic ML metrics
  • Basic knowledge of GPU acceleration for ML workloads (CUDA experience helpful but not required)
  • Familiarity with containerization and basic Kubernetes concepts (Optional but helpful)

Building Digital Twins for Physical AI With NVIDIA Omniverse

As manufacturing accelerates toward automation and AI-driven processes, digital twins are becoming essential for simulating, testing, and deploying intelligent systems in real-world factory environments.

In this workshop, you'll learn about:

  • Creating digital twins from manufacturing data using OpenUSD and Omniverse.
  • Simulating accurate sensor data with NVIDIA Cosmos and Sensor RTX to power AI perception, localization, and decision-making.
  • Applying Metropolis VSS for semantic labeling, tracking, and classification.
  • Generating large-scale, labeled datasets from digital twins to train AI models and create closed-loop robot learning workflows.

By the end of this workshop, attendees will have hands-on experience building simulation-ready digital twins and an understanding of how these twins support the development of physical AI systems in the manufacturing domain.

Topics Covered:

  • Create a digital twin from existing 3D data.
  • Generate synthetic sensor data to power AI perception, localization, and decision-making.
  • Integrate Metropolis VSS for search and semantic labeling.
  • Generate large-scale datasets to train AI models.

Prerequisite(s)

  • Familiarity with 3D workflows or CAD tools commonly used in manufacturing
  • Basic understanding of AI concepts, such as perception, localization, and planning
  • Exposure to simulation, robotics, or automation workflows

NVIDIA Isaac for Accelerated Robotics

Unlock the power of simulation-driven robotics with NVIDIA Isaac Sim and Isaac Lab. This hands-on, full-day workshop is designed for robotics engineers, applied researchers, and R&D teams focused on building, training, and simulating robots in software.

In this workshop, you'll learn about:

  • Structuring modular robot assets using OpenUSD and importing CAD/URDF models
  • Integrating robots into virtual environments in Isaac Sim
  • Connecting ROS 2 for real-time robot control and software-in-the-loop testing
  • Accelerating perception and AI workloads using NVIDIA GPU-powered libraries
  • Generating physically accurate synthetic data for robust robot training

By the end of this workshop, attendees will have hands-on experience building robot simulations and integrating synthetic data pipelines to build scalable, simulation-first workflows.

Prerequisite(s)

  • Proficiency in Python programming
  • Familiarity with ROS 2 (Robot Operating System)
  • Familiarity with 3D workflows or CAD tools commonly used in robotics
  • Exposure to simulation and robotics workflows