Hands-on AI Training at GTC


The NVIDIA Deep Learning Institute (DLI) will host 60+ instructor-led training sessions, 30+ self-paced courses, and 6 full-day workshops offering developer certification at GTC 2020.  

Developers, data scientists, and researchers will learn how to apply deep learning and accelerated computing to solve the world’s most challenging problems in autonomous vehicles, robotics, digital content creation, healthcare, industrial inspection, and more.

GTC 2019 training attendees can access instructor-led training content through March 2020 for any sessions attended. You can also access self-paced courses started onsite. Log into your NVIDIA Developer Program account at courses.nvidia.com/join.

Instructor-led Workshops

Join us for a full-day workshop on Sunday, March 22nd led by a DLI-certified instructor. Get access to a GPU-accelerated server in the cloud to complete hands-on exercises alongside other developers and earn a certificate in AI or accelerated computing.

Fundamentals of Deep Learning for Multi-GPUs

Prerequisites: Experience with stochastic gradient descent mechanics, network architecture, and parallel computing

The computational requirements of the deep neural networks that enable AI applications like self-driving cars are enormous. A single training cycle can take weeks on a single GPU, or even years for the larger datasets used in self-driving car research. Using multiple GPUs for deep learning can significantly shorten the time required to train on large amounts of data, making it feasible to solve complex problems with deep learning.

This workshop will teach you how to use multiple GPUs to train neural networks. You'll learn:

  • Approaches to multi-GPU training
  • Algorithmic and engineering challenges to large-scale training
  • Key techniques used to overcome the challenges mentioned above

Upon completion, you'll be able to effectively parallelize training of deep neural networks using TensorFlow. 
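The most common approach here is data parallelism: each GPU computes gradients on its own shard of the batch, the gradients are averaged across replicas, and every replica applies the same update. As a framework-agnostic illustration of that averaging step (a NumPy sketch with made-up data, not the workshop's TensorFlow code):

```python
import numpy as np

def local_gradient(w, X, y):
    """Least-squares gradient on one replica's shard: d/dw mean((Xw - y)^2)."""
    n = len(y)
    return 2.0 / n * X.T @ (X @ w - y)

def data_parallel_step(w, shards, lr=0.1):
    """One synchronous data-parallel SGD step.

    Each (X, y) shard stands in for one GPU's portion of the batch;
    the all-reduce is modeled by averaging the per-replica gradients.
    """
    grads = [local_gradient(w, X, y) for X, y in shards]
    avg = np.mean(grads, axis=0)  # stand-in for an all-reduce across GPUs
    return w - lr * avg

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w

# Split the batch across 4 simulated "GPUs" (equal-sized shards, so the
# averaged gradient equals the full-batch gradient).
shards = list(zip(np.split(X, 4), np.split(y, 4)))
w = np.zeros(3)
for _ in range(200):
    w = data_parallel_step(w, shards)
print(np.round(w, 2))  # converges toward true_w
```

Because the shards are equal-sized, averaging the per-shard gradients reproduces the full-batch gradient exactly, which is why synchronous data parallelism matches single-GPU training step for step.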

View Datasheet >

Fundamentals of Accelerated Computing with CUDA C/C++

Prerequisites: Basic C/C++ competency including familiarity with variable types, loops, conditional statements, functions, and array manipulations.
Technologies: C/C++, CUDA

The CUDA computing platform enables the acceleration of CPU-only applications to run on the world’s fastest massively parallel GPUs. Experience C/C++ application acceleration by:

  • Accelerating CPU-only applications to run their latent parallelism on GPUs
  • Utilizing essential CUDA memory management techniques to optimize accelerated applications
  • Exposing accelerated application potential for concurrency and exploiting it with CUDA streams
  • Leveraging Nsight Systems to guide and check your work

Upon completion, you’ll be able to accelerate and optimize existing C/C++ CPU-only applications using the most essential CUDA techniques and Nsight Systems. You’ll understand an iterative style of CUDA development that will allow you to ship accelerated applications fast.
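One common CUDA idiom for exposing latent parallelism is the grid-stride loop, which lets a fixed-size grid of threads cover arrays of any length. CUDA itself is C/C++; this Python sketch only simulates the thread-indexing arithmetic to show what each thread computes:

```python
# Simulation of CUDA's grid-stride loop indexing in plain Python.
# In CUDA C/C++, each thread computes
#     idx = blockIdx.x * blockDim.x + threadIdx.x
# and then strides by gridDim.x * blockDim.x until it passes the array end.

def grid_stride_add(a, b, grid_dim, block_dim):
    """Element-wise a + b, each simulated thread handling a strided slice."""
    n = len(a)
    out = [0.0] * n
    stride = grid_dim * block_dim
    for block_idx in range(grid_dim):        # on a GPU, blocks would run in parallel
        for thread_idx in range(block_dim):  # threads within a block, too
            i = block_idx * block_dim + thread_idx
            while i < n:                     # the grid-stride loop
                out[i] = a[i] + b[i]
                i += stride
    return out

a = list(range(10))
b = [10 * x for x in a]
print(grid_stride_add(a, b, grid_dim=2, block_dim=3))
# [0, 11, 22, 33, 44, 55, 66, 77, 88, 99]
```

The payoff of this pattern is that kernel launch configuration is decoupled from data size: the same kernel works whether the array has ten elements or ten million.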

View Datasheet >

Fundamentals of Accelerated Data Science with RAPIDS

Prerequisites: Experience with Python, ideally including pandas and NumPy
Technologies: RAPIDS, NumPy, XGBoost, DBSCAN, K-Means, SSSP, Python

RAPIDS is a collection of data science libraries that allows end-to-end GPU acceleration for data science workflows. In this training, you'll:

  • Use cuDF and Dask to ingest and manipulate massive datasets directly on the GPU
  • Apply a wide variety of GPU-accelerated machine learning algorithms, including XGBoost, cuGraph, and cuML, to perform data analysis at massive scale
  • Perform multiple analysis tasks on massive datasets in an effort to stave off a simulated epidemic outbreak affecting the UK

Upon completion, you'll be able to load, manipulate, and analyze data orders of magnitude faster than before, enabling more iteration cycles and drastically improving productivity.
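K-Means is one of the algorithms on the workshop's technology list; cuML exposes it with a scikit-learn-style API so the whole loop runs on the GPU. As a CPU-side illustration of what that algorithm actually does, here is a minimal NumPy K-Means on hypothetical toy data (not RAPIDS code):

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain-NumPy K-Means: assign points to the nearest centroid, recompute means."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Distance from every point to every centroid, then nearest-centroid labels.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points
        # (keep a centroid in place if its cluster is empty).
        centroids = np.array([
            X[labels == j].mean(axis=0) if (labels == j).any() else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

# Two well-separated toy clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(5, 0.3, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

With cuML the call would look essentially the same as scikit-learn's `KMeans(n_clusters=2).fit(X)`, only operating on GPU dataframes or arrays.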

View Datasheet >

Deep Learning for Autonomous Vehicles - Perception

Prerequisites: Experience with CNNs and C++
Technologies: TensorFlow, TensorRT, Python, CUDA C++, DIGITS

Learn how to design, train, and deploy deep neural networks for autonomous vehicles using the NVIDIA DRIVE™ development platform.

You'll learn how to:

  • Work with CUDA® code, memory management, and GPU acceleration on the NVIDIA DRIVE AGX™ System
  • Train a semantic segmentation neural network
  • Optimize, validate, and deploy a trained neural network using NVIDIA® TensorRT™

Upon completion, you'll be able to create and optimize perception components for autonomous vehicles using NVIDIA DRIVE.
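A semantic segmentation network outputs a score for every class at every pixel; the predicted mask is just the per-pixel argmax over those scores. A NumPy sketch of that final step, with made-up scores rather than real network output:

```python
import numpy as np

# Fake network output: class scores with shape (height, width, num_classes).
# Hypothetical classes: 0 = road, 1 = car, 2 = background.
rng = np.random.default_rng(0)
scores = rng.normal(size=(4, 6, 3))
scores[:2, :, 0] += 10.0  # top half scores strongly "road"
scores[2:, :, 2] += 10.0  # bottom half scores strongly "background"

# The segmentation mask assigns each pixel its highest-scoring class.
mask = scores.argmax(axis=-1)
print(mask.shape)  # (4, 6) -- one class label per pixel
```

In deployment, TensorRT optimizes the network that produces the scores; the argmax (or an equivalent fused layer) then turns them into the drivable-space or object mask the vehicle consumes.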

View Datasheet >

Applications of AI for Anomaly Detection

Prerequisites: Experience with CNNs and Python
Technologies: RAPIDS, Keras, GANs, XGBoost

The amount of information moving through our world’s telecommunications infrastructure makes it one of the most complex and dynamic systems that humanity has ever built. In this workshop, you’ll implement multiple AI-based solutions to solve an important telecommunications problem: identifying network intrusions.

In this workshop, you’ll:

  • Implement three different anomaly detection techniques: accelerated XGBoost, deep learning-based autoencoders, and generative adversarial networks (GANs)
  • Build and compare supervised learning with unsupervised learning-based solutions
  • Discuss other use cases within your industry that could benefit from modern computing approaches

Upon completion, you'll be able to detect anomalies within large datasets using supervised and unsupervised machine learning.
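The autoencoder technique flags a record as anomalous when the model cannot reconstruct it well, i.e. its reconstruction error is far above what is typical for normal traffic. Here is a linear stand-in for that idea, using a PCA projection in NumPy instead of a trained deep autoencoder, on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)
# "Normal" records live near a 2-D subspace of a 5-D feature space.
basis = rng.normal(size=(2, 5))
normal = rng.normal(size=(200, 2)) @ basis + 0.05 * rng.normal(size=(200, 5))
anomaly = 3.0 * rng.normal(size=(5, 5))  # points far off that subspace

# "Train": fit a rank-2 linear encoder/decoder on normal data (PCA via SVD).
mu = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mu, full_matrices=False)
components = vt[:2]  # the linear analog of the autoencoder's bottleneck

def reconstruction_error(X):
    """Encode to 2-D, decode back to 5-D, measure per-record squared error."""
    Z = (X - mu) @ components.T
    X_hat = Z @ components + mu
    return ((X - X_hat) ** 2).sum(axis=1)

# Threshold: flag records whose error exceeds the normal data's 99th percentile.
threshold = np.percentile(reconstruction_error(normal), 99)
flags = reconstruction_error(anomaly) > threshold
print(flags)  # all True for this synthetic data
```

A deep autoencoder replaces the linear encode/decode with nonlinear layers, but the detection logic, thresholding the reconstruction error learned from normal data, is the same.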

View Datasheet >

Applications of AI for Predictive Maintenance

Prerequisites: Experience with Python and deep neural networks
Technologies: TensorFlow, Keras

Learn how to identify anomalies and failures in time-series data, estimate the remaining useful life of the corresponding parts, and use this information to map anomalies to failure conditions.

You’ll learn how to:

  • Leverage predictive maintenance to manage failures and avoid costly unplanned downtimes
  • Identify key challenges around identifying anomalies that can lead to costly breakdowns
  • Use time-series data to predict outcomes using machine learning classification models with XGBoost
  • Apply predictive maintenance procedures by using a long short-term memory (LSTM)-based model to predict device failure
  • Experiment with autoencoders to detect anomalies by using the time-series sequences from the previous steps

Upon completion, you’ll understand how to use AI to predict the condition of equipment and estimate when maintenance should be performed.
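Both the classification models and the LSTM typically consume fixed-length windows of sensor readings rather than an unbounded raw stream. A sketch of that windowing step in NumPy, with a toy signal and illustrative features (the workshop's actual feature choices may differ):

```python
import numpy as np

def sliding_windows(series, width):
    """Stack overlapping windows of a 1-D sensor signal into a 2-D array."""
    return np.stack([series[i : i + width] for i in range(len(series) - width + 1)])

# Toy vibration signal: steady, then drifting upward before a failure.
t = np.arange(100, dtype=float)
signal = np.where(t < 80, 1.0, 1.0 + 0.5 * (t - 80))

windows = sliding_windows(signal, width=10)  # shape (91, 10)

# Simple per-window features a classifier could use: level and trend.
means = windows.mean(axis=1)
slopes = windows[:, -1] - windows[:, 0]

# Early windows are flat; late windows show the drift a model would learn
# to associate with an approaching failure.
print(slopes[0], slopes[-1])  # 0.0 vs. a large positive value
```

An LSTM would take the raw windows as sequences instead of summary features, but the segmentation of the time series into fixed-length examples is the same preprocessing step.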

View Datasheet >

Instructor-led Training Sessions

DLI will host two-hour, instructor-led training sessions on March 23-26 for Conference and Training passholders. Here are a few popular hands-on sessions:

Optimization and Deployment of TensorFlow Models with TensorRT

Learn the fundamentals of generating high-performance deep learning models on the TensorFlow platform using the built-in TensorRT library (TF-TRT) and Python.

Introduction to CUDA Python with Numba

Explore how to use Numba to accelerate NumPy ufuncs in your Python code and write custom CUDA kernels in Python.

Deep Autoencoders for Recommendation Systems

Learn how to build recommendation systems for your customers using deep autoencoders.
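A deep autoencoder recommender compresses a user's sparse rating vector into a small latent code and decodes it back into predicted ratings for every item. As a linear analog of that encode/decode step, here is a truncated-SVD reconstruction in NumPy on a toy rating matrix (illustrative only, not the session's model):

```python
import numpy as np

# Toy users x items rating matrix with two obvious "taste" groups.
R = np.array([
    [5.0, 4.0, 1.0, 1.0],
    [4.0, 5.0, 1.0, 2.0],
    [1.0, 1.0, 5.0, 4.0],
    [2.0, 1.0, 4.0, 5.0],
])

# "Encode": project each user onto k latent factors; "decode": reconstruct
# a dense prediction for every item from that compressed code.
k = 2
u, s, vt = np.linalg.svd(R, full_matrices=False)
codes = u[:, :k] * s[:k]  # latent representation per user (the bottleneck)
R_hat = codes @ vt[:k]    # predicted ratings for all items

# Items a user hasn't rated can now be ranked by their predicted score.
```

A deep autoencoder replaces this linear projection with nonlinear layers and handles missing ratings explicitly during training, but the recommend-by-reconstruction idea is the same.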


The NVIDIA Deep Learning Institute offers self-paced, online training powered by GPU-accelerated workstations in the cloud and instructor-led workshops onsite by request.