Hands-on AI Training at GTC


At GTC 2020, the NVIDIA Deep Learning Institute (DLI) will host 60+ instructor-led training sessions, 30+ self-paced courses, and six full-day workshops offering developer certification.

Developers, data scientists, and researchers will learn how to apply deep learning and accelerated computing to solve the world’s most challenging problems in autonomous vehicles, robotics, digital content creation, healthcare, industrial inspection, and more.

If you attended training at GTC 2019, you can access the instructor-led content for your sessions, as well as any self-paced courses you started onsite, through March 2020. Log in to your NVIDIA Developer Program account at courses.nvidia.com/join.

Instructor-led Workshops

Join us for a full-day workshop on Sunday, March 22nd led by a DLI-certified instructor. Get access to a GPU-accelerated server in the cloud to complete hands-on exercises alongside other developers and earn a certificate in AI or accelerated computing.

Fundamentals of Deep Learning for Natural Language Processing

Prerequisites: Basic experience with neural networks and Python programming; familiarity with linguistics
Technologies: TensorFlow, Keras

Learn the latest deep learning techniques to understand textual input using natural language processing (NLP). You’ll learn how to:

  • Convert text to machine-understandable representations using classical approaches
  • Implement distributed representations (embeddings) and understand their properties
  • Train machine translators from one language to another

Upon completion, you’ll be proficient in applying embedding-based NLP techniques to similar applications.
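As a flavor of the embedding material, here is a minimal NumPy sketch, not taken from the course itself: it looks up distributed representations for a tokenized sentence and compares two words by cosine similarity. The toy vocabulary, the random embedding matrix, and the vector size are all hypothetical stand-ins for embeddings you would normally train (for example, with a Keras Embedding layer).

```python
import numpy as np

# Hypothetical toy vocabulary; in practice, embeddings are learned from a corpus.
vocab = {"the": 0, "cat": 1, "dog": 2, "sat": 3}
embedding_dim = 8
rng = np.random.default_rng(0)

# Embedding matrix: one dense vector (distributed representation) per word.
embeddings = rng.normal(size=(len(vocab), embedding_dim))

def embed(tokens):
    """Convert a list of tokens to their embedding vectors."""
    ids = [vocab[t] for t in tokens]
    return embeddings[ids]  # shape: (len(tokens), embedding_dim)

def cosine(u, v):
    """Cosine similarity, the usual way to compare embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

sentence = embed(["the", "cat", "sat"])
sim = cosine(embeddings[vocab["cat"]], embeddings[vocab["dog"]])
```

With trained embeddings, similarity scores like `sim` reflect semantic relatedness; with this random matrix they are only structurally illustrative.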

View Datasheet >

Fundamentals of Deep Learning for Multi-GPUs

Prerequisites: Experience with stochastic gradient descent mechanics, network architecture, and parallel computing
Technologies: TensorFlow

The computational requirements of the deep neural networks behind AI applications like self-driving cars are enormous. A single training cycle can take weeks on one GPU, or even years for larger datasets like those used in self-driving car research. Training on multiple GPUs can shorten that time dramatically, making it feasible to solve complex problems with deep learning.

This workshop will teach you how to use multiple GPUs to train neural networks. You'll learn:

  • Approaches to multi-GPU training
  • Algorithmic and engineering challenges of large-scale training
  • Key techniques used to overcome the challenges mentioned above

Upon completion, you'll be able to effectively parallelize training of deep neural networks using TensorFlow. 
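The most common of these approaches is synchronous data parallelism: each GPU computes gradients on its own shard of a batch, the gradients are averaged across GPUs (an all-reduce), and the shared weights are updated once. Below is a minimal NumPy sketch of that idea on a toy linear-regression model; the replica count and data are hypothetical, and the workshop itself uses TensorFlow (e.g. its distribution strategies) rather than this hand-rolled loop.

```python
import numpy as np

rng = np.random.default_rng(42)
n_replicas = 4            # stand-ins for 4 GPUs (hypothetical)
w = np.zeros(3)           # shared model weights for y = x @ w
lr = 0.1

# Toy regression data; each replica gets one shard of the global batch.
X = rng.normal(size=(32, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
shards = np.array_split(np.arange(32), n_replicas)

for step in range(200):
    # Each "GPU" computes the MSE-loss gradient on its own shard.
    grads = []
    for idx in shards:
        err = X[idx] @ w - y[idx]
        grads.append(2 * X[idx].T @ err / len(idx))
    # All-reduce: average the gradients, then apply one shared update.
    w -= lr * np.mean(grads, axis=0)
```

Because every replica applies the same averaged gradient, the model stays identical across GPUs while the per-GPU work (and wall-clock time per batch) shrinks.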

View Datasheet >

Fundamentals of Accelerated Computing with CUDA Python

Prerequisites: Basic Python competency including familiarity with variable types, loops, conditional statements, functions, and array manipulations. NumPy competency including the use of ndarrays and ufuncs.
Technologies: CUDA, Python, Numba, NumPy

This workshop explores how to use Numba—the just-in-time, type-specializing Python function compiler—to accelerate Python programs to run on massively parallel NVIDIA GPUs. You’ll learn how to:

  • Use Numba to compile CUDA kernels from NumPy universal functions (ufuncs)
  • Use Numba to create and launch custom CUDA kernels
  • Apply key GPU memory management techniques

Upon completion, you’ll be able to use Numba to compile and launch CUDA kernels to accelerate your Python applications on NVIDIA GPUs.

View Datasheet >


The NVIDIA Deep Learning Institute offers self-paced, online training powered by GPU-accelerated workstations in the cloud and instructor-led workshops onsite by request.