This workshop teaches you techniques for training deep neural networks across multiple GPUs to shorten the training time required for data-intensive applications. Working with deep learning tools, frameworks, and workflows to perform neural network training, you’ll learn how to implement multi-GPU training with Horovod, reducing the complexity of writing efficient distributed software while maintaining accuracy when training a model across many GPUs.

 

Learning Objectives


At the conclusion of the workshop, you’ll have an understanding of:
  • Stochastic gradient descent (SGD), a crucial tool in parallelized training
  • Batch size and its effect on training time and accuracy
  • Transforming a single-GPU implementation into a Horovod multi-GPU implementation
  • Techniques for maintaining high accuracy when training across multiple GPUs


Workshop Outline

Introduction
(15 mins)
  • Meet the instructor.
  • Create an account at courses.nvidia.com/join.
Stochastic Gradient Descent and the Effects of Batch Size
(120 mins)
  • Understand the issues with sequential single-thread data processing and the theory behind speeding up applications with parallel processing.
  • Explore loss function, gradient descent, and SGD.
  • Learn the effect of batch size on accuracy and training time (see the sketch after this list).
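
As a preview of this module, here is a minimal sketch of how batch size can be measured against training time and accuracy, assuming TensorFlow 2.x with Keras. The model architecture, batch sizes, and epoch count are illustrative, not the workshop's actual exercise.

# Illustrative sketch: comparing the effect of batch size on training
# time and accuracy with SGD on Fashion-MNIST. Assumes TensorFlow 2.x;
# the model, batch sizes, and epochs are examples, not workshop material.
import time
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

def build_model():
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])

for batch_size in (32, 512):  # small vs. large batch
    model = build_model()
    model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    start = time.time()
    model.fit(x_train, y_train, batch_size=batch_size, epochs=2, verbose=0)
    _, acc = model.evaluate(x_test, y_test, verbose=0)
    print(f"batch={batch_size}: {time.time() - start:.1f}s, test acc {acc:.3f}")

Larger batches typically complete each epoch faster on a GPU but can converge to lower accuracy at a fixed learning rate, which motivates the tuning techniques covered later in the workshop.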
Break (60 mins)
Training on Multiple GPUs with Horovod
(120 mins)
  • Discover the benefits of training on multiple GPUs with Horovod.
  • Learn to transform a single-GPU training script for the Fashion-MNIST dataset into a Horovod multi-GPU implementation (a conversion sketch follows this list).
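
As a preview, here is a minimal sketch of the standard steps for converting a single-GPU Keras script to Horovod, assuming TensorFlow 2.x and Horovod built with TensorFlow support. The model and hyperparameters are illustrative, not the workshop's exact code.

# Illustrative sketch of a single-GPU-to-Horovod conversion for a Keras
# training script. Assumes TensorFlow 2.x and Horovod with TensorFlow
# support; launch with, e.g., `horovodrun -np 4 python train.py`.
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()  # 1. Initialize Horovod.

# 2. Pin each worker process to a single GPU.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

(x_train, y_train), _ = tf.keras.datasets.fashion_mnist.load_data()
x_train = x_train / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# 3. Scale the learning rate by the number of workers, then wrap the
#    optimizer so gradients are averaged across workers each step.
opt = tf.keras.optimizers.SGD(learning_rate=0.01 * hvd.size())
opt = hvd.DistributedOptimizer(opt)

model.compile(optimizer=opt,
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 4. Broadcast initial variables from rank 0 so all workers start identical.
callbacks = [hvd.callbacks.BroadcastGlobalVariablesCallback(0)]

# 5. Let only rank 0 print training progress.
model.fit(x_train, y_train, batch_size=128, epochs=2,
          callbacks=callbacks, verbose=1 if hvd.rank() == 0 else 0)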
Break (15 mins)
Maintaining Model Accuracy when Scaling to Multiple GPUs
(120 mins)
  • Understand why accuracy can decrease when parallelizing training on multiple GPUs.
  • Explore tools for maintaining accuracy when scaling training to multiple GPUs (a warmup sketch follows this list).
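
One widely used tool, sketched below, is to scale the learning rate linearly with the number of workers and warm it up over the first few epochs. This is the common linear-scaling-plus-warmup recipe and an illustrative example, not necessarily the exact technique taught in the module; Horovod also ships a LearningRateWarmupCallback for the same purpose.

# Illustrative sketch of linear learning-rate warmup, a common technique
# for recovering accuracy at large effective batch sizes. Assumes the
# Horovod setup from the previous sketch; base_lr and warmup_epochs are
# example values, not workshop-prescribed settings.
import horovod.tensorflow.keras as hvd
import tensorflow as tf

base_lr = 0.01                    # learning rate tuned for a single GPU
scaled_lr = base_lr * hvd.size()  # linear scaling rule
warmup_epochs = 5

def warmup_schedule(epoch, lr):
    # Ramp linearly from base_lr to scaled_lr, then hold steady.
    if epoch < warmup_epochs:
        return base_lr + (scaled_lr - base_lr) * (epoch + 1) / warmup_epochs
    return scaled_lr

callbacks = [
    hvd.callbacks.BroadcastGlobalVariablesCallback(0),
    tf.keras.callbacks.LearningRateScheduler(warmup_schedule),
]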
Final Review
(15 mins)
  • Review key learnings and answer questions.
  • Complete the assessment and earn a certificate.
  • Complete the workshop survey.
  • Learn how to set up your own AI application development environment.
 

Workshop Details

Duration: 8 hours

Price: Contact us for pricing.

Prerequisites: Experience with gradient descent model training

Technologies: TensorFlow, Keras, Horovod

Assessment Type: Code-based

Certificate: Upon successful completion of the assessment, participants will receive an NVIDIA DLI certificate to recognize their subject matter competency and support professional career growth.

Hardware Requirements: Desktop or laptop computer capable of running the latest version of Chrome or Firefox. Each participant will be provided with dedicated access to a fully configured, GPU-accelerated server in the cloud.

Languages: English, Korean, Simplified Chinese, Traditional Chinese

Upcoming Workshops

If your organization is interested in boosting and developing key skills in AI, accelerated data science, or accelerated computing, you can request instructor-led training from the NVIDIA DLI.
