This workshop teaches techniques for training deep neural networks across multiple GPUs to shorten the training time required for data-intensive applications. Working with deep learning tools, frameworks, and workflows, you’ll learn how to implement multi-GPU training with Horovod, reducing the complexity of writing efficient distributed software while maintaining accuracy when training a model across many GPUs.
Learning Objectives
At the conclusion of the workshop, you’ll have an understanding of:
- Stochastic gradient descent (SGD), a crucial tool in parallelized training
- Batch size and its effect on training time and accuracy
- Transforming a single-GPU implementation into a Horovod multi-GPU implementation (see the sketch after this list)
- Techniques for maintaining high accuracy when training across multiple GPUs
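The following is a minimal sketch of the kind of single-GPU-to-Horovod conversion the workshop covers. It assumes PyTorch as the framework and uses a placeholder model and hyperparameters; the workshop itself does not prescribe these specifics here. The steps shown (initializing Horovod, pinning a GPU per process, scaling the learning rate, wrapping the optimizer, and broadcasting initial state) follow Horovod's standard public API.

```python
# Hedged sketch: converting a single-GPU PyTorch training loop to Horovod.
# The model, data, and hyperparameters below are illustrative placeholders.
import torch
import horovod.torch as hvd

hvd.init()                                   # 1. Initialize Horovod
torch.cuda.set_device(hvd.local_rank())      # 2. Pin each process to one GPU

model = torch.nn.Linear(10, 1).cuda()        # placeholder model

# 3. Scale the learning rate by the number of workers so the effective
#    update stays comparable as the global batch size grows -- a common
#    technique for maintaining accuracy in multi-GPU training.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# 4. Wrap the optimizer so gradients are averaged across GPUs each step.
optimizer = hvd.DistributedOptimizer(
    optimizer, named_parameters=model.named_parameters())

# 5. Broadcast initial weights and optimizer state from rank 0 so all
#    workers start from identical parameters.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# The training loop itself is unchanged apart from the pieces above.
loss_fn = torch.nn.MSELoss()
for step in range(10):
    x = torch.randn(32, 10).cuda()           # placeholder batch
    y = torch.randn(32, 1).cuda()
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    if hvd.rank() == 0:                       # 6. Log/checkpoint on rank 0 only
        print(f"step {step}: loss {loss.item():.4f}")
```

A script like this is typically launched with Horovod's runner, for example `horovodrun -np 4 python train.py` to train on four GPUs.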