Running Deep Learning Workloads in the Modern AI Data Centre


Technical Overview: Considerations for Scaling GPU-Ready Data Centres

Enterprise and hyperscale data centres are increasingly built around AI workloads that train deep neural networks (DNNs) on massive amounts of data. The computation these workloads require is significant and benefits greatly from the power of GPUs. Data centres that support GPU servers deliver much higher efficiency and performance for advanced workloads while consuming less power and occupying less floor space.

Learn the best practices for making a data centre “GPU-ready,” with a focus on power, cooling, and architecture, including rack layout, storage, and system and network design. Drawing on computationally intensive workloads running on NVIDIA® DGX-1 systems and NVIDIA® Tesla® V100 GPUs, this paper shows how to minimise spend while running today’s advanced workloads at scale.



Register to download our overview of new rules and best practices for running deep learning workloads in the modern AI data centre.