The NGC container registry provides a comprehensive catalog of GPU-accelerated AI containers that are optimized, tested, and ready to run on supported NVIDIA GPUs on-premises and in the cloud. AI containers from NGC, including TensorFlow, PyTorch, MXNet, NVIDIA TensorRT™, and more, give users the performance and flexibility to take on their most challenging projects with the power of NVIDIA AI. This helps data scientists and researchers rapidly build, train, and deploy AI models to meet continually evolving demands.
The NGC container registry features the top software for accelerated data science, machine learning, and analytics. Tap into powerful software for executing end-to-end data science training pipelines entirely on the GPU, reducing training time from days to minutes.
Learn more about the optimizations to the top deep learning software such as TensorFlow, PyTorch, MXNet, NVIDIA TensorRT™, Theano, Caffe2, Microsoft Cognitive Toolkit (CNTK), and more, in this brief.
NVIDIA GPU Cloud makes it easy to leverage GPU-optimized deep learning frameworks on-premises or in the cloud. See how to get started quickly with NGC and Amazon Elastic Compute Cloud (Amazon EC2).
You get access to a comprehensive catalog of fully integrated and optimized deep learning framework containers—all at no cost.
You can access the NGC container registry either as a guest or as a registered user. As a guest, you can browse the NGC container registry and download select containers, such as CUDA. By signing up as a registered user at no charge, you can download all of the containers in the NGC container registry. To browse as a guest or to sign up, visit https://ngc.nvidia.com.
Each container has the NVIDIA GPU Cloud Software Stack, a pre-integrated stack of GPU-accelerated software optimized for deep learning on NVIDIA GPUs. It includes a Linux OS, CUDA runtime, required libraries, and the chosen framework or application (TensorFlow, NVCaffe, NVIDIA DIGITS, etc.)—all tuned to work together immediately with no additional setup.
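As an illustrative sketch of how these pre-integrated containers are used (the specific tag below is a placeholder following NGC's YY.MM release scheme; check the registry for current tags):

```shell
# Log in to the NGC container registry (nvcr.io).
# The username is the literal string "$oauthtoken"; the password is
# the NGC API key generated from your account at ngc.nvidia.com.
docker login nvcr.io

# Pull a framework container. The tag shown here is hypothetical;
# NGC tags follow a YY.MM versioning scheme, so check the registry
# for the current release.
docker pull nvcr.io/nvidia/tensorflow:18.01-py3

# Run the container interactively with GPU access
# (nvidia-docker exposes the host's NVIDIA GPUs to the container).
nvidia-docker run -it --rm nvcr.io/nvidia/tensorflow:18.01-py3
```

Because the OS, CUDA runtime, libraries, and framework ship inside the container, the only host-side requirements are an NVIDIA driver and the Docker runtime with GPU support.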
The NGC container registry has NVIDIA GPU-accelerated releases of the most popular frameworks: NVCaffe, Caffe2, Microsoft Cognitive Toolkit (CNTK), NVIDIA DIGITS, MXNet, PyTorch, TensorFlow, Theano, Torch, CUDA (base level container for developers), as well as NVIDIA TensorRT inference accelerator.
The GPU-accelerated deep learning containers are tuned, tested, and certified by NVIDIA to run on NVIDIA TITAN V, TITAN Xp, TITAN X (Pascal), NVIDIA Quadro GV100, GP100, and P6000, NVIDIA DGX Systems, and on supported NVIDIA GPUs on Amazon EC2, Google Cloud Platform, Microsoft Azure, and Oracle Cloud Infrastructure.
Yes, the terms of use allow the NGC deep learning containers to be used on desktop PCs with NVIDIA Volta- or Pascal-powered GPUs.
The deep learning containers on NGC are designed to run on NVIDIA Volta- or Pascal™-powered GPUs. The cloud service providers (CSPs) supported by NGC offer instance types that provide the appropriate NVIDIA GPUs for running the NGC containers. To run the containers, choose one of these instance types, instantiate the appropriate image file on it, and then access NGC from within that image. The exact steps vary by CSP, but you can find step-by-step instructions for each CSP in the NVIDIA GPU Cloud documentation.
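As a hedged sketch of these steps using Amazon EC2 and the AWS CLI (the AMI ID, key pair, instance address, and container tag below are all placeholders; consult the NVIDIA GPU Cloud documentation for your CSP's actual values):

```shell
# 1. Launch a GPU instance with a Volta-class GPU (p3.2xlarge provides
#    an NVIDIA V100). The AMI ID and key name are placeholders.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type p3.2xlarge \
    --key-name my-key-pair

# 2. SSH into the running instance (address is a placeholder).
ssh -i my-key-pair.pem ubuntu@<instance-public-ip>

# 3. From inside the instance, log in to the NGC registry and run a
#    container (username is "$oauthtoken"; password is your NGC API key).
docker login nvcr.io
nvidia-docker run -it --rm nvcr.io/nvidia/pytorch:18.01-py3
```

Other CSPs follow the same pattern: select a GPU instance type, launch the recommended image, then pull and run NGC containers from within it.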
Monthly. The deep learning containers on NGC benefit from continuous R&D investment by NVIDIA and joint engineering with framework engineers to ensure that each deep learning framework is tuned for the fastest training possible. NVIDIA engineers continually optimize the software, delivering monthly container updates to ensure that your deep learning investment reaps greater returns over time.
Users get access to the NVIDIA DevTalk Developer Forum (https://devtalk.nvidia.com), which is supported by a large community of deep learning and GPU experts from the NVIDIA customer, partner, and employee ecosystem.
NVIDIA is accelerating the democratization of AI by giving deep learning researchers and developers simplified access to GPU-accelerated deep learning frameworks. This makes it easy for them to run these optimized frameworks on NVIDIA GPUs in the cloud or on local systems.
There is no charge for the containers on the NGC container registry (subject to the terms of use). However, each cloud service provider has its own pricing for accelerated computing services.
Please see https://ngc.nvidia.com/legal/terms