
AI and HPC Containers

Develop and deploy applications faster with GPU-optimized containers from the NVIDIA NGC™ catalog.

What Are Containers?

A container is a portable unit of software that combines the application and all its dependencies into a single package that’s agnostic to the underlying host OS. It removes the need to build complex environments and simplifies the application development-to-deployment process.

The NVIDIA NGC catalog contains a host of GPU-optimized containers for deep learning, machine learning, visualization, and high-performance computing (HPC) applications that are tested for performance, security, and scalability.
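To give a concrete sense of what pulling and running one of these containers looks like, here is a minimal sketch using the Docker SDK for Python to fetch an NGC image from nvcr.io and run a command in it with GPU access. It assumes Docker, the NVIDIA Container Toolkit, and the docker Python package are installed; the image tag is a placeholder, and some NGC images require logging in to nvcr.io with an NGC API key first.

    # Minimal sketch: pull an NGC container image and run a command in it with GPU access.
    # Assumes Docker, the NVIDIA Container Toolkit, and the "docker" Python SDK are installed;
    # some NGC images also require "docker login nvcr.io" with an NGC API key beforehand.
    import docker

    client = docker.from_env()

    # The tag below is a placeholder; pick a current release from the NGC catalog.
    client.images.pull("nvcr.io/nvidia/pytorch", tag="24.01-py3")

    # Run nvidia-smi inside the container to confirm the GPUs are visible.
    logs = client.containers.run(
        "nvcr.io/nvidia/pytorch:24.01-py3",
        command="nvidia-smi",
        device_requests=[docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])],
        remove=True,
    )
    print(logs.decode())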

Browse NGC Containers

Benefits of Containers from the NGC Catalog

Deploy Easily

Built-in libraries and dependencies allow you to easily deploy and run applications. Deploy AI/ML containers to Vertex AI using the quick deploy feature in the NGC catalog.

Train Faster

NVIDIA AI containers like TensorFlow and PyTorch provide performance-optimized monthly releases for faster AI training and inference.
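As an illustration of the mixed precision training these containers are tuned for, here is a hedged PyTorch sketch using Automatic Mixed Precision (AMP); the model, data, and hyperparameters are stand-ins rather than NGC-specific code.

    # Illustrative sketch: a PyTorch training step with Automatic Mixed Precision (AMP),
    # the technique the NGC framework containers ship with for faster Tensor Core training.
    # The model, data, and hyperparameters here are stand-ins, not NGC-specific code.
    import torch
    import torch.nn as nn

    device = "cuda"
    model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid FP16 gradient underflow

    for step in range(100):
        inputs = torch.randn(64, 1024, device=device)
        targets = torch.randint(0, 10, (64,), device=device)

        optimizer.zero_grad(set_to_none=True)
        with torch.cuda.amp.autocast():      # run the forward pass in mixed precision
            loss = loss_fn(model(inputs), targets)
        scaler.scale(loss).backward()        # backward pass on the scaled loss
        scaler.step(optimizer)
        scaler.update()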

Run Anywhere

Deploy the containers on multi-GPU/multi-node systems anywhere—in the cloud, on premises, and at the edge—on bare metal, virtual machines (VMs), and Kubernetes.
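As a sketch of the Kubernetes path, the snippet below uses the Kubernetes Python client to schedule an NGC container on a GPU node by requesting an nvidia.com/gpu resource. It assumes a cluster with the NVIDIA device plugin installed and a local kubeconfig; the pod name and image tag are placeholders.

    # Sketch: schedule an NGC container on a GPU node with the Kubernetes Python client.
    # Assumes a cluster with the NVIDIA device plugin installed and a local kubeconfig.
    # The pod name and image tag are placeholders.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="ngc-pytorch-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="pytorch",
                    image="nvcr.io/nvidia/pytorch:24.01-py3",  # placeholder tag
                    command=["nvidia-smi"],
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}  # request one GPU
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)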

Deploy with Confidence

Containers are scanned for common vulnerabilities and exposures (CVEs), come with security reports, and are backed by optional enterprise support through NVIDIA AI Enterprise.

Optimized for Performance

NVIDIA-built Docker containers are updated monthly, and third-party software is updated regularly, delivering the features needed to extract maximum performance from your existing infrastructure and reduce time to solution.

BERT-Large for Natural Language Processing

BERT-Large leverages mixed precision arithmetic and Tensor Cores on Volta V100 and Ampere A100 GPUs for faster training times while maintaining target accuracy.

BERT-Large training performance with TensorFlow on a single node with 8x V100 (16GB) and A100 (40GB) GPUs. Mixed precision. Batch size for BERT: 3 (V100), 24 (A100).
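The mixed precision setup behind results like these takes only a few lines to enable. The snippet below is an illustrative Keras sketch, not the NGC BERT-Large training script: it sets the global mixed_float16 policy so matrix math runs in FP16 on Tensor Cores while variables stay in FP32.

    # Illustrative sketch: enabling Keras mixed precision, the technique behind the
    # BERT-Large and ResNet50 numbers on this page. This is not the NGC training
    # script; the toy model is a stand-in.
    import tensorflow as tf
    from tensorflow.keras import layers, mixed_precision

    # Compute in float16 on Tensor Cores; variables stay in float32 for stability.
    mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1024,)),
        layers.Dense(4096, activation="relu"),
        # Keep the final softmax in float32 to avoid numeric issues.
        layers.Dense(10, activation="softmax", dtype="float32"),
    ])

    # Under the mixed_float16 policy, Keras applies loss scaling to the optimizer
    # automatically, which prevents FP16 gradient underflow.
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")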


Explore BERT-Large for PyTorch
Explore BERT-Large for TensorFlow

ResNet50 v1.5 for Image Processing

This model is trained with mixed precision using Tensor Cores on the Volta, Turing, and NVIDIA Ampere GPU architectures for faster training.

ResNet50 performance with TensorFlow on a single node with 8x V100 (16GB) and A100 (40GB) GPUs. Mixed precision. Batch size: 26.


Explore ResNet50 for PyTorch
Explore ResNet50 for TensorFlow

MATLAB for Deep Learning

Continuous development of the MATLAB Deep Learning container improves performance for training and inference.

Benchmark system: Windows 10, Intel Xeon E5-2623 @ 2.4GHz, NVIDIA Titan V 12GB GPUs.


Explore MATLAB

Quick Deploy

The quick deploy feature in the NGC catalog automatically sets up a Vertex AI instance with an optimal configuration, preloads the dependencies, and runs the software from NGC, with no need to set up the infrastructure yourself.


Deploy popular DL and ML containers, models, and SDKs directly from the NGC catalog.

Containers for Diverse Workloads

Get started today by selecting from over 80 containerized software applications and SDKs, developed by NVIDIA and our ecosystem of partners.

AI Containers

TensorFlow

TensorFlow is an open-source software library for high-performance numerical computation.

Explore container

PyTorch

PyTorch is a GPU-accelerated tensor computational framework with a Python front end.

Explore container

NVIDIA Triton Inference Server

NVIDIA Triton™ Inference Server is an open-source inference-serving solution that maximizes GPU utilization and performance.

Explore container
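To show the client side of a Triton deployment, here is a hedged sketch using the tritonclient HTTP API; the server URL, model name, and tensor names are assumptions that depend on how your model repository is configured.

    # Sketch of querying a running Triton Inference Server over HTTP.
    # The server URL, model name, and input/output tensor names below are assumptions
    # that depend on how your model repository is configured.
    import numpy as np
    import tritonclient.http as httpclient

    triton = httpclient.InferenceServerClient(url="localhost:8000")

    # Build the request: one FP32 input tensor named "INPUT0".
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    infer_input = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)

    response = triton.infer(
        model_name="resnet50",  # assumed model name in the repository
        inputs=[infer_input],
        outputs=[httpclient.InferRequestedOutput("OUTPUT0")],
    )
    print(response.as_numpy("OUTPUT0").shape)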

NVIDIA TensorRT

NVIDIA TensorRT® is a C++ library that facilitates high-performance inference on NVIDIA GPUs.

Explore container

Application Frameworks

NVIDIA Clara

NVIDIA Clara™ Train for medical imaging is an application framework with over 20 state-of-the-art pre-trained models, transfer learning and federated learning tools, AutoML, and AI-assisted annotation.

Explore container

DeepStream

DeepStream is the streaming analytics toolkit for AI-based video, audio, and image understanding for multi-sensor processing.

Explore container

NVIDIA Riva

NVIDIA Riva is an application framework for multimodal conversational AI services that delivers real-time performance on GPUs.

Explore container

Merlin Training

Merlin HugeCTR, a component of NVIDIA Merlin™, is a deep neural network training framework designed for recommender systems.

Explore container

HPC Containers

NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems.

Explore container

GROMACS

GROMACS is a popular molecular dynamics application used to simulate proteins and lipids.

Explore container

RELION

RELION implements an empirical Bayesian approach for the analysis of cryogenic electron microscopy (cryo-EM) data.

Explore container

NVIDIA HPC SDK

The NVIDIA HPC SDK is a comprehensive suite of compilers, libraries, and tools for building, deploying, and managing HPC applications.

Explore container

Frequently Asked Questions

  • A diverse set of containers spans a multitude of use cases, with built-in libraries and dependencies that make it easy to compile custom applications.
  • They offer faster training with Automatic Mixed Precision (AMP) and minimal code changes.
  • They reduce time to solution with the ability to scale up from single-node to multi-node systems.
  • They're extremely portable, allowing you to develop faster by running containers in the cloud, on premises, or at the edge.

Containers from the NGC catalog make it seamless for machine learning engineers and IT to deploy to production.

  • They are tested on various platforms and architectures, enabling seamless deployment on a wide variety of systems and platforms.
  • They can be deployed to run on bare metal, virtual machines (VMs), and Kubernetes, across architectures such as x86, ARM, and IBM Power.
  • They run easily on various container runtimes, such as Docker, Singularity, CRI-O, and containerd.
  • The container images are scanned for common vulnerabilities and exposures (CVEs) and are backed by optional enterprise support to troubleshoot issues for NVIDIA-built software.

NGC Catalog Resources

Developer Blogs

Learn how to use the NGC catalog with these step-by-step instructions.



Explore technical blogs

Developer News

Read about the latest NGC catalog updates and announcements.



Read news

GTC Sessions

Watch all the top NGC sessions on demand.



Watch GTC Sessions

Webinars

Walk through how to use the NGC catalog with these video tutorials.



Watch webinars

Accelerate your AI development with containers from the NGC catalog.

Get Started