GPU-Optimized Software Hub Simplifying DL, ML, and HPC Workflows.
The NGC™ catalog is the hub for GPU-optimized software for deep learning (DL), machine learning (ML), and high-performance computing (HPC) that accelerates development and deployment workflows so data scientists, developers, and researchers can focus on building solutions, gathering insights, and delivering business value.
The NGC catalog accelerates productivity with easy-to-deploy, optimized AI frameworks and HPC application containers, so users can focus on building their solutions.
The NGC catalog lowers the barrier to AI adoption by taking care of the heavy lifting (expertise, time, compute resources) through pre-trained models and workflows that offer best-in-class accuracy and performance.
Run software from the NGC catalog on-premises, in the cloud, at the edge, or in hybrid and multi-cloud deployments. NGC catalog software can be deployed on bare-metal servers, on Kubernetes, or in virtualized environments, maximizing GPU utilization and the portability and scalability of applications.
Enterprise-grade support for NVIDIA-Certified Systems provides direct access to NVIDIA's AI experts, minimizing risk and maximizing system utilization and user productivity.
The NGC catalog provides a range of options that meet the needs of data scientists, developers, and researchers with various levels of AI expertise. Quickly deploy AI frameworks with containers, get a head start with pre-trained models or resources, and use domain-specific SDKs, use-case-based Collections, and Helm charts for the fastest AI implementations and time to solution.
Spanning AI, data science, and HPC, the NGC catalog features an extensive range of GPU-accelerated software for NVIDIA GPUs.
The NGC catalog hosts containers for the top AI and data science software, tuned, tested and optimized by NVIDIA, as well as fully tested containers for HPC applications and data analytics. NGC catalog containers provide powerful and easy-to-deploy software proven to deliver the fastest results, allowing users to build solutions from a tested framework, with complete control.
Many AI applications have common needs: classification, object detection, language translation, text-to-speech, recommender engines, sentiment analysis, and more. When developing applications with these capabilities, it is much faster to start with a model that is pre-trained and then tune it for a specific use case.
The NGC catalog offers pre-trained models for a variety of common AI tasks that are optimized for NVIDIA Tensor Core GPUs, and can be easily re-trained by updating just a few layers, saving valuable time.
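For illustration, here is a minimal transfer-learning sketch in PyTorch; it is not an NGC-specific API and assumes torchvision is installed and a labelled dataset is available, with the 10-class head and dummy batch serving only as placeholders.

import torch
import torch.nn as nn
from torchvision import models

# Load a model pre-trained on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the pre-trained backbone so only the new layers are updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for a hypothetical 10-class task.
model.fc = nn.Linear(model.fc.in_features, 10)

# Optimize only the new head's parameters during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
# One illustrative training step on a dummy batch; swap in a real DataLoader.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 10, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

Because only the replaced head is trained, fine-tuning typically converges in a fraction of the time needed to train the full network from scratch.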
The NGC catalog offers step-by-step instructions and scripts for creating deep learning models, with sample performance and accuracy metrics to compare your results against. These scripts provide expert guidance on building DL models for image classification, language translation, text-to-speech, and more. Data scientists can quickly build performance-optimized models by easily adjusting the hyperparameters.
Helm charts automate software deployment on Kubernetes clusters, allowing users to focus on using—rather than installing—their software.
The NGC catalog hosts Kubernetes-ready Helm charts that make it easy to deploy powerful third-party software. The NGC Private Registry allows DevOps teams to push and share their Helm charts, so teams can take advantage of consistent, secure, and reliable environments to speed up development-to-production cycles.
The NVIDIA GPU Operator Helm chart deploys a suite of NVIDIA drivers, the container runtime, the device plug-in, and management software that IT teams can install on Kubernetes clusters to get users' workloads running faster.
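As an illustration, the sketch below installs the GPU Operator chart from the NGC Helm repository using Python's subprocess module; it assumes helm is installed and a kubeconfig for the target cluster is active, and the repository URL and chart name should be verified against the NGC catalog before use.

import subprocess

# Add the NGC Helm repository and refresh the local chart index.
subprocess.run(
    ["helm", "repo", "add", "nvidia", "https://helm.ngc.nvidia.com/nvidia"],
    check=True,
)
subprocess.run(["helm", "repo", "update"], check=True)

# Install the GPU Operator; it rolls out drivers, container runtime hooks,
# the device plug-in, and management components across the cluster's GPU nodes.
subprocess.run(
    ["helm", "install", "--wait", "--generate-name", "nvidia/gpu-operator"],
    check=True,
)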
Collections are curated, use-case-based content in one easy-to-use package. Collections make it easy to discover the compatible framework containers, models, Jupyter notebooks, and other resources to get started faster. In addition, each collection provides detailed documentation for deploying all of its content for a specific use case.
The NGC catalog offers ready-to-use Collections for various applications, including NLP, ASR, intelligent video analytics, and object detection.
The NGC catalog features the NVIDIA Transfer Learning Toolkit, an SDK that allows deep learning application developers and data scientists to re-train object detection and image classification models; the resulting models can be easily deployed with the NVIDIA DeepStream SDK for intelligent video analytics.
Accelerate Your Workflow with the NGC Catalog
Deploy software from the NGC catalog with confidence on any platform, including the cloud, on-premises NVIDIA-Certified Systems, and the edge, and maximize your investment with NGC Support Services.
NGC catalog software runs on a wide variety of NVIDIA GPU-accelerated platforms, including NVIDIA-Certified Systems, NVIDIA DGX™ systems, workstations powered by NVIDIA TITAN and NVIDIA Quadro® GPUs, virtualized environments with NVIDIA Virtual Compute Server, and cloud platforms.
NGC Support Services provides enterprise-grade support to ensure NVIDIA-Certified Systems run optimally, maximizing system utilization and user productivity. The service gives enterprise IT direct access to NVIDIA subject matter experts to quickly address software issues and minimize system downtime.
The NGC catalog provides a comprehensive hub of GPU-accelerated containers for AI, machine learning, and HPC that are optimized, tested, and ready to run on supported NVIDIA GPUs on-premises and in the cloud. In addition, it provides pre-trained models, model scripts, and industry solutions that can be easily integrated into existing workflows.
Compiling and deploying DL frameworks from source is time-consuming and error-prone. Optimizing AI software requires expertise. Building models requires expertise, time, and compute resources. The NGC catalog takes care of these challenges with GPU-optimized software and tools that data scientists, developers, and IT can leverage, so they can focus on building their solutions.
Each container has a pre-integrated set of GPU-accelerated software. The stack includes the chosen application or framework, the NVIDIA CUDA Toolkit, accelerated libraries, and other necessary dependencies, all tested and tuned to work together immediately with no additional setup.
The NGC catalog features top AI software such as TensorFlow, PyTorch, MXNet, NVIDIA TensorRT™, RAPIDS, and many more. Browse the NGC catalog to see the full list.
NGC catalog containers run on PCs, workstations, HPC clusters, NVIDIA DGX systems, NVIDIA-Certified Systems, and on NVIDIA GPUs from supported cloud providers. The containers run in Docker and Singularity runtimes. View the NGC documentation for more information.
NVIDIA offers virtual machine images in the marketplace section of each supported cloud service provider. To run an NGC container, simply pick the appropriate instance type, launch the NGC image, and pull the desired container from the NGC catalog. The exact steps vary by cloud provider, but you can find step-by-step instructions in the NGC documentation.
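For example, the following sketch pulls and launches an NGC container with Docker from Python; it assumes Docker with the NVIDIA Container Toolkit is installed, and the PyTorch container tag is illustrative, so check the NGC catalog for the current monthly release.

import subprocess

image = "nvcr.io/nvidia/pytorch:24.01-py3"  # illustrative tag; check the catalog

# Pull the GPU-optimized container from the NGC registry (nvcr.io).
subprocess.run(["docker", "pull", image], check=True)

# Start an interactive session with all GPUs exposed to the container.
subprocess.run(
    ["docker", "run", "--gpus", "all", "-it", "--rm", image],
    check=True,
)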
The most popular deep learning software, such as TensorFlow, PyTorch, and MXNet, is updated monthly by NVIDIA engineers to optimize the complete software stack and get the most from your NVIDIA GPUs.
There is no charge to download the containers from the NGC catalog (subject to the terms of the TOU). However, for running in the cloud, each cloud service provider will have their own pricing for GPU compute instances.
No, it is just a catalog that delivers GPU-optimized software stacks.
The NGC Private Registry was developed to provide users with a secure space to store and share custom containers, models, model scripts, and Helm charts within their enterprise. The Private Registry allows them to protect their IP while, at the same time, increasing collaboration.
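For illustration, here is a minimal sketch of re-tagging and pushing a custom container image into a Private Registry namespace; it assumes a prior docker login to nvcr.io with an NGC API key, and the org, team, and image names are placeholders.

import subprocess

source = "my-model-image:latest"                      # local image (placeholder)
target = "nvcr.io/my-org/my-team/my-model-image:1.0"  # registry path (placeholder)

# Re-tag the local image into the Private Registry namespace, then push it.
subprocess.run(["docker", "tag", source, target], check=True)
subprocess.run(["docker", "push", target], check=True)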
Users get access to the NVIDIA DevTalk Developer Forum (https://devtalk.nvidia.com), supported by a large community of AI and GPU experts from the NVIDIA customer, partner, and employee ecosystem.
In addition, NGC Support Services provides L1-L3 support on NVIDIA-Certified Systems, available through our OEM resellers.
NVIDIA-Certified Systems, consisting of EGX and HGX platforms, enable enterprises to confidently choose performance-optimized hardware and software solutions that securely and optimally run their AI workloads, both in smaller configurations and at scale. See the full list of NVIDIA-Certified Systems.
Please see https://ngc.nvidia.com/legal/terms