

Deep Learning

Deep learning is a subset of AI and machine learning that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, language translation, and others.



Deep learning differs from traditional machine learning techniques in that it can automatically learn representations from data such as images, video, or text without hand-coded rules or human domain knowledge. Its highly flexible architectures learn directly from raw data and can increase their predictive accuracy when provided with more data.

Deep learning is commonly used in computer vision, conversational AI, and recommendation system applications. Computer vision applications use deep learning to gain knowledge from digital images and videos. Conversational AI applications help computers understand and communicate through natural language. Recommendation systems use images, language, and a user’s interests to offer meaningful and relevant search results and services.

Deep learning has led to many recent breakthroughs in AI, such as Google DeepMind’s AlphaGo, self-driving cars, intelligent voice assistants, and many more. With NVIDIA GPU-accelerated deep learning frameworks, researchers and data scientists can significantly speed up deep learning training, cutting jobs that would otherwise take days or weeks down to hours or days. When models are ready for deployment, developers can rely on GPU-accelerated inference platforms for the cloud, embedded devices, or self-driving cars to deliver high-performance, low-latency inference for the most computationally intensive deep neural networks.

Evolution of Deep Learning



NVIDIA AI Platform for Developers



Developing AI applications starts with training deep neural networks on large datasets. GPU-accelerated deep learning frameworks offer the flexibility to design and train custom deep neural networks and provide interfaces to commonly used programming languages such as Python and C/C++. Every major deep learning framework, including PyTorch, TensorFlow, and JAX, is already GPU-accelerated, so data scientists and researchers can be productive in minutes without any GPU programming.
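To illustrate "GPU-accelerated without any GPU programming", here is a minimal PyTorch sketch: the only accelerator-specific line is the device selection, and the framework dispatches the matrix multiply to a GPU kernel when one is available (the tensor shapes here are arbitrary):

```python
import torch

# Use a GPU when one is available; the same code also runs on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(1024, 512, device=device)
b = torch.randn(512, 256, device=device)
c = a @ b  # dispatched to a GPU-accelerated kernel when device == "cuda"
print(c.shape)  # torch.Size([1024, 256])
```

No CUDA code is written by hand; swapping `device` between `"cpu"` and `"cuda"` is the entire porting effort.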



NVIDIA Tensor Cores

For AI researchers and application developers, NVIDIA Hopper and Ampere GPUs powered by Tensor Cores give you an immediate path to faster training and greater deep learning performance. With Tensor Cores enabled, FP32 and FP16 mixed-precision matrix multiplies dramatically accelerate throughput and reduce AI training times.
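The FP32/FP16 mixed-precision recipe can be sketched with PyTorch's automatic mixed precision: `autocast` runs eligible ops in FP16 (which maps onto Tensor Cores on supported GPUs) while a gradient scaler guards against FP16 underflow. This is a minimal sketch with an arbitrary linear model; on a machine without a GPU it falls back to plain FP32:

```python
import torch

# Mixed-precision training sketch: enable AMP only when a GPU is present.
device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"

model = torch.nn.Linear(512, 10).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)  # no-op when disabled

x = torch.randn(32, 512, device=device)
y = torch.randint(0, 10, (32,), device=device)

# Eligible ops (e.g. matmuls) run in FP16 inside autocast on CUDA devices.
with torch.autocast(device_type=device, enabled=use_amp):
    loss = torch.nn.functional.cross_entropy(model(x), y)

scaler.scale(loss).backward()  # loss scaling prevents FP16 gradient underflow
scaler.step(opt)
scaler.update()
```

The mixed-precision path changes only the `autocast` context and the scaler calls; the model and optimizer code stay the same.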




For developers integrating deep neural networks into their cloud-based or embedded applications, the Deep Learning SDK includes high-performance libraries that implement building-block APIs for training and inference directly in their apps. With a single programming model for all GPU platforms, from desktop to data center to embedded devices, developers can start development on their desktops, scale up in the cloud, and deploy to their edge devices with minimal to no code changes.

NVIDIA provides optimized software stacks to accelerate training and inference phases of the deep learning workflow. Learn more on the links below.


Deep Learning SDKs

For developers looking to build deep learning applications, NVIDIA pretrained AI models eliminate the need to build models from scratch or experiment with other open-source models that fail to converge. These models are pretrained on high-quality, representative datasets to deliver state-of-the-art performance and production readiness for a variety of use cases, including computer vision, speech AI, robotics, natural language processing, healthcare, cybersecurity, and many others.


Pretrained AI Models



Every AI Framework - Accelerated


Deep learning frameworks offer building blocks for designing, training, and validating deep neural networks through a high-level programming interface. Every major deep learning framework, such as PyTorch, TensorFlow, and JAX, relies on Deep Learning SDK libraries to deliver high-performance, multi-GPU-accelerated training. As a framework user, it’s as simple as downloading a framework and instructing it to use GPUs for training. Learn more about deep learning frameworks and explore these examples to get started quickly.
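"Instructing a framework to use GPUs" can be made concrete with a minimal PyTorch training loop. The layer sizes and synthetic data below are arbitrary; the point is that `.to(device)` and the `device=` arguments are the only accelerator-specific lines:

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# A tiny regression model; .to(device) is the only GPU-specific call.
model = torch.nn.Sequential(
    torch.nn.Linear(8, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1)
).to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 8, device=device)  # synthetic inputs and targets
y = torch.randn(64, 1, device=device)

first_loss = None
for step in range(200):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    if first_loss is None:
        first_loss = loss.item()
    loss.backward()
    opt.step()

print(f"loss: {first_loss:.3f} -> {loss.item():.3f}")  # loss decreases
```

The same loop runs unchanged on CPU or GPU; scaling to multiple GPUs is handled by framework-level wrappers rather than user-written GPU code.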


Deep Learning Frameworks

GNN Frameworks


Unified Platform
Development to Deployment

Deep learning frameworks are optimized for every NVIDIA GPU platform, from the Titan V desktop developer GPU to data-center-grade Tesla GPUs. This allows research and data science teams to start small and scale out as the data, the number of experiments, the models, and the team grow. Since the Deep Learning SDK libraries are API-compatible across all NVIDIA GPU platforms, when a model is ready to be integrated into an application, developers can test and validate it locally on the desktop and, with minimal to no code changes, validate and deploy it to the Tesla data center platform, the Jetson embedded platform, or the DRIVE autonomous driving platform. This improves developer productivity and reduces the chance of introducing bugs when going from prototype to production.





Get Started With Hands-On Training


The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers in AI and accelerated computing. Get certified in the fundamentals of computer vision through the hands-on, self-paced online course. Plus, check out two-hour electives on Digital Content Creation, Healthcare, and Intelligent Video Analytics.
