Watch NVIDIA CEO Jensen Huang Deliver the Keynote at MWC LA 2019

5G Meets AI

At Mobile World Congress (MWC), we explored how AI, machine learning, edge computing, and other cutting-edge technologies are transforming the telecommunications (telco) industry. NVIDIA’s keynote was filled with announcements for enterprises, telcos, and gamers. Read the summary blog post.

Booth Theater Talks

We showcased talks from ecosystem partners on a wide range of topics, covering 5G, smart retail, and AI at the edge. On-demand recordings are now available.

NVIDIA Featured Speakers at the Applied AI Forum

Soma Velayutham

Global Business Development Lead for Telecoms

AI Executive Roundtable: Accelerating AI Development in Networks

Keith Strier

VP, Worldwide AI Initiatives

AI Panel: The Opportunity at the Edge and How to Monetize It

Get Hands-On Training in AI

Explore the deep learning technology that's powering seismic shifts in telecommunications, like 5G and IoT.

  • Accelerating Data Science Workflows with RAPIDS

    Prerequisites: Advanced competency in Pandas, NumPy, and scikit-learn 

    The open source RAPIDS project allows data scientists to GPU-accelerate their data science and data analytics applications from beginning to end, creating possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows.

    Learn how to GPU-accelerate your data science applications by:

    • Utilizing key RAPIDS libraries like cuDF (GPU-enabled Pandas-like dataframes) and cuML (GPU-accelerated machine learning algorithms)
    • Learning techniques and approaches to end-to-end data science, made possible by rapid iteration cycles created by GPU acceleration
    • Understanding key differences between CPU-driven and GPU-driven data science, including API specifics and best practices for refactoring

    Upon completion, you'll be able to refactor existing CPU-only data science workloads to run much faster on GPUs and write accelerated data science workflows from scratch.
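    The refactoring pattern the course describes rests on cuDF mirroring the pandas API. As a minimal sketch (the column names are illustrative, not from the course), the CPU-only workflow below typically ports to the GPU by swapping the import on a RAPIDS-enabled machine:

    ```python
    # cuDF mirrors the pandas API, so on a RAPIDS-enabled GPU machine this
    # same workflow can run as `import cudf as pd` with no other changes.
    import pandas as pd

    # Illustrative telco-style data: per-cell latency measurements.
    df = pd.DataFrame({
        "cell_id": [1, 1, 2, 2, 3],
        "latency_ms": [12.0, 15.0, 9.0, 11.0, 30.0],
    })

    # Groupby/aggregate code like this runs unchanged under cuDF.
    summary = df.groupby("cell_id")["latency_ms"].mean().reset_index()
    print(summary)
    ```

    In practice the course covers where the two APIs diverge; this sketch only shows the happy path where a swap of the import suffices.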

  • Deep Learning at Scale with Horovod

    Prerequisites: Competency in Python and professional experience training deep learning models in Python

    Learn how to scale deep learning training to multiple GPUs with Horovod, the open-source distributed training framework originally built by Uber and hosted by the LF AI Foundation. In this course, you'll:

    • Complete a step-by-step refactor of a Fashion-MNIST classification model to use Horovod and run on four NVIDIA V100 GPUs
    • Understand Horovod's MPI roots and develop an intuition for parallel programming motifs like multiple workers, race conditions, and synchronization
    • Use techniques like learning rate warmups that greatly impact scaled deep learning performance

    Upon completion, you'll be able to use Horovod to effectively scale deep learning training in new or existing code bases.
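    The learning rate warmup mentioned above can be sketched independently of any framework. The common heuristic when scaling to N workers is to target N times the base learning rate (since the effective batch size grows N-fold) and ramp up to it over the first few epochs; the function below is an illustrative sketch of that idea, not Horovod's API:

    ```python
    def warmup_lr(base_lr, num_workers, epoch, warmup_epochs=5):
        """Linearly ramp the learning rate toward base_lr * num_workers.

        Illustrative sketch of the linear-scaling + warmup heuristic:
        start near base_lr and reach the scaled target after
        warmup_epochs, avoiding divergence in the first epochs.
        """
        target = base_lr * num_workers
        if epoch >= warmup_epochs:
            return target
        # Fraction of warmup completed at the end of this epoch.
        return base_lr + (target - base_lr) * (epoch + 1) / warmup_epochs

    # Ramp from 0.1 toward 0.4 on 4 workers over 5 epochs.
    print([round(warmup_lr(0.1, 4, e), 3) for e in range(6)])
    # → [0.16, 0.22, 0.28, 0.34, 0.4, 0.4]
    ```

    In Horovod itself, a schedule like this is typically applied through the framework's own learning-rate callbacks rather than hand-rolled.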

  • Optimization and Deployment of TensorFlow Models with TensorRT

Prerequisites: Experience with TensorFlow and Python

Learn the fundamentals of generating high-performance deep learning models on the TensorFlow platform using the built-in TensorRT library (TF-TRT) and Python. You'll explore how to:

    • Pre-process classification models and freeze graphs and weights in order to perform optimization
    • Apply the fundamentals of graph optimization and quantization using FP32, FP16, and INT8
    • Use the TF-TRT API to optimize subgraphs and select the optimization parameters that best fit your model
    • Design and embed custom operations in Python to work around unsupported layers and optimize detection models

    Upon completion, you'll understand how to utilize TF-TRT to achieve deployment-ready optimized models.
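    The INT8 quantization mentioned above can be illustrated in plain NumPy. A symmetric scheme maps FP32 values into [-127, 127] using a scale derived from a calibration maximum; this is a simplified sketch of the idea behind TensorRT's INT8 calibration, not the TF-TRT API (which chooses the calibration range from activation histograms):

    ```python
    import numpy as np

    def quantize_int8(x, calib_max):
        """Symmetric INT8 quantization: [-calib_max, calib_max] -> [-127, 127].

        Simplified sketch; TensorRT's real calibrator derives calib_max
        from representative data rather than taking it as an argument.
        """
        scale = calib_max / 127.0
        q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        """Recover approximate FP32 values from the INT8 codes."""
        return q.astype(np.float32) * scale

    x = np.array([0.5, -1.0, 2.0, 0.0], dtype=np.float32)
    q, scale = quantize_int8(x, calib_max=2.0)
    x_hat = dequantize(q, scale)
    print(q, x_hat)  # x_hat differs from x only by small rounding error
    ```

    The rounding error shown here is the accuracy cost that INT8 trades for roughly 4x smaller activations and much higher throughput on tensor cores.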

  • AI Workflows for Intelligent Video Analytics with DeepStream

Prerequisites: Experience with C++ and GStreamer

    The DeepStream 3.0 framework features hardware-accelerated building blocks of Intelligent Video Analytics (IVA) applications. This allows developers to focus on building core deep learning networks. The DeepStream SDK underpins a variety of use cases and offers flexibility on the deployment medium.

    You’ll learn how to:

• Deploy a DeepStream pipeline for parallel, multi-stream video processing and deliver applications with maximum throughput at scale
    • Configure the processing pipeline and create intuitive, graph-based applications
    • Leverage multiple deep network models to process video streams and extract more intelligent insights

    Upon completion, you'll know how to create AI-based video analytics applications using DeepStream to transform video streams into actionable insights.

  • Signal Processing with DIGITS

    Prerequisites: Basic experience training neural networks

Deep neural networks can match or exceed human performance at classifying images, which has implications beyond what we expect of computer vision. Learn how to convert radio frequency (RF) signals into images to detect a weak signal corrupted by noise. You'll learn how to:

    • Treat non-image data as image data
    • Implement a deep learning workflow (load, train, test, adjust) in DIGITS
    • Test performance programmatically and guide performance improvements

    Upon completion, you’ll be able to classify both image and image-like data using deep learning.
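    Treating non-image data as image data typically means converting the RF time series into a 2-D time-frequency spectrogram that an image classifier (such as one trained in DIGITS) can consume. A minimal NumPy sketch of that conversion, with illustrative window and hop sizes not taken from the course:

    ```python
    import numpy as np

    def spectrogram(signal, win=64, hop=32):
        """Convert a 1-D sample stream into a 2-D image-like array.

        Each column is the magnitude spectrum of one Hanning-windowed
        frame, so the time series becomes a time-frequency "image".
        Window and hop sizes are illustrative choices.
        """
        frames = [
            signal[i:i + win] * np.hanning(win)
            for i in range(0, len(signal) - win + 1, hop)
        ]
        # rfft keeps the non-negative frequencies: win//2 + 1 bins per frame.
        return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

    # A noisy tone: the spectrogram concentrates energy in one frequency row,
    # making the weak signal visible to an image classifier.
    t = np.arange(1024)
    x = np.sin(2 * np.pi * 0.1 * t) + 0.1 * np.random.randn(1024)
    img = spectrogram(x)
    print(img.shape)  # → (33, 31): frequency bins x time frames
    ```

    Once the signal is in this form, the DIGITS workflow (load, train, test, adjust) proceeds exactly as it would for ordinary images.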

Learn more about the NVIDIA Deep Learning Institute.

Proudly Working with Our Partner Community

ASRock Rack
Dell Technologies
Red Hat