October 22–24, 2019 | Los Angeles, CA


Watch the NVIDIA Keynote at MWC LA

October 21, 2019 | 3:00 PM


Forging Stronger Connections

Join us at this year’s MWC Los Angeles to explore how AI, machine learning, edge computing, and other cutting-edge technologies are transforming the telecommunications industry. Our expert team of researchers and engineers will be on hand to show you how the world’s leading companies are optimizing networks and innovating faster with NVIDIA technology.

Booth Theater Talks

Visit our booth to hear our partners and experts speak on a wide range of topics, from AI at the edge to how 5G can transform every industry.

NVIDIA Applied AI Sessions

Soma Velayutham

Global Business Development Lead for Telecoms

AI Executive Roundtable: Accelerating AI Development in Networks

Tuesday, 10/22 | 4:00 PM

Keith Strier

VP, Worldwide AI Initiatives

AI Panel: The Opportunity at the Edge and How to Monetize It

Wednesday, 10/23 | 2:55 PM

Get Hands-On Training in AI

Broaden your skill set at MWC Los Angeles with the NVIDIA Deep Learning Institute (DLI). Visit the NVIDIA Learning Center in South Hall for hands-on training with the latest tools in artificial intelligence, and improve your understanding of the deep learning technology powering seismic shifts in telecommunications, such as 5G and IoT.

  • Accelerating Data Science Workflows with RAPIDS

    Prerequisites: Advanced competency in Pandas, NumPy, and scikit-learn 

    The open source RAPIDS project allows data scientists to GPU-accelerate their data science and data analytics applications from beginning to end, creating possibilities for drastic performance gains and techniques not available through traditional CPU-only workflows.

    Learn how to GPU-accelerate your data science applications by:

    • Utilizing key RAPIDS libraries like cuDF (GPU-enabled Pandas-like dataframes) and cuML (GPU-accelerated machine learning algorithms)
    • Learning techniques and approaches to end-to-end data science, made possible by rapid iteration cycles created by GPU acceleration
    • Understanding key differences between CPU-driven and GPU-driven data science, including API specifics and best practices for refactoring

    Upon completion, you'll be able to refactor existing CPU-only data science workloads to run much faster on GPUs and write accelerated data science workflows from scratch.
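    The refactoring pattern this course teaches can be sketched briefly. cuDF deliberately mirrors much of the pandas API, so a CPU workflow can often move to the GPU by swapping the import. This sketch is illustrative (the column names and data are invented, not from the course); it falls back to pandas when RAPIDS is not installed, since cuDF requires an NVIDIA GPU:

    ```python
    # Drop-in refactor pattern: same dataframe code, GPU or CPU backend.
    # cuDF (RAPIDS) needs an NVIDIA GPU; fall back to pandas so the
    # sketch runs anywhere. Data and names are illustrative only.
    try:
        import cudf as xdf        # GPU-accelerated, pandas-like dataframes
    except ImportError:
        import pandas as xdf      # CPU fallback with the same API surface

    df = xdf.DataFrame({
        "cell_id": [1, 1, 2, 2, 3],
        "throughput_mbps": [42.0, 38.5, 55.1, 60.3, 12.7],
    })

    # Identical groupby/aggregate code runs on either backend.
    mean_tp = df.groupby("cell_id").throughput_mbps.mean()
    print(mean_tp)
    ```

    Because the call sites are unchanged, the refactoring work is mostly in auditing for the API differences and best practices the course covers, rather than rewriting logic.
    
    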

  • Deep Learning at Scale with Horovod

    Prerequisites: Competency in Python and professional experience training deep learning models in Python

    Learn how to scale deep learning training to multiple GPUs with Horovod, the open-source distributed training framework originally built by Uber and hosted by the LF AI Foundation. In this course, you'll:

    • Complete a step-by-step refactor of a Fashion-MNIST classification model to use Horovod and run on four NVIDIA V100 GPUs
    • Understand Horovod's MPI roots and develop an intuition for parallel programming motifs like multiple workers, race conditions, and synchronization
    • Use techniques like learning rate warmups that greatly impact scaled deep learning performance

    Upon completion, you'll be able to use Horovod to effectively scale deep learning training in new or existing code bases.
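    The learning-rate warmup mentioned above can be sketched independently of any framework. The usual recipe for scaled training is the linear-scaling rule (target LR = base LR × number of workers) combined with a linear ramp over the first steps so large-batch updates stay stable early on. The helper below is an illustration of that schedule, not a Horovod API:

    ```python
    def scaled_lr_with_warmup(base_lr, num_workers, step, warmup_steps):
        """Linear-scaling rule with linear warmup (illustrative helper,
        not part of Horovod): ramp from base_lr up to
        base_lr * num_workers over the first warmup_steps steps."""
        target_lr = base_lr * num_workers
        if step >= warmup_steps:
            return target_lr
        frac = step / warmup_steps           # fraction of warmup completed
        return base_lr + frac * (target_lr - base_lr)

    # With 4 workers (e.g. four V100s), the LR ramps from 0.01 to 0.04.
    for step in (0, 50, 100):
        print(step, scaled_lr_with_warmup(0.01, 4, step, warmup_steps=100))
    ```

    In a real Horovod job, a schedule like this would be applied through your framework's learning-rate callback or scheduler, keyed off `hvd.size()`.
    
    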

  • Optimization and Deployment of TensorFlow Models with TensorRT

    Prerequisites: Experience with TensorFlow and Python

    Learn the fundamentals of generating high-performance deep learning models in TensorFlow using the built-in TensorRT library (TF-TRT) and Python. You'll explore:

    • How to pre-process classification models and freeze graphs and weights in preparation for optimization
    • The fundamentals of graph optimization and quantization using FP32, FP16, and INT8
    • How to use the TF-TRT API to optimize subgraphs and select the optimization parameters that best fit your model
    • How to design and embed custom operations in Python to work around unsupported layers and optimize detection models

    Upon completion, you'll understand how to utilize TF-TRT to achieve deployment-ready optimized models.
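    The INT8 quantization fundamentals listed above come down to mapping FP32 values onto a small integer range with a calibrated scale. This pure-Python sketch shows the basic symmetric scheme that TensorRT-style INT8 calibration builds on; it is an illustration of the idea, not the TF-TRT API:

    ```python
    def quantize_int8(values):
        """Symmetric INT8 quantization (illustrative, not the TF-TRT API):
        map FP32 values into [-127, 127] with one scale derived from the
        maximum absolute value."""
        amax = max(abs(v) for v in values)
        scale = amax / 127.0 if amax else 1.0
        return [max(-127, min(127, round(v / scale))) for v in values], scale

    def dequantize(quantized, scale):
        """Recover approximate FP32 values from INT8 codes."""
        return [q * scale for q in quantized]

    acts = [0.5, -1.25, 2.0, -0.01]      # example activations
    q, scale = quantize_int8(acts)
    approx = dequantize(q, scale)        # close to acts, within scale/2
    ```

    Real TF-TRT INT8 deployment adds a calibration pass over representative data to choose per-tensor ranges, which is why the course pairs quantization with graph pre-processing.
    
    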

  • AI Workflows for Intelligent Video Analytics with DeepStream

    Prerequisites: Experience with C++ and GStreamer

    The DeepStream 3.0 framework provides hardware-accelerated building blocks for Intelligent Video Analytics (IVA) applications, letting developers focus on their core deep learning networks. The DeepStream SDK underpins a variety of use cases and offers flexibility in where applications are deployed.

    You’ll learn how to:

    • Deploy a DeepStream pipeline for parallel, multi-stream video processing, delivering applications with maximum throughput at scale
    • Configure the processing pipeline and create intuitive, graph-based applications
    • Leverage multiple deep network models to process video streams and extract more intelligent insights

    Upon completion, you'll know how to create AI-based video analytics applications using DeepStream to transform video streams into actionable insights.
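    As a rough illustration of the graph-based pipelines the course configures, a DeepStream application is a chain of GStreamer elements. The sketch below names the general DeepStream plugin family; exact element names and properties vary by SDK version, so treat this as pseudocode and consult the DeepStream documentation for specifics:

    ```
    source (file/RTSP decode)
      -> nvstreammux   (batch multiple video streams for inference)
      -> nvinfer       (TensorRT-accelerated detection/classification)
      -> nvtracker     (optional object tracking across frames)
      -> on-screen display (draw bounding boxes and labels)
      -> sink          (display, file, or message broker)
    ```

    Because the pipeline is declarative, swapping models or adding streams is largely a configuration change rather than new C++ code.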

  • Signal Processing with DIGITS

    Prerequisites: Basic experience training neural networks

    Deep neural networks can outperform humans at classifying images, which has implications well beyond traditional computer vision. Learn how to convert radio frequency (RF) signals into images to detect a weak signal corrupted by noise. You'll learn how to:

    • Treat non-image data as image data
    • Implement a deep learning workflow (load, train, test, adjust) in DIGITS
    • Test performance programmatically and guide performance improvements

    Upon completion, you’ll be able to classify both image and image-like data using deep learning.
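    The "treat non-image data as image data" step above can be sketched with NumPy: stacking windowed FFT magnitudes turns a 1-D RF sample stream into a 2-D array that an image classifier can consume. This is an illustrative spectrogram sketch, not the course's own data pipeline; the signal parameters below are invented for the example:

    ```python
    import numpy as np

    def rf_to_image(signal, window=64, hop=32):
        """Convert a 1-D RF sample stream into a 2-D 'image' by stacking
        log-magnitude windowed FFTs (a simple spectrogram). Illustrative
        sketch only, not the DIGITS course pipeline."""
        frames = []
        for start in range(0, len(signal) - window + 1, hop):
            frame = signal[start:start + window] * np.hanning(window)
            frames.append(np.log1p(np.abs(np.fft.rfft(frame))))
        return np.stack(frames, axis=1)   # shape: (freq_bins, time_frames)

    # A weak tone (amplitude well below the noise floor per sample)
    # still shows up as a bright horizontal band in the image.
    rng = np.random.default_rng(0)
    t = np.arange(4096)
    signal = 0.5 * np.sin(2 * np.pi * (8 / 64) * t) + rng.normal(0.0, 1.0, t.size)
    img = rf_to_image(signal)             # tone concentrates in one FFT bin
    ```

    Once the data is in this form, the DIGITS workflow (load, train, test, adjust) proceeds exactly as it would for ordinary photographs.
    
    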

Learn more about the NVIDIA Deep Learning Institute.

Proudly Working with Our Partner Community

ASRock Rack
Dell Technologies
Red Hat

Don’t Miss a Beat

Keep up to date on everything we’ve planned for MWC Los Angeles, including our NVIDIA® Jetson Developer Kit giveaway. Follow us on Twitter and be the first to hear the details.