Connect with the Experts

Speak With NVIDIA Experts

GTC Digital brings all the great training, research, insights, and direct access of NVIDIA’s GPU Technology Conference online. Schedule time to meet one-on-one with the brilliant minds behind NVIDIA’s products and research during a Connect with the Experts session. These 60-minute live Q&A sessions provide an opportunity to ask questions, confirm hypotheses, and make suggestions.

I love being able to meet the people behind the products that I use. Thanks for being friendly and open to questions.

- GTC 2019 Connect with the Experts attendee


See GTC 2020 Connect with the Experts options below. Our NVIDIA experts will provide “best practices” and “how to” answers to your technical questions.

ACCELERATED DATA SCIENCE

AUTONOMOUS MACHINES


      NVIDIA Jetson is the world's leading computing platform for AI at the edge. High in performance and low in power, it's ideal for compute-intensive embedded applications like robots, drones, mobile medical imaging, and Intelligent Video Analytics (IVA). OEMs, independent developers, makers, and hobbyists can use Jetson developer kits and modules to explore the future of embedded computing and artificial intelligence. Have questions? Jetson experts will be available to discuss the platform's capabilities, SDKs, and development tools, and to answer questions that help you rapidly deploy AI at the edge.


AUTONOMOUS VEHICLES

      The NVIDIA DRIVE IX intelligent experience software stack runs on the DriveWorks middleware layer to enhance the driver’s situational awareness, assist in driving functions, and provide natural language interactions between the vehicle and its occupants. Learn from our Developer Zone Forum experts how to create a safer, AI-powered cockpit experience using DriveWorks in this hour-long Q&A session.
  • CWE21185
    Using CUDA, TensorRT and DriveWorks on NVIDIA DRIVE AGX
    Josh Park, Solutions Architect
    Anurag Dixit, Deep Learning Software Engineer
    Yu-Te Chen, Deep Learning Software Engineer
    Yogesh Kini, GPU Graphics Architecture Manager
    Karthik Raghavan Ravi, Software Engineering Manager

      Autonomous vehicles need fast, accurate perception to perform safely. NVIDIA CUDA software and TensorRT on DRIVE AGX can accelerate massive computation workloads in parallel and optimize DNN inference. Hear from our Developer Zone Forum experts how to leverage these building blocks, along with DriveWorks, and understand how to address safety when integrating CUDA into mission-critical software.
      Learn how to leverage the DRIVE OS system software along with the DriveWorks middleware layer for efficient autonomous vehicle development. DRIVE OS provides a flexible end-to-end software and hardware development platform. Together with DriveWorks, it delivers plug-and-play components as well as the ability to customize applications for specialized use cases.

COMPUTER VISION / INTELLIGENT VIDEO ANALYTICS / VIDEO & IMAGE PROCESSING

  • CWE21120
    Using Video Codec SDK and Optical Flow SDK on NVIDIA GPUs Effectively
    Abhijit Patait, Director, System Software
    Roman Arzumanyan, Software Engineer
    Stefan Schoenefeld, DevTech Engineer and Manager

      NVIDIA GPUs starting with the NVIDIA Turing generation feature an optical flow hardware accelerator that enhances several applications, including AI/DL, object tracking, video frame interpolation, and video analytics. The optical flow functionality is available to software developers through NVIDIA's Optical Flow SDK.
      Bring your questions, suggestions, and feature requests related to optical flow hardware and SDK to this session. Discover how it can be used in the applications above, and how you can leverage the optical flow with GPU inferencing capabilities to build amazing applications for various industries. Plus, you'll learn about new features and the roadmap of optical flow hardware and software.
      The session will be staffed by the NVIDIA engineers responsible for multimedia software.
  • CWE21102
    Transfer Learning Toolkit
    Farzin Aghdasi, Sr. SW Manager - Deep Learning for IVA
    Zeyu Zhao, Software Engineer
    Subhashree Radhakrishnan, Deep Learning Engineer
    Varun Praveen, Sr. System Software Engineer
    Arihant Jain, Deep Learning Software Engineer


      The NVIDIA Transfer Learning Toolkit is ideal for deep learning application developers and data scientists seeking a faster, more efficient training workflow for Intelligent Video Analytics (IVA). The toolkit abstracts and accelerates deep learning training by letting developers fine-tune NVIDIA-provided, domain-specific pre-trained models instead of going through the time-consuming process of building Deep Neural Networks (DNNs) from scratch. The pre-trained models accelerate the developer's training process and eliminate the higher costs associated with large-scale data collection, labeling, and training models from scratch. We'll show how to train, prune, and optimize popular models and create TensorRT engines.

  • CWE22241
    Developing IVA Software Using NVIDIA DeepStream SDK
    Kaustubh Purandare, Director - System Software
    Zheng Liu, Senior System Software Engineer
    Paul Shin, Senior Software Engineer
    Prashant Gaikwad, Engineer, Systems Software


      NVIDIA’s DeepStream SDK delivers a complete streaming analytics toolkit for AI-based video and image understanding, as well as multi-sensor processing. DeepStream (DS) is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixels and sensor data into actionable insights. Ask us about developing intelligent video analytics software using DS, the basics of pipeline creation and design, Python bindings for DS, optimizing the DS SDK pipeline, and DS SDK internet-of-things use cases.


DATA CENTER / CLOUD INFRASTRUCTURE HARDWARE AND SOFTWARE


      Learn about the end-to-end platform and resources, and connect with the experts supporting the developer community. We'll discuss applications, benchmarking, and the architecture fundamentals required for self-driving AI infrastructure.

  • CWE21705
    Data Center Monitoring and Profiling
    Brent Stolle, Software Engineering Manager
    Scott McMillan, Solutions Architect


      Connect with developers from NVIDIA's Data Center GPU Manager software (https://developer.nvidia.com/dcgm) on how to effectively monitor NVIDIA GPUs in your data center. Ask questions and see demos of new Data Center Profiling (DCP) features that allow you to monitor high-resolution profiling counters across your data center. Additionally, we can help you strategize how to integrate DCGM monitoring into third-party tools like Kubernetes, Prometheus, Collectd, Telegraf, and other data collectors.


      This session is your opportunity to meet with NVIDIA experts one-on-one for questions on using GPU-accelerated software from NGC for deep learning, machine learning, and HPC. Get your questions answered on topics such as strategies for using NGC in your workflows; running NGC containers on different platforms (cloud service providers, DGX systems, NVIDIA TITAN, NVIDIA Quadro workstations, NGC-Ready systems); using NGC containers with Docker, Singularity, and Kubernetes; running on bare-metal or in virtualized environments.

  • CWE22195
    Containers Runtime, Orchestration and Monitoring
    Renaud Gaubert, Software Engineer
    Rajat Chopra, Principal Software Engineer
    Jon Mayo, Senior Software Engineer
    Pramod Ramarao, Senior Product Manager


      Interactive session to answer any questions you might have regarding:
      - Using GPUs with Linux container technologies (such as Docker, Podman, LXC, rkt, Kata, or Singularity)
      - Deploying GPU applications in your cluster with container orchestrators (such as Kubernetes or Swarm)
      - Monitoring your applications (DCGM, Prometheus, and Grafana)
      - Running a fully containerized GPU stack (driver container, device plugin, dcgm-exporter)
      - Using the NVIDIA GPU Operator (one-click deploy, driver upgrades, container runtime enablement)
      We'll also share tips on how to tune containers for high-performance applications.
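To make the orchestration topic concrete, here is a minimal sketch of a Kubernetes pod spec that requests a GPU through the NVIDIA device plugin. The `nvidia.com/gpu` resource name is the standard one the plugin exposes; the pod name and image tag are illustrative placeholders, not something prescribed by this session.

```yaml
# Minimal pod spec requesting one GPU via the NVIDIA device plugin.
# The image tag is illustrative; pick a CUDA image matching your driver.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-smoke-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-container
      image: nvidia/cuda:10.2-base
      command: ["nvidia-smi"]
      resources:
        limits:
          nvidia.com/gpu: 1   # scheduled only onto nodes exposing GPUs
```

Applied with `kubectl apply -f`, the scheduler places the pod only on nodes where the device plugin advertises GPUs.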


DEEP LEARNING INFERENCE - OPTIMIZATION AND DEPLOYMENT

DEEP LEARNING TRAINING AT SCALE

  • CWE21282
    Deep Learning Profiling Technologies
    Poonam Chitale, Senior Product Manager
    David Zier, Engineer, ASIC


      In this session, we'll provide guidance to data scientists and deep learning researchers who are trying to optimize their networks to take advantage of the high performance GPUs have to offer. NVIDIA has been working on profiling tools and technologies that make profiling part of the workflow. To build high-quality models that train faster, you can profile them to understand which operations take the most time and which iterations make the best use of Tensor Cores, then get recommendations on where performance can be improved. These tools vary in the reports and visualizations they provide; talk to our experts to understand which tool to use when.

  • CWE21698
    Inter-GPU Communication with NCCL
    Sylvain Jeaugey, Senior Computing/Networking Engineer
    Sreeram Potluri, Systems Software Manager
    David Addison, Senior Software Engineer


      NCCL (NVIDIA Collective Communication Library) optimizes inter-GPU communication over PCIe, NVIDIA NVLink, and InfiniBand, powering large-scale training for most DL frameworks, including TensorFlow, PyTorch, MXNet, and Chainer.
      Come discuss NCCL's performance, features, and latest advances.
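For intuition, the all-reduce collective NCCL provides is commonly implemented as a ring algorithm: a reduce-scatter phase followed by an all-gather. The CPU-only Python sketch below simulates that ring on plain lists; it involves no NCCL or GPUs, and the function name is ours, but the chunk-passing pattern mirrors the real algorithm.

```python
# CPU-only simulation of ring all-reduce: the pattern behind collectives
# like ncclAllReduce. Each "rank" holds a vector; afterwards every rank
# holds the element-wise sum of all vectors.

def ring_allreduce(buffers):
    n = len(buffers)                    # number of simulated ranks
    chunks = [list(b) for b in buffers]
    size = len(chunks[0])
    assert size % n == 0, "vector length must divide evenly into chunks"
    c = size // n                       # elements per chunk

    # Phase 1: reduce-scatter. After n-1 steps, rank r owns the fully
    # reduced chunk (r + 1) % n.
    for t in range(n - 1):
        for r in range(n):
            idx = (r - t) % n           # chunk rank r forwards this step
            dst = (r + 1) % n           # its ring neighbor
            for i in range(idx * c, (idx + 1) * c):
                chunks[dst][i] += chunks[r][i]

    # Phase 2: all-gather. Each rank circulates its reduced chunk until
    # every rank has every chunk.
    for t in range(n - 1):
        for r in range(n):
            idx = (r + 1 - t) % n
            dst = (r + 1) % n
            for i in range(idx * c, (idx + 1) * c):
                chunks[dst][i] = chunks[r][i]
    return chunks

print(ring_allreduce([[1, 2], [10, 20]]))   # [[11, 22], [11, 22]]
```

Each rank sends only to its neighbor, so link bandwidth is used evenly regardless of rank count, which is why the ring pattern scales to large training jobs.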

  • CWE21758
    Fast AI Data Pre-Processing with NVIDIA Data Loading Library (DALI)
    Maitreyi Roy, Product Manager, Deep Learning Software
    Przemek Tredak, Sr Developer Technology Engineer, DL Framework (MXNet)


      Come ask your AI/DL data loading, augmentation, and pre-processing questions to experts in the field. Learn about the NVIDIA Data Loading Library (DALI) and the strategies you can employ to avoid I/O and memory bottlenecks.
      With every GPU generation, it becomes increasingly difficult to keep the data pipeline full so that the GPU can be fully utilized. DALI is our response to that problem: a portable, open-source library for decoding and augmenting images and videos to accelerate deep learning applications. In this session, you'll learn more about DALI and how it can address your needs.
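As a loose illustration of why pipelining matters, the producer/consumer sketch below overlaps "loading" with "training" using a background thread and a bounded queue. It is plain CPU Python with made-up stand-ins for decoding and the training step, not DALI's API, which does the heavy lifting on the GPU.

```python
# CPU-only sketch of the prefetching idea behind data-loading libraries
# like DALI: a background thread prepares batches while the consumer
# runs the (mock) training step, so I/O and compute overlap.
import queue
import threading

def loader(batches, q):
    """Producer: 'decode and augment' batches, then signal completion."""
    for batch in batches:
        q.put([x * 2 for x in batch])   # stand-in for decode + augment
    q.put(None)                         # sentinel: no more data

def train(q):
    """Consumer: pull ready batches and run a mock training step."""
    results = []
    while True:
        batch = q.get()
        if batch is None:
            break
        results.append(sum(batch))      # stand-in for one training step
    return results

q = queue.Queue(maxsize=2)              # bounded queue = prefetch depth
t = threading.Thread(target=loader, args=([[1, 2], [3, 4], [5, 6]], q))
t.start()
losses = train(q)
t.join()
print(losses)                           # [6, 14, 22]
```

The bounded queue caps how far the loader can run ahead, which is the same knob real pipelines expose as prefetch depth.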


FRAMEWORKS (DL AND NON - DL) / LIBRARIES / RUNTIMES

  • CWE21216
    NVIDIA Math Libraries
    Harun Bayraktar, Manager, CUDA Math Libraries
    Lung Chien, Sr. Software Engineer
    Lukasz Ligowski, CUDA FFT Library Lead
    Mahesh Khadatare, Sr CUDA Math Library Engineer
    Zoheb Khan, Senior Software Engineer

      Come meet the engineers who create the NVIDIA Math Libraries to get answers to your questions, give feedback on existing functionality, or request new functionality. We'll have engineers from the linear algebra libraries (cuBLAS, cuSOLVER, cuSPARSE, cuTENSOR) and the signal and image processing libraries (cuFFT, NPP, and nvJPEG). They'll be happy to talk to you about single- and multi-GPU libraries, as well as the new device libraries.
  • CWE21218
    How can I Leverage Tensor Cores through NVIDIA Math Libraries?
    Harun Bayraktar, Manager, CUDA Math Libraries
    Piotr Majcher, Deep Learning Software Engineer
    Azzam Haidar, Senior Math Libraries Engineer
    Paul Springer, Senior Developer Technology Engineer


      NVIDIA GPUs and math libraries offer a continuously expanding array of tensor core-accelerated, mixed-precision linear algebra functionality. Come ask our library engineers questions that can help you get the most out of this technology in your applications.

  • CWE22315
    DL Basics
    Michael O'Connor, Director of Deep Learning
    Cliff Woolley, Director, DL Frameworks Engineering
    Davide Onofrio, Senior Deep Learning Technical Marketing Engineer
    Kaixi Hou, Software Engineer


      This is an introductory session where we discuss the basics of deep learning. We will answer questions on training, inference, DL models and performance optimization on the GPU.
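As a taste of the training basics covered here, the sketch below fits a two-parameter linear model with plain gradient descent in pure Python. It is a toy for intuition only; frameworks like TensorFlow and PyTorch automate these same steps, at scale, on the GPU.

```python
# Minimal, CPU-only sketch of a training loop: fit y = w*x + b by
# gradient descent on mean squared error.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # generated by y = 2x + 1
w, b = 0.0, 0.0
lr = 0.05                          # learning rate

for epoch in range(2000):
    # Forward pass: accumulate the loss gradients over the batch
    grad_w = grad_b = 0.0
    for x, y in zip(xs, ys):
        err = (w * x + b) - y      # prediction error
        grad_w += 2 * err * x / len(xs)
        grad_b += 2 * err / len(xs)
    # Update step: move parameters against the gradient
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))    # approaches w=2, b=1
```

Inference is then just the forward pass with the learned parameters; performance optimization is about doing these passes with as little wasted hardware time as possible.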


HPC AND AI

  • CWE21106
    RTcore for Compute
    Vishal Mehta, Developer Technology


      Learn about recent advances in the RT Core architecture, how to exploit RT Cores using OptiX for general-purpose compute, and algorithmic patterns that can be accelerated with RT Cores. NVIDIA's recent development of RT Cores has dramatically impacted graphics applications using ray tracing. Many other applications have similar computation patterns and can be accelerated using RT Cores and the RTX software stack. This session is geared toward understanding how to harness the RTX capabilities of GPUs for applications in HPC and machine learning.


PERFORMANCE OPTIMIZATION AND PROFILING

PERSONALIZATION / RECOMMENDATION

  • CWE21747
    Accelerating Recommender System Training and Inference on NVIDIA GPUs
    Chirayu Garg, AI Developer Technology Engineer
    Zehuan Wang, DevTech Engineer
    Even Oldridge, Sr. Applied Research Scientist


      Come and learn how you can use NVIDIA technologies to accelerate your recommender system training and inference pipelines. We've been doing some ground-breaking work on optimizing performance for many stages of recommender systems, including ETL of tabular data, training with terabyte-size embeddings for CTR models on multiple nodes, low-latency inference for Wide & Deep, and more. Running on NVIDIA GPUs, many of these are more than an order of magnitude faster than conventional CPU implementations. We'd be thrilled to learn how these accelerated components may apply to your setup and, if not, what's missing. We'd also like to hear the roles recommenders play in your products, the types of systems you're building, and the challenges you face. This session is ideal for data scientists and engineers who are responsible for developing, deploying, and scaling their recommender pipelines. Please join us for what's sure to be an interesting series of discussions.
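For readers new to the area, the toy sketch below shows the embedding-lookup-and-score pattern at the heart of many recommenders: represent users and items as vectors, score by dot product, rank by score. The vectors and names here are invented for illustration; production systems train terabyte-scale embeddings on GPUs.

```python
# Toy, CPU-only sketch of embedding-based scoring in a recommender:
# look up user/item vectors and rank items by dot product.
user_emb = {"u1": [0.9, 0.1], "u2": [0.2, 0.8]}
item_emb = {"book": [1.0, 0.0], "film": [0.0, 1.0], "game": [0.7, 0.7]}

def score(u, i):
    """Affinity of user u for item i: dot product of their embeddings."""
    return sum(a * b for a, b in zip(user_emb[u], item_emb[i]))

def recommend(u, k=2):
    """Return the top-k items for user u, highest score first."""
    ranked = sorted(item_emb, key=lambda i: score(u, i), reverse=True)
    return ranked[:k]

print(recommend("u1"))   # ['book', 'game']
```

The expensive parts in practice, such as ETL, embedding training, and low-latency serving, are exactly the pipeline stages the session description above covers.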


PROGRAMMING LANGUAGES, COMPILERS & TOOLS

  • CWE21284
    Future of ISO and CUDA C++
    Bryce Adelstein Lelbach, CUDA C++ Core Libraries Lead
    Olivier Giroux, Distinguished Architect
    David Olsen, Senior Software Engineer

      Curious about the future of C++? Interested in learning about the C++ committee's roadmap for safety critical, concurrent, parallel, and heterogeneous programming?
      Come join Olivier Giroux (chair of the C++ committee's Concurrency and Parallelism group), Bryce Adelstein Lelbach (chair of the C++ committee's Library Evolution Incubator and Tooling groups), and the rest of NVIDIA's ISO C++ committee delegation for a Q&A session about the future of the C++ programming language.
  • CWE21285
    Thrust, CUB, and libcu++ Users Forum
    Bryce Adelstein Lelbach, CUDA C++ Core Libraries Lead
    Michal Dominiak, Software Engineer

      Come join NVIDIA's CUDA C++ Core Libraries team for a Q&A session on:
      - Thrust—CUDA C++'s high-productivity general-purpose library and parallel algorithms implementation
      - CUB—CUDA C++'s high-performance collective algorithm toolkit
      - libcu++—the CUDA C++ standard library (introduced in CUDA 10.2)
      Usage questions, feature requests, and bug reports are most welcome!
  • CWE21103
    Multi-GPU Programming
    Jiri Kraus, Senior Developer Technology Engineer
    Akshay Venkatesh, Senior Software Engineer

      Wondering how to scale your code to multiple GPUs in a node or cluster? Need to discuss NVIDIA CUDA-aware MPI details? This is the right session for you to ask your beginner or expert questions on multi-GPU programming, GPUDirect, NVSHMEM, and MPI. Connect with the Experts offers a range of informal sessions where you can ask experts from NVIDIA and other organizations your burning questions about a specific subject.
  • CWE21165
    CUDA and Ray Tracing Developer Tools
    Rafael Campana, Director of Engineering, Developer Tools
    Bob Knight, Software Engineer
    Magnus Strengert, Senior Software Engineer, Developer Tools


      With the advances in accelerated GPU computing and rendering come new development challenges. The NVIDIA Nsight developer tools portfolio enables developers to embrace new CUDA features like CUDA Graphs and accelerated ray tracing with NVIDIA OptiX, DX12/DXR, or Vulkan ray tracing. Stop by to talk to the developer tools engineering team and learn how Nsight tools can help you. Share your wish list or challenges so we can shape the future of our tools accordingly.

  • CWE21742
    Accelerating Python with CUDA
    Keith Kraus, AI Infrastructure Manager
    Ashwin Trikuta Srinath, Senior Library Software Engineer
    Stanley Seibert, Sr. Director of Community Innovation
    Dante Gama Dessavre, Senior Data Scientist


      The Python ecosystem is composed of a rich set of powerful libraries that work wonderfully well together, providing coherent, beautiful, *Pythonic* APIs that let developers think less about programming and more about solving problems. On the other hand, Python is known to have performance limitations, and the scale of today's problems has pushed users to look for more efficient solutions. This session focuses on Python-CUDA capabilities, including libraries available in the ecosystem as well as options and strategies for building your own Python interface for custom solutions. We have a team of experts who architect, develop, and maintain open-source Python-CUDA libraries, ready to discuss how to take advantage of CUDA and GPUs from Python.

  • CWE21815
    Directive-Based GPU programming with OpenACC
    Stefan Maintz, Senior Development Technology Engineer
    Markus Wetzstein, HPC Development Technology Engineer
    Alexey Romanenko, Senior Developer Technology Engineer


      OpenACC is a programming model designed to help scientists and developers get started with GPUs faster and be more efficient by maintaining a single source code for multiple platforms. Come join OpenACC experts to ask how to start accelerating your code on GPUs, continue optimizing your GPU code, start teaching OpenACC, host or participate in a hackathon, and more!

  • CWE21914
    CUDA Graphs
    Alan Gray, Senior Developer Technology Engineer
    Jeff Larkin, Senior DevTech Software Engineer


      This is an opportunity to find out more about CUDA Graphs, discuss why they may be advantageous to your particular application, and get help on any problems you may have faced when trying to use them.
      Graphs present a new model for work submission in CUDA. A graph is a series of operations, such as kernel launches, connected by dependencies, which is defined separately from its execution. This allows a graph to be defined once and then launched repeatedly. Separating out the definition of a graph from its execution enables a number of optimizations. First, CPU launch costs are reduced compared to streams because much of the setup is done in advance. Second, presenting the whole workflow to CUDA enables optimizations that might not be possible with the piecewise work submission mechanism of streams.
      This session will be staffed by representatives from the NVIDIA CUDA team developing graphs, as well as NVIDIA application specialists with experience using graphs.


VIRTUALIZATION

      Meet NVIDIANs who have helped customers deploy NVIDIA vGPUs and learn from their experience. Ask questions, get help deploying NVIDIA vGPU, give feedback, or just chat with us!
  • CWE21942
    GPU Virtualization in the Modern Enterprise
    Konstantin Cvetanov, Sr. Solution Architect
    Randall Siggers, Senior Solution Architect
    Jimmy Rotella, Senior Solutions Architect

      In this Connect with the Experts session, NVIDIA vGPU Solution Architects will share their knowledge and expertise about how our customers and partners are implementing virtual GPUs to accelerate graphics (VDI) and compute (AI/ML) workloads in a virtualized data center. Discussions will cover a variety of topics, including best practices for vGPU deployments across use cases in AEC, M&E, healthcare, financial services, higher education, retail, oil & gas, and many other industries, as well as how to get started with technology stacks such as NVIDIA RTX Server and vComputeServer.

Explore the Session Catalog