Connect with
the Experts

Meet NVIDIA Experts Face to Face

Connect with the Experts is a series of 60-minute Q&A sessions at GTC where attendees can drop into scheduled "office hours" with NVIDIA engineers and researchers to get questions answered on specific topics.

I love being able to meet the people behind the products that I use. Thanks for being friendly and open to questions.

- GTC 2019 Connect with the Experts attendee

See GTC 2020 Connect with the Experts options below. Our NVIDIA experts will provide “best practices” and “how to” answers to your technical questions.

ACCELERATED DATA SCIENCE

  • CWE21728
    Accelerated Data Science on GPUs using RAPIDS
    Shankara Rao Thejaswi Nanditale, Engineer, Compute Devtech
    Corey Nolet, Sr Data Scientist & Engineer
    Dante Gama Dessavre, Senior Data Scientist

      Parallelizing ML workloads on NVIDIA GPUs helps you analyze data and make decisions more efficiently. Come to this session to learn how to use GPUs to accelerate your ML workloads with cuML, the machine learning library of the RAPIDS project.
  • CWE21752
    GPU-Accelerate Your Data Science Pipeline
    Keith Kraus, AI Infrastructure Manager
    Richard Gelhausen, RAPIDS Performance Engineering Manager
    Robert Evans, Distinguished Engineer
    Alessandro Bellina, Senior Software Engineer
    Kuhu Shukla, Sr. Software Engineer

      GPUs and GPU platforms have been responsible for the dramatic advancement of deep learning and other neural net methods in the past several years. At the same time, traditional machine learning workloads, which comprise the majority of business use cases, continue to rely on single-threaded tools (e.g., Pandas and Scikit-Learn) or large, multi-CPU distributed solutions (e.g., Spark and PySpark). Today, the computational limits of CPUs have been reached, and we can take advantage of GPUs to accelerate the data pipeline. We have a team of experts who live and breathe GPU-accelerated data science, ready to discuss ideas, challenges, and solutions to help you accelerate your data science pipeline.
  • CWE21753
    Accelerated Data ETL and Analytics: Implementations and Algorithms
    Nikolay Sakharnykh, Devtech Engineer
    David Wendt, Programmer
    Jake Hemstad, Compute DevTech Engineer
    Mark Harris, Principal System Software Engineer

      Modern data science and analytics applications have high memory bandwidth and computational demands. GPUs are well equipped for this challenge, processing large amounts of data at high speed. This session focuses on implementations and algorithms for data analytics, such as parallel joins, aggregations, and other data manipulation techniques, as well as string operations, distributed systems, and more. We have a team of experts who architect, develop, and maintain core RAPIDS libraries ready to discuss the nuts and bolts of GPU-accelerated Data ETL & Data Analytics.

AUTONOMOUS MACHINES

  • CWE21167
    AI Enabled Intelligent Robotics with Isaac SDK and Isaac Sim
    Christoph Fritsch, Director Solution Architecture
    Swapnesh Wani, Solutions Architect
    Teresa Conceicao, Solutions Architect | Robotics Engineer

      The NVIDIA Isaac Software Development Kit (SDK) is a developer toolbox for accelerating the development and deployment of AI-powered robots. It includes a unique and powerful simulation platform—Isaac Sim—for developing, training, and testing AI-enabled perception and navigation skills in next-generation robots. The SDK accelerates robot development for manufacturers, researchers, and startups by making it easier to add Artificial Intelligence (AI) for perception and navigation. If you want to understand more about Isaac's capabilities or have questions, then join us to connect with our experts!
  • NVIDIA Jetson is the world's leading computing platform for AI at the edge. High in performance and low in power, it's ideal for compute-intensive embedded applications like robots, drones, mobile medical imaging, and Intelligent Video Analytics (IVA). OEMs, independent developers, makers, and hobbyists can use Jetson developer kits and modules to explore the future of embedded computing and artificial intelligence. Have questions? Jetson experts will be available to discuss the platform's capabilities, SDKs, and development tools, and to answer questions to help you rapidly deploy AI at the edge.

AUTONOMOUS VEHICLES

  • The NVIDIA DRIVE IX intelligent experience software stack enhances the driver's situational awareness, assists in driving functions, and provides natural language interactions between the vehicle and its occupants. Learn how to create a safer, AI-powered cockpit experience in this hour-long Q&A session.
  • NVIDIA DRIVE Perception enables robust perception of obstacles, paths, and wait conditions. Together with DRIVE Networks, it forms an end-to-end perception pipeline for autonomous driving. In this hour-long Q&A session, hear how DRIVE Perception makes it possible for developers to innovate in mapping, planning, control, and actuation for autonomous vehicle development.
  • CWE21185
    Using CUDA, TensorRT and DriveWorks on NVIDIA DRIVE AGX
    Josh Park, Solutions Architect
    Anurag Dixit, Deep Learning Software Engineer

      Autonomous vehicles need fast, accurate perception to perform safely. NVIDIA CUDA software and TensorRT on DRIVE AGX can accelerate massive computation workloads in parallel and optimize DNN inference. Hear from the foremost experts on how to leverage these blocks, along with DriveWorks, for robust autonomous vehicle development.
  • Learn how to leverage the DRIVE OS system software along with the DriveWorks middleware layer for efficient autonomous vehicle development. DRIVE OS provides a flexible end-to-end software and hardware development platform. Together with DriveWorks, it delivers plug-and-play components as well as the ability to customize applications for specialized use cases.

COMPUTER VISION / INTELLIGENT VIDEO ANALYTICS / VIDEO & IMAGE PROCESSING

  • CWE21120
    How to Use the Optical Flow Hardware Accelerator on NVIDIA GPUs Effectively
    Abhijit Patait, Director, System Software
    Roman Arzumanyan, Software Engineer
    Stefan Schoenefeld, DevTech Engineer and Manager

      NVIDIA GPUs starting with the Turing generation feature an optical flow hardware accelerator that enhances several applications, including AI/DL, object tracking, video frame interpolation, and video analytics. The optical flow functionality is available to software developers through the NVIDIA Optical Flow SDK.
      Bring your questions, suggestions, and feature requests related to the optical flow hardware and SDK to this session. Discover how it can be used in the applications above, and how you can combine optical flow with GPU inferencing capabilities to build amazing applications for various industries. Plus, you'll learn about new features and the roadmap of the optical flow hardware and software.
      The session will be staffed by the NVIDIA engineers responsible for multimedia software.
  • CWE21102
    Transfer Learning Toolkit
    Farzin Aghdasi, Sr. SW Manager - Deep Learning for IVA
    Zeyu Zhao, Software Engineer
    Subhashree Radhakrishnan, Deep Learning Engineer
    Varun Praveen, Sr. System Software Engineer
    Arihant Jain, Deep Learning Software Engineer

      The NVIDIA Transfer Learning Toolkit is ideal for deep learning application developers and data scientists seeking a faster, more efficient deep learning training workflow for Intelligent Video Analytics (IVA). The toolkit abstracts and accelerates deep learning training by allowing developers to fine-tune NVIDIA-provided, domain-specific, pre-trained models instead of going through the time-consuming process of building Deep Neural Networks (DNNs) from scratch. The pre-trained models accelerate the developer's deep learning training process and eliminate the higher costs associated with large-scale data collection, labeling, and training models from scratch. We'll show how to train, prune, and optimize popular models and create TensorRT engines.


DATA CENTER / CLOUD INFRASTRUCTURE HARDWARE AND SOFTWARE

  • Learn about the end-to-end platform, resources, and how to connect with experts supporting the developer community. We'll discuss applications, benchmarking, and the architecture fundamentals required for self-driving AI infrastructure.

  • CWE21700
    Go Fast Now - NVIDIA DGX POD Reference Architecture
    Hans Mortensen, Senior Solutions Architect
    Craig Tierney, Solution Architect
    Sumit Kumar, Solution Architect
    Scott Ellis, Solutions Engineer, Manager

      Join us to learn the ins and outs of what it takes to build out the infrastructure that your AI developer teams need. Learn from NVIDIA's own experience designing, deploying, and operationalizing AI infrastructure for our own research teams. We'll cover the server, network, storage, power, and cooling for the systems, as well as software stacks essential for effective utilization of the resources.

  • CWE21705
    Data Center Monitoring and Profiling
    Brent Stolle, Software Engineering Manager
    Scott McMillan, Solutions Architect

      Connect with the developers of NVIDIA's Data Center GPU Manager (DCGM) software (https://developer.nvidia.com/dcgm) to learn how to effectively monitor NVIDIA GPUs in your data center. Ask questions and see demos of new Data Center Profiling (DCP) features that let you monitor high-resolution profiling counters across your data center. Additionally, we can help you strategize how to integrate DCGM monitoring into third-party tools like Kubernetes, Prometheus, Collectd, Telegraf, and other data collectors.

  • CWE21722
    NVIDIA NGC for Deep Learning, Machine Learning, and HPC
    Chintan Patel, Sr. Manager, Product Marketing
    Scott McMillan, Solutions Architect

      This session is your opportunity to meet with NVIDIA experts one-on-one for questions on using GPU-accelerated software from NGC for deep learning, machine learning, and HPC. Get your questions answered on topics such as strategies for using NGC in your workflows; running NGC containers on different platforms (cloud service providers, DGX systems, NVIDIA TITAN, NVIDIA Quadro workstations, NGC-Ready systems); using NGC containers with Docker, Singularity, and Kubernetes; running on bare-metal or in virtualized environments.


DEEP LEARNING INFERENCE - OPTIMIZATION AND DEPLOYMENT

DEEP LEARNING TRAINING AT SCALE

  • CWE21282
    Deep Learning Profiling Technologies
    Poonam Chitale, Senior Product Manager
    David Zier, Engineer, ASIC

      In this session, we'll provide guidance to data scientists and deep learning researchers who are trying to optimize their networks to take advantage of the high performance that GPUs offer. NVIDIA has been working on profiling tools and technologies that make profiling part of the workflow. To construct high-quality models that train faster, you can profile them to understand which operations take the most time and which iterations make maximum use of Tensor Cores, then get recommendations on where performance can be improved. Several such tools are available, and they vary in the reports and visualizations they provide. Talk to experts to understand which tools to use when.

  • CWE21698
    Inter-GPU Communication with NCCL
    Sylvain Jeaugey, Senior Computing/Networking Engineer
    Sreeram Potluri, Systems Software Manager
    David Addison, Senior Software Engineer

      NCCL (NVIDIA Collective Communication Library) optimizes inter-GPU communication over PCIe, NVIDIA NVLink, and InfiniBand, powering large-scale training for most DL frameworks, including TensorFlow, PyTorch, MXNet, and Chainer.
      Come discuss NCCL's performance, features, and latest advances.

  • CWE21758
    Fast AI Data Pre-Processing with NVIDIA Data Loading Library (DALI)
    Maitreyi Roy, Product Manager, Deep Learning Software
    Przemek Tredak, Sr Developer Technology Engineer, DL Framework (MXNet)

      Come ask your AI/DL data loading, augmentation, and pre-processing questions of experts in the field. Learn about the NVIDIA Data Loading Library (DALI) and the strategies you can employ to avoid I/O and memory limitations.
      With every generation of GPU, it becomes increasingly difficult to keep the data pipeline full enough to fully utilize the GPU. DALI is our response to that problem: a portable, open-source library for decoding and augmenting images and videos to accelerate deep learning applications. In this session, you'll learn more about DALI and how it can address your needs.


DESIGN & ENGINEERING

  • CWE21283
    Studio Workflows with Omniverse: From Virtual Production to Shipping Titles
    Omer Shapira, Engineer, Artist
    Damien Fagnou, Senior Director, Software
    Kevin Margo, Creative Director, NVIDIA

      NVIDIA's Omniverse ecosystem opens vast possibilities for collaboration inside and between studios. To take advantage of collaboration by default, with no boundaries between "Production" and "Release," studios may need to update their workflows.
      This session brings together experts who have shipped games, films, and high-performance applications with components of Omniverse.
      We'll discuss moving your tools pipeline to the cloud, virtual production, and migrating game and film data into Omniverse. We'll also answer questions about available Omniverse tools.


FRAMEWORKS (DL AND NON-DL) / LIBRARIES / RUNTIMES

  • CWE21216
    NVIDIA Math Libraries
    Harun Bayraktar, Manager, CUDA Math Libraries
    Lung Chien, Sr. Software Engineer
    Lukasz Ligowski, CUDA FFT Library Lead
    Mahesh Khadatare, Sr CUDA Math Library Engineer
    Zoheb Khan, Senior Software Engineer

      Come meet the engineers who create the NVIDIA Math Libraries to get answers to your questions, give feedback on existing functionality, or request new functionality. We'll have engineers from the linear algebra libraries (cuBLAS, cuSOLVER, cuSPARSE, cuTENSOR) and the signal and image processing libraries (cuFFT, NPP, and nvJPEG). They'll be happy to talk with you about single- and multi-GPU libraries, as well as the new device libraries.
  • CWE21218
    How Can I Leverage Tensor Cores through NVIDIA Math Libraries?
    Harun Bayraktar, Manager, CUDA Math Libraries
    Piotr Majcher, Deep Learning Software Engineer
    Azzam Haidar, Senior Math Libraries Engineer
    Paul Springer, Senior Developer Technology Engineer

      NVIDIA GPUs and math libraries offer a continuously expanding array of tensor core-accelerated, mixed-precision linear algebra functionality. Come ask our library engineers questions that can help you get the most out of this technology in your applications.
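      As a flavor of what the library engineers can help with, the sketch below shows one common pattern: a mixed-precision GEMM through cublasGemmEx, with FP16 inputs and FP32 accumulation, which cuBLAS may map onto Tensor Cores. This is a minimal sketch only; error checking and data initialization are omitted, and the matrix sizes are arbitrary.

      ```cuda
      #include <cublas_v2.h>
      #include <cuda_fp16.h>
      #include <cuda_runtime.h>

      int main() {
          const int m = 1024, n = 1024, k = 1024;
          __half *A, *B;   // FP16 inputs
          float *C;        // FP32 output / accumulator
          cudaMalloc(&A, m * k * sizeof(__half));
          cudaMalloc(&B, k * n * sizeof(__half));
          cudaMalloc(&C, m * n * sizeof(float));

          cublasHandle_t handle;
          cublasCreate(&handle);
          // Allow cuBLAS to pick Tensor Core kernels where available.
          cublasSetMathMode(handle, CUBLAS_TENSOR_OP_MATH);

          const float alpha = 1.0f, beta = 0.0f;
          // C = alpha * A * B + beta * C, accumulating in FP32.
          cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
                       &alpha,
                       A, CUDA_R_16F, m,
                       B, CUDA_R_16F, k,
                       &beta,
                       C, CUDA_R_32F, m,
                       CUDA_R_32F, CUBLAS_GEMM_DEFAULT_TENSOR_OP);

          cublasDestroy(handle);
          cudaFree(A); cudaFree(B); cudaFree(C);
          return 0;
      }
      ```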


GRAPHICS - PRODUCTION RENDERING / RAY TRACING

HPC AND AI

  • CWE21106
    RTcore for Compute
    Vishal Mehta, Developer Technology

      Learn about recent advances in the RT Core architecture, how to exploit RT Cores through OptiX for general-purpose compute, and the algorithmic patterns that can be accelerated with them. NVIDIA's recent development of RT Cores has dramatically impacted graphics applications that use ray tracing. Many other applications have similar computation patterns and can be accelerated using RT Cores and the RTX software stack. This session is geared toward understanding how to harness the RTX capabilities of GPUs for applications in HPC and machine learning.

  • Whether you're exploring mountains of geological data, researching solutions to complex scientific problems, training neural networks, or racing to model fast-moving financial markets, you need a computing platform that provides the highest throughput and lowest latency possible. GPUs are widely recognized for providing the tremendous horsepower required by compute-intensive workloads. However, GPUs consume data much faster than CPUs, and as the computing horsepower of GPUs increases, so does the demand for I/O bandwidth.
    Using GPUDirect Storage, local NVMe drives and remote storage, whether accessed through user-space or traditional distributed file systems, can read and write CUDA host and device memory directly. This eliminates unnecessary memory copies, dramatically lowers CPU overhead, and reduces latency, resulting in significantly faster data transfers for applications running on NVIDIA Tesla and Quadro products.

  • In this session, we'll discuss ways to combine deep learning and artificial intelligence with traditional HPC to accelerate the pace of scientific discovery, from high-energy physics to life sciences and healthcare. The traditional paradigm uses large-scale simulation at the core, with data analytics for pre- and post-processing of the data. More recently, AI and large-scale simulation are applied cooperatively, where the strengths of each converge to form a powerful new tool for science. Both paradigms can be discussed in this session.


IOT / 5G / EDGE COMPUTING

  • CWE21130
    Securely Deploying AI to the Edge (EGX)
    Sanford Russell, Sr Dir Product Marketing, EGX and Enterprise Servers

      AI is now being deployed outside the data center and cloud, to the edge of enterprises (factories, stores, hospitals, and cities). Learn from NVIDIA experts how to design trusted edge architectures and software deployment strategies to securely deploy AI workloads to your enterprise's remote (edge) locations. Security with containers and Kubernetes, as well as control plane designs, will be covered in these sessions.


PERFORMANCE OPTIMIZATION AND PROFILING

PERSONALIZATION / RECOMMENDATION

  • CWE21747
    Accelerating Recommender System Training and Inference on NVIDIA GPUs
    Chirayu Garg, AI Developer Technology Engineer
    Zehuan Wang, DevTech Engineer
    Even Oldridge, Sr. Applied Research Scientist

      Come learn how you can use NVIDIA technologies to accelerate your recommender system training and inference pipelines. We've been doing some ground-breaking work on optimizing performance for many stages of the recommender system pipeline, including ETL of tabular data, training with terabyte-size embeddings for CTR models on multiple nodes, low-latency inference for Wide & Deep, and more. Running on NVIDIA GPUs, many of these are more than an order of magnitude faster than conventional CPU implementations. We'd be thrilled to learn how these accelerated components may apply to your setup and, if not, what's missing. We'd also like to hear about the roles recommenders play in your products, the types of systems you're building, and the challenges you face. This session is ideal for data scientists and engineers who are responsible for developing, deploying, and scaling their recommender pipelines. Please join us for what's sure to be an interesting series of discussions.


PROGRAMMING LANGUAGES, COMPILERS & TOOLS

  • CWE21284
    Future of ISO and CUDA C++
    Bryce Adelstein Lelbach, CUDA C++ Core Libraries Lead
    Olivier Giroux, Distinguished Architect
    David Olsen, Senior Software Engineer

      Curious about the future of C++? Interested in learning about the C++ committee's roadmap for safety critical, concurrent, parallel, and heterogeneous programming?
      Come join Olivier Giroux (chair of the C++ committee's Concurrency and Parallelism group), Bryce Adelstein Lelbach (chair of the C++ committee's Library Evolution Incubator and Tooling groups), and the rest of NVIDIA's ISO C++ committee delegation for a Q&A session about the future of the C++ programming language.
  • CWE21285
    Thrust, CUB, and libcu++ Users Forum
    Bryce Adelstein Lelbach, CUDA C++ Core Libraries Lead
    Michal Dominiak, Software Engineer

      Come join NVIDIA's CUDA C++ Core Libraries team for a Q&A session on:
      - Thrust—CUDA C++'s high-productivity general-purpose library and parallel algorithms implementation
      - CUB—CUDA C++'s high-performance collective algorithm toolkit
      - libcu++—the CUDA C++ standard library (introduced in CUDA 10.2)
      Usage questions, feature requests, and bug reports are most welcome!
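      As a taste of the first of these, here is a minimal Thrust sketch that squares and sums a vector on the GPU without writing a kernel. This is a sketch, not a tuned implementation; it assumes compilation with nvcc, and the vector contents are arbitrary.

      ```cuda
      #include <thrust/device_vector.h>
      #include <thrust/sequence.h>
      #include <thrust/transform.h>
      #include <thrust/reduce.h>
      #include <thrust/functional.h>
      #include <cstdio>

      int main() {
          // Fill a device vector with 1..8, square each element on the GPU,
          // then reduce to a single sum -- all through Thrust algorithms.
          thrust::device_vector<int> v(8);
          thrust::sequence(v.begin(), v.end(), 1);
          thrust::transform(v.begin(), v.end(), v.begin(), thrust::square<int>());
          int sum = thrust::reduce(v.begin(), v.end(), 0, thrust::plus<int>());
          std::printf("sum of squares = %d\n", sum);  // 1+4+...+64 = 204
          return 0;
      }
      ```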
  • CWE21103
    Multi-GPU Programming
    Jiri Kraus, Senior Developer Technology Engineer
    Akshay Venkatesh, Senior Software Engineer

      Wondering how to scale your code to multiple GPUs in a node or cluster? Need to discuss NVIDIA CUDA-aware MPI details? This is the right session to ask your beginner or expert questions on multi-GPU programming, GPUDirect, NVSHMEM, and MPI.
  • CWE21165
    CUDA and Ray Tracing Developer Tools
    Rafael Campana, Director of Engineering, Developer Tools
    Bob Knight, Software Engineer
    Magnus Strengert, Senior Software Engineer, Developer Tools

      With the advances in accelerated GPU computing and rendering come new development challenges. The new NVIDIA Nsight developer tools portfolio enables developers to embrace new CUDA features like CUDA Graphs and accelerated ray tracing with NVIDIA OptiX, DX12/DXR, or Vulkan Ray Tracing. Stop by to talk to the developer tools engineering team and learn how Nsight tools can help you. Share your wish list or challenges so we can shape the future of our tools accordingly.

  • CWE21742
    Accelerating Python with CUDA
    Keith Kraus, AI Infrastructure Manager
    Ashwin Trikuta Srinath, Senior Library Software Engineer
    Stanley Seibert, Sr. Director of Community Innovation
    Dante Gama Dessavre, Senior Data Scientist

      The Python ecosystem is composed of a rich set of powerful libraries that work wonderfully well together, providing coherent, beautiful, *Pythonic* APIs that let developers think less about programming and more about solving problems. On the other hand, Python is known to have performance limitations, and the scale of today's problems has pushed users to look for more efficient solutions. This session focuses on Python-CUDA capabilities, including the libraries available in the ecosystem as well as options and strategies for building your own Python interface for custom solutions. We have a team of experts who architect, develop, and maintain open-source Python-CUDA libraries, ready to discuss how to take advantage of CUDA and GPUs from Python.

  • CWE21815
    Directive-Based GPU Programming with OpenACC
    Stefan Maintz, Senior Development Technology Engineer
    Markus Wetzstein, HPC Development Technology Engineer
    Alexey Romanenko, Senior Developer Technology Engineer

      OpenACC is a directive-based programming model designed to help scientists and developers get started with GPUs faster and work more efficiently by maintaining a single source code for multiple platforms. Come join OpenACC experts to ask how to start accelerating your code on GPUs, continue optimizing your GPU code, start teaching OpenACC, host or participate in a hackathon, and more!
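      As a minimal illustration of the single-source idea, the saxpy sketch below annotates an ordinary C loop with an OpenACC directive. An OpenACC compiler (e.g., nvc with -acc) can offload the loop to the GPU, while other C compilers simply ignore the pragma and run it serially, so one source serves both platforms. The array sizes and values here are arbitrary.

      ```c
      #include <stdio.h>

      #define N 1000000

      int main(void) {
          static float x[N], y[N];
          for (int i = 0; i < N; ++i) { x[i] = 2.0f; y[i] = 1.0f; }

          /* With an OpenACC compiler this loop runs on the GPU; plain C
             compilers ignore the unknown pragma and run it serially. */
          #pragma acc parallel loop copyin(x) copy(y)
          for (int i = 0; i < N; ++i)
              y[i] = 3.0f * x[i] + y[i];  /* saxpy: y = a*x + y */

          printf("y[0] = %.1f\n", y[0]);  /* 3*2 + 1 = 7.0 */
          return 0;
      }
      ```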

  • CWE21914
    CUDA Graphs
    Alan Gray, Senior Developer Technology Engineer
    Jeff Larkin, Senior DevTech Software Engineer

      This is an opportunity to find out more about CUDA Graphs, discuss why they may be advantageous to your particular application, and get help on any problems you may have faced when trying to use them.
      Graphs present a new model for work submission in CUDA. A graph is a series of operations, such as kernel launches, connected by dependencies, which is defined separately from its execution. This allows a graph to be defined once and then launched repeatedly. Separating out the definition of a graph from its execution enables a number of optimizations. First, CPU launch costs are reduced compared to streams because much of the setup is done in advance. Second, presenting the whole workflow to CUDA enables optimizations that might not be possible with the piecewise work submission mechanism of streams.
      This session will be staffed by representatives from the NVIDIA CUDA team developing graphs, as well as NVIDIA application specialists with experience using graphs.
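      As a minimal sketch of the model described above, the code below captures a short sequence of kernel launches into a graph, instantiates it once, and then launches it repeatedly with low CPU overhead. Error checking is omitted, and the kernel and sizes are arbitrary placeholders.

      ```cuda
      #include <cuda_runtime.h>
      #include <cstdio>

      __global__ void scale(float *data, float factor, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;
          if (i < n) data[i] *= factor;
      }

      int main() {
          const int n = 1 << 20;
          float *d;
          cudaMalloc(&d, n * sizeof(float));

          cudaStream_t stream;
          cudaStreamCreate(&stream);

          // Capture a sequence of launches into a graph instead of
          // executing them immediately.
          cudaGraph_t graph;
          cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
          scale<<<n / 256, 256, 0, stream>>>(d, 2.0f, n);
          scale<<<n / 256, 256, 0, stream>>>(d, 0.5f, n);
          cudaStreamEndCapture(stream, &graph);

          // Instantiate once, then launch many times: the setup cost is
          // paid up front rather than on every iteration.
          cudaGraphExec_t exec;
          cudaGraphInstantiate(&exec, graph, nullptr, nullptr, 0);
          for (int iter = 0; iter < 1000; ++iter)
              cudaGraphLaunch(exec, stream);
          cudaStreamSynchronize(stream);

          cudaGraphExecDestroy(exec);
          cudaGraphDestroy(graph);
          cudaFree(d);
          std::printf("done\n");
          return 0;
      }
      ```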


VIRTUAL REALITY / AUGMENTED REALITY

  • CWE21225
    Rich and Immersive VR for HPC Simulations
    Ian Williams, Enterprise Products
    Arun Sabnis, Senior Product Manager
    Niveditha Krishnamoorthy, Alliance Manager CAE

      Learn how you can create amazing virtual reality experiences from your simulations and gain useful insights that can help you make better design decisions. Connect with NVIDIA engineers to get answers to all your questions and see how VR is a true value-add to your professional workflow.
  • CWE21198
    ProVR: Discuss Vulkan and OpenGL using Quadro GPUs
    Ingo Esser, Sr. Developer Technology Engineer
    Robert Menzel, Sr. Developer Technology Engineer

      Discuss anything ProVR, including but not limited to topics we presented in our talk "ProVR: Vulkan and OpenGL using Quadro GPUs". If you have any questions about professional VR, solutions for solving issues, or feedback about our VR solutions (Vulkan, OpenGL, OpenXR, etc.), you're in the right place.

VIRTUALIZATION

  • Meet NVIDIANs who have helped customers deploy NVIDIA vGPUs and learn from their experience. Ask questions, get help deploying an NVIDIA vGPU, give feedback, or just chat with us!
  • CWE21942
    GPU Virtualization in the Modern Enterprise
    Konstantin Cvetanov, Sr. Solution Architect
    Randall Siggers, Senior Solution Architect
    Jimmy Rotella, Senior Solutions Architect

      In this Connect with the Experts session, NVIDIA vGPU Solution Architects will share with the GTC community their knowledge of how our customers and partners are implementing virtual GPUs to accelerate graphics (VDI) and compute (AI/ML) workloads in a virtualized data center. Discussions will cover a variety of topics, including best practices for vGPU deployments for various use cases in AEC, M&E, Healthcare, Financial Services, Higher Ed, Retail, Oil & Gas, and many other industries, as well as how to get started with technology stacks such as NVIDIA RTX Server and vComputeServer.
  • CWE21729
    GPU-Accelerated Scientific Visualization
    Peter Messmer, Sr. Manager Dev Tech/HPC Vis
    Tim Biedert, Senior Developer Technology Engineer
    Mathias Hummel, Senior Developer Technology Engineer
    Nick Leaf, Developer Technology Engineer - HPC Visualization

      Visualization is a powerful method for understanding and communicating complex data. However, the wealth of tools and technologies in the scientific visualization space can be overwhelming. Many NVIDIA visualization technologies will be introduced in depth in specialized sessions at GTC. The goal of this Connect with the Experts session is to help you find the right presentations and to discuss with experts how to apply the various visualization technologies most efficiently.

Explore the Session Catalog