The Media and Entertainment sessions will focus on breakthroughs in GPU technology across broadcast and filmmaking, highlighting innovations in virtual reality that power immersive cinematic experiences, and GPU-accelerated advances in production.
GTC features more than twenty sessions from media and entertainment industry experts on topics ranging from cinematic VR and GPU-accelerated rendering to live broadcast graphics.
Sessions include:
Advances in Real-Time Graphics at Pixar (Pixar Animation Studios)
How GPUs Power Comcast's X1 Voice Remote and Smart Video Analytics (Comcast)
Exploring Machine Learning in Visual Effects (Digital Domain)
Next Generation GPU Rendering: High-End Production Features on GPU (Chaos Software)
The Future of GPU Rendering in 2017 and Beyond
Blasting Sand with CUDA: MPM Sand Simulation for VFX (DreamWorks Animation)
Streaming 10K Video Using GPUs and the Open Projection Format (Pixvana)
Production-Quality, Final-Frame Rendering on a GPU (Redshift Rendering)
The NVIDIA Iray Light Transport Simulation and Rendering System (NVIDIA)
PIXVANA, Co-Founder/Product Owner
S7574 - Streaming 10K Video Using GPUs and the Open Projection Format
Pixvana has developed a cloud-based system for processing VR video that can stream up to 12K video at HD bit rates. The process is called field-of-view adaptive streaming (FOVAS). FOVAS converts equirectangular spherical-format VR video into tiles on AWS in a scalable GPU cluster. Pixvana's scalable cluster in the cloud delivers over an 80x improvement in tiling and encoding times. The output is compatible with standard streaming architectures, and the projection is documented in the Open Projection Format. We'll cover the cloud architecture, GPU processing, the Open Projection Format, and current customers using the system at scale.
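The tiling step behind field-of-view adaptive streaming can be sketched in a few lines. This is a minimal illustration only, not Pixvana's implementation: the function name, tile grid, and frame size are assumptions, and the real system encodes each tile on a GPU cluster rather than merely slicing arrays.

```python
import numpy as np

def tile_equirectangular(frame, n_yaw=4, n_pitch=2):
    """Split an equirectangular frame into view-direction tiles.

    Each tile covers 360/n_yaw degrees of yaw and 180/n_pitch degrees
    of pitch. A FOV-adaptive streamer encodes every tile, then serves
    only the tiles near the viewer's current gaze at full quality.
    """
    h, w, _ = frame.shape
    th, tw = h // n_pitch, w // n_yaw
    tiles = {}
    for p in range(n_pitch):
        for y in range(n_yaw):
            yaw_deg = y * 360 / n_yaw            # tile's left yaw edge
            pitch_deg = p * 180 / n_pitch - 90   # tile's bottom pitch edge
            tiles[(yaw_deg, pitch_deg)] = frame[p * th:(p + 1) * th,
                                                y * tw:(y + 1) * tw]
    return tiles

frame = np.zeros((1024, 2048, 3), dtype=np.uint8)  # toy 2K equirect frame
tiles = tile_equirectangular(frame)
```

At 10K–12K resolutions the payoff is that the client downloads only a few high-resolution tiles instead of the whole sphere, which is what makes HD bit rates feasible.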
ABOUT THE SPEAKER: Sean Safreed is a veteran of the computer graphics industry and has been a product manager, strategist, developer, and marketing manager for leading software packages used by hundreds of thousands of professional filmmakers and motion graphics designers. Among his software credits are Commotion, Knoll Light Factory, QuickTime VR, and the industry-leading desktop color correction suite Magic Bullet. Prior to Pixvana, Sean co-founded Red Giant, which has grown to offer more than 50 products with a team that spans the United States and Canada. In addition to being very successful in the film and video industry for its product offering and user experience, Red Giant has financed and produced small independent films that have gone on to win awards, serve as tutorials, and inspire the storytelling community. Before founding Red Giant in 2002, Sean spent the '90s on the Apple QuickTime team and at Silicon Graphics, where he was part of the OpenGL product team.
COMCAST, Director, Technical R&D
S7618 - How GPUs Power Comcast's X1 Voice Remote and Smart Video Analytics
We'll describe the deep learning models behind Comcast's X1 Voice Remote and Smart Video Analytics and how we use GPUs to train and run these models. We'll explain how we accurately parse the millions of voice queries we receive every day, how we automatically determine the domain of a query (TV, sports, billing, etc.), and how deep learning helps us understand what is happening on TV at any given moment. We'll also go into detail about how our distributed multi-GPU clusters speed up training and enable inference on millions of voice commands and hundreds of thousands of video clips every day.
ABOUT THE SPEAKER: Jan Neumann is a director at Comcast Labs in Washington, D.C., where he leads the research team. His team combines large-scale machine learning, deep learning, natural language processing, and computer vision to develop novel algorithms and product concepts that improve the experience of Comcast's customers. Before Comcast, Jan worked for Siemens Corporate Research on various computer vision-related projects such as driver assistance systems and video surveillance. He has published over 20 papers in scientific conferences and journals, and is a frequent speaker on machine learning and data science. Jan holds a Ph.D. in computer science from the University of Maryland, College Park.
DIGITAL DOMAIN, CEO
S7688 - Exploring Machine Learning in Visual Effects
Some aspects of visual effects production are ideally suited to machine learning technology. Whether it comes from digital cameras on set, from motion capture sessions, or from other sources, a huge amount of data is captured during the production of a movie. Models are built to modify this data or create new effects from it. Instead of building these models by hand, can machine learning systems be trained to do the same thing? We'll present active research projects where we are using machine learning either to accelerate a visual effects process or to let artists create novel visual effects. This is very much a work-in-progress report: some of the techniques show promise but are not fully developed at this time.
ABOUT THE SPEAKER: Doug Roble is the Director of Software R&D at Digital Domain. He's been working at Digital Domain for over 23 years. Along the way he's written a computer vision toolkit that won a Sci/Tech Academy Award (1998), a motion capture editing suite, a couple of fluid simulation packages (another Sci/Tech Award in 2007) and lots more. He's been the Editor-in-Chief of the Journal of Graphics Tools (2006-2011) and is co-chair of the digital arm of the Academy's Sci/Tech Awards committee and a member of the Academy's Sci/Tech Council. This all started with a Ph.D. in Computer Science from the Ohio State University way back in 1992.
PIXAR ANIMATION STUDIOS, Software Engineer
S7482 - Advances in Real-Time Graphics at Pixar
Explore how real-time graphics are used at Pixar Animation Studios. In this session, our engineers will describe the unique needs of film production and our custom solutions, including Presto and our open-source projects Universal Scene Description (USD), OpenSubdiv, and Hydra. Don't miss this great opportunity to learn about graphics, algorithms, and movies!
ABOUT THE SPEAKER: Dirk Van Gelder joined Pixar Animation Studios in 1997 as a software engineer on the Academy Award®-nominated film "A Bug's Life" and the Academy Award®-winning short film "Geri's Game," working on animation software and the studio's first use of subdivision surfaces. Dirk has worked on software for every Pixar movie since, including the ground-up rewrite of the studio's proprietary animation system, Presto. Dirk currently leads the Presto Character team within the Pixar Studio Tools Department.
NVIDIA, Directors of Research
S7328 - The NVIDIA Iray Light Transport Simulation and Rendering System
We'll reason about the design decisions that led to the system architecture of NVIDIA Iray. Scalable parallelization, from single devices to clusters of GPU systems, required new approaches to motion blur simulation, anti-aliasing, and fault tolerance, all based on consistent sampling, which at the same time enables push-button rendering with only a minimal set of user parameters. We'll then dive into technical details of light transport simulation, especially how Iray handles geometric light sources, importance sampling, decals, and material evaluation to run efficiently on GPUs. It is remarkable how well the physically based system extends to modern workflows such as light path expressions and matte objects. The separation of material definition and implementation has been key to the superior performance and rendering quality, and resulted in the emerging standard MDL (Material Definition Language).
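One ingredient the abstract names, importance sampling of light sources, has a simple core idea that can be sketched. This is a generic textbook illustration, not Iray's actual scheme; the function name and power-proportional strategy are assumptions for the example.

```python
import numpy as np

def sample_light(powers, u):
    """Importance-sample a light source proportional to its emitted power.

    Given a uniform random number u in [0, 1), returns (index, pdf).
    Bright lights are chosen more often, which reduces variance in the
    light transport estimate; the sample's contribution is then weighted
    by 1/pdf to keep the estimator unbiased.
    """
    p = np.asarray(powers, dtype=float)
    pdf = p / p.sum()               # discrete selection probabilities
    cdf = np.cumsum(pdf)            # cumulative distribution over lights
    i = int(np.searchsorted(cdf, u))  # invert the CDF at u
    return i, pdf[i]

# Three lights emitting 10, 30, and 60 watts: the 60 W light is
# selected for 60% of the uniform samples.
i, pdf = sample_light([10.0, 30.0, 60.0], u=0.5)
```

Production renderers extend this idea to thousands of geometric (area) lights and combine it with BSDF sampling, but the CDF-inversion structure is the same.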
ABOUT THE SPEAKER: Alexander Keller is a director of research at NVIDIA, leading advanced rendering research. Before that, he had been the chief scientist at Mental Images, responsible for research and the conception of products and strategies, including the design of the NVIDIA Iray renderer. Prior to industry, Alexander worked as a full professor for computer graphics and scientific computing at Ulm University, where he co-founded the UZWR (Ulmer Zentrum für wissenschaftliches Rechnen) and received an award for excellence in teaching. Alexander holds a Ph.D. in computer science, has authored more than 27 granted patents, and has published more than 50 papers, mainly in the area of quasi-Monte Carlo methods and photorealistic image synthesis using ray tracing.
Carsten Waechter is a senior software engineer at NVIDIA, based in Berlin, and one of the leading contributors to the NVIDIA Iray rendering system and co-writer of its prototype. Carsten is an expert in GPU programming, quasi-Monte Carlo methods, and light transport simulation, including ray tracing. He holds a Ph.D. in computer science, which he received in 2007 from the University of Ulm, Germany, for his dissertation "Quasi-Monte Carlo Light Transport Simulation by Efficient Ray Tracing." His diploma thesis treated "Realtime Ray Tracing."
REDSHIFT RENDERING, CTO
S7466 - Production-Quality, Final-Frame Rendering on a GPU
We'll discuss the latest features of Redshift, the GPU-accelerated renderer running on NVIDIA GPUs that is redefining the industry's perception of GPU final-frame rendering, and demonstrate a few examples of customer work. This talk will interest industry professionals who want to learn more about GPU-accelerated, production-quality rendering, as well as software developers interested in GPU-accelerated rendering.
ABOUT THE SPEAKER: Panagiotis (Panos) Zompolas is a video game industry veteran driven by a passion for computer graphics and hardware. Panos has worked with GPUs since the days of the 3dfx and has closely followed the GPU compute revolution since its inception in the mid-2000s. Panos' career in the video game industry includes leading companies like Sony Computer Entertainment Europe and Double Helix Games (now Amazon Games). He has led teams of graphics programmers in the creation of render engines, spanning several generations of hardware. This experience, tied with his passion for the industry, is one of the key pillars of Redshift's success.
NVIDIA, Research Scientist
S7497 - Multilayer and Multimodal Fusion of Deep Neural Networks for Video Classification
We'll present a novel framework to combine multiple layers and modalities of deep neural networks for video classification, which is fundamental to intelligent video analytics, including automatic categorizing, searching, indexing, segmentation, and retrieval of videos. We'll first propose a multilayer strategy to simultaneously capture a variety of levels of abstraction and invariance in a network, where the convolutional and fully connected layers are effectively represented by the proposed feature aggregation methods. We'll further introduce a multimodal scheme that includes four highly complementary modalities to extract diverse static and dynamic cues at multiple temporal scales. In particular, for modeling the long-term temporal information, we propose a new structure, FC-RNN, to effectively transform the pre-trained fully connected layers into recurrent layers. A robust boosting model is then introduced to optimize the fusion of multiple layers and modalities in a unified way. In the extensive experiments, we achieve state-of-the-art results on benchmark datasets.
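The FC-RNN idea described above, turning a pre-trained fully connected layer into a recurrent one, can be sketched with NumPy. This is a minimal reading of the abstract, not the authors' code: the class name, ReLU nonlinearity, near-zero recurrent initialization, and shapes are all illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class FCRNN:
    """Sketch of FC-RNN: reuse a pre-trained FC layer's weights W_fc as
    the input transform of a recurrent layer, and add a new
    hidden-to-hidden matrix W_rec so temporal structure is modeled on
    top of the pre-trained per-frame features:

        h_t = relu(W_fc @ x_t + W_rec @ h_{t-1} + b)
    """
    def __init__(self, W_fc, b):
        self.W_fc, self.b = W_fc, b
        d = W_fc.shape[0]
        # Small recurrent weights: at initialization the layer behaves
        # almost exactly like the original pre-trained FC layer.
        self.W_rec = 0.01 * np.eye(d)

    def forward(self, xs):
        h = np.zeros(self.W_fc.shape[0])
        hs = []
        for x in xs:                 # iterate over video frames in order
            h = relu(self.W_fc @ x + self.W_rec @ h + self.b)
            hs.append(h)
        return np.stack(hs)          # per-frame hidden states

rng = np.random.default_rng(0)
W_fc = rng.normal(size=(8, 16)) * 0.1   # stand-in for pre-trained FC weights
b = np.zeros(8)
seq = rng.normal(size=(5, 16))          # 5 frames of 16-dim CNN features
out = FCRNN(W_fc, b).forward(seq)
```

The appeal of this construction is that training can start from the image network's pre-trained weights instead of a randomly initialized recurrent layer.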
ABOUT THE SPEAKER: Xiaodong Yang is a research scientist at NVIDIA. His research interests include computer vision, machine learning, deep learning, and multimedia analytics. He has been working on large-scale image and video classification, hand gesture and activity recognition, dynamic facial analysis, video surveillance event detection, multimedia search, and computer vision-based assistive technology. He received his Ph.D. from City University of New York in 2015 and B.S. from Huazhong University of Science and Technology in 2009.
CHAOS SOFTWARE, Lead Developer
Next Generation GPU Rendering: High-End Production Features on GPU
Take a look at the next generation of GPU-accelerated rendering. See how advances such as MDL materials, procedural shading and adaptive lighting algorithms are changing how high-end CG productions are created.
ABOUT THE SPEAKER: Blagovest Taskov leads the V-Ray RT GPU developer team at Chaos Group. He works on some of the latest advancements in V-Ray RT GPU, including improved OpenCL support, performance optimizations, and many rendering features.
DREAMWORKS ANIMATION, Software Engineer
S7298 - Blasting Sand with CUDA: MPM Sand Simulation for VFX
We'll present our challenges and solutions in creating a material point method (MPM)-based simulation system that meets the production demands of fast turnaround for artistic look development. Our method fully utilizes the GPU and performs an order of magnitude faster than the latest published results. With this improvement, the technique's main limiting factor, its speed, has been eliminated, making MPM appealing for a wider range of VFX applications. Practitioners in computational physics and related fields are likely to benefit from attending the session, as our techniques are applicable to other hybrid Eulerian-Lagrangian simulations.
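To see what "hybrid Eulerian-Lagrangian" means in MPM, consider its particle-to-grid scatter step. The 1D sketch below is a generic textbook illustration with linear weights, not DreamWorks' system; production MPM uses quadratic or cubic B-spline weights, transfers momentum as well as mass, and on the GPU performs this scatter with atomic adds per particle.

```python
import numpy as np

def p2g_mass(xp, mp, n_nodes, dx=1.0):
    """Particle-to-grid mass transfer, the scatter step of MPM.

    Each Lagrangian particle deposits its mass onto the two nearest
    Eulerian grid nodes with linear (tent) weights. Forces are then
    computed on the grid, and velocities are gathered back to particles.
    """
    grid = np.zeros(n_nodes)
    for x, m in zip(xp, mp):
        i = int(x // dx)        # index of the node to the particle's left
        w = (x / dx) - i        # fractional offset within the cell
        grid[i] += (1.0 - w) * m
        grid[i + 1] += w * m
    return grid

# Two particles of mass 2 and 4 on a 4-node grid; total mass is conserved.
grid = p2g_mass(xp=np.array([0.25, 1.5]), mp=np.array([2.0, 4.0]), n_nodes=4)
```

The speed problem the session addresses comes from doing this scatter (and the matching gather) for millions of particles per frame, which is exactly the kind of data-parallel work a GPU handles well.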
ABOUT THE SPEAKER: Gergely Klar received his Ph.D. from the University of California, Los Angeles. During his graduate studies he worked on a range of physically based animation projects, including MPM, SPH, and FEM simulations. Gergely joined DreamWorks Animation's FX Research and Development team, where he helps artists create more magnificent effects. He is a Fulbright Science and Technology alumnus, an avid sailor, and a father of two.