SIGGRAPH 2017 | July 30 – August 3, 2017 | Los Angeles, CA
Bringing Deep Learning and AI to the Graphics Industry

Schedule

 
 
SUNDAY, July 30
10:00am - 10:50am
Thomas True (Senior Applied Engineer, Professional Video and Image Processing, NVIDIA)
We'll provide an introduction to high-dynamic range and describe application programming techniques for HDR rendering and display on NVIDIA GPUs.
Talk
Room #404 AB
11:00am - 11:50am
Chris Hebert (Devtech Engineer, NVIDIA)
Jeff Kiel (Senior Manager, Graphics Tools, NVIDIA)
Khronos released Vulkan 1.0 last year to provide application developers a high-efficiency API for compute and graphics intended for modern GPUs.
Talk
Room #404 AB
12:00pm - 12:50pm
Evan Hart (Devtech Engineer, NVIDIA)
Across graphics, audio, video, and physics, the NVIDIA VRWorks suite of technologies helps developers maximize performance and immersion for VR applications.
Talk
Room #404 AB
1:00pm - 1:50pm
Vladimir Koylazov (CTO, Chaos Group)
Take a look at the next generation of GPU-accelerated rendering.
Talk
Room #404 AB
2:00pm - 2:50pm
Sébastien Deguy (CEO, Allegorithmic)
Marc S Ellens (Senior Software Engineer and TAC Specialist, X-Rite Incorporated)
Nicolas Paulhac (Product Manager, Allegorithmic)
A worldwide leader in procedural texturing for the gaming industry with its Substance technology, Allegorithmic has greatly expanded its breadth to become a cornerstone of the material workflow for all industries.
Talk
Room #404 AB
3:00pm - 3:50pm
Nir Benty (Senior Graphics Software Engineer, NVIDIA)
Falcor is the primary prototyping framework used by the NVIDIA graphics research team.
Talk
Room #404 AB
4:00pm - 4:50pm
Alex Shepard (Software Developer, iNaturalist at California Academy of Sciences)
Imagine a real-time, handheld, accurate species identification tool helping land managers monitor and protect natural resources, farmers prevent crop pest and disease infestations, and law enforcement stop illegal wildlife trafficking.
Talk
Room #404 AB
5:00pm - 5:50pm
Thomas True (Senior Applied Engineer, Professional Video and Image Processing, NVIDIA)
We'll introduce VRWorks Video 360 - NVIDIA's implementation of a motion-flow-based, real-time, CUDA-accelerated, GPU-scalable, 360 Stereo Stitching SDK with support for both video and audio.
Talk
Room #404 AB
MONDAY, July 31
9:00am - 9:50am
Anjul Patney (Senior Research Scientist, NVIDIA)
We'll present results from our recent and ongoing work in understanding the perceptual nature of human peripheral vision, and its uses in improving the quality and performance of foveated rendering for VR applications. We'll present a list of open challenges in this area.
Talk
Room #404 AB
9:00am - 11:00am
Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)
This lab will guide students through the process of training a Generative Adversarial Network (GAN) to generate image contents in DIGITS.
Instructor-Led Lab
Room 513
Register Here
11:00am - 11:50am
Jan Jordan (Product Manager, MDL, NVIDIA)
Lutz Kettner (Director, Rendering Software and MDL, NVIDIA)
We'll discuss the basics of NVIDIA's Material Definition Language, showing how a single material can be used to define matching appearances between different renderers and rendering techniques.
Talk and Technical Paper
Room #404 AB
12:00pm - 12:50pm
Dirk Van Gelder (Senior Software Engineer, PIXAR)
David Yu (Senior Graphics Software Engineer, PIXAR)
Pol Jeremias-Vila (Senior Graphics Engineer, PIXAR)
Explore how real-time graphics are used at Pixar Animation Studios.
Talk
Room #404 AB
12:00pm – 2:00pm
Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)
Learn how neural networks transfer the look and feel of one image to another image by extracting distinct visual features.
Instructor-Led Lab
Room 513
Register Here
1:00pm - 1:50pm
Neil Trevett (VP, NVIDIA)
Discover how over 100 companies cooperate at the Khronos Group to create open, royalty-free standards that enable developers to access the power of the GPU to accelerate demanding compute, graphics, and vision applications.
Talk
Room #404 AB
2:00pm - 2:50pm
Mark Kilgard (Principal System Software Engineer, NVIDIA)
Jeff Kiel (Senior Manager, Graphics Tools, NVIDIA)
OpenGL developers should attend this session to get the most out of OpenGL on NVIDIA Quadro, GeForce, and Tegra GPUs.
Talk
Room #404 AB
3:00pm - 3:50pm
Rama Hoetzlein (Graphics Research Engineer, NVIDIA)
We'll explore NVIDIA GVDB Voxels, a new open source SDK framework for generic representation, computation, and rendering of voxel-based data.
Talk
Room #404 AB
3:00pm – 5:00pm
Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)
See the possibilities of automatic character creation, including animation over various terrains, using neural networks.
Instructor-Led Lab
Room 513
Register Here
4:00pm - 4:50pm
Morgan McGuire (Distinguished Research Scientist, NVIDIA)
Video game 3D graphics are approaching cinema quality thanks to the mature platforms of massively parallel GPUs and the APIs that drive them.
Talk
Room #404 AB
5:00pm - 5:50pm
Alexander Keller (Director of Research, NVIDIA)
Lutz Kettner (Director, Rendering Software and MDL, NVIDIA)
We reason about the design decisions that led to the system architecture of NVIDIA Iray.
Talk and Technical Paper
Room #404 AB
TUESDAY, August 1
9:30am - 9:55am
Josh Peterson (Head of Product Management, HP Workstations & Immersive Computing, HP)
Come by the HP-sponsored NVIDIA booth theater to hear about the latest collaborative innovation.
Talk
Booth Theater 403
9:00am - 11:00am
Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)
This lab will guide students through the process of training a Generative Adversarial Network (GAN) to generate image contents in DIGITS.
Instructor-Led Lab
Room 513
Register Here
10:00am - 10:25am
Nir Benty (Senior Graphics Software Engineer, NVIDIA)
NVIDIA and partners are joining together to accelerate rendering R&D by providing solutions to common problems: access to realistic 3D content, (re-)implementing published rendering algorithms, and the complexity of modern graphics APIs.
Talk
Booth Theater 403
10:30am - 10:55am
Scott Metzger (Chief Creative Officer/Co-Founder, Nurulize Inc.)
The talk will be about point cloud rendering for VR.
Talk
Booth Theater 403
11:00am - 11:25am
Xiaoguang Han (Ph.D., University of Hong Kong)
Face modeling has received much attention in the field of visual computing.
Talk
Booth Theater 403
11:30am - 11:55am
Timo Aila, Tero Karras & Samuli Laine (Research Scientists, NVIDIA)
We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency.
Talk and Technical Paper
Booth Theater 403
12:00pm - 12:25pm
Paul Kruszewski (CEO, wrnch)
We'll provide a brief overview of how to apply GPU-based deep learning techniques to extract 3D human motion capture from standard 2D RGB video.
Talk
Booth Theater 403
12:00pm – 2:00pm
Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)
Learn how neural networks transfer the look and feel of one image to another image by extracting distinct visual features.
Instructor-Led Lab
Room 513
Register Here
12:30pm - 12:55pm
Adam Myhill (Director of Photography and CG Supervisor, Unity)
Unity launched Timeline and Cinemachine, innovative storytelling and scene composition tools that use the power of real-time rendering to create scenes and narratives with both synthetic and live images.
Talk
Booth Theater 403
1:00pm - 1:25pm
Kyle Szostek (Senior VDC Engineer, Gilbane Building Company)
Ken Grothman (Senior Virtual Construction Engineer, Gilbane Building Company)
We'll dive headfirst into some of the current challenges of the construction industry, how we're addressing them, and how we're planning to utilize virtual/augmented reality and real-time GPU computing to address them.
Talk
Booth Theater 403
1:30pm - 1:55pm
Eric Kam (Product Marketing and Community Manager, ESI Group)
Vikram Bapat (Product Manager, Immersive Experience, ESI Group)
Hear visualization experts explain why professionals in visualization, particularly virtual engineering, are ideal candidates to unleash the full potential of HMDs, and how close today's technology brings application developers to the finish line of exploring massive datasets with HMDs.
Talk
Booth Theater 403
2:00pm - 2:25pm
Robert Slater (VP Engineering, Redshift)
We'll discuss the latest features of Redshift, the GPU-accelerated renderer running on NVIDIA GPUs that is redefining the industry's perception of GPU final-frame rendering.
Talk
Booth Theater 403
2:30pm - 2:55pm
Daniel Holden (Animation Researcher, Ubisoft, Montreal)
Producing animation systems for AAA video games requires thousands of hours of work: hundreds of animators and programmers, and powerful tools dedicated to handling the huge complexity of the task.
Talk
Booth Theater 403
3:00pm - 3:25pm
Kevin Smith (VFX Supervisor, Weta Digital)
Behind the scenes of Guardians of the Galaxy Vol. 2 VFX Breakdown
Talk
Booth Theater 403
3:00pm – 5:00pm
Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)
See the possibilities of automatic character creation, including animation over various terrains, using neural networks. We start from data preparation for training using motion capture data and a simple neural network.
Instructor-Led Lab
Room 513
Register Here
3:30pm - 3:55pm
Mark Kilgard (Principal System Software Engineer, NVIDIA)
Learn how NVIDIA continues improving both Vulkan and OpenGL for cross-platform graphics and compute development.
Talk
Booth Theater 403
4:00pm - 4:25pm
Christoph Sprenger (Director & Co-Founder, Vortechs FX)
Eddy is a new volumetric compositing, simulation and rendering framework that runs entirely on NVIDIA GPUs.
Talk
Booth Theater 403
4:30pm - 4:55pm
Eric Risser (CTO, Artomatix)
This talk introduces a powerful new parametric model for optimization based texture synthesis using neural networks.
Talk
Booth Theater 403
5:00pm - 5:25pm
David Weinstein (Director Pro VR, NVIDIA)
NVIDIA is committed to the advancement of next-generation virtual reality, complete with stunning visual fidelity, dynamic physical behaviors, and real-time social interactions.
Talk
Booth Theater 403
5:30pm - 5:55pm
Joel Pennington (AR & VR Strategist, Autodesk)
VR is disrupting many design professions, from architecture to engineering, construction to manufacturing, and, of course, media & entertainment.
Talk
Booth Theater 403
WEDNESDAY, AUGUST 2
9:00am - 12:00pm
Mark Schoennagel (Evangelist, Unity)
During this hands-on training, attendees will learn how to build a brand new 3D, VR/AR-ready game from start to finish while touching upon many of the diverse systems and tools that Unity offers.
Instructor-Led Lab
Room 513
Register Here
9:30am - 9:55am
Alex Dunn (Developer Engineer, NVIDIA)
Come learn about what NVIDIA is doing to aid in debugging GPU crashes/hangs/TDRs during development, and long after release.
Talk
Booth Theater 403
10:00am - 10:25am
Florian Hecht (Technical Director, Pixar)
Peter Roe (Technical Director, Pixar)
Pixar's new look-development tool Flow is enabling artists to create amazing shaders in a fully interactive environment.
Talk
Booth Theater 403
10:30am - 10:55am
Anders Langlands (VFX Supervisor, Weta Digital)
Behind the scenes of War for the Planet of the Apes
Talk
Booth Theater 403
11:00am - 11:25am
Francesco Giordana (Researcher, MPC Film)
We'll present our journey to create a real-time VR experience leveraging film VFX workflows and assets.
Talk
Booth Theater 403
11:30am - 11:55am
Scott DeWoody (Firmwide Creative Media Manager, Gensler)
Learn how Gensler is using the latest technology in virtual reality across all aspects of the design process for the AEC industry.
Talk
Booth Theater 403
12:00pm - 12:25pm
Xavier Melkonian (Director CATIA Design Portfolio, Dassault Systèmes)
Dassault Systèmes is the worldwide PLM leader; its 3DEXPERIENCE platform is used to think, design, and produce everything from the smallest objects to the most complex aerospace rockets, and even full cities.
Talk
Booth Theater 403
12:30pm - 12:55pm
Richard Zhang (Ph.D., University of California, Berkeley)
Jun-Yan Zhu (Ph.D. student, Berkeley AI Research Lab (BAIR))
We propose a deep learning approach for user-guided image colorization.
Talk
Booth Theater 403
1:00pm - 1:25pm
Andrew Edelsten (Senior Developer Technologies Manager, NVIDIA)
Recently deep learning has revolutionized computer vision and other recognition problems.
Talk and Technical Paper
Booth Theater 403
1:30pm - 1:55pm
Gary Radburn (Director, Virtual and Augmented Reality, Dell)
VR is not just the domain of media and entertainment any more.
Talk
Booth Theater 403
2:00pm - 2:25pm
Jules Urbach (CEO, OTOY)
We'll discuss OTOY's cutting-edge light field rendering toolset and platform, which allows for immersive experiences on mobile HMDs and next-gen displays, making it ideal for VR and AR.
Talk
Booth Theater 403
2:00pm – 5:00pm
Wes Bunn (Sr. Technical Writer, Epic Games)
Luis Cataldi (Director, Education & Learning Resources, Epic Games)
Join Epic Games for a VR starter session, a live training tutorial where participants will learn the basics of creating VR projects and VR best practices in Unreal Engine (UE4).
Instructor-Led Lab
Room 513
Register Here
2:30pm - 2:55pm
Ken Dahm (Research Scientist, NVIDIA)
We show that the equations for reinforcement learning and light transport simulation are related integral equations; a schematic comparison follows this entry.
Talk
Booth Theater 403
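
For context, the structural relationship this abstract refers to can be sketched as follows. The notation is the standard textbook form, assumed here rather than taken from the paper. The rendering equation expresses outgoing radiance as emission plus an integral over incident directions:

L(x, \omega) = L_e(x, \omega) + \int_{\Omega} f_r(\omega_i, x, \omega)\, \cos\theta_i \, L(h(x, \omega_i), -\omega_i)\, \mathrm{d}\omega_i

while the Q-learning update expresses the value of a state-action pair as an immediate reward plus the discounted value of reachable states:

Q(s, a) \leftarrow (1 - \alpha)\, Q(s, a) + \alpha \big( r(s, a) + \gamma \max_{a'} Q(s', a') \big)

Both define a quantity at a point as a local term plus a weighted contribution from the points reachable from it, which is the connection the talk builds on.
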
3:00pm - 3:25pm
Nicolas Dalmasso (Innovation Director, Optis)
Optis has been involved in advanced optical simulation for the past 25 years and has recently invested in VR for virtual prototyping.
Talk
Booth Theater 403
3:30pm - 3:55pm
Ben Widdowson (Pre Sales Consultant, Lightworks)
Learn how manufacturers are automating their photorealistic digital and VR/AR visualization pipelines, and bringing them in-house, out of Siemens Teamcenter and NX through JT.
Talk
Booth Theater 403
4:00pm - 4:25pm
Jacopo Pantaleoni (Senior Research Scientist, NVIDIA)
In this manuscript, inspired by a simpler reformulation of primary sample space Metropolis light transport, we derive a novel family of general Markov chain Monte Carlo algorithms called charted Metropolis-Hastings.
Talk and Technical Paper
Booth Theater 403
4:30pm - 4:55pm
Wes McDermott (Substance Integration Product Manager / Training, Allegorithmic)
Substance Designer is a powerful tool for creating physically-based procedural materials and MDLs.
Talk
Booth Theater 403
5:00pm - 5:25pm
Mitch Muncy (Senior Product Line Manager, Simulation, Autodesk)
Autodesk Project Dreamcatcher takes the next step in the world of computation, artificial intelligence, and machine learning by harnessing the power of computing to deliver on the promise of Computer Aided Design.
Talk
Booth Theater 403
5:30pm - 5:55pm
Martin-Karl Lefrancois (DevTech Software Engineer Lead, NVIDIA)
Learn about the new AI denoiser that will be available in the OptiX 5.0 SDK. We will explain why we chose to apply AI to the denoising problem.
Talk and Technical Paper
Booth Theater 403
THURSDAY, AUGUST 3
10:00am - 10:25am
Martin-Karl Lefrancois (DevTech Software Engineer Lead, NVIDIA)
Learn about the new AI denoiser that will be available in the OptiX 5.0 SDK.
Talk
Booth Theater 403
10:30am - 10:55am
Chris Evans (Senior Character Technical Director, Epic)
In 1998 Stanford University teamed up with the Soprintendenza ai beni artistici e storici per le province di Firenze to laser scan Michelangelo's David.
Talk
Booth Theater 403
11:00am - 11:25am
Martin Hill (VFX Supervisor, Weta Digital)
Behind the scenes of Valerian and the City of a Thousand Planets
Talk
Booth Theater 403
11:30am - 11:55am
Tristan Lorach (Manager, DevTech Professional Visualization Group, NVIDIA)
This presentation is intended for graphics developers who are developing against the Vulkan API using low-level C++ code.
Talk
Booth Theater 403
12:00pm - 12:25pm
Tom-Michael Thamm (Director for Software Product Management, NVIDIA)
We'll present an overview of OptiX 5.0, MDL 2017.1, and NVIDIA IndeX 1.5.
Talk
Booth Theater 403
12:30pm - 12:55pm
Nathan Watanabe (TBA, Icon4x4)
We'll go over how scan data, NVIDIA technology, and Fusion 360 allow us to design and continually improve our vehicles.
Talk
Booth Theater 403
1:00pm - 1:25pm
Julian Reyes (Director of VR/AR, Fusion Media Group Labs)
Mars 2030 is an interactive virtual reality simulation that offers a breathtaking look into the life of an astronaut hard at work studying and exploring the Martian landscape.
Talk
Booth Theater 403
1:30pm - 2:20pm
Alexey Panteleev (Senior Developer Technology Engineer, NVIDIA)
360 video is a new and exciting way to share immersive content with other people, but rendering such video with high quality and high performance is difficult.
Talk
Booth Theater 403
VR Technical Talk
SUNDAY, JULY 30 | ROOM #404A & 404B | 9:00am - 9:50am

TBA

TBA (TBA, TBA)

TBA

Programming for High Dynamic Range Rendering and Display on NVIDIA GPUs
SUNDAY, JULY 30 | ROOM #404A & 404B | 10:00am - 10:50am

We'll provide an introduction to high-dynamic range and describe application programming techniques for HDR rendering and display on NVIDIA GPUs. We'll discuss concepts such as colorspaces, expanding chromaticity versus luminance, and scene- and display-referred imaging. For application developers, takeaways will include methods to query and set GPU and display capabilities for HDR as well as OpenGL and DirectX programming to render and display HDR imagery.
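
As background for the programming takeaways, here is a minimal sketch of the step most HDR pipelines share: rendering into a floating-point color buffer so that values above 1.0 survive until the display hand-off. It assumes GLEW and a core-profile OpenGL context, and is illustrative rather than code from the talk.

// Minimal sketch: allocating a 16-bit floating-point framebuffer for HDR
// rendering with core OpenGL. Illustrative setup, not session code.
#include <GL/glew.h>

GLuint createHdrFramebuffer(int width, int height, GLuint* colorTexOut) {
    GLuint fbo, colorTex;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    // GL_RGBA16F keeps luminance values above 1.0 instead of clamping,
    // which is the prerequisite for HDR tone mapping and display output.
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_HALF_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return 0;  // caller handles the error
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *colorTexOut = colorTex;
    return fbo;
}

The scene is rendered into this buffer; the final pass then maps the linear HDR values to the display's transfer function, using the capabilities queried from the GPU and display as described in the session.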

Thomas True (Senior Applied Engineer, Professional Video and Image Processing, NVIDIA)

Thomas True is a senior applied engineer for Professional Video and Image Processing at NVIDIA. Thomas has an M.S. in computer science from the Graphics Lab at Brown University and a B.S. from the Rochester Institute of Technology.

NVIDIA Vulkan Support for 2017
SUNDAY, JULY 30 | ROOM #404A & 404B | 11:00am - 11:50am

Khronos released Vulkan 1.0 last year to provide application developers a high-efficiency API for compute and graphics intended for modern GPUs. NVIDIA has been hard at work extending Vulkan to support multiple GPUs, allow more efficient shader management, and incorporate NVIDIA's latest GPU features for virtual reality and advanced blending & rasterization. Hear from NVIDIA's experts on Vulkan how to best use the API. Also see how NVIDIA's Nsight developer tools make Vulkan development easier for you. Learn how your application can benefit from NVIDIA advancing Vulkan as a cross-platform, open industry standard.
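
As background for working with Vulkan extensions such as those mentioned above, here is a minimal sketch of enumerating physical devices and checking whether a device extension is available before enabling it. It uses only the standard Vulkan 1.0 C API and is illustrative, not code from the session.

#include <vulkan/vulkan.h>
#include <cstring>
#include <vector>

// Returns true if the physical device advertises the named extension
// (e.g. VK_KHR_SWAPCHAIN_EXTENSION_NAME), so it can be requested at
// device-creation time.
bool deviceSupportsExtension(VkPhysicalDevice device, const char* name) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(device, nullptr, &count, exts.data());
    for (const VkExtensionProperties& e : exts)
        if (std::strcmp(e.extensionName, name) == 0)
            return true;
    return false;
}

// Enumerates all GPUs visible to the instance, the starting point for
// multi-GPU configurations.
std::vector<VkPhysicalDevice> listPhysicalDevices(VkInstance instance) {
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());
    return devices;
}
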

Chris Hebert (Devtech Engineer, NVIDIA)

Chris Hebert has worked with real-time rendering and data visualization for 20 years across the games and pro-vis industries. He has worked on algorithm development for path rendering, real-time ray tracing, and fluid simulation. Chris joined NVIDIA in March 2015 and now specializes in rendering optimization for 2D/3D graphics and compute.

Jeff Kiel (Senior Manager, Graphics Tools, NVIDIA)

Jeff Kiel is a senior manager of Graphics Tools at NVIDIA. His responsibilities include development and oversight of graphics performance and debugging tools, including Nsight Visual Studio Edition and Tegra Graphics Debugger. Previous projects at NVIDIA include PerfHUD and ShaderPerf. Previously, Jeff worked on PC and console games at Interactive Magic and Sinister Games/Ubisoft. He has given presentations at many GDC and SIGGRAPH conferences and contributed articles to graphics-related publications. His passion for the art started in the G-Lab at the University of North Carolina at Chapel Hill, where he received his B.S. in mathematical sciences.

Accelerating your VR Applications with VRWorks
SUNDAY, JULY 30 | ROOM #404A & 404B | 12:00pm - 12:50pm

Across graphics, audio, video, and physics, the NVIDIA VRWorks suite of technologies helps developers maximize performance and immersion for VR applications. We'll explore the latest features of VRWorks, explain the VR-specific challenges they address, and provide application-level tips and tricks to take full advantage of these features. Special focus will be given to the details and inner workings of our latest VRWorks feature, Lens Matched Shading, along with the latest VRWorks integrations into Unreal Engine and Unity.

Evan Hart (Devtech Engineer, NVIDIA)

Evan Hart is a seasoned developer technology engineer with 18 years' experience helping developers make their apps better. He has worked with everything from VR to HDR.

Next-Generation GPU Rendering: High-End Production Features on GPU
SUNDAY, JULY 30 | ROOM #404A & 404B | 1:00pm - 1:50pm

Take a look at the next generation of GPU-accelerated rendering. See how advances such as MDL materials, procedural shading, and adaptive lighting algorithms are changing how high-end CG productions are created.

Vladimir Koylazov (CTO, Chaos Group)

Vladimir Koylazov (Vlado) has more than 15 years of software development experience, the majority of which he spent developing and improving the render engine V-Ray. Passionate about 3D graphics and programming, Vlado is the driving force behind Chaos Group's software solutions. Vladimir is CTO of Chaos Software and one of the original creators of the V-Ray renderer.

From Scan to Materials with X-Rite TAC7, Substance and MDL
SUNDAY, JULY 30 | ROOM #404A & 404B | 2:00pm - 2:50pm

A worldwide leader in procedural texturing for the gaming industry with its Substance technology, Allegorithmic has greatly expanded its breadth to become a cornerstone of the material workflow for all industries. After adding multi-layered material capability through MDL support, Substance Designer now natively supports the X-Rite TAC7 scan data format and provides a full set of tools to turn scans into actual PBR materials for your engine, whether it is real-time or an advanced MDL-capable ray tracer. During the session, Sébastien and Francis will explain the full "scan to material" workflow with actual industrial use cases. X-Rite's TAC7, the worldwide reference in material scanners, now feeds the MDL and Substance Designer native libraries and editing capabilities, bringing the real-life world into an infinity of possibilities, whether that means solving complex tiling and resolution challenges or creating a brand-new material from actual scanned data.

Sébastien Deguy (CEO, Allegorithmic)

Sebastien Deguy is the Founder of Allegorithmic SAS and serves as its Chief Executive Officer and President. Dr. Deguy has a computer science background with a specialization in mathematics, random processes, simulation, computer vision, and image synthesis. He is also a musician and an award-winning director and producer of traditional and animated short films.

Francis Lamy (CTO & SVP, X-Rite)

TBA

Nicolas Paulhac (Product Manager, Allegorithmic)

Nicolas Paulhac is a product manager and color, material, and finish designer at Allegorithmic. He studied design in France and worked as an industrial designer for 10 years in consumer electronics at Acer Computers. Nicolas also worked on smart home automation at Hager. He specialized in color, materials, and manufacturing process design as a creative material specialist at Nokia's Color and Material Advanced Studio in London and later at Microsoft.

Marc S. Ellens (Senior Software Engineer and TAC Specialist, X-Rite Incorporated)

Marc S. Ellens is a senior research scientist with X-Rite in Grand Rapids, MI. He received his Ph.D. in computer-aided geometric design from the University of Utah. Employed at X-Rite for more than 10 years, he has been involved in research and development efforts toward the capture and reproduction of appearance. Ellens has presented at numerous conferences, including the NVIDIA GPU Technology Conference, the SPIE Color Image Conference, and the EI Materials Conference. He is named on three patents related to material visualization and reproduction and has been a member of ACM SIGGRAPH for more than 20 years.

Falcor: A Framework for Prototyping and Sharing Rendering Techniques
SUNDAY, JULY 30 | ROOM #404A & 404B | 3:00pm - 3:50pm

Falcor is the primary prototyping framework used by the NVIDIA graphics research team. It is an open-source framework, developed and maintained by a software team at NVIDIA Research. Falcor underlies the experimental code for over 25 papers, talks, and demos (including our award-winning 2016 SIGGRAPH Emerging Technology demo). This talk will present Falcor, its architecture and feature set, and demonstrate examples of how it can improve prototyping productivity. Unlike a game engine, its codebase is simple to learn, understand, and use efficiently, while still providing a rich feature set capable of building a wide range of experimental prototypes. It was designed from the ground up to allow researchers and engineers to quickly develop, implement, and share graphics techniques. Falcor supports DirectX 12 and Vulkan, with a design close enough to these APIs' model to maintain their performance advantages while still greatly reducing coding overhead. Building on top of this layer, Falcor provides a rich set of common graphics features including a physically-based shading system, model and scene I/O, skinned animation, motion paths, profiling utilities, GUI widgets, screen and video capture, text rendering, VR support, and an internal debug layer. Falcor also includes a set of cutting-edge graphics effects, for use as examples or direct inclusion in your prototypes, including modern shadow algorithms, post-processing techniques, ambient occlusion, temporal antialiasing, multi-layer alpha blending, specular antialiasing, and many more.

Nir Benty (Senior Graphics Software Engineer, NVIDIA)

Nir Benty is a graphics software engineer at NVIDIA Research, specializing in rendering systems and graphics techniques. He has over 10 years of experience building and optimizing graphics tools, frameworks, and engines. Nir joined NVIDIA 2.5 years ago, where he leads the team that builds Falcor.

Deep Learning for Conservation - Lessons from the iNaturalist App
SUNDAY, JULY 30 | ROOM #404A & 404B | 4:00pm - 4:50pm

Imagine a real-time, handheld, accurate species identification tool helping land managers monitor and protect natural resources, farmers prevent crop pest and disease infestations, and law enforcement stop illegal wildlife trafficking. Advances in deep learning have made such a tool possible. On iNaturalist, species identifications for user-contributed photos are 'crowd-sourced' from a community of over 400,000 naturalists and experts. But average 'time to identification' has been on the order of days, not seconds. Working with NVIDIA and Visipedia, we used the iNaturalist library of over 5,000,000 labeled images to train a deep learning model to identify images to the species level. The iNaturalist iPhone app now uses this model along with spatio-temporal data to provide real-time species identifications. Based on a representative test sample of the kind of photos submitted to iNaturalist, this app suggested the correct species in a list of top 10 results 78% of the time. In addition, the app provided coarser 'recommendations' that fall between order and genus 77.4% of the time: "we're pretty sure this is in the ladybug family." These recommendations were correct 92.75% of the time. Incorporating real-time deep learning-enabled species identifications into iNaturalist has opened up new opportunities for conservation, such as partnering with law-enforcement officers in Peru to prevent wildlife trafficking. However, it has also opened up new questions. For example, how will the iNaturalist community interact with these algorithms, and how will the overall quantity and quality of the data we're collecting be affected?

Alex Shepard (Software Developer, iNaturalist at California Academy of Sciences)

Alex Shepard is a software developer working on iNaturalist at California Academy of Sciences. In addition to working on the iNaturalist iOS app, Alex has led the team's exploration of computer vision and deep learning approaches to wildlife identification. Alex has a BA in History from the University of Washington and an MFA in Printmaking from the San Francisco Art Institute. Prior to joining iNaturalist in the fall of 2014, Alex led iOS development at Eyefi, a photography startup in Mountain View.

A Multi-GPU Scalable SDK for Real-Time Stereo Stitching of 360 Video and Audio
SUNDAY, JULY 30 | ROOM #404A & 404B | 5:00pm - 5:50pm

We'll introduce VRWorks Video 360 - NVIDIA's implementation of a motion-flow-based, real-time, CUDA-accelerated, GPU-scalable, 360 Stereo Stitching SDK with support for both video and audio. We will go over the overall stitching pipeline, show example videos from different stereo rigs stitched using the SDK, describe the APIs, explain the process of writing sample apps using the SDK, and analyze the dos and don'ts in obtaining a high-quality stitched output.
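
For background, the output of such a pipeline is typically an equirectangular panorama, and the heart of the resampling step is the mapping between output pixels and view directions. The sketch below shows that generic projection; it is illustrative math, not code from the VRWorks Video 360 SDK.

// Minimal sketch: mapping an equirectangular output pixel (u, v) in [0,1)
// to a unit-length view ray. This is the generic projection underlying
// 360 panoramas, not SDK code.
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 equirectPixelToRay(float u, float v) {
    const float kPi = 3.14159265358979f;
    float longitude = (u - 0.5f) * 2.0f * kPi;  // -pi..pi around the up axis
    float latitude  = (0.5f - v) * kPi;         // +pi/2 at top, -pi/2 at bottom
    Vec3 ray;
    ray.x = std::cos(latitude) * std::sin(longitude);
    ray.y = std::sin(latitude);
    ray.z = std::cos(latitude) * std::cos(longitude);
    return ray;  // unit length by construction
}

The stitcher's job is then to decide, for each such ray, which camera images contribute and how to blend them, which is where the motion-flow-based approach described above comes in.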

Thomas True (Senior Applied Engineer, Professional Video and Image Processing, NVIDIA)

Thomas True is a senior applied engineer for Professional Video and Image Processing at NVIDIA. Thomas has an M.S. in computer science from the Graphics Lab at Brown University and a B.S. from the Rochester Institute of Technology.

NVIDIA Research: Perceptual Insights into Foveated Virtual Reality
MONDAY, JULY 31 | ROOM #404A & 404B | 9:00am - 9:50am

Foveated rendering is a class of algorithms that increase the performance of virtual reality applications by reducing image quality in the periphery of a user's vision. We'll present results from our recent and ongoing work in understanding the perceptual nature of human peripheral vision and its uses in improving the quality and performance of foveated rendering for VR applications. We'll also present a list of open challenges in this area.
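
To make the idea concrete, here is a toy sketch of the core mechanism: shading work is reduced as angular distance from the gaze point grows. The thresholds and rates below are illustrative placeholders, not the perceptual data from this research.

// Toy sketch: choosing a shading-rate divisor from eccentricity, the
// angular distance (in degrees) between a pixel and the gaze point.
// Threshold values are made-up placeholders for illustration only.
#include <cmath>

int shadingRateDivisor(float pixelAngleX, float pixelAngleY,
                       float gazeAngleX, float gazeAngleY) {
    float dx = pixelAngleX - gazeAngleX;
    float dy = pixelAngleY - gazeAngleY;
    float eccentricity = std::sqrt(dx * dx + dy * dy);
    if (eccentricity < 5.0f)  return 1;  // fovea: full shading rate
    if (eccentricity < 15.0f) return 2;  // near periphery: half rate
    return 4;                            // far periphery: quarter rate
}

The research question the talk addresses is exactly how aggressive such a falloff can be before users perceive the quality loss.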

Anjul Patney (Senior Research Scientist, NVIDIA)

Anjul Patney is a senior research scientist at NVIDIA in Redmond, Washington, where he works in the area of high-performance real-time rendering. He received his M.S. and Ph.D. from the University of California, Davis, in 2013, and his B.Tech. in electrical engineering from the Indian Institute of Technology Delhi in 2007. Anjul's current interests lie in the area of high-performance mixed-reality rendering, perceptual 3D graphics, and gaze-contingent rendering.

Tree VR
MONDAY, JULY 31 | ROOM #404A & 404B | 10:00am - 10:50am

New Reality Co., the creative studio responsible for the critically-acclaimed 2016 VR drama Giant, will expand upon the ideation, production, and development of their latest piece Tree, an ambitious foray into building a photorealistic CG jungle using NVIDIA Quadro P6000 GPUs. Tree, co-directed by Milica Zec and Winslow Porter and created in collaboration with The Rainforest Alliance, debuted at Sundance New Frontier 2017 before sprouting installations at Tribeca, TED, Cannes, and Montreal's Phi Centre. Zec and Porter will elucidate the invigorating challenges of simulating real growth from a seed to an emergent tree, crafting real-time lighting and shadows, and experimenting with numerous different software pipelines to bring this unprecedented project to life.

Winslow Turner Porter III (Film Director and Producer, New Reality Company)

Winslow Porter is a Brooklyn based director, producer and creative technologist specializing in virtual reality and large-scale immersive installations. Winslow has always been fascinated with the possibilities of how the intersection of art and technology can elevate storytelling. He started out as a feature film editor, but pivoted to interactive music for modern dance and art/tech after graduating from NYU Tisch's Interactive Telecommunications Program (ITP) in 2010. With over six years of experiential work under his belt, he has helped create interactive art experiences for Google, Delta, Diesel and Wired to name a few. Winslow also produced the Tribeca Film Festival Transmedia Award-winning documentary CLOUDS, among other acclaimed new media projects. Winslow formed studio New Reality Company with Milica Zec in 2016, creating both Giant and Tree, and continues consulting on dozens of interactive/immersive projects with notable creatives and brands. He and Zec were both named designers in residence at A/D/O, a design center in Greenpoint Brooklyn; the two were recently selected to Adweek's Top 100 creatives as digital innovators.

Milica Zec (Film and Virtual Reality Director, Editor and Screenwriter, New Reality Company)

Milica Zec is an NY/LA based film and virtual reality director, editor, and screenwriter. Her directorial debut in virtual reality was a short narrative piece called Giant, which premiered at Sundance Film Festival Frontier 2016. Her second VR experience Tree was also selected to Sundance New Frontier, Tribeca Film Festival's Virtual Arcade, and TED2017: The Future You in Vancouver. Since creating Tree, Milica and her New Reality Company co-founder Winslow Porter were named as designers in residence at A/D/O, a creative center in Brooklyn exploring boundaries in design, and named to Adweek's top 100 creatives in the digital innovation category.

Dave Gougé (Head of Marketing & Publicity, Weta Digital)

Dave Gougé is in charge of managing Weta Digital's reputation. Dave ensures that the groundbreaking work and innovative technologies produced at Weta Digital, and the artists who create them, are recognized for their excellence. As part of this, he and his team work with journalists, media outlets and professional organizations around the world to highlight the company's achievements. In addition to international film activities, Dave manages Weta Digital's profile as a significant technology and entertainment company in New Zealand. Dave joined Weta Digital in 2010 after working as Autodesk's Senior Brand Manager for Media and Entertainment. Prior to Autodesk, he worked for over a decade in advertising and integrated marketing for technology and consumer brands. Dave serves on the New Zealand Board of the Visual Effects Society and is an active speaker and event organizer in the entertainment technology space.

Amy Minty (Marketing Manager, Weta Digital)

Amy Minty joined Weta Digital in February 2016.

NVIDIA Research: Sharing Physically Based Materials Between Renderers with MDL
MONDAY, JULY 31 | ROOM #404A & 404B | 11:00am - 11:50am

We'll discuss the basics of NVIDIA's Material Definition Language, showing how a single material can be used to define matching appearances between different renderers and rendering techniques. End users will learn how physically based definitions can be defined, while developers will learn what's entailed in supporting MDL within their own products or renderers.

Jan Jordan (Product Manager, MDL, NVIDIA)

Jan Jordan is the product manager for the NVIDIA Material Definition Language. Before joining NVIDIA, Jan's diverse professional experience spanned research work on practical VR applications to working as an art director in computer games. He is a long-time member of NVIDIA's Advanced Rendering team, where he focuses on enabling material workflows across many different applications and renderers. Jan is a graduate engineer of applied computer science from the Fachhochschule für Wirtschaft und Technik Berlin, Germany, and has a B.S. in computer science from the RTC Galway, Ireland.

Lutz Kettner (Director, Rendering Software and MDL, NVIDIA)

Lutz Kettner leads the design and engineering efforts for the Material Definition Language, MDL, and the Iray renderer from the NVIDIA Advanced Rendering Center. He has been working on leading software products in advanced rendering, language design, API design, and geometry for 19 years. He is known for his influential work on the open source Computational Geometry Algorithms Library, CGAL. He holds a Ph.D. in computer science from ETH Zurich, Switzerland, worked as a researcher at the University of North Carolina at Chapel Hill, and led a research group at the Max-Planck-Institute in Saarbrücken, Germany. He has also served on ISO and ECMA standardization committees.

Advances in Real-Time Graphics at Pixar
MONDAY, JULY 31 | ROOM #404A & 404B | 12:00pm - 12:50pm

Explore how real-time graphics are used at Pixar Animation Studios. We'll describe the unique needs for film production and our custom solutions, including Presto and our open-source projects Universal Scene Description (USD), OpenSubdiv, and Hydra. Don't miss this great opportunity to learn about graphics, algorithms, and movies!

Dirk Van Gelder (Senior Software Engineer, PIXAR)

Dirk Van Gelder joined Pixar Animation Studios in 1997 as a software engineer on the Academy Award®-nominated film "A Bug's Life" and the Academy Award®-winning short film "Geri's Game," working on animation software and the studio's first use of subdivision surfaces. Dirk has worked on software for every Pixar movie since, including the ground-up rewrite of the studio's proprietary animation system Presto. Currently Dirk leads the Presto Character team within the Pixar Studio Tools Department.

David Yu (Senior Graphics Software Engineer, PIXAR)

David Yu is a senior graphics software engineer at Pixar.

Pol Jeremias-Vila (Senior Graphics Engineer, PIXAR)

Pol Jeremias-Vila is passionate about technology and art. He grew up near Barcelona and moved to California in 2006. Since then, Pol has researched computer graphics and worked on multiple games for companies such as LucasArts and SoMa Play. Today, he helps create movies at Pixar Animation Studios. In his spare time, he has co-founded Shadertoy.com and Beautypi. When he is not programming, you'll find him running, reading, or watching movies.

Khronos API Ecosystem Update – Including Vulkan and OpenXR for Cross-Platform Virtual Reality
MONDAY, JULY 31 | ROOM #404A & 404B | 1:00pm - 1:50pm

Discover how over 100 companies cooperate at the Khronos Group to create open, royalty-free standards that enable developers to access the power of the GPU to accelerate demanding compute, graphics, and vision applications. Learn the very latest updates on a number of Khronos cross-platform standards, including the newly announced OpenXR for portable AR and VR, Vulkan, SPIR-V, OpenVX, OpenGL, and OpenCL. We'll also provide insights into how these open standards APIs are supported across NVIDIA's product families.

Neil Trevett (VP, NVIDIA)

Neil Trevett has spent over 30 years in the 3D graphics industry. At NVIDIA, he works to enable applications to leverage advanced silicon acceleration, with a particular focus on augmented reality. Neil is also the elected president of the Khronos Group standards consortium, where he initiated the OpenGL ES API, now used on billions of mobile phones, and helped catalyze the WebGL project to bring interactive 3D graphics to the web. Neil also chairs the OpenCL working group defining the open standard for heterogeneous parallel computation and has helped establish and launch the OpenVX vision API, the new-generation Vulkan GPU API, and the OpenXR standard for portable AR and VR.

NVIDIA OpenGL in 2017
MONDAY, JULY 31 | ROOM #404A & 404B | 2:00pm - 2:50pm

OpenGL developers should attend this session to get the most out of OpenGL on NVIDIA Quadro, GeForce, and Tegra GPUs. Hear straight from an OpenGL expert at NVIDIA how the OpenGL standard continues to evolve with NVIDIA's support. Learn all the details about OpenGL's latest version. See how NVIDIA's Nsight developer tools make OpenGL development easier for you. Learn how your application can benefit from NVIDIA advancing OpenGL as a cross-platform, open industry standard.

Mark Kilgard (Principal System Software Engineer, NVIDIA)

Mark Kilgard is a Principal System Software Engineer at NVIDIA working on OpenGL, vector graphics, web page rendering, and GPU-rendering algorithms. Mark has over 25 years' experience with OpenGL including the specification of numerous important OpenGL extensions. He implemented the OpenGL Utility Toolkit (GLUT) library. Mark authored two books and is named on over 60 graphics-related patents.

Jeff Kiel (Senior Manager, Graphics Tools, NVIDIA)

Jeff Kiel is a senior manager of Graphics Tools at NVIDIA. His responsibilities include development and oversight of graphics performance and debugging tools, including Nsight Visual Studio Edition and Tegra Graphics Debugger. Previous projects at NVIDIA include PerfHUD and ShaderPerf. Previously, Jeff worked on PC and console games at Interactive Magic and Sinister Games/Ubisoft. He has given presentations at many GDC and SIGGRAPH conferences and contributed articles to graphics-related publications. His passion for the art started in the G-Lab at the University of North Carolina at Chapel Hill, where he received his B.S. in mathematical sciences.

Introduction and Techniques with NVIDIA GVDB Voxels
MONDAY, JULY 31 | ROOM #404A & 404B | 3:00pm - 3:50pm

We'll explore NVIDIA GVDB Voxels, a new open-source SDK framework for generic representation, computation, and rendering of voxel-based data. We'll introduce the features of the new SDK and cover applications and examples in motion pictures, scientific visualization, and 3D printing. NVIDIA GVDB Voxels, based on GVDB sparse volume technology and inspired by OpenVDB, manipulates large volumetric datasets entirely on the GPU using a hierarchy of grids. The second part of the talk covers in-depth use of the SDK, with code samples and a look at its design. A sample code walk-through will demonstrate how to build sparse volumes, render high-quality images with NVIDIA OptiX integration, produce dynamic data, and perform compute-based operations.
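
To make the "hierarchy of grids" idea concrete, here is a minimal, purely conceptual Python sketch of a sparse volume in the spirit of GVDB and OpenVDB. The class and names are illustrative assumptions, not the GVDB Voxels API, which is C++/CUDA and runs on the GPU.

    import numpy as np

    BRICK = 8  # voxels per brick edge (GVDB's brick sizes are configurable)

    class SparseVolume:
        """Dense voxel bricks allocated only where data exists (illustrative)."""
        def __init__(self, background=0.0):
            self.bricks = {}            # (bx, by, bz) -> dense BRICK^3 array
            self.background = background

        def set(self, x, y, z, value):
            key = (x // BRICK, y // BRICK, z // BRICK)
            brick = self.bricks.setdefault(
                key, np.full((BRICK, BRICK, BRICK), self.background, np.float32))
            brick[x % BRICK, y % BRICK, z % BRICK] = value

        def get(self, x, y, z):
            brick = self.bricks.get((x // BRICK, y // BRICK, z // BRICK))
            if brick is None:
                return self.background
            return brick[x % BRICK, y % BRICK, z % BRICK]

    vol = SparseVolume()
    vol.set(1000, 2000, 3000, 1.5)   # only one 8x8x8 brick is ever allocated
    print(vol.get(1000, 2000, 3000), len(vol.bricks))  # 1.5 1

The payoff is memory proportional to occupied space rather than to the bounding box, which is what allows large volumetric datasets to live entirely in GPU memory.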

Rama Hoetzlein (Manager of Graphics Tools, NVIDIA)

Rama Hoetzlein is the lead architect of NVIDIA GVDB Voxels at NVIDIA, where he investigates applications of sparse volumes to 3D printing, scientific visualization, and motion pictures. In 2010, Rama's interdisciplinary thesis work in media arts at the University of California, Santa Barbara, explored creative support tools for procedural modeling. He studied computer science and fine arts at Cornell University, and co-founded the Game Design Initiative at Cornell in 2001.

The Virtual Frontier: Computer Graphics Challenges in Virtual & Augmented Reality
MONDAY, JULY 31 | ROOM #404A & 404B | 4:00pm - 4:50pm

Game graphics are maturing: near-cinema quality, on sophisticated APIs, game engines, and GPUs. Consumer virtual reality is the Wild West: exciting new opportunities and wide open research challenges. In this talk, Dr. McGuire will identify the most critical of these challenges and describe how NVIDIA Research is tackling them. The talk will focus on reducing latency, increasing frame rate and field of view, and matching rendering to both display optics and the human visual system.

Morgan McGuire (Distinguished Research Scientist, NVIDIA)

Morgan McGuire is the author of "The Graphics Codex" and co-author of "Computer Graphics: Principles & Practice" and "Creating Games." He co-chaired the I3D'08, I3D'09, NPAR'10, and HPG'17 conferences, and was the founding editor and editor-in-chief of the Journal of Computer Graphics Techniques. Morgan contributed to many commercial products, including NVIDIA GPUs, the Unity game engine, and the game series "Titan Quest," "Marvel Ultimate Alliance," and "Skylanders." He is a former professor of computer science at Williams College and now works at NVIDIA as a Distinguished Research Scientist. He holds a B.S. and M.Eng. from MIT and an M.S. and Ph.D. from Brown University.

The Iray Light Transport Simulation and Rendering System
MONDAY, JULY 31 | ROOM #404A & 404B | 5:00pm - 5:50pm

We reason about the design decisions that led to the system architecture of NVIDIA Iray. Scalable parallelization, from single devices to clusters of GPU systems, required new approaches to motion blur simulation, anti-aliasing, and fault tolerance, all based on consistent sampling that at the same time enables push-button rendering with only a minimal set of user parameters. We then dive into technical details of light transport simulation, especially how Iray deals with geometric light sources, importance sampling, decals, and material evaluation in order to be efficient on GPUs. It is remarkable how well the physically based system extends to modern workflows such as light path expressions and matte objects. The separation of material definition and implementation has been key to the system's performance and rendering quality, and resulted in the emerging standard MDL (Material Definition Language).
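
As a concrete illustration of the quasi-Monte Carlo machinery behind such consistent sampling, here is a minimal Python sketch of a deterministic low-discrepancy sequence; it shows the general technique only, not Iray's proprietary sampler.

    def radical_inverse(i, base):
        """Mirror the digits of i (in the given base) about the radix point."""
        inv, f = 0.0, 1.0 / base
        while i > 0:
            inv += (i % base) * f
            i //= base
            f /= base
        return inv

    def halton_2d(n):
        """First n points of the 2D Halton sequence (bases 2 and 3)."""
        return [(radical_inverse(i, 2), radical_inverse(i, 3)) for i in range(n)]

    # Deterministic: rendering longer refines the same point set instead of
    # replacing it -- one ingredient of consistent, push-button progressive
    # rendering.
    print(halton_2d(4))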

Alexander Keller (Director of Research, NVIDIA)

Alexander Keller is a director of research at NVIDIA, leading advanced rendering research. Before, he had been the chief scientist at Mental Images, responsible for research and conception of products and strategies, including the design of the NVIDIA Iray renderer. Prior to industry, Alexander worked as a full professor for computer graphics and scientific computing at Ulm University, where he co-founded the UZWR (Ulmer Zentrum für wissenschaftliches Rechnen) and received an award for excellence in teaching. Alexander holds a Ph.D. in computer science, has authored more than 27 granted patents, and has published more than 50 papers, mainly in the area of quasi-Monte Carlo methods and photorealistic image synthesis using ray tracing.

Lutz Kettner (Director, Rendering Software and MDL, NVIDIA)

Lutz Kettner leads the design and engineering efforts for the Material Definition Language, MDL, and the Iray renderer from the NVIDIA Advanced Rendering Center. He has been working on leading software products in advanced rendering, language design, API design, and geometry for 19 years. He is known for his influential work on the open source Computational Geometry Algorithms Library, CGAL. He holds a Ph.D. in computer science from ETH Zurich, Switzerland, worked as a researcher at the University of North Carolina at Chapel Hill, and led a research group at the Max Planck Institute in Saarbrücken, Germany. He has also served on ISO and ECMA standardization committees.

HP & NVIDIA Partner On New Innovation
TUESDAY, AUG 1 | Booth Theater 403 | 9:30am - 9:55am

Come by the HP-sponsored NVIDIA booth theater to hear about the latest collaborative innovation. Attend this session to get your name in the drawing to win an NVIDIA P6000.

Josh Peterson (Head of Product Management, HP Workstations & Immersive Computing, HP)

Josh Peterson is currently Head of Product Management for HP's Workstation and Immersive Computing business. He has worldwide product portfolio responsibility for desktop workstations, mobile workstations, immersive computing, and all related accessories. Josh has spent over 16 years at HP, where he has led product management and strategic alliances and has fostered dedicated innovation efforts across multiple product categories, including workstations, mobile workstations, consumer storage, and optical disk standards and technologies. Josh holds a Bachelor's Degree in Mechanical Engineering from the University of Colorado and a Master's Degree in Business Administration from Colorado State University.

Accelerating Rendering Research: R&D Tools and 3D Content
TUESDAY, AUG 1 | Booth Theater 403 | 10:00am - 10:25am

NVIDIA and partners are joining together to accelerate rendering R&D by providing solutions to common problems: access to realistic 3D content, (re)implementation of published rendering algorithms, and the complexity of modern graphics APIs. This talk will describe new open and free solutions to these issues from NVIDIA, Amazon Lumberyard, and SpeedTree.

Nir Benty (Senior Graphics Software Engineer, NVIDIA)

Nir Benty is a graphics software engineer at NVIDIA Research, specializing in rendering systems and graphics techniques. He has over 10 years of experience building and optimizing graphics tools, frameworks, and engines. Nir joined NVIDIA 2.5 years ago and leads the team that builds Falcor.

Atom View: VR Pixels
TUESDAY, AUG 1 | Booth Theater 403 | 10:30am - 10:55am

This talk covers point-cloud rendering for VR. We'll discuss Atom View technology, what we've been doing, how it can be used with CG or scan data, the benefits of the point-cloud workflow, and practical applications of the technology. We'll go over different types of data and workflows for getting into VR. We'll demo a real-world environment (a giant sequoia tree), a computer-generated scene rendered with V-Ray (a scene from the 1979 film Alien), and a super-high-resolution human head performance capture, all in points (our engineer Malik, captured in our studio).

Scott Metzger (Product Manager & Senior Industrial Designer, Allegorithmic)

TBA

Create a 3D Caricature in Minutes with Deep Learning
TUESDAY, AUG 1 | Booth Theater 403 | 11:00am - 11:25am

Face modeling has received much attention in the field of visual computing. There are many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, where low-cost interactive face modeling is a popular approach, especially among amateur users. In this talk, I will introduce a deep-learning-based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed for inferring 3D face models from 2D sketches. The proposed system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system helps users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration has been constructed to promote further research and evaluation of face modeling techniques.
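
A minimal PyTorch sketch of the kind of CNN regression the talk describes - a network mapping a 2D contour drawing to coefficients of a parametric 3D face model. The architecture and sizes below are illustrative assumptions, not the authors' published network.

    import torch
    import torch.nn as nn

    class SketchToFace(nn.Module):
        def __init__(self, n_coeffs=200):   # size of the face basis (assumed)
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(64, 128, 5, stride=2, padding=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.regress = nn.Linear(128, n_coeffs)

        def forward(self, sketch):          # sketch: (B, 1, 128, 128) drawing
            return self.regress(self.features(sketch).flatten(1))

    coeffs = SketchToFace()(torch.randn(2, 1, 128, 128))
    print(coeffs.shape)  # torch.Size([2, 200]) -> drives the 3D face mesh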

Xiaoguang Han (Ph.D., University of Hong Kong)

Xiaoguang Han is a final-year Ph.D. student in the Department of Computer Science at the University of Hong Kong, which he joined in 2013. He received his M.Sc. in Applied Mathematics (2011) from Zhejiang University and his B.S. in Information and Computer Science (2009) from Nanjing University of Aeronautics and Astronautics, China. He was a research associate in the School of Creative Media at City University of Hong Kong from 2011 to 2013. His research interests include computer graphics, computer vision, and computational geometry, especially image/video editing, 3D reconstruction, and discrete geodesic computation. His current research focuses on high-quality 3D modeling and reconstruction using deep neural networks.

NVIDIA Research: Audio-Driven Facial Animation by Joint End-to-End Learning of Pose and Emotion
TUESDAY, AUG 1 | Booth Theater 403 | 11:30am - 11:55am

We present a machine learning technique for driving 3D facial animation by audio input in real time and with low latency. Our deep neural network learns a mapping from input waveforms to the 3D vertex coordinates of a face model, and simultaneously discovers a compact, latent code that disambiguates the variations in facial expression that cannot be explained by the audio alone. During inference, the latent code can be used as an intuitive control for the emotional state of the face puppet.
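
The interface of such a model can be sketched in a few lines of PyTorch: audio features go in, per-vertex positions come out, and a small latent "emotion" vector, learned jointly, is concatenated in the middle. All sizes and feature choices here are illustrative assumptions, not the paper's exact network.

    import torch
    import torch.nn as nn

    class AudioToFace(nn.Module):
        def __init__(self, n_vertices=5000, emotion_dim=16):
            super().__init__()
            self.n_vertices = n_vertices
            self.audio_net = nn.Sequential(          # audio-analysis stand-in
                nn.Conv1d(32, 64, 3, padding=1), nn.ReLU(),
                nn.Conv1d(64, 128, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),
            )
            self.decoder = nn.Sequential(
                nn.Linear(128 + emotion_dim, 256), nn.ReLU(),
                nn.Linear(256, n_vertices * 3),      # xyz for every vertex
            )

        def forward(self, audio_feats, emotion):
            # audio_feats: (B, 32, T) features over an audio window
            # emotion:     (B, emotion_dim) latent code set at inference time
            a = self.audio_net(audio_feats).squeeze(-1)
            out = self.decoder(torch.cat([a, emotion], dim=1))
            return out.view(-1, self.n_vertices, 3)

    verts = AudioToFace()(torch.randn(2, 32, 64), torch.randn(2, 16))
    print(verts.shape)  # torch.Size([2, 5000, 3])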

Timo Aila, Tero Karras & Samuli Laine (Principal Research Scientists, NVIDIA)

Tero Karras is a principal research scientist at NVIDIA Research, which he joined in 2009. His research interests include machine learning for content creation, real time ray tracing, GPU computing, and parallel algorithms. He has had a pivotal role in NVIDIA's ray tracing research, especially related to efficient construction of acceleration structures.

Real-Time 3D Motion Capture Using Webcams and GPUs
TUESDAY, AUG 1 | Booth Theater 403 | 12:00pm - 12:25pm

We'll provide a brief overview of how to apply GPU-based deep learning techniques to extract 3D human motion capture from standard 2D RGB video. We'll describe in detail the stages of our CUDA-based pipeline, from training to cloud-based deployment. Our training system is a novel mix of real-world data collected with Kinect cameras and synthetic data based on rendering thousands of virtual humans generated in the Unity game engine. Our execution pipeline is a series of connected models, including 2D video to 2D pose estimation and 2D pose to 3D pose estimation. We'll describe how this system can be integrated into a variety of mobile applications ranging from social media to sports training. We'll present a live demo using a mobile phone connected to an AWS GPU cluster.
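
The pipeline's shape is easy to express as two chained models; the stand-in networks below only illustrate the data flow (frames to 2D joints to 3D joints) and are not wrnch's proprietary models.

    import torch
    import torch.nn as nn

    N_JOINTS = 17  # a common skeleton size (assumption)

    pose_2d = nn.Sequential(             # video frame -> 2D joint coordinates
        nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, N_JOINTS * 2),
    )
    lift_3d = nn.Sequential(             # 2D joints -> 3D joints
        nn.Linear(N_JOINTS * 2, 256), nn.ReLU(),
        nn.Linear(256, N_JOINTS * 3),
    )

    def frames_to_3d(frames):            # frames: (B, 3, H, W) RGB video
        joints_2d = pose_2d(frames)      # (B, N_JOINTS * 2), image space
        joints_3d = lift_3d(joints_2d)   # (B, N_JOINTS * 3), camera space
        return joints_3d.view(-1, N_JOINTS, 3)

    print(frames_to_3d(torch.randn(1, 3, 224, 224)).shape)  # (1, 17, 3)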

Paul Kruszewski (CEO, wrnch)

As a serial entrepreneur, Paul Kruszewski has been hustling and hacking in visual computing since he was 12, when he leveraged a $250 livestock sale into a $1,000 TRS-80 Color Computer. Soon after, he wrote his first video game. Paul went on to obtain a Ph.D. in the probabilistic analysis of algorithms from McGill University. In 2000, he founded AI.implant and developed the world's first real-time navigation middleware for 3D humans. AI.implant was acquired in 2005 by Presagis, the world's leading developer of software tools for military simulation and training. In 2007, he founded GRIP and developed the world's first brain-authoring system for video game characters. GRIP was acquired in 2011 by Autodesk, the world's leading developer of software tools for digital entertainment. In 2014, he founded wrnch to democratize computer vision technology.

Cinematics in Unity: Unleash the Real-Time Rendering Power of the Engine
TUESDAY, AUG 1 | Booth Theater 403 | 12:30pm - 12:55pm

Unity launched Timeline and Cinemachine, innovative storytelling and scene-composition tools that use the power of real-time rendering to create scenes and narratives with both synthetic and live images. Learn more about real-time storytelling and the interactive experiences you can create, including trailers, cutscenes, and in-game sequences.

Adam Myhill (Director of Photography and CG Supervisor, Unity)

Adam Myhill has spent almost two decades in the video game and film worlds, working as a Director of Photography and CG supervisor at EA across multiple titles and as a feature film DP on a number of movies. Most recently, he was at Blackbird Interactive working on the Homeworld prequel as DP, creating cinematics, lighting, and in-game cameras. His procedural cinematic and in-game camera system, Cinemachine, was acquired by Unity, and he now brings his cinematic, lighting, and in-game camera experience to Unity as Head of Cinematics. Adam also holds a patent for procedural cinematography in the area of eSports, covering how to generate movie-like content from games or other variable scenarios in real time.

GPU Computing for the Construction Industry: AR/VR for Learning, Planning, and Safety
TUESDAY, AUG 1 | Booth Theater 403 | 1:00pm - 1:25pm

We'll dive headfirst into some of the current challenges of the construction industry, how we're addressing them, and how we're planning to utilize virtual/augmented reality and real-time GPU computing to address them. To optimize the construction of a building, site logistics must be planned, and all systems analyzed and coordinated to confirm constructability. Along with the use of building information modeling (BIM) and the advent of inexpensive GPU and AR/VR hardware, we're building tools to redefine the planning and analysis process for construction management. No longer are virtual and augmented reality systems just for entertainment; they can help us plan faster, help confirm our client's design goals, and facilitate stronger communication among our team members before and during the construction process.

Kyle Szostek (Senior VDC Engineer, Gilbane Building Company)

Kyle Szostek is a senior virtual design and construction engineer who has been with Gilbane Building Company for the last four years, managing virtual design and construction services for over $2 billion of construction projects. He's focused his work on research and development of collaborative BIM workflows, visualization techniques, and AR/VR tools. With a background in 3D art and a bachelor's degree in architecture from the University of Arizona, Kyle brings a unique 3D visualization skillset to Gilbane's VDC team.

Ken Grothman (Senior Virtual Construction Engineer, Gilbane Building Company)

Ken Grothman is a senior virtual design and construction engineer who has been with Gilbane Building Company for two years, involved in over $1 billion of construction work, including high-end corporate, medical facilities, and mission-critical data infrastructure. Ken specializes in laser scanning and reality capture, and is an active member of the industry's laser scanning community. With a background in design|build architecture and a master's degree in architecture from the University of Kansas, Ken brings a pragmatic, problem-solving skillset to Gilbane's VDC team.

How to Bring Engineering Datasets to Head-Mounted Displays
TUESDAY, AUG 1 | Booth Theater 403 | 1:30pm - 1:55pm

Hear visualization experts explain why professional visualization users, in particular those in virtual engineering, are great candidates to unleash the full potential of HMDs, and how close today's technology brings application developers to the finish line of exploring massive datasets on HMDs. Learn about new hardware (NVIDIA Pascal™-powered NVIDIA Quadro® GPUs), extensions, APIs (NVIDIA VRWorks™: NVIDIA SLI® VR, Single Pass Stereo), techniques (GPU culling), and next steps that enable ESI to create amazing VR experiences even with high node and triangle counts.

Eric Kam (Product Marketing and Community Manager, ESI Group)

Eric Kam is the Product Marketing and Community Manager for ESI Software Group supporting their Immersive Experience (Virtual Reality) solutions. He is an outspoken advocate for the ongoing transformation in Computer Aided Design, Finite Element Analysis, and Computer Aided Engineering. He has spent the bulk of the last 17 years promoting the democratization of previously "analyst driven" technologies to bring the benefits of Virtual Engineering tools to the engineering and manufacturing practitioners themselves.

Vikram Bapat (Product Manager, Immersive Experience, ESI Group)

Vikram Bapat is a product manager on the immersive experience team at ESI. He has more than 8 years of experience enabling engineering teams to leverage ESI's Human Centric Functional DMU solution to explore, validate, and resolve complex integration scenarios. He is responsible for managing ESI's solution built on advanced VR technology, with the aim of providing a true virtual prototype for decision making.

Production-Quality, Final-Frame Rendering on a GPU
TUESDAY, AUG 1 | Booth Theater 403 | 2:00pm - 2:25pm

We'll discuss the latest features of Redshift, the GPU-accelerated renderer running on NVIDIA GPUs that is redefining the industry's perception of GPU final-frame rendering. A few examples of customer work will be demonstrated. This talk will be of interest to industry professionals who want to learn more about GPU-accelerated production-quality rendering, as well as software developers who are interested in GPU-accelerated rendering.

Robert Slater (VP Engineering, Redshift)

Robert Slater is a seasoned GPU software engineer and video game industry veteran, with a vast amount of experience in and passion for the field of programming. As a programmer, Rob has worked for companies such as Electronic Arts, Acclaim, and Double Helix Games (now Amazon Games). During this time, he was responsible for the core rendering technology at each studio, driving their creative and technical development. Rob's graphics engine programming experience and know-how ensures that Redshift is always at the forefront of new trends and advances in the industry.

Phase-Functioned Neural Networks for Character Control
TUESDAY, AUG 1 | Booth Theater 403 | 2:30pm - 2:55pm

Producing animation systems for AAA video games requires thousands of hours of work - hundreds of animators, programmers, and powerful tools dedicated to handling the huge complexity of the task. What if all of this could be replaced with just one simple process? In this talk, I show how a neural network can be trained for exactly this purpose - utilising gigabytes of raw motion capture data to create a compact animation controller that requires just a few megabytes of memory and milliseconds to compute, yet can adapt to complex situations such as traversing rough terrain, crouching, and jumping.
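
The core trick can be sketched compactly: the controller's weights are not fixed but are regenerated every frame as a cyclic Catmull-Rom blend of a few learned control weight sets indexed by the gait phase. The sizes below are illustrative assumptions; see the accompanying SIGGRAPH 2017 paper for the real model.

    import torch

    K = 4                           # control points around the phase circle
    IN, OUT = 342, 311              # illustrative layer sizes

    W = [torch.randn(OUT, IN) * 0.01 for _ in range(K)]  # learned in training

    def phase_weights(phase):
        """Cyclic Catmull-Rom blend of weight matrices at phase in [0, 1)."""
        t = phase * K
        i1 = int(t) % K
        i0, i2, i3 = (i1 - 1) % K, (i1 + 1) % K, (i1 + 2) % K
        w = t - int(t)
        a0, a1, a2, a3 = W[i0], W[i1], W[i2], W[i3]
        return a1 + w * (0.5 * (a2 - a0)
                 + w * (a0 - 2.5 * a1 + 2.0 * a2 - 0.5 * a3
                 + w * (1.5 * (a1 - a2) + 0.5 * (a3 - a0))))

    x = torch.randn(IN)             # character/trajectory state this frame
    y = phase_weights(0.37) @ x     # one layer of the runtime controller
    print(y.shape)  # torch.Size([311])

Only the K control weight sets are stored, which is how gigabytes of motion capture distill into a controller of a few megabytes that costs milliseconds per frame.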

Daniel Holden (Animation Researcher, Ubisoft, Montreal)

Daniel Holden is an Animation Researcher at Ubisoft Montreal. He completed his PhD in 2017 with research focusing on how Deep Learning can be used to solve problems in the field of character animation.

Valerian and the City of a Thousand Planets
TUESDAY, AUG 1 | Booth Theater 403 | 11:00am - 11:25am

Luc Besson's futuristic sci-fi film Valerian and the City of a Thousand Planets is set in 'a universe without boundaries,' and this aptly describes the film's diverse and abounding digital environments and creatures. Weta Digital worked on over 1300 shots, each packed with imaginative detail and saturated with unique digital characters. Weta Digital VFX Supervisor Martin Hill will describe the key technologies, artistry, and outside-the-box thinking required to pull off this eclectic array of creatures and characters.

Martin Hill (VFX Supervisor, Weta Digital)

Martin Hill joined Weta Digital in 2004 for King Kong, working on the look development for Kong. As a Visual Effects Supervisor, Martin has worked on several films, including Prometheus, The Wolverine, and Furious 7 (where he led the team in creating a digital Paul Walker to help finish the film). Martin was nominated for an Academy Award for Visual Effects on Prometheus and received an Academy Technical Achievement Award for the creation of an efficient spherical-harmonics-based lighting system. Martin is currently supervising Luc Besson's Valerian and the City of a Thousand Planets.

OpenGL and Vulkan Support for 2017
TUESDAY, AUG 1 | Booth Theater 403 | 3:30pm - 3:55pm

Learn how NVIDIA continues improving both Vulkan and OpenGL for cross-platform graphics and compute development. This high-level talk is intended for anyone wanting to understand the state of Vulkan and OpenGL in 2017 on NVIDIA GPUs. For OpenGL, the latest standard update maintains the compatibility and feature-richness you expect. For Vulkan, NVIDIA has enabled the latest NVIDIA GPU hardware features and now provides explicit support for multiple GPUs. And for either API, NVIDIA's SDKs and Nsight tools help you develop and debug your application faster.

Mark Kilgard (Principal System Software Engineer, NVIDIA)

Mark Kilgard is a Principal System Software Engineer at NVIDIA working on OpenGL, vector graphics, web page rendering, and GPU-rendering algorithms. Mark has over 25 years' experience with OpenGL including the specification of numerous important OpenGL extensions. He implemented the OpenGL Utility Toolkit (GLUT) library. Mark authored two books and is named on over 60 graphics-related patents.

Compiling a Volumetric Fluid Workflow in Eddy
TUESDAY, AUG 1 | Booth Theater 403 | 4:00pm - 4:25pm

Eddy is a new volumetric compositing, simulation, and rendering framework that runs entirely on NVIDIA GPUs. In order to provide fast feedback to artists, Eddy compiles many of its separable volumetric operations into inlined, optimized CUDA code. Eddy's general compiler approach provides an untyped, Python-style syntax, which is compiled and linked on the fly without noticeable delay for the artist. On one hand, Eddy uses this as a backend to compile a dependency graph of volume operations, which feeds the sparse fluid simulations as emission sources. On the other hand, it is used as a frontend to script field functions directly and as a shading language to drive the volumetric rendering. This talk will show how dynamic compilation in CUDA enables powerful workflows for the artist while remaining efficient and user-friendly.
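
Eddy's compiler itself is proprietary, but the underlying capability - generating CUDA source at runtime, compiling it on the fly, and launching it immediately - can be sketched with PyCUDA, which JIT-compiles kernel strings at runtime:

    import numpy as np
    import pycuda.autoinit                     # creates a CUDA context
    import pycuda.gpuarray as gpuarray
    from pycuda.compiler import SourceModule

    # Imagine this string being emitted from an artist's field-function graph.
    kernel_src = """
    __global__ void scale_density(float *field, float gain, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) field[i] *= gain;
    }
    """
    mod = SourceModule(kernel_src)             # compiled on the fly
    scale_density = mod.get_function("scale_density")

    field = gpuarray.to_gpu(np.ones(1024, dtype=np.float32))
    scale_density(field.gpudata, np.float32(2.5), np.int32(1024),
                  block=(256, 1, 1), grid=(4, 1))
    print(field.get()[:4])  # [2.5 2.5 2.5 2.5]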

Christoph Sprenger (Director & Co-Founder, Vortechs FX)

Christoph Sprenger is a co-founder of VortechsFX, a small New Zealand-based company providing high-end volumetric workflows for visual effects, broadcast, and commercials. He has been working in the film industry for the last 18 years in various capacities at studios such as Weta Digital, Animal Logic, Double Negative, Rising Sun Pictures, and Scanline. Throughout his career, Christoph has been driven by the fusion of art and science and is excited to continue on this path.

Introducing a Stable and Controllable Parametric Model for Neural Texture Synthesis, Time-Varying Weathering and Style Transfer
TUESDAY, AUG 1 | Booth Theater 403 | 4:30pm - 4:55pm

This talk introduces a powerful new parametric model for optimization-based texture synthesis using neural networks. This texture synthesis approach can be extended to also perform time-varying weathering and style transfer. The new parametric model uses histograms and covariance of neural activations to achieve state-of-the-art results on all three applications. A multiscale, coarse-to-fine synthesis strategy is also introduced, which improves both synthesis quality and speed. The approach is implemented in CUDA and takes advantage of the latest Pascal GPUs to achieve a 5x speedup over previous optimization-based neural texture synthesis methods.
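
The covariance part of the model is closely related to the classic Gram-matrix texture loss and is easy to sketch; the talk's full parametric model also matches histograms of activations, which is omitted here for brevity.

    import torch

    def activation_covariance(feats):
        """feats: (C, H, W) activations from one CNN layer -> (C, C) matrix."""
        c, h, w = feats.shape
        x = feats.reshape(c, h * w)
        x = x - x.mean(dim=1, keepdim=True)   # center each channel
        return (x @ x.t()) / (h * w)

    def texture_stat_loss(feats_synth, feats_ref):
        """Mismatch between synthesized and reference texture statistics."""
        d = activation_covariance(feats_synth) - activation_covariance(feats_ref)
        return torch.sum(d ** 2)

    # During synthesis, this loss is minimized w.r.t. the output image's pixels.
    loss = texture_stat_loss(torch.randn(64, 32, 32), torch.randn(64, 32, 32))
    print(loss.item() > 0)  # True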

Eric Risser (CTO, Artomatix)

Eric Risser is an expert in the combined fields of artificial intelligence and computer graphics and is a pioneer in machine creativity. Before founding Artomatix, he authored six technical publications during his academic career at Columbia University and Trinity College Dublin. He has given talks at top industry and academic conferences such as the Game Developers Conference (GDC) and SIGGRAPH.

Introducing Project Holodeck
TUESDAY, AUG 1 | Booth Theater 403 | 5:00pm - 5:25pm

NVIDIA is committed to the advancement of next-generation virtual reality, complete with stunning high fidelity, dynamic physical behaviors, and real-time social interactions. Within Holodeck, friends will be able to create and share games, families will be able to explore vacation plans and experiences, designers will be able to evaluate new models, and robots will be able to learn complex new tasks. We'll discuss the Holodeck architecture and use cases. Holodeck's early access program will launch in the fall; come hear the talk, and then try Project Holodeck demos at the NVIDIA booth!

David Weinstein (Director Pro VR, NVIDIA)

David Weinstein is the Director for Professional Virtual Reality at NVIDIA. As Director of Pro VR, he is responsible for NVIDIA's professional VR products, projects, and SDKs. Prior to joining NVIDIA, Dave founded and ran three tech start-up companies.

Beyond Media and Entertainment: How AR & VR are Changing Design
TUESDAY, AUG 1 | Booth Theater 403 | 5:30pm - 5:55pm

VR is disrupting many design professions, from architecture to engineering, construction to manufacturing, and of course, media and entertainment. Tomorrow's designers, makers, and artists will simply rely on VR as a core technology to get their jobs done. This is because immersive experiences help us perceive a design as though we're physically present, allowing creators to better predict outcomes and make more informed decisions about their designs and artistry. As a world leader in 3D design software with deep knowledge of media and entertainment, Autodesk, in conjunction with its partner NVIDIA, is in a unique position to leverage new immersive technologies like VR to build tools that will usher in the Future of Making Things.

Joel Pennington (AR & VR Strategist, Autodesk)

Joel Pennington drives Autodesk's AR & VR strategy. In 2014, Joel joined a team to lead research and design for democratizing AR & VR technology for architects, engineers, construction professionals, and manufacturing companies. He was hired at Autodesk Singapore in 2010 to head design for MotionBuilder and lead a multi-year collaboration with James Cameron's company, Lightstorm, to rebuild Autodesk's software for the Avatar sequels. Prior to joining Autodesk, Joel spent 10 years working on virtual production features for Disney and AAA game titles for Electronic Arts, holding management, art, and technical positions throughout. Joel graduated from the Vancouver Film School in 2001. His short film, A Cheese Without Cause, won several awards at film festivals throughout North America.

Aftermath – A New Way of Debugging Crashes on the GPU
WEDNESDAY, AUG 2 | Booth Theater 403 | 9:30am - 9:55am

Come learn what NVIDIA is doing to aid in debugging GPU crashes, hangs, and TDRs during development and long after release. With rendering APIs increasing in complexity, we need a better way of debugging issues that present on the GPU. Currently, there's no way to debug a GPU crash after the fact, which is a problem not only during development of a game but also after release. Many game developers collect telemetry about CPU crashes - but what about GPU crashes? This session presents a solution to that problem.

Alex Dunn (Developer Engineer, NVIDIA)

Alex Dunn, a developer technology engineer at NVIDIA, spends his days passionately working toward advancing real-time visual effects in games. A graduate of Abertay University's Games Technology course, Alex got his first taste of graphics programming on consoles. Now at NVIDIA, he spends his time developing cutting-edge rendering techniques to ensure the highest quality and best player experience possible.

Go with the Flow: Pixar's Interactive Look-Development Tool
WEDNESDAY, AUG 2 | Booth Theater 403 | 10:00am - 10:25am

Pixar's new look-development tool Flow enables artists to create amazing shaders in a fully interactive environment. The software allows them to create and visualize very complex procedural shading networks, as required by feature-film production. Flow is built on top of Rtp, an NVIDIA OptiX-based real-time GPU path tracer developed at Pixar, as well as USD, Pixar's open-source Universal Scene Description. We'll show how these technologies are combined to create artist-focused workflows that are exploration-driven and more interactive than ever before. We'll also talk about how shading networks are implemented inside our path tracer, which makes use of key OptiX features.
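
A shading network is ultimately a small dataflow graph evaluated per shading point; the toy Python evaluation below is only meant to illustrate that structure, not Flow's implementation, which compiles such graphs for execution inside the OptiX path tracer.

    def checker(u, v, scale=8):
        """Procedural pattern node."""
        return 1.0 if (int(u * scale) + int(v * scale)) % 2 == 0 else 0.0

    def mix(a, b, t):
        """Blend node: linear interpolation between two inputs."""
        return a * (1.0 - t) + b * t

    def evaluate_network(u, v):
        """A two-node network: a checker pattern drives a color blend."""
        t = checker(u, v)
        return (mix(0.8, 0.1, t), mix(0.2, 0.6, t), mix(0.1, 0.9, t))

    print(evaluate_network(0.13, 0.77))  # albedo fed to the surface's BSDF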

Florian Hecht (Technical Director, Pixar)

Florian Hecht is a graphics software engineer at Pixar working on speeding up rendering and shading workflows, in particular using GPU technology. He joined Pixar in 2011 after doing graphics research at UC Berkeley. He received an MS in computer science from the University of Karlsruhe, Germany, as well as one from the Georgia Institute of Technology.

Peter Roe (Technical Director, Pixar)

Peter Roe started working at Pixar in 2010, coming from a seven-year career in video games in the UK, and has been a sets-shading TD on feature films and shorts including Brave, Inside Out, The Good Dinosaur, The Blue Umbrella, Piper, and Coco. Peter helped design Flow and has been using it exclusively in his work on the upcoming Pixar film Coco.

War for the Planet of the Apes
WEDNESDAY, AUG 2 | Booth Theater 403 | 10:30am - 10:55am

War for the Planet of the Apes is the stunning conclusion to the acclaimed Planet of the Apes trilogy. Bringing apes with intelligence and emotional resonance to the screen has carved out a new standard and a new era in the realistic animation of digital lead characters. We'll explain how Weta Digital's high-performance computing solutions facilitated every part of the film - from vast digital environments featuring thousands of digital characters down to the small details in each of the ape characters, like the way snow catches in their fur. We'll discuss Weta's advanced outdoor motion capture system, fur simulation tools, and facial capture and modelling technology.

Anders Langlands (VFX Supervisor, Weta Digital)

Anders Langlands recently joined Weta Digital as a Visual Effects Supervisor, and has just completed his first project at the company - the critically acclaimed CG blockbuster War for the Planet of the Apes. Prior to joining Weta, Anders was a Visual Effects Supervisor for 13 years at MPC. He has 17 film credits to his name including X-Men: Apocalypse, Wrath of the Titans, and X-Men: Days of Future Past, which earned him a BAFTA nomination. In 2016, he was nominated for Best Visual Effects at the Academy Awards® for his work on The Martian.

Passengers: Awakening VR, When Film Meets VR
WEDNESDAY, AUG 2 | Booth Theater 403 | 11:00am - 11:25am

We'll present our journey to create a real-time VR experience leveraging film VFX workflows and assets. We'll illustrate this by talking about our work to create the Passengers: Awakening VR Experience, and also some work we're doing in the virtual production space. We'll detail some of the challenges the developers needed to overcome, from asset-build technique complexity to major differences between offline rendering and 90 fps real-time VR workflows. Finally, we'll conclude with future work and a discussion of where these VR workflows can directly apply to film VFX creation and virtual production.

Francesco Giordana (Researcher, MPC Film)

Francesco Giordana leads the development of real-time technologies for film at MPC, with a special focus on virtual production and VR. Francesco started his career in real-time rendering and video games, first in a research lab and then at Guerrilla Games. He then spent four years at Double Negative VFX, where he wrote a GPU-accelerated fur system and led an R&D team dedicated to the development of digital creatures for film. After that, he joined ILM for two years, focusing on real-time digital acting, in particular facial performance capture and real-time rendering of characters.

Design with Virtual Reality in Architecture, Engineering & Construction
WEDNESDAY, AUG 2 | Booth Theater 403 | 11:30am - 11:55am

Learn how Gensler is using the latest technology in virtual reality across all aspects of the design process for the AEC industry. We'll cover how VR has added value to the process when using different kinds of VR solutions. Plus we'll talk about some of the challenges Gensler has faced with VR in terms of hardware, software, and workflows. Along with all of this, NVIDIA's latest VR visualization tools are helping with the overall process and realism of our designs.

Scott DeWoody (Firmwide Creative Media Manager, Gensler)

Scott DeWoody has always had an affinity for art and technology. His core focus is on lighting and rendering techniques using 3ds Max, V-Ray, Iray, Substance Designer, and Adobe Photoshop; a day does not go by when he is not using one of these applications. Image quality and workflow are the top priorities in his work, and he is constantly studying color theory, composition, and new ways to produce the most effective results possible. He has worked at M. Arthur Gensler Jr. & Associates for the past nine years as a visualization artist and manager, serving numerous clients including NVIDIA Corporation, ExxonMobil, and many more. Currently, he is exploring the new possibilities of architecture in the interactive space with gaming platforms, augmented reality, and virtual reality.

The Power of Integrated High-End Visualization and VR in Product Design with 3DEXPERIENCE CATIA
WEDNESDAY, AUG 2 | Booth Theater 403 | 12:00pm - 12:25pm

Dassault Systemes is the worldwide PLM leader; its 3DEXPERIENCE platform is used to think, design, and produce everything from the smallest objects to the most complex aerospace rockets, or even full cities. The latest R2017x version adds major improvements on the graphics engine side. Leveraging native CAD or PLM data, designers can now use compelling texturing for a lifelike experience and review their designs in VR without any data transformation required. The talk will show how, from early simulation and automatic design optimization through final design review, VR running natively on the platform lets designers think about, experience, and validate their products before they even exist.

Xavier Melkonian (Director, CATIA Design Portfolio, Dassault Systemes)

After over 25 years of software industry experience, Xavier Melkonian joined the DASSAULT SYSTEMES CATIA brand in January 2009. As director of CATIA Design, he is responsible for Dassault Systemes' CATIA strategy for design-role solutions for the transportation and product design industries. The CATIA design promise and mission is to provide best-in-class industrial design, Class-A surface modeling, and visualization and immersive solutions to support customers' challenges and their strategy to "gain the competitive advantage by design."

Real-Time User-Guided Image Colorization With Learned Deep Priors
WEDNESDAY, AUG 2 | Booth Theater 403 | 12:30pm - 12:55pm

We propose a deep learning approach for user-guided image colorization. The system directly maps a grayscale image, along with sparse, local user "hints" to an output colorization with a Convolutional Neural Network (CNN). Rather than using hand-defined rules, the network propagates user edits by fusing low-level cues along with high-level semantic information, learned from large-scale data. We train on a million images, with simulated user inputs. To guide the user towards efficient input selection, the system recommends likely colors based on the input image and current user inputs. The colorization is performed in a single feed-forward pass, enabling real-time use. Even with randomly simulated user inputs, we show that the proposed system helps novice users quickly create realistic colorizations, and show large improvements in colorization quality with just a minute of use. In addition, we show that the framework can incorporate other user "hints" as to the desired colorization, showing an application to color histogram transfer.
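
The system's interface can be sketched in a few lines of PyTorch: the network consumes the grayscale L channel plus sparse ab hints and a mask marking where hints exist, and predicts the full ab chrominance in one forward pass. The tiny stand-in below only mirrors that interface; the real model is a much deeper CNN trained on a million images.

    import torch
    import torch.nn as nn

    class GuidedColorizer(nn.Module):
        def __init__(self):
            super().__init__()
            # channels: 1 (grayscale L) + 2 (ab hints) + 1 (hint mask) = 4
            self.net = nn.Sequential(
                nn.Conv2d(4, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 2, 3, padding=1), nn.Tanh(),  # ab in [-1, 1]
            )

        def forward(self, L, ab_hints, mask):
            return self.net(torch.cat([L, ab_hints, mask], dim=1))

    L = torch.rand(1, 1, 256, 256)           # grayscale input
    hints = torch.zeros(1, 2, 256, 256)      # sparse user strokes
    mask = torch.zeros(1, 1, 256, 256)
    hints[0, :, 100, 100] = torch.tensor([0.3, -0.2])
    mask[0, 0, 100, 100] = 1.0
    print(GuidedColorizer()(L, hints, mask).shape)  # torch.Size([1, 2, 256, 256])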

Richard Zhang (Ph.D., University of California, Berkeley)

Richard Zhang is a PhD candidate in the EECS department at the University of California, Berkeley, advised by Professor Alexei (Alyosha) Efros. He obtained his Bachelor of Science and Master of Engineering degrees from Cornell University in Electrical and Computer Engineering (ECE). His research interests are in Computer Vision, Machine Learning, Deep Learning, and Sensor Fusion. He is a recipient of a 2017 Adobe Fellowship award.

Jun-Yan Zhu (Ph.D. student, Berkeley AI Research Lab(BAIR))

Jun-Yan Zhu is a Ph.D. student at the Berkeley AI Research (BAIR) Lab, working on computer vision, graphics and machine learning with Professor Alexei A. Efros. He received his B.E. from Tsinghua University in 2012, and was a Ph.D. student at CMU from 2012-13. His research goal is to build machines capable of recreating the visual world. Jun-Yan is currently supported by a Facebook Graduate Fellowship.

NVIDIA Research: Zoom, Enhance, Synthesize! Magic Image Upscaling and Material Synthesis Using Deep Learning
WEDNESDAY, AUG 2 | Booth Theater 403 | 1:00pm - 1:25pm

Recently, deep learning has revolutionized computer vision and other recognition problems. During 2016, "image synthesis" techniques started to appear that use deep neural networks to apply style transfer algorithms for image restoration. Extending this field, NVIDIA Research has developed techniques that use AI, machine learning, and deep learning to greatly improve the process of creating game-ready materials. We'll look at several tools that leverage this research, then take a deep dive into the design of a neural network that enables 2x, 4x, and 8x image magnification (or "super resolution") of input pictures and textures.
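
As a rough illustration of what such an upscaling network does, here is a generic ESPCN-style sub-pixel model in PyTorch. This is a sketch of the general super-resolution technique only, not the network developed by NVIDIA Research; every layer choice is a placeholder.

# Illustrative 2x super-resolution network (PyTorch) -- a generic
# sub-pixel model, not NVIDIA's research network.
import torch
import torch.nn as nn

class SuperRes2x(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            # Predict scale^2 * 3 channels, then rearrange them into pixels.
            nn.Conv2d(32, 3 * scale * scale, 3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # (C*r^2, H, W) -> (C, rH, rW)

    def forward(self, x):
        return self.shuffle(self.features(x))

lowres = torch.rand(1, 3, 128, 128)
highres = SuperRes2x()(lowres)  # -> shape (1, 3, 256, 256)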

Andrew Edelsten (Senior Developer Technologies Manager, NVIDIA)

Andrew Edelsten has worked in the games and visual arts industry for 20 years. Starting his career making computer games and 3D engines in Australia in the mid 90s, Andrew had a two-year sojourn in Europe before starting work at NVIDIA in 2010. For the last year, Andrew and his team have been researching novel deep learning approaches to games industry pain points with the overarching goal to make games more beautiful and engaging.

Gamification of VR in Commercial
WEDNESDAY, AUG 2 | Booth Theater 403 | 1:30pm - 1:55pm

VR is not just the domain of media and entertainment any more. Learn how commercial VR is starting to take off, and how boundaries pushed in the gaming space are changing the way VR is used in commercial environments for training, manufacturing, healthcare, and more.

Gary Radburn (Director, Virtual and Augmented Reality, Dell)

Gary Radburn is the Director of Virtual and Augmented Reality globally at Dell Inc. As part of his role, he manages the team charged with developing and delivering VR/AR technology, works closely with Dell customers on VR/AR deployments, and founded Dell's VR Centers of Excellence, with locations around the world where businesses and consumers can experience and learn more about VR in the real world. With more than 27 years in the technology industry, ranging from engineering to sales and marketing, Gary has experience across all aspects of designing successful products and solutions and bringing them to market. He has worked for companies such as Digital Equipment and 3Com, and for the past 15 years at Dell. Originally from the UK, where he managed the OptiPlex client business for EMEA, he went on to lead the Workstation Solutions team in the US and then championed graphics virtualization for engineering applications. This has now progressed into other domains, and with the fast adoption of VR and AR solutions, Gary has taken the helm in driving this exciting new area of business.

Light Field Rendering and Streaming for VR and AR
WEDNESDAY, AUG 2 | Booth Theater 403 | 2:00pm - 2:25pm

We'll discuss OTOY's cutting-edge light field rendering toolset and platform, which allows for immersive experiences on mobile HMDs and next-gen displays, making it ideal for VR and AR. OTOY is developing a groundbreaking light field rendering pipeline, including the world's first portable 360 LightStage capture system and a cloud-based graphics platform for creating and streaming light field media for VR and emerging holographic displays.

Jules Urbach (CEO, OTOY)

Jules Urbach is a pioneer in computer graphics, streaming, and 3D rendering with over 25 years of industry experience. At age 18 he made his first game, Hell Cab (Time Warner Interactive), one of the first CD-ROM games ever created. Six years later, Jules founded Groove Alliance, which created the first 3D game ever available on Shockwave.com (Real Pool). Currently, Jules is busy working on his two latest ventures, OTOY and LightStage, which aim to revolutionize 3D content capture, creation, and delivery.

NVIDIA Research: Learning Light Transport the Reinforced Way
WEDNESDAY, AUG 2 | Booth Theater 403 | 2:30pm - 2:55pm

We show that the equations for reinforcement learning and light transport simulation are related integral equations. After a brief introduction to reinforcement learning and light transport simulation, we visualize the correspondence between the equations by pattern matching. Based on this correspondence, we derive a scheme to learn importance while sampling path space. The new approach is demonstrated in a consistent light transport simulation algorithm that uses reinforcement learning to progressively learn probability density functions for importance sampling. Furthermore, we show that our method is easy to integrate into any existing path tracer and can greatly increase rendering efficiency.
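
In common textbook notation (not necessarily the talk's own), the pattern match can be written side by side: the rendering equation and the integral form of Q-learning share one structure,

L(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_s(\omega_i, x, \omega_o)\, \cos\theta_i\, L\big(h(x, \omega_i), -\omega_i\big)\, d\omega_i

Q(s, a) = r(s, a) + \gamma \int_{A} \pi(a' \mid s')\, Q(s', a')\, da'

with radiance L in the role of the value Q, emission L_e in the role of the reward r, the ray-casting function h(x, \omega_i) in the role of the state transition, and the BSDF-cosine product in the role of the discounted policy.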

Ken Dahm (Research Scientist, NVIDIA)

Ken Dahm is a research scientist at the NVIDIA Advanced Rendering Center in Berlin. His areas of interest include medical imaging, deep learning, reinforcement learning and function approximation in computer graphics, combined density estimation and bidirectional path tracing using implicit multiple importance sampling, GPU-assisted ray front rendering, GPU-based ray marching with distance fields, and parallel algorithms for partition caches for divide-and-conquer ray tracing. Ken has an M.S. in visual computing from Saarland University.

Assembly Chain Training with Professional VR by Optis
WEDNESDAY, AUG 2 | Booth Theater 403 | 3:00pm - 3:25pm

Optis has been involved in advanced optical simulation for the past 25 years and has recently invested in VR for virtual prototyping. Its latest HIM, built for human ergonomics evaluation, combines with advanced, real-time, physics-based rendering to enable precise environment reproduction for prototyping or training. We'll present the latest integration for assembly line training with HTC Vive and feedback powered by NVIDIA PhysX. Companies such as Tesla Motors and Bentley are proud early adopters of this solution. We'll demonstrate our software, show customer use cases and their data, and explain how to improve the VR experience with haptics and audio simulation in the future.

Nicolas Dalmasso (Innovation Director, Optis)

Nicolas Dalmasso created his company, SimplySim, in 2008 with the goal of providing highly accurate real-time simulation middleware to compete with Virtools and Unity. SimplySim was acquired by Optis in 2011 to bring real-time and VR capabilities to the Optis portfolio. After driving the development and deployment of the different real-time products available at Optis (Theia-RT, HIM, and VR Xperience), Nicolas is now leading innovation at the corporate level. Nicolas studied computer graphics and advanced computer science at the University of Nice and Polytech Engineering School.

Automating VR and Photoreal Imagery From Siemens Teamcenter
WEDNESDAY, AUG 2 | Booth Theater 403 | 3:30pm - 3:55pm

Learn how manufacturers are automating their digital photorealistic and VR/AR visualization pipelines, and bringing them in-house, out of Siemens Teamcenter and NX through JT. This is leading to improved efficiency and cost reduction and, crucially, giving manufacturers control over digital assets so they can be repurposed across the business. We'll demonstrate how to set up an automated visual digital pipeline out of Siemens Teamcenter into NVIDIA Iray and Epic's Unreal Engine, accounting for configuration rules and buildability.

Ben Widdowson (Pre Sales Consultant, Lightworks)

Ben Widdowson is head of sales for EMEA at Lightworks. He works with customers to ascertain their needs, explaining the latest developments in Slipstream and helping find the best solution for them. With a background in 3D design, engineering, and sales, Ben understands the importance of being able to rapidly visualise your ideas, whether during design and engineering or at the point of sale. Slipstream will accelerate the lighting, look development, and design visualisation work of engineers, designers, and marketers across all sectors and disciplines.

NVIDIA Research: Charted Metropolis Light Transport
WEDNESDAY, AUG 2 | Booth Theater 403 | 4:00pm - 4:25pm

Inspired by a simpler reformulation of primary sample space Metropolis light transport, we derive a novel family of general Markov chain Monte Carlo algorithms called charted Metropolis-Hastings. The family introduces the notion of sampling charts to extend a given sampling domain, making it easier to sample the desired target distribution and to escape local maxima through coordinate changes.
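
For orientation, every algorithm in this family ultimately rests on the standard Metropolis-Hastings acceptance test; the charted variant changes where proposals live, not the shape of the test. With target density \pi and proposal density T, a move from x to y is accepted with probability

a(x \to y) = \min\left(1, \frac{\pi(y)\, T(y \to x)}{\pi(x)\, T(x \to y)}\right)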

Jacopo Pantaleoni (Senior Research Scientist, NVIDIA)

Jacopo Pantaleoni has explored the field of photo-realistic computer graphics in great depth for more than a dozen years, both as a large-scale software architect and as a basic researcher. He has spent the last several years investigating new stochastic methods for physically based light transport simulation, developing novel rendering techniques for the film VFX industry (contributing key technology, the PantaRay engine, to the Oscar-winning special effects in Avatar), and providing new algorithms for real-time ray tracing. After a brief pause spent leading investigations into novel massively parallel algorithms for DNA sequencing, he is now back to creating beautiful images.

MDL in Substance Designer
WEDNESDAY, AUG 2 | Booth Theater 403 | 4:30pm - 4:55pm

Substance Designer is a powerful tool for creating physically based procedural materials and MDL materials. In this presentation, we will look at the node-based workflow for creating an MDL material and at utilizing the procedural nature of Substance to drive input patterns for the material.

Wes McDermott (Substance Integration Product Manager / Training, Allegorithmic)

Wes McDermott is a technical artist who strives to improve his work by finding a balance between technical knowledge and artistic skill. He works for Allegorithmic as the Product Manager for Substance Integrations. He also produces training for Substance Academy and studios. allegorithmic.com

Beyond Visualization, Harnessing the Power of Compute for Design
WEDNESDAY, AUG 2 | Booth Theater 403 | 5:00pm - 5:25pm

Autodesk Project Dreamcatcher takes the next step in the world of computation, artificial intelligence, and machine learning by harnessing the power of computing to deliver on the promise of Computer Aided Design. Today's GPUs allow for massive exploration of the design space for any problem, empowering designers and engineers to let computational capacity truly aid them in design and problem solving. Come learn how Autodesk is harnessing the power of computation in the cloud, powered by tomorrow's next-generation hardware, to help everyone make better decisions.

Mitch Muncy (Senior Product Line Manager, Simulation, Autodesk)

Mitch Muncy is a Simulation Product Manager at Autodesk with almost 20 years in the CAE industry. His career has been focused on helping companies optimize product design through upfront simulation and validation. Mitch holds a degree in Mechanical Engineering from the University of California, Irvine. Before joining Autodesk, he was Executive Vice President of NEi Nastran, where he managed day-to-day operations of the sales, marketing, and technical teams.

Train Your Own AI Denoiser
WEDNESDAY, AUG 2 | Booth Theater 403 | 5:30pm - 5:55pm

Learn about the new AI denoiser that will be available in the OptiX 5.0 SDK. We will explain why we chose to apply AI to the denoising problem and share our methodology, learnings, and pitfalls so that you can train your own AI-based denoiser. Finally, we'll show early results and how you can add denoising to your renderer with OptiX 5.0 or train your own denoiser.
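
To make the "train your own" part concrete: a denoiser of this kind is commonly trained on pairs of noisy low-sample-count renders and clean high-sample-count references. The PyTorch sketch below shows only that generic recipe; it is not the OptiX 5.0 denoiser or its training code, and every layer and hyperparameter is a placeholder.

# Generic sketch of training an image denoiser on (noisy, clean) render
# pairs (PyTorch). Illustrative only -- not the OptiX 5.0 denoiser.
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()  # L1 is a common choice for denoising

for _ in range(100):                  # training-loop sketch
    noisy = torch.rand(8, 3, 64, 64)  # stand-in for low-sample renders
    clean = noisy.clone()             # stand-in for high-sample references
    opt.zero_grad()
    loss = loss_fn(denoiser(noisy), clean)
    loss.backward()
    opt.step()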

Martin-Karl Lefrancois (DevTech Software Engineer Lead, NVIDIA)

Martin-Karl Lefrancois is a senior software engineer and team lead in the Developer Technology organization at NVIDIA in Berlin. Martin-Karl works with various NVIDIA rendering and core development teams to bring to clients the best rendering experience. Prior to NVIDIA, he worked at mental images to deliver automatic GPU support in mental ray. After graduating with a degree in computer science and mathematics from the University of Sherbrooke in Quebec, he worked as a graphic developer for nearly 10 years at Softimage in Montreal and Tokyo before leading the core game engine team at A2M.

Train Your Own AI Denoiser
THURSDAY, AUG 3 | Booth Theater 403 | 10:00am - 10:25am

Learn about the new AI denoiser that will be available in the OptiX 5.0 SDK. We will explain why we chose to apply AI to the denoising problem and share our methodology, learnings, and pitfalls so that you can train your own AI-based denoiser. Finally, we'll show early results and how you can add denoising to your renderer with OptiX 5.0 or train your own denoiser.

Martin-Karl Lefrancois (DevTech Software Engineer Lead, NVIDIA)

Martin-Karl Lefrancois is a senior software engineer and team lead in the Developer Technology organization at NVIDIA in Berlin. Martin-Karl works with various NVIDIA rendering and core development teams to bring to clients the best rendering experience. Prior to NVIDIA, he worked at mental images to deliver automatic GPU support in mental ray. After graduating with a degree in computer science and mathematics from the University of Sherbrooke in Quebec, he worked as a graphic developer for nearly 10 years at Softimage in Montreal and Tokyo before leading the core game engine team at A2M.

Il Gigante: Michelangelo's David in VR
THURSDAY, AUG 3 | Booth Theater 403 | 10:30am - 10:55am

In 1998, Stanford University teamed up with the Soprintendenza ai beni artistici e storici per le province di Firenze to laser scan Michelangelo's David. The paper was published at SIGGRAPH in 2000, but it was not until 2009 that the entire dataset was pieced together. It totals nearly a billion polygons and has 1/4 mm accuracy. Because of the size of the dataset, it has only ever been viewable in 2D renders; however, sculpture is made to be experienced in the round. Breakthroughs in virtual reality mean that users can now have an immersive experience of classical 3D sculpture as if they were there in person.

Chris Evans (Lead Technical Animator, EPIC)

Christopher Evans is Lead Technical Animator at Epic Games, where he helps create memorable characters and innovative character technologies. Chris has a career of pushing for innovation in both the film and game industries (he worked as a Character Technical Director at ILM on films like "Transformers" and James Cameron's "Avatar"). He also served as the Director of Art and Animation Technology at Crytek for many years, where he led teams to successfully merge multiple game and film technologies into what have become standards in many commercial game engines today.

Guardians of the Galaxy Vol. 2 VFX Breakdown
TUESDAY, AUG 1 | Booth Theater 403 | 3:00pm - 3:25pm

Guardians of the Galaxy Vol. 2 is the 15th film in the ever-popular Marvel Cinematic Universe, and the sequel to 2014's much-loved Guardians of the Galaxy. The Guardians films have a distinctive storytelling style and energy that makes for colorful, wild, and eye-catching visual effects. Weta Digital was responsible for the third act, which is set on a planet built with a highly complex structure based on a mathematical formula called a Mandelbulb – a pattern that manifests as a series of three-dimensional fractals. In this talk, Kevin will delve into some of the latest tech that made it possible to work at this scale with such spectacular results, including Weta's physically correct pre-lighting software Gazebo, which allows artists to directly review shots and manipulate lighting setups in real time.

Kevin Smith (VFX Supervisor, Weta Digital)

TBA

vkFX: Effect'ive Approach for Vulkan API
THURSDAY, AUG 3 | Booth Theater 403 | 11:30am - 11:55am

This presentation is intended for graphics developers who are developing against the Vulkan API using low-level C++ code. The purpose is to inspire them to implement useful and meaningful approaches easily with this low-level API. Attendees will learn the basic concepts of this approach through the presentation of use cases and, we hope, will leave with enough interest to visit the open-source repository and use the project in their own work.

Tristan Lorach (Manager, DevTech Professional Visualization Group, NVIDIA)

Tristan Lorach is the manager of developer technical relations for the professional visualization team at NVIDIA. He participates on a variety of projects with NVIDIA partners, contributes to R&D, and writes demos for new GPU chips. Tristan discovered the world of computer graphics through his active contribution to the demoscene world (Amiga). After graduating in 1995, Tristan developed a series of 3D realtime interactive installations for exhibitions and events all over the world. From the creation of a specific engine for digging complex galleries into a virtual solid, to the conception of new 3D human interfaces for public events, Tristan always wanted to work in between the hardware technology and the implementation of new and creative ideas, using cutting-edge hardware.

Overview of Advanced Rendering at NVIDIA
THURSDAY, AUG 3 | Booth Theater 403 | 12:00pm - 12:25pm

We'll present an overview of OptiX 5.0, MDL 2017.1, and NVIDIA IndeX 1.5.

Tom-Michael Thamm (Director for Software Product Management, NVIDIA)

Tom-Michael Thamm is Director for Software Product Management at the NVIDIA Advanced Rendering Center (ARC) in Berlin, Germany, and is responsible for all software products, such as NVIDIA IndeX and NVIDIA Iray. With his team, he manages customer support as well as general product definition and positioning. Before NVIDIA ARC, Mr. Thamm worked for mental images. He has been in the 3D visualization business for over 25 years and has led several key software projects and products, such as NVIDIA IndeX for large-volume visualization. He studied mathematics.

Advancement in Design of Resto Mod Vehicles
THURSDAY, AUG 3 | Booth Theater 403 | 12:30pm - 12:55pm

We'll go over how scan data, NVIDIA technology, and Fusion 360 allow us to design and constantly improve vehicles. Through the use of NVIDIA graphics cards, we are able to import large faceted scans (60 million facets) into Fusion. We'll go into our process of starting with scans and using the software to design, and into key ways Fusion supports this process (e.g., building plans directly off mesh data, sectioning mesh data). After covering the design side of the software, we'll look at how Fusion 360 allows us to easily machine and manufacture parts for in-house manufacturing using the CAM side of the software.

Nathan Watanabe (Icon4x4)

Nathan Watanabe holds a degree in mechanical engineering and works for ICON. He attended California State University, Northridge, and participated in Formula SAE.

Mars 2030
THURSDAY, AUG 3 | Booth Theater 403 | 1:00pm - 1:25pm

Mars 2030 is an interactive virtual reality simulation that offers a breathtaking look into the life of an astronaut hard at work studying and exploring the Martian landscape. A partnership between NASA and Fusion Media Group Labs, Mars 2030 aims to be the most photorealistic and scientifically accurate depiction of the Red Planet to date. Those in attendance will be among the first to glimpse the results of this exciting and wholly unprecedented immersive collaboration. We will also discuss our approach to working with raw scientific data and integration to Unreal Engine 4.

Julian Reyes (Director of VR/AR, Fusion Media Group Labs)

Julian Reyes is Director of VR/AR at Fusion Media Group Labs. He's currently the project lead on Mars 2030, a collaboration between NASA, MIT, and Fusion to develop a scientifically accurate VR simulation of a future human mission to Mars. His previous work includes Inside a Fusion Reactor, a VR visualization of a nuclear fusion reactor concept developed by General Fusion, and Blood Gold, a real-time 3D browser experience on illegal gold mining in Colombia. He's also developed and composed several VR music experiences under the name Finder using Unreal Engine 4, most recently Voider and Local. Julian is also currently Director of Technology at III Points Music and Technology Festival in Miami, FL. His focus is on developing educational and informative virtual, augmented, and mixed reality experiences that utilize physical and digital means to push the medium forward.

Rendering Live 360 Stereo Video from 3D Applications
THURSDAY, AUG 3 | Booth Theater 403 | 1:30pm - 2:20pm

360 video is a new and exciting way to share immersive content with other people, but rendering such video with high quality and high performance is difficult. We'll describe both the techniques required to optimize performance and the best practices to avoid various visual artifacts. We'll cover efficient cube-map rendering, stereo-conversion of the cube-map, and handling of translucent objects. We'll share some of the pitfalls of working with particles, billboards, lighting, tone mapping, screen-space effects, etc.
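
One of the steps above, converting a rendered cube map into a 360 (equirectangular) frame, boils down to mapping each output pixel to a view direction and then to a cube face. Here is a small Python sketch of that mapping under one common coordinate convention; conventions vary, and this is not necessarily the one used in the talk.

# Sketch: map a normalized equirectangular pixel (u, v) to the 3D ray
# direction used to sample the rendered cube map, then pick the face.
import math

def equirect_to_direction(u, v):
    """u, v in [0, 1): normalized equirectangular coordinates."""
    yaw = (u - 0.5) * 2.0 * math.pi  # longitude, -pi..pi
    pitch = (0.5 - v) * math.pi      # latitude, pi/2..-pi/2
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

def dominant_cube_face(d):
    """Pick which cube-map face the direction d falls on."""
    magnitudes = [abs(c) for c in d]
    axis = magnitudes.index(max(magnitudes))  # 0: X, 1: Y, 2: Z
    sign = '+' if d[axis] >= 0 else '-'
    return sign + 'XYZ'[axis]

print(dominant_cube_face(equirect_to_direction(0.5, 0.5)))  # '+Z'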

Alexey Panteleev (Senior Developer Technology Engineer, NVIDIA)

Alexey Panteleev is a developer technology engineer who has been working at NVIDIA since 2010. Originally he focused on GPU architecture and low-level performance analysis, but later switched to graphics programming. During the last few years, Alexey has been working on various novel rendering algorithms and libraries, such as VXGI and VRWorks Multi-Projection. He received a Ph.D. in computer architecture from the Moscow Engineering and Physics Institute in 2013.

Image Creation using Generative Adversarial Networks with TensorFlow and DIGITS
MONDAY, JULY 31 | Room 513 | 9:00am - 11:00am

This lab will guide students through the process of training a Generative Adversarial Network (GAN) to generate image contents in DIGITS. After a quick review of the theory behind GANs we will train a GAN to generate images of handwritten digits using the MNIST dataset. We will create smooth animations of digits morphing across classes. In the second part of the lab we will apply these concepts on the CelebA dataset of celebrity faces. Using a pre-trained network we will see how to edit or retrieve face attributes (age, smile, etc.). We will see how to visualize the latent space in 3D and how to generate image analogies.
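
For a preview of the mechanics, the core GAN training step alternates a discriminator update and a generator update. The PyTorch sketch below shows that bare loop on MNIST-sized (28x28 = 784-value) vectors; the lab itself works in TensorFlow and DIGITS, so treat every name and size here as illustrative.

# Minimal GAN training step on MNIST-sized vectors (PyTorch).
# Illustrative of the idea only; the lab uses TensorFlow + DIGITS.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(32, 784) * 2 - 1  # stand-in for a batch of MNIST digits
z = torch.randn(32, 64)             # latent codes

# Discriminator: push real images toward 1, generated images toward 0.
opt_d.zero_grad()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(G(z).detach()), torch.zeros(32, 1))
d_loss.backward()
opt_d.step()

# Generator: fool the discriminator into predicting 1.
opt_g.zero_grad()
g_loss = bce(D(G(z)), torch.ones(32, 1))
g_loss.backward()
opt_g.step()

The smooth digit-morphing animations mentioned above come from decoding points along a straight line between two latent codes z.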

Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)

After spending nearly a decade at UC Berkeley, Kelvin decided to repay public education by helping build UC Merced. He spent seven years teaching 4,500 students across 55 classes while redesigning the undergraduate computer science curriculum. He is now busy designing curricula at NVIDIA's Deep Learning Institute to democratize access to the latest technologies by educating scientists, developers, and researchers. Kelvin received his BS in EECS from UC Berkeley.

Image Style Transfer with Torch
MONDAY, JULY 31 | Room 513 | 12:00pm – 2:00pm

Learn how neural networks transfer the look and feel of one image to another image by extracting distinct visual features. See how convolutional neural networks are used for feature extraction and feed into a generator that paints a new resultant image. Explore the architectural innovations and training techniques used to make the transfer faster, look qualitatively better to humans, and work with any arbitrary input style.
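
The "distinct visual features" are typically compared through Gram matrices of CNN feature maps: the content image supplies feature activations, while the style image supplies their channel-to-channel correlations. A minimal PyTorch sketch of that style loss follows; the layer choice and shapes are illustrative, not the lab's exact code.

# Sketch of the Gram-matrix style loss at the heart of neural style
# transfer (PyTorch). Shapes and weights are illustrative.
import torch

def gram(features):
    """features: a (C, H, W) feature map from one CNN layer."""
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.t() / (c * h * w)  # channel-by-channel correlations

style_feat = torch.rand(64, 32, 32)   # from the style image
output_feat = torch.rand(64, 32, 32)  # from the image being painted
style_loss = ((gram(output_feat) - gram(style_feat)) ** 2).sum()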

Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)

After spending nearly a decade at UC Berkeley, Kelvin decided to repay public education by helping build UC Merced. He spent seven years teaching 4,500 students across 55 classes while redesigning the undergraduate computer science curriculum. He is now busy designing curricula at NVIDIA's Deep Learning Institute to democratize access to the latest technologies by educating scientists, developers, and researchers. Kelvin received his BS in EECS from UC Berkeley.

Character Animation with Theano
MONDAY, JULY 31 | Room 513 | 3:00pm – 5:00pm

See the possibilities of automatic character creation, including animation over various terrains, using neural networks. We'll start with data preparation for training, using motion capture data and a simple neural network, then teach a more advanced network to understand the animation sequence and break the character motion into phases to make smooth transitions. Learn the skills to train 3D characters by exploiting new advanced techniques in neural networks.
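
One simple way to realize the "phases" idea is to feed the motion phase to the network as an extra input, encoded as a sine/cosine pair so the gait cycle wraps smoothly. The PyTorch sketch below illustrates only that conditioning; it is not the lab's network, POSE_DIM and the layer sizes are placeholders, and more advanced variants (such as phase-functioned networks) instead blend entire weight sets by phase.

# Sketch: condition a pose-prediction network on a motion phase in [0, 1).
# Illustrative only; sizes are placeholders.
import math
import torch
import torch.nn as nn

POSE_DIM = 93  # e.g., 31 joints x 3 values; placeholder

net = nn.Sequential(
    nn.Linear(POSE_DIM + 2, 256), nn.ReLU(),  # pose + (sin, cos) of phase
    nn.Linear(256, POSE_DIM),
)

pose = torch.rand(1, POSE_DIM)
phase = 0.25  # fraction of the way through the gait cycle
phase_feat = torch.tensor([[math.sin(2 * math.pi * phase),
                            math.cos(2 * math.pi * phase)]])
next_pose = net(torch.cat([pose, phase_feat], dim=1))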

Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)

TBA

Image Creation using Generative Adversarial Networks with TensorFlow and DIGITS
TUESDAY, AUG 1 | Room 513 | 9:00am - 11:00am

This lab will guide students through the process of training a Generative Adversarial Network (GAN) to generate image contents in DIGITS. After a quick review of the theory behind GANs we will train a GAN to generate images of handwritten digits using the MNIST dataset. We will create smooth animations of digits morphing across classes. In the second part of the lab we will apply these concepts on the CelebA dataset of celebrity faces. Using a pre-trained network we will see how to edit or retrieve face attributes (age, smile, etc.). We will see how to visualize the latent space in 3D and how to generate image analogies.

Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)

TBA

Image Style Transfer with Torch
TUESDAY, AUG 1 | Room 513 | 12:00pm – 2:00pm

Learn how neural networks transfer the look and feel of one image to another image by extracting distinct visual features. See how convolutional neural networks are used for feature extraction and feed into a generator that paints a new resultant image. Explore the architectural innovations and training techniques used to make the transfer faster, look qualitatively better to humans, and work with any arbitrary input style.

Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)

TBA

Character Animation with TensorFlow
TUESDAY, AUG 1 | Room 513 | 3:00pm – 5:00pm

See the possibilities of automatic character creation, including animation over various terrains, using neural networks. We'll start with data preparation for training, using motion capture data and a simple neural network, then teach a more advanced network to understand the animation sequence and break the character motion into phases to make smooth transitions. Learn the skills to train 3D characters by exploiting new advanced techniques in neural networks.

Kelvin Lwin (Certified Instructor, NVIDIA Deep Learning Institute)

TBA

Introduction to VR in Unreal Engine
WEDNESDAY, AUG 2 | Room 513 | 2:00pm – 5:00pm

Join Epic Games for a VR starter session, a live training tutorial where participants will learn the basics of creating VR projects and VR best practices in Unreal Engine (UE4). Learn the basics of creating a VR project from starter content using the UE4 Blueprint visual scripting system. Blueprints are easily leveraged by all game development disciplines, so artists, designers, and programmers are welcome. We'll cover topics such as project setup, VR peripheral setup, building materials for VR, VR character setup, using the slice tool, and configuring UI for VR projects. We'll also cover design principles and VR performance optimization techniques. Prerequisites: Basic familiarity with the 3D graphics pipeline is desired, but not required.

Wes Bunn (Sr. Technical Writer, Epic Games)

TBA

Luis Cataldi (Director, Education & Learning Resources, Epic Games)

TBA

Introduction to the Unity Game Engine: Viking Quest and VRWorks Integration
WEDNESDAY, AUG 2 | Room 513 | 9:00am – 12:00pm

During this hands-on training, attendees will learn how to build a brand-new 3D, VR/AR-ready game from start to finish while touching upon many of the diverse systems and tools that Unity offers. Topics to be covered include general workflows, Unity scripting, the graphics pipeline, global illumination (GI), physically based rendering (PBR), physics, audio, animation (Mecanim), and virtual reality with NVIDIA VRWorks.

Mark Schoennagel (Evangelist, Unity)

TBA