NVIDIA AT SIGGRAPH 2016

SIGGRAPH 2016 | July 24-28 | Anaheim, CA

This year at SIGGRAPH in Anaheim, NVIDIA developers and engineers will showcase the most vital work in visual computing today, including graphics, artificial intelligence, virtual reality, and visualization.


Talks

Simon Jones and Martin Enthed
Enterprise Director, Epic
Development & Operations IT Manager, IKEA

Epic’s Unreal Engine is not just for games. Simon Jones will detail how Epic's Enterprise division enables major players across verticals such as automotive, aerospace, data visualization, and virtual reality to design and deliver engaging user experiences that change the way they do business. IKEA will share how VR is changing kitchen design for its customers: the team created a digital library of products and assets from CAD models, then took the next step with a VR kitchen app, the IKEA VR Experience. They will also discuss the custom tools developed to speed up the development process.

Zhili Chen and Chris Hebert
3D Graphics Researcher, Adobe
Developer Technology Software Engineer, NVIDIA

We built a real-time oil painting system that simulates the physical interactions among brush, paint, and canvas at the bristle level, entirely in CUDA. To simulate sub-pixel paint details given limited computational resources, we define the paint liquid in a hybrid fashion: the liquid close to the brush is modeled by particles, and the liquid away from the brush is modeled by a density field. Based on this representation, we developed a variety of techniques to ensure the performance and robustness of our simulator under large time steps, including brush and particle simulations in non-inertial frames, a fixed-point method for accelerating Jacobi iterations, and a new Eulerian-Lagrangian approach for simulating detailed liquid effects.
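
As a rough illustration of the kind of solver step such a simulator leans on, here is a minimal CUDA sketch of one plain Jacobi relaxation pass on a 2D grid. This is a stand-in under stated assumptions, not WetBrush code: the talk's fixed-point acceleration and actual data structures are not shown, and all names are illustrative.

```cuda
// Hypothetical sketch: one Jacobi relaxation step on a 2D grid, the basic
// iteration that a fixed-point method would accelerate. Not WetBrush source.
#include <cuda_runtime.h>
#include <utility>

__global__ void jacobiStep(const float* x, const float* b, float* xNext,
                           int width, int height, float alpha, float beta)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    int j = blockIdx.y * blockDim.y + threadIdx.y;
    if (i < 1 || j < 1 || i > width - 2 || j > height - 2) return;

    int idx = j * width + i;
    // New value = weighted average of the four neighbors plus a source term.
    float sum = x[idx - 1] + x[idx + 1] + x[idx - width] + x[idx + width];
    xNext[idx] = (sum + alpha * b[idx]) * beta;
}

// Ping-pong between two buffers; after the loop, d_x holds the latest result.
void relax(float*& d_x, float* d_b, float*& d_tmp,
           int width, int height, int iterations)
{
    dim3 block(16, 16);
    dim3 grid((width + block.x - 1) / block.x,
              (height + block.y - 1) / block.y);
    for (int n = 0; n < iterations; ++n) {
        jacobiStep<<<grid, block>>>(d_x, d_b, d_tmp, width, height,
                                    1.0f, 0.25f);  // weights for pure averaging
        std::swap(d_x, d_tmp);
    }
    cudaDeviceSynchronize();
}
```
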

Andrew Rink
Global Marketing Strategy, NVIDIA

Andrew Rink is responsible for global marketing strategy for AEC and Manufacturing Industries at NVIDIA. With wide-ranging international experience in a variety of industries including CAD and animation software, lasers and photonic power, Andrew has been bringing innovative technology to market for over 25 years. Based at NVIDIA’s Silicon Valley headquarters, he has travelled to over 80 countries and is fluent in three languages.

Presented by:
Lenovo
SUNDAY, July 24th | Best of GTC Talks | ROOM #210D
9:00am - 10:00am
SIG1601: Rendering Faster and Better with VRWorks
Cem Cebenoyan (Dev Tech, NVIDIA)
10:15am - 11:15am
SIG1602: Best Practices in GPU-Based Video Processing
Thomas True (Senior Applied Engineer for Professional Video and Image Processing, NVIDIA)
11:30am - 12:30pm
SIG1603: See the Big Picture: How to Build Large Display Walls Using NVIDIA DesignWorks APIs and Tools
Doug Traill (Senior Solutions Architect, NVIDIA)
12:45pm - 1:15pm
NVIDIA Research: SIG1604: Textures: Achieving an Infinite Resolution Image
Alexander Reshetov (Research Engineer, NVIDIA)
1:15pm - 1:45pm
NVIDIA Research: SIG1605: Reflectance Capture by Parametric Texture Synthesis
Jaakko Lehtinen (Research Engineer, NVIDIA)
2:00pm - 3:00pm
SIG1606: Using MDL to Share Physically Based Materials
Lutz Kettner (Senior Manager, Rendering Software and Material Definition, NVIDIA)
Jan Jordan (Software Product Manager MDL, NVIDIA)
3:15pm - 4:15pm
SIG1607: Advances in NVIDIA’s OptiX
Steven Parker (VP & CTO of Professional Graphics, NVIDIA)
4:30pm - 5:30pm
SIG1608: Real-Time Graphics for Film Production at Pixar
Pol Jeremias (Graphics Software Engineer, Pixar)
Dirk Van Gelder (Lead Software Engineer, Pixar)
5:45pm - 6:45pm
SIG1609: NVIDIA OpenGL in 2016
Mark Kilgard (Principal Software Engineer, NVIDIA)
Jeffrey Kiel (Manager, Graphics Tools, NVIDIA)
Presented by:
Dell
MONDAY, July 25th | Best of GTC Talks | ROOM #210D
9:00am - 10:00am
SIG1610: How to Render AEC Interiors for 2D and VR In Minutes
Pascal Gautron (Project Leader, NVIDIA)
10:15am - 11:15am
SIG1611: Programming for High Dynamic Range Rendering and Display on NVIDIA GPUs
Thomas True (Senior Applied Engineer for Professional Video and Image Processing, NVIDIA)
11:30am - 12:00pm
SIG1612: Vulkan and the Khronos API Ecosystem
Neil Trevett (Vice President Developer Ecosystem, NVIDIA)
12:00pm - 1:45pm
SIG1613: Vulkan and NVIDIA: A Deep Dive
Tristan Lorach (Manager of Devtech for Professional Visualization, NVIDIA)
Jeffrey Kiel (Manager of Graphics Tools, NVIDIA)
2:00pm - 3:00pm
SIG1614: VR: You Are Here
David Luebke (Vice President of Graphics Research, NVIDIA)
3:15pm - 4:15pm
SIG1615: Overcoming Challenges for Virtual and Augmented Reality Display
David Luebke (Vice President of Graphics Research, NVIDIA)
4:30pm - 5:30pm
SIG1616: Next-Gen Material Edition with Substance Designer Native MDL Visual UI and NVIDIA Iray®
Sébastien Deguy (CEO, Allegorithmic)
Jerome Derel (Chief Product Officer, Allegorithmic)
5:45pm - 6:45pm
SIG1617: Advanced Rendering Solutions from NVIDIA
Phil Miller (Senior Director, Advanced Rendering Products, NVIDIA)
TUESDAY, July 26th | Best of GTC Talks | ROOM #210D
12:00pm – 1:00pm
SIG1663: Machine Learning and the Making of Things
Mike Haley (Senior Director of Machine Intelligence, Autodesk)
1:15pm – 2:15pm
SIG1664: Rendering Sparse Volumes with NVIDIA® GVDB in DesignWorks
Rama Hoetzlein (Research Engineer, NVIDIA)
2:30pm – 3:00pm
NVIDIA Research: SIG1665: Massive Time-lapse Point Cloud Rendering in Virtual Reality
Markus Schuetz (Research Engineer, NVIDIA)
3:00pm – 3:30pm
SIG1669: Mars 2030
Julian Reyes (VR Producer, Fusion)
Justin Sonnekalb (Technical Designer, Consultant)
Dave Flamburis (Senior Lead Artist, Creative Consultant)
3:45pm – 4:15pm
NVIDIA Research: SIG1666: Rendering Highly Specular Materials
Anton S. Kaplanyan (Research Scientist, NVIDIA)
4:30pm – 5:00pm
SIG1667: Bringing Pascal to Professionals
Allen Bourgoyne (Senior Product Marketing Manager, NVIDIA)
Presented by:
HPI
TUESDAY, JULY 26th | Best of GTC Theatre | BOOTH #509
10:00am - 10:25am
SIG1618: Visualization Applications on NVIDIA DGX-1
Charlie Boyle (Senior Director, DGX-1 Marketing, NVIDIA)
10:30am - 10:55am
SIG1619: IKEA: Exploring VR
Simon Jones (Enterprise Director, Epic)
Martin Enthed (Development & Operations IT Manager, IKEA)
11:00am - 11:25am
SIG1620: VR - Not Just for Games!
Simon Jones (Enterprise Director, Epic)
Solomon Rogers (CEO, Rewind.io)
11:30am - 11:55am
SIG1621: Exclusively Using NVIDIA GPUs and Redshift 3D to Deliver the Next Wave of Original Content
Yurie Rocha (Creative Director of Production, Guru Studios)
12:00pm - 12:25pm
SIG1622: Independence Day: Resurgence – Killer Queen
Matt Aitken (Visual Effects Supervisor, Weta Digital)
12:30pm - 12:55pm
SIG1623: Building VR Funhouse with UE4
Victoria Rege (Global Alliances & Ecosystem Development, VR, NVIDIA)
1:00pm - 1:25pm
SIG1624: VR Multi GPU Acceleration Featuring Autodesk VRED
Paul Schmucker (Subject Matter Expert, Autodesk)
Tobias France (Hyundai Designer, Hyundai)
1:30pm - 1:55pm
SIG1625: Vulkan on NVIDIA: The Essentials
Tristan Lorach (Manager of Devtech for Professional Visualization Group, NVIDIA)
2:00pm - 2:25pm
SIG1626: NVIDIA Mental Ray and Iray® Plug-ins: New Rendering Solutions
Phillip Miller (Director of Product Management, NVIDIA)
2:30pm - 2:55pm
SIG1627: Video Processing and Deep Learning and the Importance of the GPU
Juan Carlos Riveiro (CEO, Vilynx)
3:00pm - 3:25pm
SIG1628: Look Development in Real Time
Jean-Daniel Nahmias (Technical Director, Pixar)
Davide Pesare (Lead Software Engineer, Pixar)
3:30pm - 3:55pm
SIG1629: Production-Quality, Final-Frame Rendering on the GPU
Robert Slater (Vice President Engineering, Redshift)
4:00pm - 4:25pm
SIG1630: NVIDIA Iray®: Changing the Face of Architecture and Design
Scott DeWoody (Firmwide Creative Media Manager, Gensler)
4:30pm - 4:55pm
SIG1631: MDL Materials to GLSL Shaders: Theory and Practice
Andreas Mank (Team Leader Software Development, ESI Group)
Andreas Suessenbach (Senior DevTech Engineer, NVIDIA)
5:00pm - 5:25pm
SIG1632: Cutting Edge Tools and Techniques for Real-Time Rendering with NVIDIA GameWorks
David Coombes (Developer Marketing Manager,GameWorks, NVIDIA)
5:30pm - 6:00pm
SIG1633: Give Life to your 3D Art with MDL and NVIDIA Iray® in Substance Painter
Manuel Kraemer (Senior Developer Technology Engineer, NVIDIA)
Jérémie Noguer (Senior Product Manager, Allegorithmic)
Presented by:
HPI
WEDNESDAY, JULY 27th | Best of GTC Theatre | BOOTH #509
10:00am - 10:25am
SIG1634: A New Reality with Iray VR
Phillip Miller (Senior Director, Advanced Rendering Products, NVIDIA)
10:30am - 10:55am
SIG1635: Leveraging Microsoft Azure's GPU N-Series for Compute Workloads and Visualization
Karan Batta (Program Manager, Microsoft)
11:00am - 11:25am
SIG1636: WetBrush: GPU-Based 3D Painting Simulation at the Bristle Level
Zhili Chen (3D Graphics Researcher, Adobe)
Chris Hebert (Developer Technology Software Engineer, NVIDIA)
11:30am - 11:55am
SIG1637: Visualization Applications on NVIDIA DGX-1
Charlie Boyle (Senior Director, DGX-1 Marketing, NVIDIA)
12:00pm - 12:25pm
SIG1638: Rendering Lost Historical Buildings with NVIDIA Technology
Andrew Rink (Global Marketing Strategy, NVIDIA)
12:30pm - 12:55pm
SIG1639: Learning Representations for Automatic Colorization
Gustav Larsson (Ph.D. Student, University of Chicago)
1:00pm – 1:25pm
SIG1640: Audi's Drive for “The Best Car Configurator On The Internet”
Thomas Orenz (Team Leader Virtual Reality and Sales Content, Audi, AG)
François de Bodinat (CMO, ZeroLight)
1:30pm - 1:55pm
SIG1641: Face2Face: Real-time Face Capture and Reenactment
Justus Thies (Ph.D. Student, University of Erlangen-Nuremberg)
Matthias Niessner (Assistant Professor, Stanford University)
2:00pm - 2:25pm
SIG1642: The Technology Powering the Immersive Cinema Experiences from Lucasfilm's ILMxLAB
Lutz Latta (Principal Engineer, Lucasfilm)
2:30pm - 2:55pm
SIG1643: Introducing NVIDIA® GVDB Sparse Volumes
Rama Hoetzlein (Research Engineer, NVIDIA)
3:00pm - 3:25pm
SIG1644: Light Field Rendering and Streaming for VR & AR
Jules Urbach (CEO & Founder, OTOY Inc.)
3:30pm - 3:55pm
SIG1645: Large Scale Video Processing for VR
Daniel Kopeinigg (Principal Engineer, Jaunt VR)
4:00pm - 4:25pm
SIG1646: Digital Actors at MPC: Bridging the Uncanny Valley with GPU Technology
Damien Fagnou (CTO, MPC)
4:30pm - 4:55pm
SIG1647: Look Development in Real Time
Jean-Daniel Nahmias (Technical Director, Pixar)
Davide Pesare (Lead Software Engineer, Tools Shading, Pixar)
5:00pm - 5:25pm
SIG1648: Virtual Reality Rendering Features of NVIDIA GPUs
Mark Kilgard (Principal Software Engineer, NVIDIA)
5:30pm - 6:00pm
SIG1649: Giant VR - A Sundance Movie
Milica Zec (Director, Editor, Screenwriter, Giant VR)
Winslow Porter (Producer, Co-Creator, Giant)
Presented by:
HPI
THURSDAY, JULY 28th | Best of GTC Theatre | BOOTH #509
10:00am - 10:25am
SIG1650: Visualization Applications on NVIDIA DGX-1
Deepti Jain (Senior Applied Engineer, NVIDIA)
10:30am - 10:55am
SIG1651: Independence Day: Resurgence - Killer Queen
Matt Aitken (Visual Effects Supervisor, Weta Digital)
11:00am - 11:25am
SIG1652: NUKE Studio for Film Pipelines
Juan Salazar (NUKE Studio Product Manager, The Foundry)
11:30am - 11:55am
SIG1653: Look Development in Real Time
Jean-Daniel Nahmias (Technical Director, Pixar)
Davide Pesare (Lead Software Engineer, Tools Shading, Pixar)
12:00pm - 12:25pm
SIG1654: Processing VR Video in the Cloud
Sean Safreed (Cofounder/CMO, Pixvana)
12:30pm - 12:55pm
SIG1655: VR Multi GPU Acceleration Featuring Autodesk VRED
Paul Schmucker (Subject Matter Expert, Autodesk)
Tobias France (Hyundai Designer, Hyundai)
1:00pm - 1:25pm
SIG1656: Rendering Faster and Better with VRWorks on Pascal
Ryan Prescott (Developer Technology Engineer, NVIDIA)
1:30pm - 1:55pm
SIG1657: NVIDIA Research: The Magic Behind GameWorks' Hybrid Frustum Traced Shadows (HFTS)
Chris Wyman (Senior Research Scientist, NVIDIA)
2:00pm – 2:25pm
SIG1658: Bringing Pascal to Professionals
Allen Bourgoyne (Senior Product Marketing Manager, NVIDIA)
2:30pm – 3:00pm
SIG1659: Leveraging GPU Technology to Visualize Next-Generation Products and Ideas
Michael Wilken (Director of 3D, Saatchi & Saatchi, LA)
SUNDAY-THURSDAY, July 24th-28th | Emerging Technologies | Hall C
Perceptually-Based Foveated Virtual Reality
Tuesday, July 26th | NVIDIA Papers | Ballroom C
2:00pm-3:30pm
NVIDIA Research: Reflectance Modeling by Neural Texture Synthesis
Miika Aittala (Aalto University)
Jaakko Lehtinen (NVIDIA, Aalto University)
Timo Aila (NVIDIA)
Wednesday, July 27th | NVIDIA Papers | Ballroom C, D & E
3:45pm - 5:55pm
NVIDIA Research: Streaming Subdivision Surfaces for Efficient GPU Rendering
Wade Brainerd (Activision)
Matthias Niessner (Stanford University)
3:45pm - 5:55pm
NVIDIA Research: A System for Rapid Exploration of Shader Optimization Choices
Tim Foley (NVIDIA)
Tuesday, July 26th | NVIDIA Courses | Ballroom A & B
9:00am - 12:15pm
The Quest for The Ray Tracing API
Alexander Keller (Director of Graphics Research, NVIDIA)
2:00pm - 5:15pm
Open Problems in Real-Time Rendering
Aaron Lefohn (NVIDIA)
N. Tatarchuk (Bungie Games)
Sunday, July 24th | NVIDIA Talks | Ballroom B
3:45pm - 5:35pm
HFTS: Hybrid Frustum-Traced Shadows in "The Division"
Jon Story (NVIDIA)
Chris Wyman (Senior Research Scientist, NVIDIA)
Tuesday, July 26th | NVIDIA Talks | ROOM 303 A-C
9:00am -10:30am
Stochastic Layered Alpha Blending
Chris Wyman (Senior Research Scientist, NVIDIA)
Thursday, July 28th | NVIDIA Talks | Ballroom B
3:45pm – 5:15pm
GI Next: Global Illumination for Production Rendering on the GPU
3:45pm – 5:15pm
Estimating Local Beckmann Roughness for Complex BSDFs

Demos

Visit booth #509 to check out our latest demos and VR rooms, where attendees can explore the suite of NVIDIA-hosted demos shown on Oculus Rift and HTC Vive headsets. We’ll be showcasing amazing VR experiences including Everest, VR Funhouse, and more. Our VR demos book up quickly, so check back here to schedule an appointment before the show. NVIDIA is also sponsoring SIGGRAPH's VR Village in the Experience Hall, so stop by!


The NVIDIA GPU Technology Conference is the largest and most important event of the year for developers. Check out www.gputechconf.com for more details.

SIG1601: Rendering Faster and Better with VRWorks
SUNDAY, JULY 24 | ROOM #210D | 9:00am - 10:00am

This talk will introduce developers to NVIDIA VRWorks™, an SDK for VR game, engine, and headset developers that cuts latency and accelerates stereo rendering performance on NVIDIA GPUs. We'll explain the features of this SDK, including VR SLI®, multi-resolution shading, context priorities, and direct mode. We'll discuss the motivation for these features, how they work, and how developers can use VRWorks in their renderers to improve the VR experience on Oculus Rift, HTC Vive, and other VR headsets.
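
To make the multi-resolution shading idea concrete, the toy C++ sketch below splits a per-eye render target into a 3x3 viewport grid and shrinks the peripheral cells. This is conceptual arithmetic only; the actual VRWorks interface is documented in the SDK, and every name here is invented for illustration.

```cpp
// Toy illustration of multi-resolution shading's viewport arithmetic:
// keep the center at full pixel density, shade the periphery at reduced
// density (lens distortion discards much of that peripheral detail anyway).
#include <cstdio>

struct Viewport { int x, y, width, height; };

// centerFrac: fraction of each axis kept at full resolution.
// peripheryScale: shading-resolution multiplier for the outer cells.
void buildMultiResViewports(int width, int height, float centerFrac,
                            float peripheryScale, Viewport out[9])
{
    int cw = int(width * centerFrac), ch = int(height * centerFrac);
    int sw = int((width - cw) / 2 * peripheryScale);   // scaled side columns
    int sh = int((height - ch) / 2 * peripheryScale);  // scaled side rows

    int xs[4] = { 0, sw, sw + cw, sw + cw + sw };
    int ys[4] = { 0, sh, sh + ch, sh + ch + sh };
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            out[r * 3 + c] = { xs[c], ys[r],
                               xs[c + 1] - xs[c], ys[r + 1] - ys[r] };
}

int main()
{
    Viewport vp[9];
    buildMultiResViewports(1512, 1680, 0.6f, 0.7f, vp);
    int shaded = 0;
    for (const Viewport& v : vp) shaded += v.width * v.height;
    // Compare shaded pixels against the full-resolution per-eye target.
    std::printf("shaded %d of %d pixels\n", shaded, 1512 * 1680);
}
```

With these example numbers, roughly a quarter fewer pixels are shaded per eye than at uniform full resolution.
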

Cem Cebenoyan (Dev Tech, NVIDIA)

Cem Cebenoyan is head of the Game Engines and Core Tech team at NVIDIA, which focuses on working with game engines and building solutions to next-gen rendering challenges for games. He's been at NVIDIA for (practically) his whole life, leading teams in graphics, games, computer vision, and automotive.

SIG1602: Best Practices in GPU-Based Video Processing
SUNDAY, JULY 24 | ROOM #210D | 10:15am - 11:15am

We'll explore best practices and techniques for the development of efficient GPU-based video and image-processing applications. Topics include threading models for efficient parallelism, CPU affinity to optimize system memory and GPU locality, image segmentation for overlapped asynchronous transfers, optimal memory usage strategies to reduce expensive data movement, and image format considerations to reduce and eliminate data conversions. Single- and multi-GPU systems for uncompressed real-time 4K video capture, processing, display, and play-out will be considered. Takeaways should prove applicable to developers of video broadcast and digital post-production systems, as well as to developers of large-scale visualization systems that require video ingest.
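
The "overlapped asynchronous transfers" point is the classic CUDA streams idiom. Below is a minimal, hypothetical sketch (processFrame stands in for real video work): frames in pinned host memory are uploaded, processed, and downloaded on alternating streams so copies overlap with compute.

```cuda
// Minimal overlapped-transfer sketch: pinned host buffers + two CUDA streams
// let the copy engines run concurrently with the compute engine.
#include <cuda_runtime.h>

__global__ void processFrame(unsigned char* px, int n)  // stand-in kernel
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) px[i] = 255 - px[i];  // placeholder per-pixel operation
}

int main()
{
    const int kBytes = 3840 * 2160 * 4;  // one 4K RGBA frame
    const int kStreams = 2;
    unsigned char* h[kStreams];
    unsigned char* d[kStreams];
    cudaStream_t st[kStreams];

    for (int s = 0; s < kStreams; ++s) {
        cudaMallocHost((void**)&h[s], kBytes);  // pinned: required for async
        cudaMalloc((void**)&d[s], kBytes);
        cudaStreamCreate(&st[s]);
    }

    for (int frame = 0; frame < 100; ++frame) {
        int s = frame % kStreams;  // round-robin across streams
        // (fill h[s] with the next captured frame here)
        cudaMemcpyAsync(d[s], h[s], kBytes, cudaMemcpyHostToDevice, st[s]);
        processFrame<<<(kBytes + 255) / 256, 256, 0, st[s]>>>(d[s], kBytes);
        cudaMemcpyAsync(h[s], d[s], kBytes, cudaMemcpyDeviceToHost, st[s]);
    }
    cudaDeviceSynchronize();

    for (int s = 0; s < kStreams; ++s) {
        cudaFreeHost(h[s]); cudaFree(d[s]); cudaStreamDestroy(st[s]);
    }
    return 0;
}
```
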

Thomas True (Senior Applied Engineer for Professional Video and Image Processing, NVIDIA)

Thomas True is a Senior Applied Engineer for Professional Video and Image Processing in NVIDIA’s Professional Solutions Group, where he focuses on the use of GPUs in broadcast, video, and film applications ranging from pre-visualization to post production and live-to-air. Prior to joining NVIDIA, Tom was an Applications Engineer at SGI. Thomas has an M.S. degree in Computer Science from the Graphics Lab at Brown University and a B.S. degree from the Rochester Institute of Technology.

SIG1603: See the Big Picture: How to Build Large Display Walls Using NVIDIA DesignWorks APIs and Tools
SUNDAY, JULY 24 | ROOM #210D | 11:30am - 12:30pm

The need to drive multiple displays, be it for digital signage, a corporate conference room, or even an immersive VR room, is becoming more common. We'll provide an overview of the display management tools and APIs that are part of NVIDIA's DesignWorks™ SDK. Attendees will learn about NVIDIA® Mosaic, display setup and management using NVAPI + NVWMI, synchronization methods, and warp and blend APIs.

Doug Traill (Senior Solutions Architect, NVIDIA)

Doug Traill is a senior solutions architect at NVIDIA responsible for scalable visualization technologies. In this role, he works with system integrators, developers, and customers to help design and implement complex visualization systems. Prior to NVIDIA, he worked at Silicon Graphics for nine years in various technical roles, including solutions architect and visualization product manager. During his career, Doug has helped design and build some of the world's largest visualization centers. He holds a B.S. in Electronic Systems and Microprocessor Engineering from the University of Glasgow, U.K., as well as an M.S. in Telecommunications Business Management from King's College London, U.K.

NVIDIA Research: SIG1604: Textures: Achieving an Infinite Resolution Image
SUNDAY, JULY 24 | ROOM #210D | 12:45pm - 1:15pm

We propose a new texture sampling approach that preserves crisp silhouette edges when magnifying during close-up viewing, and benefits from image pre-filtering when minifying for viewing at farther distances. During a pre-processing step, we extract curved silhouette edges from the underlying images. These edges are used to adjust the texture coordinates of the requested samples during magnification. The original image is then sampled, only once, with the modified coordinates. The new technique provides a resolution-independent image representation capable of delivering billions of texels per second on a mid-range graphics card.
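
As a toy model of the coordinate-adjustment step (assuming a straight implicit edge and a fixed filter footprint; the paper handles curved silhouettes and is considerably more sophisticated), a sample whose bilinear footprint straddles a stored edge can be nudged back onto its own side before the single fetch:

```cpp
// Toy sketch: keep a magnified bilinear fetch from blending across a
// precomputed silhouette edge, stored as a normalized implicit line
// a*u + b*v + c = 0 (with a*a + b*b == 1). Illustrative only.
#include <cmath>

struct Edge { float a, b, c; };

void adjustUV(float& u, float& v, const Edge& e, float footprint)
{
    float d = e.a * u + e.b * v + e.c;        // signed distance to the edge
    if (std::fabs(d) < footprint) {
        float side = (d >= 0.0f) ? 1.0f : -1.0f;
        float shift = side * footprint - d;   // move just outside the footprint
        u += e.a * shift;                     // push along the edge normal,
        v += e.b * shift;                     // staying on the sample's side
    }
    // The caller then performs one ordinary bilinear texture fetch at (u, v).
}
```
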

Alexander Reshetov (Research Engineer, NVIDIA)

Alexander Reshetov received his Ph.D. degree from the Keldysh Institute for Applied Mathematics in Russia. He joined NVIDIA in January 2014. Prior to NVIDIA, he worked for 17 years at Intel Labs on 3D graphics algorithms and applications, and two years at the Superconducting Super Collider Laboratory in Texas, where he designed the control system for the accelerator.

NVIDIA Research: SIG1605: Reflectance Capture by Parametric Texture Synthesis
SUNDAY, JULY 24 | ROOM #210D | 1:15pm - 1:45pm

We’ve developed an algorithm that can capture a spatially-varying reflectance model suitable for real-time rendering (normal map, diffuse map, gloss maps) from a single cell-phone photo. We build on a statistical descriptor of natural textures based on deep convolutional neural networks, and combine it with a renderer via a non-linear optimizer that “fuzzily” searches for the maps that give the best reproduction when fed to a shader. This is joint work with Aalto University in Helsinki, and will be published in the SIGGRAPH 2016 technical papers program.

Jaakko Lehtinen (Research Engineer, NVIDIA)

Jaakko Lehtinen is a Senior Research Scientist with NVIDIA and an Associate Professor at Aalto University in Helsinki, Finland. His research interests include physically-based rendering and material appearance capture. In the past, he built graphics technology for Max Payne 1 & 2 and Alan Wake at Remedy Entertainment.

SIG1606: Using MDL to Share Physically Based Materials
SUNDAY, July 24 | ROOM #210D | 2:00pm - 3:00pm

The basics of NVIDIA's Material Definition Language (MDL) will be discussed, showing how a single material can be used to define matching appearances between different renderers and rendering techniques. Users will learn how physically-based materials can be defined, while developers will learn what’s entailed in supporting MDL within their own product or renderer.

Lutz Kettner (Senior Manager, Rendering Software and Material Definition, NVIDIA)

Lutz Kettner leads the design and engineering efforts for MDL and the NVIDIA Iray® renderer at the NVIDIA Advanced Rendering Center. He has been working on leading software products in advanced rendering, language design, API design, and geometry for 19 years. He’s known for his influential work on the open-source Computational Geometry Algorithms Library (CGAL). Lutz holds a Ph.D. in Computer Science from ETH Zurich, Switzerland, worked as a researcher at the University of North Carolina at Chapel Hill, and led a research group at the Max Planck Institute in Saarbrucken, Germany. He has served on ISO and ECMA standardization committees.

Jan Jordan (Software Product Manager MDL, NVIDIA)

Jan Jordan is the product manager for the NVIDIA Material Definition Language. He is a graduate engineer in applied computer science from the Fachhochschule für Wirtschaft und Technik Berlin, Germany, and has a B.S. in computer science from the RTC Galway, Ireland. Before joining NVIDIA, his diverse working experience ranged from research on practical VR applications to working as an art director in computer games. He is a long-time member of NVIDIA's Advanced Rendering team, where his focus has been on enabling material workflows across many different applications and renderers.

SIG1607: Advances in NVIDIA’s OptiX
SUNDAY, July 24 | ROOM #210D | 3:15pm - 4:15pm

Learn about the NVIDIA OptiX™ ray tracing engine, a sophisticated library for performing GPU ray tracing. We'll provide an overview of the OptiX ray tracing pipeline and the programmable components that allow for the implementation of many algorithms and applications. OptiX can be used in many domains, ranging from rendering to acoustic modeling to scientific visualization. Several case studies will be presented describing the benefits of integrating this solution into third-party applications.

Steven Parker (VP & CTO of Professional Graphics, NVIDIA)

Dr. Steven Parker is VP & CTO of Professional Graphics at NVIDIA, where he is responsible for professional graphics hardware and software, including the NVIDIA Iray rendering system, the OptiX ray tracing API, the Material Definition Language, and the IndeX scientific visualization system. With a background spanning ray tracing, rendering, and high-performance computing, Dr. Parker has focused on bringing physically-based rendering systems to interactive applications.

SIG1608: Real-Time Graphics for Film Production at Pixar
SUNDAY, July 24 | ROOM #210D | 4:30pm - 5:30pm

Join the Pixar GPU team for a session that explores how real-time graphics are used at Pixar. We'll cover the unique needs for film production, including loading and run-time management of massive movie sets and complex characters, real-time subdivision surfaces, real-time effects particularly useful for technical directors, and how these assets are rendered using the latest hardware features. Don't miss this great opportunity to learn about graphics, algorithms, and movies!

Pol Jeremias (Graphics Software Engineer, Pixar)

Pol Jeremias is passionate about technology and art. He grew up near Barcelona and moved to California in 2006. Since then, Pol has researched computer graphics and worked on multiple games for companies such as LucasArts and SoMa Play. Today, he helps create movies at Pixar Animation Studios. In his spare time, he has co-founded Shadertoy.com and Beautypi. When he is not programming, you will find him running, reading, or watching movies.

Dirk Van Gelder (Lead Software Engineer, Pixar)

Dirk Van Gelder joined Pixar Animation Studios in 1997 as a software engineer on the Academy Award®-nominated film "A Bug's Life" and the Academy Award®-winning short film "Geri's Game," working on animation software and the studio's first use of subdivision surfaces. Dirk has worked on software for every Pixar movie since, including the ground-up rewrite of the studio's proprietary animation system, Presto. Currently Dirk leads the Presto Character team within the Pixar Studio Tools Department.

SIG1609: NVIDIA OpenGL in 2016
SUNDAY, July 24 | ROOM #210D | 5:45pm - 6:45pm

Attend this session to get the most out of OpenGL on NVIDIA Quadro, GeForce, and Tegra GPUs. Hear straight from an OpenGL expert at NVIDIA how the OpenGL standard continues to evolve with NVIDIA's support. See examples of the latest features for virtual reality, vector graphics, interoperability with Vulkan, and modern high-performance usage, including the latest features of NVIDIA's Pascal GPU generation. Learn how your application can benefit from NVIDIA's leadership driving OpenGL as a cross-platform, open industry standard.

Mark Kilgard (Principal Software Engineer, NVIDIA)

Mark Kilgard is a principal system software engineer at NVIDIA working on OpenGL, vector graphics, web page rendering, and GPU rendering algorithms. Mark has 25 years of experience with OpenGL, including the specification of numerous important OpenGL extensions. He implemented the OpenGL Utility Toolkit (GLUT) library. Mark has authored two books and is named on over 50 graphics-related patents.

Jeffrey Kiel (Manager, Graphics Tools, NVIDIA)

Jeffrey Kiel is the manager of Graphics Tools at NVIDIA. His responsibilities include development and oversight of graphics performance and debugging tools, including Nsight Visual Studio Edition and the Tegra Graphics Debugger. Previous projects at NVIDIA include PerfHUD and ShaderPerf. Before coming to NVIDIA, Jeff worked on PC and console games at Interactive Magic and Sinister Games/Ubisoft. Jeff has given presentations at many GDCs and SIGGRAPHs and contributed articles to graphics-related publications. His passion for the art started in the G-Lab at the University of North Carolina at Chapel Hill, where he received his B.S. in Mathematical Sciences.

SIG1610: How to Render AEC Interiors for 2D and VR In Minutes
MONDAY, July 25 | ROOM #210D | 9:00am - 10:00am

When full photorealism is simply not fast enough, Iray Interactive renders images of CAD-grade models with ray-tracing quality in a matter of seconds or minutes. A set of render modes adapted to numerous use cases will be demonstrated, such as interactive design and interior layout. We’ll explore different Iray Interactive integrations, such as SOLIDWORKS Visualize for instant preview and material tuning, and 3DVIA HomeByMe, which enables fully detailed interior designs to be rendered within minutes on the AWS cloud. A beta version of Iray VR for Iray Interactive will also be showcased, illustrating the potential of highly reduced render times for creating enriched Iray-quality VR experiences.

Pascal Gautron (Project Leader, NVIDIA)

Pascal Gautron has led the Iray Interactive project at NVIDIA since 2013, focusing on fast, high-quality ray-tracing solutions for integration in interactive design tools. Over the last 13 years, he has gathered an academic and industrial background in computer graphics research, photorealistic image synthesis, real-time OpenGL rendering, and movie post-production.

SIG1611: Programming for High Dynamic Range Rendering and Display on NVIDIA GPUs
MONDAY, July 25 | ROOM #210D | 10:15am - 11:15am

We’ll provide an introduction to High Dynamic Range (HDR) and describe application programming techniques for HDR rendering and display on NVIDIA GPUs. Concepts to be discussed include color spaces, expanding chromaticity versus luminance, and scene- and display-referred imaging. For application developers, takeaways will include methods to query and set GPU and display capabilities for HDR, as well as OpenGL and DirectX programming to render and display HDR imagery.
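
As a minimal, hedged example of one building block in this pipeline (plain OpenGL, no vendor API; the function name and structure are illustrative), the snippet below allocates a 16-bit floating-point render target so scene-referred values above 1.0 survive to the tone-mapping or HDR scan-out stage:

```cpp
// Minimal sketch: allocate a floating-point render target for HDR rendering.
// Assumes a GL context and extension loader (e.g., GLEW) are initialized.
#include <GL/glew.h>

GLuint createHdrTarget(int width, int height)
{
    GLuint fbo = 0, colorTex = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    // GL_RGBA16F keeps linear, scene-referred radiance; an 8-bit format
    // would clip everything above 1.0 before display encoding.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_FLOAT, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);

    // Render the scene into this FBO, then tone-map for SDR output or hand
    // the float data to an HDR-capable swap chain for direct display.
    return fbo;
}
```
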

Thomas True (Senior Applied Engineer for Professional Video and Image Processing, NVIDIA)

Thomas True is a Senior Applied Engineer for Professional Video and Image Processing in NVIDIA’s Professional Solutions Group, where he focuses on the use of GPUs in broadcast, video, and film applications ranging from pre-visualization to post production and live-to-air. Prior to joining NVIDIA, Tom was an Applications Engineer at SGI. Thomas has an M.S. degree in Computer Science from the Graphics Lab at Brown University and a B.S. degree from the Rochester Institute of Technology.

SIG1612: Vulkan and the Khronos API Ecosystem
MONDAY, July 25 | ROOM #210D | 11:30am - 12:00pm

Discover how over 100 companies cooperate at the Khronos Group to create open, royalty-free standards that enable developers to access the power of the GPU to accelerate demanding graphics and compute applications. This session includes the very latest roadmap and ecosystem updates for the newly announced Vulkan and SPIR-V, with details about NVIDIA's Vulkan rollout across its product range.

Neil Trevett (Vice President Developer Ecosystem, NVIDIA)

Neil Trevett has spent over thirty years in the 3D graphics industry, and by day drives the advanced apps ecosystem on NVIDIA Tegra mobile and embedded devices. By night, Neil is the elected President of the Khronos Group industry standards consortium, where he initiated the OpenGL ES standard now used by billions worldwide every day, helped catalyze the WebGL project to bring interactive 3D graphics to the Web, chairs the OpenCL working group defining the open standard for heterogeneous parallel computation, and helped create and launch the new-generation Vulkan API.

SIG1613: Vulkan and NVIDIA: A Deep Dive
MONDAY, July 25 | ROOM #210D | 12:00pm - 1:45pm

NVIDIA is bringing the power of Vulkan to a range of platforms to extend the choice of APIs for developers. This rapid-fire session will cover the essentials of NVIDIA's Vulkan rollout across its product range, with insights to help you judge whether Vulkan is right for your next development project. You will also get a sneak peek at what’s in store for Vulkan support in Nsight Visual Studio Edition!

Tristan Lorach (Manager of Devtech for Professional Visualization, NVIDIA)

Tristan Lorach discovered the world of computer graphics through his active contribution to the demoscene (Amiga). Since graduating in 1995, Tristan has developed a series of 3D real-time interactive installations for exhibitions and events. From the creation of specific engines to the conception of new 3D human interfaces for public events, Tristan has always wanted to mix new hardware technology with innovative and creative ideas. Tristan now works at NVIDIA as the manager of the "Devtech Proviz" team (Developer Technical Relations Department for Professional Visualization), participating in a variety of projects with NVIDIA partners while contributing to R&D and writing demos and tools for new GPUs.

Jeffrey Kiel (Manager of Graphics Tools, NVIDIA)

Jeff Kiel is the manager of Graphics Tools at NVIDIA. His responsibilities include development and oversight of graphics performance and debugging tools, including Nsight Visual Studio Edition and the Tegra Graphics Debugger. Previous projects at NVIDIA include PerfHUD and ShaderPerf. Before coming to NVIDIA, Jeff worked on PC and console games at Interactive Magic and Sinister Games/Ubisoft. Jeff has given presentations at many GDCs and SIGGRAPHs and contributed articles to graphics-related publications. His passion for the art started in the G-Lab at the University of North Carolina at Chapel Hill, where he received his B.S. in Mathematical Sciences.

SIG1614: VR: You Are Here
MONDAY, July 25 | ROOM #210D | 2:00pm - 3:00pm

In this “state of the union” survey, we will review the technology, the components, and the challenges of virtual reality. We’ll describe how GPUs fit into these challenges, and lay out NVIDIA Research’s vision for the future of VR.

David Luebke (Vice President of Graphics Research, NVIDIA)

David Luebke helped found NVIDIA Research in 2006 after eight years of teaching computer science on the faculty of the University of Virginia. David is currently Vice President of Graphics Research at NVIDIA. His personal research interests include virtual and augmented reality, display technology, ray tracing, and graphics architecture. His honors include the NVIDIA Distinguished Inventor award, the NSF CAREER and DOE Early Career PI awards, and the ACM Symposium on Interactive 3D Graphics "Test of Time Award." David has co-authored a book, a SIGGRAPH Electronic Theater piece, a major museum exhibit visited by over 110,000 people, an online course on parallel computing that has reached over 80,000 students, and dozens of papers, articles, chapters, and patents on computer graphics and GPU computing.

SIG1615: Overcoming Challenges for Virtual and Augmented Reality Display
MONDAY, July 25 | ROOM #210D | 3:15pm - 4:15pm

We'll describe work by NVIDIA Research and our partners on challenges common to all wearable VR and AR displays: (1) FOCUS: how do we put a display as close to the eye as a pair of eyeglasses, where the eye cannot bring it into focus? (2) FIELD OF VIEW: how do we fill the user's entire vision with displayed content? (3) RESOLUTION: how do we fill that wide field of view with enough pixels? A "brute force" display would require 10,000 x 8,000 pixels per eye! (4) BULK: displays should be vanishingly unobtrusive, as light and forgettable as a pair of sunglasses, but the laws of optics dictate that most VR displays today are bulky boxes bigger than ski goggles. We'll describe several "computational display" prototypes that sidestep these challenges by co-designing the optics, display, and rendering algorithm.

David Luebke (Vice President of Graphics Research, NVIDIA)

David Luebke helped found NVIDIA Research in 2006 after eight years of teaching computer science on the faculty of the University of Virginia. David is currently Vice President of Graphics Research at NVIDIA. His personal research interests include virtual and augmented reality, display technology, ray tracing, and graphics architecture. His honors include the NVIDIA Distinguished Inventor award, the NSF CAREER and DOE Early Career PI awards, and the ACM Symposium on Interactive 3D Graphics "Test of Time Award." David has co-authored a book, a SIGGRAPH Electronic Theater piece, a major museum exhibit visited by over 110,000 people, an online course on parallel computing that has reached over 80,000 students, and dozens of papers, articles, chapters, and patents on computer graphics and GPU computing.

SIG1616: Next-Gen Material Edition with Substance Designer Native MDL Visual UI and NVIDIA Iray®
MONDAY, July 25 | ROOM #210D | 4:30pm - 5:30pm

Allegorithmic has been integrating the Iray render engine to combine its expertise in procedural texture rendering (aka substances) with multi-layered MDL physically based materials and Iray, a GPU-accelerated unbiased ray tracer. With the latest Substance Designer release, you can now natively create your MDL and substance materials from scratch through the node-tree editor. Materials can then be exported to your preferred MDL-capable 3D software (Iray plug-ins for Maya, 3ds Max, and Rhino), enabling infinite capabilities for material rendering. The MDL editor addition to Allegorithmic Substance Designer will help solve artists' and developers' PBR material challenges, from creation and editing to final-frame rendering for artistic shots. During the session, actual industrial use cases will be showcased, including some of the work achieved on the Hyundai Genesis G380 interior and exterior design.

Sébastien Deguy (CEO, Allegorithmic)

Sebastien Deguy is the Founder of Allegorithmic SAS and serves as its Chief Executive Officer and President. Dr. Deguy has a computer science background with a specialization in mathematics, random processes, simulation, computer vision, and image synthesis. He is also a musician and an award-winning director and producer of traditional and animated short films.

Jerome Derel (Chief Product Officer, Allegorithmic)

Jerome Derel is an engineer and product designer who joined Allegorithmic in 2014 as chief product officer. He previously worked for seven years at Dassault Systemes as a visualization expert in the Design Studio and CATIA Design teams, leading projects producing high-quality virtual materials.

Pierre Mahuet (Product Manager & Senior Industrial Designer, Allegorithmic)

Pierre Mahuet has an industrial design background. After 8 years at Dassault Systemes as a CATIA Creative Design expert and portfolio manager, he joined Allegorithmic as product manager and senior industrial designer.

David Nikel (Digital Model Manager, Hyundai)

David Nikel joined Hyundai after 10 years as a modeler for General Motors and four years running his own independent company. He has been the Digital Model Manager at Hyundai USA since 2002.

SIG1617: Advanced Rendering Solutions from NVIDIA
MONDAY, July 25 | ROOM #210D | 5:45pm - 6:45pm

Come learn about NVIDIA’s latest rendering technologies powering the most popular 3D tools in the entertainment and design markets. The underlying offerings will be explained for those looking to add GPU acceleration and/or rendering to their own solutions, along with what cutting-edge solutions are accessible to 3D artists and designers.

Phil Miller (Senior Director, Advanced Rendering Products, NVIDIA)

Phillip Miller is senior director of NVIDIA's commercial advanced rendering offerings, ranging from Iray and mental ray, which ship within leading products in design and entertainment, to the IndeX technology used in large-data visualization. Phil has been with NVIDIA for 7 years and has led major software products for over 20 years, including the entertainment offerings at Autodesk and the web design product line at Adobe. He holds a Master of Architecture from the University of Illinois and is a registered architect.

SIG1618: Visualization Applications on NVIDIA DGX-1
TUESDAY, JULY 26 | BOOTH #509 | 10:00am - 10:25am

An introduction to the NVIDIA DGX-1 system, including a discussion of containerized applications for professional visualization and deep learning.

Charlie Boyle (Senior Director, DGX-1 Marketing, NVIDIA)

Charlie Boyle leads the product marketing efforts to drive worldwide adoption of NVIDIA’s DGX-1 system. Charlie brings a wealth of knowledge in the IT and service provider industry, having run marketing, engineering, and data center operations for some of the world's largest telcos and service providers.

SIG1619: IKEA: Exploring VR
TUESDAY, JULY 26 | BOOTH #509 | 10:30am - 10:55am

Epic’s Unreal Engine is not just for games. Simon Jones will detail how Epic's Enterprise division enables major players across verticals such as automotive, aerospace, data visualization, and virtual reality to design and deliver engaging user experiences that change the way they do business. IKEA will share how VR is changing kitchen design for its customers: the team created a digital library of products and assets from CAD models, then took the next step with a VR kitchen app, the IKEA VR Experience. They will also discuss the custom tools developed to speed up the development process.

Simon Jones (Enterprise Director, Epic)

Simon Jones has amassed over two decades of experience within the video games industry, and has spent the last three years focusing on visualization solutions within the automotive, industrial and aviation sectors. As a seasoned professional he joined Epic Games to head up their newly created Enterprise division at the end of 2015, and is currently building the team that will enable Unreal Engine to be the real-time visualization solution of choice within the enterprise sector.

Martin Enthed (Development & Operations IT Manager, IKEA)

Martin Enthed joined IKEA in 2007 to do 3D R&D at IKEA Communications AB in Älmhult, Sweden. In September 2011, he changed roles and is now Development and Operations IT Manager, managing a group of talented co-workers and developers to meet IKEA's business needs. VR/MR and 3D development remain a big part of his everyday work and passion.

SIG1620: VR - Not Just for Games!
TUESDAY, JULY 26 | BOOTH #509 | 11:00am - 11:25am

Epic’s Unreal Engine is not just for games. Simon Jones will detail how Epic's Enterprise division enables major players across verticals such as automotive, aerospace, data visualization, and virtual reality to design and deliver engaging user experiences that change the way they do business. We bring you Rewind.io, a creative agency and digital production studio based in London. They are specialists in virtual reality and the creative production behind unforgettable experiential events, such as the BBC's 'Home' VR spacewalk and the BJORG digital music VR video.

Simon Jones (Enterprise Director, Epic)

Simon Jones has amassed over two decades of experience within the video games industry, and has spent the last three years focusing on visualization solutions within the automotive, industrial and aviation sectors. As a seasoned professional he joined Epic Games to head up their newly created Enterprise division at the end of 2015, and is currently building the team that will enable Unreal Engine to be the real-time visualization solution of choice within the enterprise sector.

Solomon Rogers (CEO, Rewind.io)

Solomon Rogers founded REWIND, a creative production agency, in 2011 after growing demand for his professional work pulled him away from 15 years as a University Senior Lecturer in Digital Animation, Visual Effects & Emerging Technology. He started as one of the youngest University Senior Lecturers in the UK and helped write four new degrees, two masters programs, and a PhD. Since then, Sol has grown REWIND into an industry award-winning tribe of vibrant creative technologists and digital artists, focused on harnessing immersive technologies to deliver groundbreaking VR, AR, animation, DOOH, VFX, and 360-degree video projects for some of the world's largest companies, agencies, and brands.

SIG1621: Exclusively Using NVIDIA GPUs and Redshift 3D to Deliver the Next Wave of Original Content
TUESDAY, JULY 26 | BOOTH #509 | 11:30am - 11:55am

GPU rendering of final frames is beginning to make its way into mainstream production. Yurie will discuss the immediate benefits of building a GPU-exclusive pipeline, and how Guru Studio is choosing NVIDIA's technology to eventually achieve near real-time ray tracing. With the appetite for original content at an all-time high, Guru is re-imagining how it will meet the demands of broadcasters and push the quality of its work.

Yurie Rocha (Creative Director of Production, Guru Studios)

Yurie Rocha is a long-time veteran at Guru Studio, where he oversees production from a technical and creative perspective. He was CG supervisor on Justin Time and Paw Patrol, and is currently in production on a new Guru original and Netflix exclusive entitled "True and the Rainbow Kingdom."

SIG1622: Independence Day: Resurgence – Killer Queen
TUESDAY, JULY 26 | BOOTH #509 | 12:00pm - 12:25pm

We'll discuss Weta Digital's creation of the Alien Queen in this year's Independence Day: Resurgence, focusing on the big showdown with the Queen at Area 51. Matt will also cover some of the unique FX simulation work, new innovations in Weta's skydome lighting setup, and some of the techniques that allowed Weta to move to a large number of all-CG shots.

Matt Aitken (Visual Effects Supervisor, Weta Digital)

Matt Aitken has worked at Weta Digital since the early days of the company. Matt was Digital Models Supervisor on The Lord of the Rings trilogy, pre-production / R&D Supervisor for King Kong, and Visual FX Supervisor for Bridge to Terabithia. Recently Matt supervised visual effects on The Lovely Bones and District 9, as Digital Effects Supervisor and Visual Effects Supervisor respectively. He also worked on Weta Digital projects including Tintin and Avatar. Matt has a Bachelor of Science in Mathematics from Victoria University of Wellington and a Master of Science in Computer Graphics from Middlesex University, London.

SIG1623: Building VR Funhouse with UE4
TUESDAY, JULY 26 | BOOTH #509 | 12:30pm - 12:55pm

VR Funhouse is a midway carnival game built to bring a new level of immersion to VR by enhancing what you see, hear, and touch through a combination of great graphics, fully interactive audio, and simulated physics. Built in UE4, it incorporates several graphics technologies to simulate realistic hair, destruction, fire, and more. When it launched in July 2016, Lightspeed Studios open-sourced its blueprints and assets so that VR artists and content creators can take advantage of cutting-edge VR development. This talk will walk through the game from the producer's perspective and share how to build your own carnival game.

Victoria Rege (Global Alliances & Ecosystem Development, VR, NVIDIA)

Victoria Rege is focused on ecosystem development and alliances for virtual and augmented reality. Previously, she ran global product marketing and campaigns for NVIDIA GRID. She's passionate about graphics products and technologies, and the way they're changing how companies work, how students learn and how consumers play. Prior to joining NVIDIA, Victoria worked in the hedge fund industry, developing executive-level conferences and leading an association of COOs of hedge funds. Victoria has a B.A. in public relations and French from the University of Rhode Island. Most days you can find her on Twitter @Fleurdevie.

SIG1624: VR Multi GPU Acceleration Featuring Autodesk VRED
TUESDAY, JULY 26 | BOOTH #509 | 1:00pm - 1:25pm

The Hyundai Design Research and Autodesk VR teams present a virtual design review of the Hyundai N Vision 2025 Gran Turismo. Tobias France, Hyundai Designer, and Paul Schmucker, Autodesk Automotive SME, will demonstrate their multi-user car design review utilizing the HTC Vive and Autodesk VRED Pro software, powered by NVIDIA Quadro graphics.

Paul Schmucker (Subject Matter Expert, Autodesk)

As an Automotive SME, Paul helps win customers by interpreting business issues and formulating solutions to solve challenges. Paul has over 10 years’ experience as a successful industrial designer, working on projects including transportation/automotive, furniture, consumer electronics and sporting goods. Paul also co-hosts/produces an online show reviewing cars for Everyday Driver.

Tobias France (Hyundai Designer, Hyundai)

Tobias France is an automotive designer and future visionary who started his production design career styling cars for Ford Motor Company in Dearborn, Michigan. He lived six years in Japan, doing production and conceptual design for Mazda, where his design expertise contributed to the creation of both the “Nagare” and "Kodo" design philosophies. He currently resides in Irvine, California, visioning the future for Hyundai North America. He is the designer of the Hyundai N Vision 2025 Gran Turismo concept car, which debuted at the Frankfurt Auto Show last year and will make its virtual world-stage debut on Sony’s PlayStation 4 this November with the release of Polyphony Digital’s GT Sport.

SIG1625: Vulkan on NVIDIA: The Essentials
TUESDAY, JULY 26 | BOOTH #509 | 1:30pm - 1:55pm

NVIDIA is bringing the power of Vulkan to a range of platforms to extend the choice of APIs for developers. This rapid-fire session will cover the essentials of NVIDIA's Vulkan rollout across its product range – with insights to help you judge whether Vulkan is right for your next development project.

Tristan Lorach (Manager of Devtech for Professional Visualization Group, NVIDIA)

Tristan Lorach discovered the world of computer graphics through his active contribution to the demoscene (Amiga). Since graduating in 1995, Tristan has developed a series of 3D real-time interactive installations for exhibitions and events. From the creation of specific engines to the conception of new 3D human interfaces for public events, Tristan has always wanted to mix new hardware technology with innovative and creative ideas. Tristan now works at NVIDIA as the manager of the "Devtech Proviz" team (Developer Technical Relations Department for Professional Visualization), participating in a variety of projects with NVIDIA partners while contributing to R&D and writing demos and tools for new GPUs.

SIG1626: NVIDIA Mental Ray and Iray® Plug-ins: New Rendering Solutions
TUESDAY, JULY 26 | BOOTH #509 | 2:00pm - 2:25pm

Come learn about NVIDIA’s latest rendering product offerings for use in Maya, 3ds Max, Cinema 4D, and Rhino. Topics will include GPU production rendering, lighting simulation, VR production, cluster rendering, and options for outfitting a studio.

Phillip Miller (Director of Product Management, NVIDIA)

Phillip Miller is senior director of NVIDIA's commercial advanced rendering offerings, ranging from Iray and mental ray, which ship within leading products in design and entertainment, to the IndeX technology used in large-data visualization. Phil has been with NVIDIA for 7 years and has led major software products for over 20 years, including the entertainment offerings at Autodesk and the web design product line at Adobe. He holds a Master of Architecture from the University of Illinois and is a registered architect.

SIG1627: Video Processing and Deep Learning and the Importance of the GPU
TUESDAY, JULY 26 | BOOTH #509 | 2:30pm - 2:55pm

Deep learning and machine learning have enabled many new applications in image processing over the last few years. However, these technologies have not yet been widely used for practical video applications due to the heavy processing requirements and advanced capabilities needed. Vilynx has developed an advanced video solution that overcomes these hurdles, leveraging ML/DL technologies to provide next-generation products for top publishers and the YouTube market. Through the use of audience data, social network data, video contextual data, and video processing algorithms, Vilynx is able to a) automatically detect the best moments of any video, b) auto-tag the clips, and c) relate these tags to traffic and audience analytics, so that they match what people are looking for and saying about a particular topic. This allows content creators to effortlessly connect their videos with the right audience and amplify their message. During the tech talk we will cover how the Vilynx stack combines machine learning, video processing, and deep learning to enable a new video discovery and sharing experience, while showcasing the importance of GPU computing in optimizing the deployment of the technology.

Juan Carlos Riveiro (CEO, Vilynx)

Juan Carlos Riveiro is CEO and Co-Founder of Vilynx. He is a globally recognized expert in digital signal processing, data analysis, and machine learning, with more than 100 patents. Prior to Vilynx, he was CEO and co-founder of GIGLE Networks, a home networking startup acquired by Broadcom, and CTO and VP of R&D at DS2, which was acquired by Marvell.

SIG1628: Look Development in Real Time
TUESDAY, JULY 26 | BOOTH #509 | 3:00pm - 3:25pm

Pixar's next-generation look development tool, Flow, allows artists to quickly develop and visualize complex shader networks in order to create rich and compelling materials for film assets. Flow interactively displays images using RTP, our real time GPU ray tracer built on top of NVIDIA's OptiX toolkit and supporting our Universal Scene Description (USD). This enables us to match Pixar's RenderMan output by sharing our studio's lights and surfaces.

Jean-Daniel Nahmias (Technical Director, Pixar)

Jean-Daniel Nahmias received his B.Sc., M.Sc and Ph.D from University College London, specializing in virtual reality, augmented reality and computer vision. Before joining Pixar he spent most of his time optimizing algorithms to run quickly on GPUs. This included limited angle tomography reconstruction for breast cancer screening and stereo vision reconstruction. He joined Pixar as a global tech TD to work on productions and is currently developing real time lighting technologies.

Davide Pesare (Lead Software Engineer, Pixar)

Davide Pesare has been involved in the film and gaming industry for almost 15 years. His work spans shading, lighting and software development, as he contributed to projects at Animal Logic, MPC, and Pyro Studios. He now leads a software department at Pixar.

SIG1629: Production-Quality, Final-Frame Rendering on the GPU
TUESDAY, JULY 26 | BOOTH #509 | 3:30pm - 3:55pm

We'll discuss the latest features of Redshift, the GPU-accelerated renderer running on NVIDIA GPUs that is redefining the industry's perception of GPU final-frame rendering. A few examples of customer work will be demonstrated. This talk will be of interest both to industry professionals who want to learn more about GPU-accelerated production-quality rendering and to software developers interested in GPU-accelerated rendering.

Robert Slater (Vice President of Engineering, Redshift)

Robert Slater is a seasoned GPU software engineer and video game industry veteran with deep experience in, and passion for, graphics programming. As a programmer, Rob has worked for companies such as Electronic Arts, Acclaim, and Double Helix Games (now Amazon Games), where he was responsible for the core rendering technology at each studio, driving their creative and technical development. Rob's graphics engine programming experience and know-how ensure that Redshift is always at the forefront of new trends and advances in the industry.

SIG1630: NVIDIA Iray®: Changing the Face of Architecture and Design
TUESDAY, JULY 26 | BOOTH #509 | 4:00pm - 4:25pm

NVIDIA's Iray technology was a game changer in the design process for NVIDIA's new corporate campus. Gensler teamed up with developers at NVIDIA to integrate this technology into the process and accurately simulate how the design of the campus would look in the real world. This helped everyone understand how light and materials would behave in the 500,000-square-foot space. Being able to accurately compute how the massive amount of daylight coming into the space would respond to changes in the design provided invaluable feedback for the designers. The data that Iray visualized informed almost every design decision from start to finish.

Scott DeWoody (Firmwide Creative Media Manager, Gensler)

Scott DeWoody has always had an affinity for art and technology. After seeing the animation being done through computers, he knew he could combine the two. In 2007, he graduated from The Art Institute of Houston with a B.A. in media arts and animation. There, he focused on lighting and rendering techniques using 3ds Max software. Image quality and workflow are the top priorities in his work. He is constantly studying color theory, composition, and new ways to produce the best possible results. He has worked at Gensler for the past eight years as a visualization artist and manager. He has worked for numerous clients, including NVIDIA Corporation, ExxonMobil, Shell Oil Company, BP, City Center Las Vegas, and many more. He is exploring the new possibilities of architecture in the interactive space with gaming platforms, augmented reality, and virtual reality.

SIG1631: MDL Materials to GLSL Shaders: Theory and Practice
TUESDAY, JULY 26 | BOOTH #509 | 4:30pm - 4:55pm

Learn how you can map arbitrarily complex materials described with NVIDIA's Material Definition Language (MDL) onto sets of material-specific GLSL shaders using the MDL SDK. We use a skeleton of a general-purpose main shader per stage, in which a couple of pre-defined evaluation and sample functions are called. The bodies of those functions are composed of code snippets selected by the material analyzer. ESI has adopted this approach in its new rendering framework to showcase the power and flexibility of MDL. A demo will show the implementation results, with a focus on material re-use and sharing.
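
For readers unfamiliar with the snippet-composition approach described above, the following Python sketch illustrates the general idea of splicing per-material code snippets into a general-purpose GLSL shader skeleton. The skeleton, snippet table, and function names here are hypothetical illustrations, not the MDL SDK's actual API.

```python
# Minimal sketch of composing a material-specific GLSL shader from
# pre-analyzed code snippets. Snippet names and shader contents are
# invented for illustration.

SHADER_SKELETON = """#version 450
in vec3 v_normal;
in vec3 v_view;
out vec4 fragColor;

// Body below is filled in per material by the (hypothetical) analyzer.
vec3 mdl_evaluate(vec3 n, vec3 v) {{
{evaluate_body}
}}

void main() {{
    fragColor = vec4(mdl_evaluate(normalize(v_normal),
                                  normalize(v_view)), 1.0);
}}
"""

# Hypothetical snippets a material analyzer might select per material.
SNIPPETS = {
    "diffuse": "    return vec3(0.8) * max(dot(n, v), 0.0);",
    "mirror":  "    return vec3(1.0);  // environment lookup in practice",
}

def compose_shader(material_kind: str) -> str:
    """Splice the selected snippet into the general-purpose skeleton."""
    return SHADER_SKELETON.format(evaluate_body=SNIPPETS[material_kind])

if __name__ == "__main__":
    print(compose_shader("diffuse"))
```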

Andreas Mank (Team Leader Software Development, ESI Group)

Andreas Mank leads the visualization team in the Immersive Experience business unit, where he's responsible for driving advances in visualization technologies and delivering state-of-the-art, high-performance immersive engineering visualization as well as advanced, high-quality rendering with ESI software products. Andreas studied media computer science at the University of Applied Sciences in Wedel, Germany. He has over 10 years of experience in virtual reality-related software development, and in recent years has worked as a team leader in research and development.

Andreas Suessenbach (Senior DevTech Engineer, NVIDIA)

Andreas Suessenbach is a senior DevTech engineer in NVIDIA's Professional Solutions Group, where he works to help different ISVs improve their GPU-related implementations. He has more than 15 years of experience in scene graph and rendering technologies, with emphasis on efficient handling of geometries and materials. He has a diploma in mathematics with a focus on numerical mathematics and CAGD.

SIG1632: Cutting Edge Tools and Techniques for Real-Time Rendering with NVIDIA GameWorks
TUESDAY, JULY 26 | BOOTH #509 | 5:00pm - 5:25pm

The GameWorks program gets the best tools and techniques into the hands of game developers everywhere. Increasingly, these tools are for film, VR, simulation, and other demanding applications as well as AAA games. Attend this talk to gain a broad overview of our technologies and how you can use them in your project. Highlights include real-time volumetric lighting, voxel-based ambient occlusion, and voxel-based physics simulation, as well as tools like our class-leading graphics debugger and native Android development tools.

David Coombes (Developer Marketing Manager, GameWorks, NVIDIA)

David Coombes worked at Sony on PlayStation products for many years and now helps NVIDIA get awesome technology into the hands of developers everywhere.

SIG1633: Give Life to your 3D Art with MDL and NVIDIA Iray® in Substance Painter
TUESDAY, JULY 26 | BOOTH #509 | 5:30pm - 6:00pm

Allegorithmic and NVIDIA will show how combining Substance, the worldwide reference for procedural textures; MDL, the new standard for defining multi-layer materials; and NVIDIA Iray, a GPU-accelerated unbiased ray tracer, helps solve artists' and developers' PBR material challenges, from editing to final-frame rendering of artistic shots. After explaining MDL basics and the associated material workflow in Substance Designer, we will showcase the latest edition of Substance Painter, the market's most innovative real-time 3D painting software. Now embedding Iray as an alternate viewport, Substance Painter fully leverages the power of MDL and Substance, natively enhancing your art with the most advanced rendering quality at minimal compute time thanks to GPU acceleration.

Manuel Kraemer (Senior Developer Technology Engineer, NVIDIA)

Manuel Kraemer is a Senior Developer Technology Engineer at NVIDIA. Previously Manuel was a Graphics Software Engineer at Pixar Animation Studios. Prior to that, Manuel worked as a technical director at Disney Feature Animation, Double Negative and the BBC.

Jérémie Noguer (Senior Product Manager, Allegorithmic)

Jérémie Noguer comes from a game development background. After seven years as a Technical Artist at Allegorithmic, he has served as the Senior Product Manager for Substance Painter since 2013.

SIG1634: A New Reality with Iray VR
WEDNESDAY, JULY 27 | BOOTH #509 | 10:00am - 10:25am

Iray VR brings photographic realism to an interactive VR experience. Learn how you can leverage this new capability in your own work and with the industry leading 3D tools you already use.

Phillip Miller (Director of Applied Engineering, NVIDIA)

Phillip Miller is senior director of NVIDIA's commercial advanced rendering offerings, ranging from the Iray and mental ray renderers shipping within leading design and entertainment products to the IndeX technology used in large-data visualization. Phil has been with NVIDIA for 7 years and has led major software products for over 20 years, including the entertainment offerings at Autodesk and the web design product line at Adobe. He holds a Master of Architecture from the University of Illinois and is a registered architect.

SIG1635: Leveraging Microsoft Azure's GPU N-Series for Compute Workloads and Visualization
WEDNESDAY, JULY 27 | BOOTH #509 | 10:30am - 10:55am

This talk will cover the recently announced state-of-the-art GPU visualization and compute infrastructure in Microsoft's Azure cloud, and how you can leverage it for rendering, encoding, visualization, and dynamic creation of assets. This session is aimed at anyone who would like to learn how to utilize Azure in their production pipelines.

Karan Batta (Program Manager, Microsoft)

Karan Batta is a program manager on the Big Compute/HPC team in Microsoft Azure, where he leads the vision and deployment of the new Azure GPU N-Series as part of the broader Azure Compute IaaS capabilities. Additionally, he leads the media and entertainment vertical solutions as part of the Azure Batch HPC service.

SIG1636: WetBrush: GPU-Based 3D Painting Simulation at the Bristle Level
WEDNESDAY, JULY 27 | BOOTH #509 | 11:00am - 11:25am

We built a real-time oil painting system that simulates the physical interactions among brush, paint, and canvas at the bristle level entirely using CUDA. To simulate sub-pixel paint details given the limited computational resource, we propose to define paint liquid in a hybrid fashion: the liquid close to the brush is modeled by particles, and the liquid away from the brush is modeled by a density field. Based on this representation, we develop a variety of techniques to ensure the performance and robustness of our simulator under large time steps, including brush and particle simulations in non-inertial frames, a fixed-point method for accelerating Jacobi iterations, and a new Eulerian-Lagrangian approach for simulating detailed liquid effects.
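
As background for the solver discussion above, here is a minimal NumPy sketch of plain Jacobi iteration for a 2D pressure Poisson equation, the generic baseline that the talk's fixed-point method accelerates. The acceleration itself and the bristle-level brush model are not reproduced here.

```python
import numpy as np

def jacobi_poisson(div, iters=64):
    """Plain Jacobi iterations for the 2D Poisson equation lap(p) = div
    with zero Dirichlet boundaries and unit grid spacing. This is the
    generic baseline; the paper's fixed-point acceleration is not shown."""
    p = np.zeros_like(div)
    for _ in range(iters):
        p_new = np.zeros_like(p)
        # Each interior cell becomes the average of its neighbors,
        # minus the divergence source term.
        p_new[1:-1, 1:-1] = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                    p[1:-1, :-2] + p[1:-1, 2:] -
                                    div[1:-1, 1:-1])
        p = p_new
    return p

# Toy divergence field standing in for the liquid simulation's right-hand side.
div = np.zeros((64, 64))
div[32, 32] = 1.0
pressure = jacobi_poisson(div)
```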

Zhili Chen (3D Graphics Researcher, ADOBE)

Zhili Chen is a 3D graphics researcher at Adobe. He received his Ph.D. in Computer Science from Ohio State University in 2015. His research interests include physically based simulation, real-time graphics, 3D reconstruction, and virtual reality.

Chris Hebert (Developer Technology Software Engineer, NVIDIA)

Chris Hebert has worked with real-time rendering and data visualization for 20 years across the games and pro-vis industries. He has worked on algorithm development for path rendering, real-time ray tracing, and fluid simulation. Chris joined NVIDIA in March 2015 and now specializes in rendering optimization for 2D/3D graphics and compute.

SIG1637: Visualization Applications on NVIDIA DGX-1
WEDNESDAY, JULY 27 | BOOTH #509 | 11:30am - 11:55am

An introduction to the NVIDIA DGX-1 system, including a discussion of containerized applications for professional visualization and deep learning.

Charlie Boyle (Senior Director, DGX-1 Marketing, NVIDIA)

Charlie Boyle leads the product marketing efforts to drive worldwide adoption of NVIDIA's DGX-1 system. Charlie brings a wealth of knowledge in the IT and service provider industry, having run marketing, engineering, and data center operations for some of the world's largest telcos and service providers.

SIG1638: Rendering Lost Historical Buildings with NVIDIA Technology
WEDNESDAY, JULY 27 | BOOTH #509 | 12:00pm - 12:25pm

An exploration of Project Soane, an effort to create a digital model and photorealistic renderings of the Bank of England from the 1800s, and the latest NVIDIA software and graphics hardware used for interactive physically based rendering.

Andrew Rink (Global Marketing Strategy , NVIDIA)

Andrew Rink is responsible for global marketing strategy for AEC and Manufacturing Industries at NVIDIA. With wide-ranging international experience in a variety of industries including CAD and animation software, lasers and photonic power, Andrew has been bringing innovative technology to market for over 25 years. Based at NVIDIA’s Silicon Valley headquarters, he has travelled to over 80 countries and is fluent in three languages.

SIG1639: Learning Representations for Automatic Colorization
WEDNESDAY, JULY 27 | BOOTH #509 | 12:30pm - 12:55pm

We developed a fully automatic image colorization system. Our approach leverages recent advances in deep networks, exploiting both low-level and semantic representations during colorization. As many scene elements naturally appear according to multimodal color distributions, we train our model to predict per-pixel color histograms. This intermediate output can be used to automatically generate a color image, or further manipulated prior to image formation; our experiments consider both scenarios. On both fully and partially automatic colorization tasks, our system significantly outperforms all existing methods.
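
To make the per-pixel histogram idea concrete, here is a small NumPy sketch that converts predicted color histograms into per-pixel values, simplified to a hue-only histogram resolved with a circular mean. The actual system predicts richer distributions and supports further manipulation prior to image formation.

```python
import numpy as np

# Sketch: turn per-pixel hue histograms into a hue image.
# Sizes and the random logits are stand-ins for real network output.
H, W, BINS = 4, 4, 32
hue_bins = (np.arange(BINS) + 0.5) / BINS             # bin centers in [0, 1)
logits = np.random.randn(H, W, BINS)                  # fake network output
probs = np.exp(logits) / np.exp(logits).sum(-1, keepdims=True)  # softmax

# Hue wraps around, so take the expectation on the circle rather than
# a naive linear mean over bin centers.
angles = 2 * np.pi * hue_bins
mean_x = (probs * np.cos(angles)).sum(-1)
mean_y = (probs * np.sin(angles)).sum(-1)
hue = (np.arctan2(mean_y, mean_x) / (2 * np.pi)) % 1.0  # (H, W) hue image
print(hue.shape)
```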

Gustav Larsson (Ph.D. Student, University of Chicago)

Gustav Larsson is a fifth-year Ph.D. student at the University of Chicago, working with Greg Shakhnarovich and Michael Maire at the Toyota Technological Institute at Chicago. His research interests include neural network architectures, unsupervised and semi-supervised learning, and novel applications of deep learning for computer vision.

SIG1640: The Audi VR Experience - A Look into the Future of Digital Retail
WEDNESDAY, JULY 27 | BOOTH #509 | 1:00pm – 1:25pm

After delivering the pinnacle of commercial VR earlier this year, Audi and strategic partner ZeroLight unveil the next generation in online retail. Hailed by Jalopnik as "the best car configurator on the internet" and winner of the Techies 2016 Cloud Technology Award, Audi's new 3D web solution utilizes revolutionary techniques to deliver a self-repairing, extremely stable, and responsive cloud configurator. Born as a response to changing consumer behavior, with 96% of research conducted online, Audi will address the importance of introducing a highly advanced, immersive, and engaging proposition for Audi customers to ensure they receive the premium-quality experience synonymous with the four rings. The discussion will include insight into the development challenges faced, and how the solution combines with the automated omnichannel to form a cohesive, interactive customer journey.

Thomas Orenz (Team Leader Virtual Reality and Sales Content, Audi, AG)

Thomas Orenz is responsible for content innovation at AUDI AG and for visual quality around POS systems like Audi City, using game engine technology such as Unity 5 together with ZeroLight to push the limits to the next level. Thomas studied product design at Burg Giebichenstein University of Art and Design Halle.

François de Bodinat (CMO, ZeroLight)

François de Bodinat has 15 years of experience in the commercial visualization industry. His tenure as CMO at ZeroLight was preceded by roles at Amazon, RTT, and Dassault Systèmes. His expertise is underpinned by an M.Sc. in 3D software engineering and an MBA from the renowned INSEAD business school.

SIG1641: Face2Face: Real-time Face Capture and Reenactment
WEDNESDAY, JULY 27 | BOOTH #509 | 1:30pm - 1:55pm

We present a novel approach for real-time facial reenactment of a monocular target-video sequence, where the goal is to animate the facial expressions of the target video with a source actor and re-render the manipulated output video in a photo-realistic fashion.
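
The following NumPy sketch illustrates the general parametric-face idea behind expression transfer: reconstruct the target face with its own identity coefficients but the source actor's expression coefficients. All dimensions and bases below are made-up stand-ins; the real method also estimates pose, illumination, and the mouth interior from video.

```python
import numpy as np

# Illustrative parametric face model: mean shape plus identity and
# expression offsets. Reenactment = target identity + source expression.
V, ID_DIMS, EXPR_DIMS = 5000, 80, 76   # illustrative sizes only
rng = np.random.default_rng(0)
mean_face = rng.standard_normal((V, 3))
id_basis = rng.standard_normal((V, 3, ID_DIMS))
expr_basis = rng.standard_normal((V, 3, EXPR_DIMS))

def face(alpha, delta):
    """Reconstruct vertices from identity (alpha) and expression (delta)."""
    return mean_face + id_basis @ alpha + expr_basis @ delta

alpha_target = rng.standard_normal(ID_DIMS)    # target actor's identity
delta_source = rng.standard_normal(EXPR_DIMS)  # source expression, per frame
reenacted = face(alpha_target, delta_source)   # target face, source expression
print(reenacted.shape)  # (5000, 3)
```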

Justus Thies (Ph.D.Student, University of Erlangen-Nuremberg)

Justus Thies is a PhD student at the University of Erlangen-Nuremberg. His research focuses on real-time facial performance capturing and expression transfer using commodity hardware. He is interested in Computer Vision and Computer Graphics, as well as in efficient implementations of optimization techniques, especially on graphics hardware.

Matthias Niessner (Assistant Professor, Stanford University)

Matthias Niessner is a visiting assistant professor at Stanford University. Prior to his appointment at Stanford, he earned his PhD from the University of Erlangen-Nuremberg, Germany, under the supervision of Günther Greiner. His research focuses on several fields of computer graphics and computer vision, including the reconstruction and semantic understanding of 3D scene environments.

SIG1642: The Technology Powering the Immersive Cinema Experiences from Lucasfilm's ILMxLAB
WEDNESDAY, JULY 27 | BOOTH #509 | 2:00pm - 2:25pm

Bringing cinematic virtual reality to life requires the kind of tight collaboration between technical and creative forces that Lucasfilm has thrived on for over 40 years. We'll dive deep into the technology that powers the creative and technical experiments underway at the studio. We will discuss how multiple GPUs collaborate to achieve the highest level of photorealism in virtual reality today, how to repurpose offline-rendered, movie-quality assets for real-time rendering in under 11 milliseconds per frame, and some of the lessons learned along the way.

Lutz Latta (Principal Engineer, Lucasfilm)

Lutz Latta is the Principal Engineer of ILMxLAB, where he leads the technology development for Lucasfilm's innovative VR, AR, and immersive experiences. Previously he worked extensively on video games, as the Lead Graphics Engineer of Star Wars: 1313 at LucasArts, and on The Lord of the Rings and Command & Conquer games at Electronic Arts Los Angeles.

SIG1643: Introducing NVIDIA® GVDB Sparse Volumes
WEDNESDAY, JULY 27 | BOOTH #509 | 2:30pm - 2:55pm

We introduce GVDB Sparse Volumes, a new offering in NVIDIA DesignWorks focused on high-quality ray tracing of sparse volumetric data for motion pictures. Based on the VDB topology of Museth, with a novel GPU-based data structure and API, GVDB is designed for efficient compute and ray tracing on a sparse hierarchy of grids. Ray tracing on the GPU is accelerated with indexed memory pooling, 3D texture atlas storage, and a new hierarchical traversal algorithm. GVDB integrates with NVIDIA OptiX and is being developed as an open source library as part of DesignWorks.
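
As a rough intuition for sparse volume storage, the toy Python class below keeps a map of occupied bricks, each a small dense grid, so empty space costs nothing. It is only a sketch of the general VDB-style idea; GVDB's actual structure adds a deeper hierarchy, GPU memory pooling, and texture-atlas storage.

```python
import numpy as np

BRICK = 8  # brick edge length; illustrative, not GVDB's configuration

class SparseVolume:
    """Toy two-level sparse volume: hash map of dense BRICK^3 bricks."""
    def __init__(self):
        self.bricks = {}  # (bx, by, bz) -> dense float32 brick

    def set(self, x, y, z, value):
        key = (x // BRICK, y // BRICK, z // BRICK)
        brick = self.bricks.setdefault(key, np.zeros((BRICK,) * 3, np.float32))
        brick[x % BRICK, y % BRICK, z % BRICK] = value

    def get(self, x, y, z):
        brick = self.bricks.get((x // BRICK, y // BRICK, z // BRICK))
        return 0.0 if brick is None else brick[x % BRICK, y % BRICK, z % BRICK]

vol = SparseVolume()
vol.set(1000, 2000, 3000, 0.5)   # only one 8^3 brick is ever allocated
assert vol.get(1000, 2000, 3000) == 0.5
print(len(vol.bricks), "brick(s) allocated")
```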

Rama Hoetzlein (Research Engineer, NVIDIA)

Rama Hoetzlein works in the areas of computer graphics and media arts. His thesis at Cornell University in 2001 focused on robotic and mechanical sculpture. From 2001 to 2004, he co-founded the Game Design Initiative at Cornell. In 2010, Rama’s dissertation at the University of California Santa Barbara explored creative support tools for procedural modeling. His current work investigates developer technologies and graphics research on sparse volumes at NVIDIA.

Ken Museth

Description Coming Soon!

SIG1644: Light Field Rendering and Streaming for VR & AR
WEDNESDAY, JULY 27 | BOOTH #509 | 3:00pm - 3:25pm

We will discuss OTOY's cutting edge light field rendering toolset and platform. OTOY's light field rendering technology allows for immersive experiences on mobile HMDs and next gen displays, ideal for VR and AR. OTOY is actively developing a groundbreaking light field rendering pipeline, including the world's first portable 360 LightStage capture system and a cloud-based graphics platform for creating and streaming light field media for virtual reality and emerging holographic displays.

Jules Urbach (CEO & Founder, OTOY Inc.)

Jules Urbach is a pioneer in computer graphics, streaming and 3D rendering with over 25 years of industry experience. He made his first game, Hell Cab (Time Warner Interactive) at age 18, which was one of the first CD-ROM games ever created. Six years after Hell Cab, Jules founded Groove Alliance. Groove created the first 3D game ever available on Shockwave.com (Real Pool). Currently, Jules is busy working on his two latest ventures, OTOY and LightStage which aim to revolutionize 3D content capture, creation and delivery.

SIG1645: Large Scale Video Processing for VR
WEDNESDAY, JULY 27 | BOOTH #509 | 3:30pm - 3:55pm

Jaunt VR has developed a GPU-based, large-scale video processing platform to combine multiple HD camera streams in a radial configuration into seamlessly stitched stereoscopic spherical panoramas. The approach uses complex computational photography algorithms that require sharded processing of the data across hundreds of cloud-based GPU instances.
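
A minimal sketch of the sharding idea, assuming made-up shard sizes and overlaps: frame ranges are split across workers with a small overlap so that temporally windowed operations still have context at shard boundaries.

```python
# Sketch of splitting a long stitching job into overlapping frame-range
# shards for independent GPU workers. Shard size and overlap are invented
# for illustration and are not Jaunt's actual parameters.
def make_shards(num_frames, shard_size=300, overlap=8):
    shards = []
    start = 0
    while start < num_frames:
        end = min(start + shard_size, num_frames)
        # Pad each shard so temporal filters have context at the seams.
        shards.append((max(0, start - overlap), min(num_frames, end + overlap)))
        start = end
    return shards

# e.g. a 10-minute, 60 fps capture fanned out across many GPU instances
print(len(make_shards(10 * 60 * 60)))  # -> 120 shards
```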

Daniel Kopeinigg (Principal Engineer, Jaunt VR)

Daniel Kopeinigg is a Principal Engineer and the Head of Computational Photography at Jaunt VR. Prior to Jaunt, he worked at Intel on their mobile imaging pipeline. Daniel has expertise in medical imaging, computer vision, light fields, computational imaging, and camera hardware. He is originally from Austria, has a master's degree in signal processing and computer science from the University of Technology, Graz, and holds a PhD in Electrical Engineering from Stanford University.

SIG1646: Digital Actors at MPC: Bridging the Uncanny Valley with GPU Technology
WEDNESDAY, JULY 27 | BOOTH #509 | 4:00pm - 4:25pm

Discover the next generation of GPU-enabled facial rigs for digital actors at MPC. Through a mixed approach of linear deformers and non-linear analysis, MPC aims to improve the performance and appearance of its digital actors and advance the state of the art in the visual effects industry. You'll learn from industry experts how MPC is using the latest Fabric Engine technology to ease the transition to GPUs, enabling fast drawing of characters and fast parallel computation of deformers in CUDA.
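
For readers new to "linear deformers," the sketch below shows minimal linear blend skinning in NumPy, the kind of per-vertex computation that parallelizes naturally on the GPU (for example, one vertex per CUDA thread). The bone transforms and weights are invented for the example; this is not MPC's rig.

```python
import numpy as np

# Minimal linear blend skinning: each vertex is a weighted blend of its
# position under each bone's transform. Sizes and values are made up.
V, B = 6, 2
rng = np.random.default_rng(2)
rest = rng.random((V, 3))                        # rest-pose vertices
weights = rng.random((V, B))
weights /= weights.sum(1, keepdims=True)         # weights sum to 1 per vertex

# Per-bone 3x4 transforms (rotation | translation): identity plus a shift.
bones = np.tile(np.eye(3, 4), (B, 1, 1))
bones[1, :, 3] = [0.0, 1.0, 0.0]                 # bone 1 translates up

rest_h = np.concatenate([rest, np.ones((V, 1))], axis=1)  # homogeneous coords
per_bone = np.einsum('bij,vj->vbi', bones, rest_h)        # (V, B, 3)
skinned = np.einsum('vb,vbi->vi', weights, per_bone)      # weighted blend
print(skinned.shape)  # (6, 3)
```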

Damien Fagnou (CTO, MPC)

Damien Fagnou is the CTO at MPC, where he brings together his expertise in software and production to evolve and refine the creation processes across all feature film VFX work. After finishing university with an M.S. in computer science in France, he worked for an animated series implementing the technology to speed up the motion capture pipeline and rendering. He later accepted a job to help set up the workflow at Attitude studios and then took on the role of Tools and Workflow Programmer at Climax in the U.K. In 2003, he transferred his skills to the film industry and started at leading VFX post-production studio MPC to work on Troy, implementing preview tools and city rendering scripts. In 2005, Damien became R&D lead on Charlie and the Chocolate Factory, 10,000 BC, and Narnia. He then moved closer to production and became MPC's stereographer working on movies, including Pirates of the Caribbean: On Stranger Tides, the Harry Potter films, and Prometheus. After a few years in production, he returned to his software roots and became global head of software overseeing software development efforts across the company.

SIG1647: Look Development in Real Time
WEDNESDAY, JULY 27 | BOOTH #509 | 4:30pm - 4:55pm

Pixar's next-generation look development tool, Flow, allows artists to quickly develop and visualize complex shader networks in order to create rich and compelling materials for film assets. Flow interactively displays images using RTP, our real time GPU ray tracer built on top of NVIDIA's OptiX toolkit and supporting our Universal Scene Description (USD). This enables us to match Pixar's RenderMan output by sharing our studio's lights and surfaces.

Jean-Daniel Nahmias (Technical Director, Pixar)

Jean-Daniel Nahmias received his B.Sc., M.Sc and Ph.D from University College London, specializing in virtual reality, augmented reality and computer vision. Before joining Pixar he spent most of his time optimizing algorithms to run quickly on GPUs. This included limited angle tomography reconstruction for breast cancer screening and stereo vision reconstruction. He joined Pixar as a global tech TD to work on productions and is currently developing real time lighting technologies.

Davide Pesare (Lead Software Engineer, Tools Shading, Pixar)

Davide Pesare has been involved in the film and gaming industry for almost 15 years. His work spans shading, lighting and software development, as he contributed to projects at Animal Logic, MPC, and Pyro Studios. He now leads a software department at Pixar.

SIG1648: Virtual Reality Rendering Features of NVIDIA GPUs
WEDNESDAY, JULY 27 | BOOTH #509 | 5:00pm - 5:25pm

Come and learn about the virtual reality (VR) rendering features of NVIDIA’s latest GeForce, Quadro, and Tegra GPUs. Geared for a general audience, this talk visually explains the VR rendering process and how NVIDIA GPUs with support for Simultaneous Multi-Projection can render VR more efficiently and at higher quality than conventional VR rendering techniques.
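
The illustrative NumPy sketch below shows one reason conventional single-projection rendering wastes work in VR: under the HMD's distortion warp, a uniformly shaded render target ends up oversampled toward the periphery. The distortion model and coefficients are invented for illustration and do not describe any particular headset or NVIDIA's implementation.

```python
import numpy as np

# Illustrative radial distortion factor; k1/k2 are made-up coefficients.
def barrel_scale(r, k1=0.22, k2=0.24):
    """Approximate distortion factor at normalized image radius r."""
    return 1.0 + k1 * r**2 + k2 * r**4

r = np.linspace(0.0, 1.0, 5)
# Local sample-density ratio ~ d(r * scale)/dr. Values above 1 mean more
# shaded pixels than the warped output needs at that radius, i.e. wasted
# shading that techniques like multi-resolution shading can reclaim.
density = np.gradient(r * barrel_scale(r), r)
for ri, d in zip(r, density):
    print(f"radius {ri:.2f}: ~{d:.2f}x shading density")
```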

Mark Kilgard (Principal Software Engineer, NVIDIA)

Mark Kilgard is a principal system software engineer at NVIDIA working on OpenGL, vector graphics, web page rendering, and GPU-rendering algorithms. Mark has 25 years' experience with OpenGL, including the specification of numerous important OpenGL extensions, and he implemented the OpenGL Utility Toolkit (GLUT) library. Mark has authored two books and is named on over 50 graphics-related patents.

SIG1649: Giant VR - A Sundance Movie
WEDNESDAY, JULY 27 | BOOTH #509 | 5:30pm - 6:00pm

Trapped in an active war-zone, two parents struggle to distract their young daughter by inventing a fantastical tale. Inspired by real events, this immersive virtual-reality experience, which mixes both game engine and live-action video, transports the viewer into the family's makeshift basement shelter. Giant had its world premiere at 2016 Sundance Film Festival New Frontier and its European premiere at Cannes Film Festival where it garnered strong emotional responses from both the general public and the press. Come learn about the making of this cinematic virtual reality experience from its creators Milica Zec and Winslow Porter.

Milica Zec (Director, Editor, Screen Writer, Giant VR)

Milica Zec is a New York City-based film and virtual reality director, editor, and screenwriter. Her directorial debut in the virtual reality medium was a short narrative piece called, “Giant,” which premiered at Sundance Film Festival New Frontier 2016. Since its premiere, “Giant” has been lauded as a seminal expression of the potential of virtual reality as a storytelling vehicle.

Winslow Porter (Producer, co-Creator, Giant)

Winslow Turner Porter III (Producer, Co-Creator of "Giant") is a Brooklyn-based director, producer, and creative technologist specializing in virtual reality and large-scale immersive experiential installations. Winslow has always been fascinated with how the intersection of art and technology can elevate storytelling. He started out as a feature film editor but pivoted into art/tech after graduating from NYU Tisch's Interactive Telecommunications Program (ITP) in 2010. With over six years of experiential work under his belt, he has helped create interactive art experiences for Google, Delta, Diesel, Merrell, and Wired, to name a few. Winslow also produced the Tribeca Film Festival Transmedia Award-winning documentary CLOUDS, among other acclaimed new media projects. Since 2015, Winslow has been a member of NEW INC, the New Museum's art, technology, and design incubator program. Giant is Winslow's fifth virtual reality project.

SIG1650: Visualization Applications on NVIDIA DGX-1
THURSDAY, JULY 28 | BOOTH #509 | 10:00am - 10:25am

An introduction to the NVIDIA DGX-1 system, including a discussion of containerized applications for professional visualization and deep learning.

Deepti Jain (Senior Applied Engineer, NVIDIA)

Deepti Jain is a senior applied engineer at NVIDIA. Her focus is deep learning and data analytics on DGX-1. She is very passionate about GPU technology and its practical applications across industries and verticals.

SIG1651: Independence Day: Resurgence - Killer Queen
THURSDAY, JULY 28 | BOOTH #509 | 10:30am - 10:55am

We'll discuss Weta Digital's creation of the Alien Queen in this year's Independence Day: Resurgence, focusing on the big showdown with the Queen at Area 51. Matt will also cover some of the unique FX simulation work, new innovations in Weta's skydome lighting setup, and some of the techniques that allowed Weta to move to a large number of all-CG shots.

Matt Aitken (Visual Effects Supervisor, Weta Digital)

Matt Aitken has worked at Weta Digital since the early days of the company. He was Digital Models Supervisor on The Lord of the Rings trilogy, pre-production/R&D supervisor for King Kong, and CG Supervisor for Avatar. Matt's credits as a Visual Effects Supervisor include Steven Spielberg's The Adventures of Tintin, Peter Jackson's King Kong 360 3-D attraction at Universal Studios Hollywood, The Lovely Bones, District 9 (which earned him an Oscar nomination for Best Visual Effects), The Hobbit: An Unexpected Journey, Iron Man 3, The Hobbit: The Desolation of Smaug, and The Hobbit: The Battle of the Five Armies. Matt recently completed work on Independence Day: Resurgence.

SIG1652: NUKE Studio for Film Pipelines
THURSDAY, JULY 28 | BOOTH #509 | 11:00am - 11:25am

In this demo, NUKE STUDIO Product Manager Juan Salazar looks at how NUKE STUDIO fits into a film pipeline by assuming the roles of a supervisor, a 2D lead and a NUKE artist. Follow the entire collaborative process, from the supervisor setting up projects, ingesting media, annotating shots and exporting assets; to the lead visualizing sequences with timeline effects and quick compositing; to the artist creating the final composite using the advanced features in NUKE 10; and finally to review and finishing back in NUKE STUDIO’s timeline.

Juan Salazar (NUKE Studio Product Manager, The Foundry)

As NUKE STUDIO Product Manager, Juan is currently guiding the development and design of NUKE STUDIO. He spent the last three years as a NUKE Workflow Specialist and Creative Specialist at The Foundry, training and consulting for companies around the world on how to use and implement The Foundry's software in their pipelines. Juan has over 10 years of experience working in Colombia, the USA, and the UK as an animation director, compositor, online editor, colorist, and VFX supervisor at companies including The Mill. While working in London, he also taught NUKE and Maya at the NFTS and Escape Studios. Juan has a Bachelor of Arts in Computer Animation, Media Arts and VFX from the Art Institute of Fort Lauderdale, USA, and a Master of Arts in Film and Television Digital Post Production from the National Film and Television School in London.

SIG1653: Look Development in Real Time
THURSDAY, JULY 28 | BOOTH #509 | 11:30am - 11:55am

Pixar's next-generation look development tool, Flow, allows artists to quickly develop and visualize complex shader networks in order to create rich and compelling materials for film assets. Flow interactively displays images using RTP, our real time GPU ray tracer built on top of NVIDIA's OptiX toolkit and supporting our Universal Scene Description (USD). This enables us to match Pixar's RenderMan output by sharing our studio's lights and surfaces.

Jean-Daniel Nahmias (Technical Director, Pixar)

Jean-Daniel Nahmias received his B.Sc., M.Sc and Ph.D from University College London, specializing in virtual reality, augmented reality and computer vision. Before joining Pixar he spent most of his time optimizing algorithms to run quickly on GPUs. This included limited angle tomography reconstruction for breast cancer screening and stereo vision reconstruction. He joined Pixar as a global tech TD to work on productions and is currently developing real time lighting technologies.

Davide Pesare (Lead Software Engineer, Tools Shading, Pixar)

Davide Pesare has been involved in the film and gaming industry for almost 15 years. His work spans shading, lighting and software development, as he contributed to projects at Animal Logic, MPC, and Pyro Studios. He now leads a software department at Pixar.

SIG1654: Processing VR Video in the Cloud
THURSDAY, JULY 28 | BOOTH #509 | 12:00pm - 12:25pm

At Pixvana, we are designing a platform for XR storytelling that enables new experiences for augmented or virtual reality systems. The media processing system for our new platform is based in the cloud, allowing us to create accelerated processes for delivering high quality VR video. We will share insights on building around GPU-accelerated cloud infrastructure for both batch and interactive systems along with details about our cloud processing system for video transformation and encoding that dramatically improves the quality of VR video streaming.
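
As one generic illustration of the kind of VR-aware streaming optimization described above, the sketch below allocates bitrate across panorama tiles by angular distance from the viewer's heading. The tile layout and all numbers are invented; this is not Pixvana's actual scheme.

```python
import numpy as np

# Generic viewport-adaptive streaming sketch: spend more bits on the
# tiles nearest the viewer's heading. Tile count, total bitrate, and
# the weighting are illustrative assumptions.
def tile_bitrates(view_yaw_deg, n_tiles=8, total_mbps=40.0):
    centers = np.arange(n_tiles) * (360.0 / n_tiles)
    # Angular distance from the view direction, wrapped to [0, 180].
    d = np.abs((centers - view_yaw_deg + 180.0) % 360.0 - 180.0)
    weights = np.maximum(1.0 - d / 180.0, 0.1)  # floor so no tile starves
    return total_mbps * weights / weights.sum()

print(np.round(tile_bitrates(view_yaw_deg=90.0), 1))
```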

Sean Safreed (Cofounder/CMO, Pixvana)

Sean Safreed is a serial entrepreneur, product designer, and occasional JavaScript coder. Prior to Pixvana, he co-founded Red Giant, the leading tool provider for editors and motion graphics designers. He believes that VR and AR will define an entirely new medium for computing, interaction, and storytelling.

SIG1655: VR Multi GPU Acceleration Featuring Autodesk VRED
THURSDAY, JULY 28 | BOOTH #509 | 12:30pm - 12:55pm

Hyundai Design Research & Autodesk VR Team present a virtual design review of the Hyundai N Vision 2025 Gran Turismo. Tobias France, Hyundai Designer, and Paul Schmucker, Autodesk Automotive SME, will demonstrate their multi-user car design review utilizing the HTC VIVE and Autodesk VRED Pro software, powered by NVIDIA Quadro graphics.

Paul Schmucker (Subject Matter Expert, Autodesk)

As an Automotive SME, Paul helps win customers by interpreting business issues and formulating solutions to solve challenges. Paul has over 10 years’ experience as a successful industrial designer, working on projects including transportation/automotive, furniture, consumer electronics and sporting goods. Paul also co-hosts/produces an online show reviewing cars for Everyday Driver.

Tobias France (Hyundai Designer, Hyundai)

Tobias France is an Automotive Designer and Future Visionary who started his production design career styling cars for Ford Motor Company in Dearborn, Michigan. He lived in Japan for six years, doing production and conceptual design for Mazda, where his design expertise contributed to the creation of both the "Nagare" and "Kodo" design philosophies. He currently resides in Irvine, California, visioning the future for Hyundai North America. He is the designer of the Hyundai N Vision 2025 Gran Turismo concept car, which debuted at the Frankfurt Auto Show last year and will make its virtual world stage debut on Sony's PlayStation 4 this November with the release of Polyphony Digital's GT Sport.

SIG1656: Rendering Faster and Better with VRWorks on Pascal
THURSDAY, JULY 28 | BOOTH #509 | 1:00pm - 1:25pm

This talk will introduce developers to NVIDIA's VRWorks, an SDK for VR game, engine, and headset developers that cuts latency and accelerates stereo rendering performance on NVIDIA GPUs. We'll explain the features of this SDK, including VR SLI, multi-resolution shading, context priorities, and direct mode. We'll discuss the motivation for these features, how they work, and how developers can use VRWorks in their renderers to improve the VR experience on the Oculus Rift, HTC Vive, and other VR headsets.

Ryan Prescott (Developer Technology Engineer, NVIDIA)

Ryan Prescott helps developers optimize rendering, implements new techniques, and engages in the eternal battle against the Nyquist limit and aliasing in all forms.

SIG1657: Talk Coming Soon!
THURSDAY, JULY 28 | BOOTH #509 | 1:30pm - 1:55pm

Games commonly use filtered shadow maps to shadow their worlds, but these introduce blocky aliasing artifacts that can cause distracting shadow popping and flickering. At NVIDIA Research, we developed a fast algorithm using "irregular z-buffers" that still leverages decades of shadow map technology but avoids this aliasing, providing ray-traced-quality shadows. Working with others at NVIDIA, we combined this work with our proven PCSS technology to significantly increase shadow quality for today's games. We will discuss some of the technical innovations behind this work, which is now available in NVIDIA GameWorks and has shipped in Ubisoft's Tom Clancy's The Division.
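
To convey the core idea of an irregular z-buffer, the toy NumPy sketch below stores the actual receiver samples that fall into each light-space texel and tests occluder coverage against those exact samples, avoiding the resampling that causes shadow map aliasing. The scene, resolution, and occluder are made up; the shipping GameWorks technique is far more elaborate.

```python
import numpy as np

# Toy irregular z-buffer: instead of one depth per shadow-map texel, each
# light-space texel stores the *exact* receiver samples inside it.
RES = 4
rng = np.random.default_rng(1)

# Receiver samples already projected into light space: (x, y, depth) in [0, 1).
receivers = rng.random((20, 3))

# Build the irregular z-buffer: texel -> indices of contained samples.
grid = {}
for i, (x, y, _) in enumerate(receivers):
    grid.setdefault((int(x * RES), int(y * RES)), []).append(i)

in_shadow = np.zeros(len(receivers), bool)

# "Rasterize" one occluder covering the lower-left quadrant at depth 0.3,
# testing each texel's receiver samples at their exact positions.
for tx in range(RES // 2):
    for ty in range(RES // 2):
        for i in grid.get((tx, ty), []):
            if receivers[i, 2] > 0.3:   # sample lies behind the occluder
                in_shadow[i] = True

print(in_shadow)
```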

Chris Wyman (Senior Research Scientist, NVIDIA)

Chris Wyman is a Senior Research Scientist in NVIDIA's real-time rendering research group in Redmond, WA. Prior to joining NVIDIA, he was an Associate Professor at the University of Iowa. He has published research on many aspects of real-time lighting and rendering including shadows, transparency, global illumination, and material properties.

SIG1658: Bringing Pascal to Professionals
THURSDAY, JULY 28 | BOOTH #509 | 2:00pm – 2:25pm

Designs are becoming more complex. Media is becoming richer, with higher fidelity combining greater resolutions and complex visual effects. Scientific visualization and compute problems are larger than ever. VR is changing all facets of entertainment, design, engineering, architecture, and medicine. Customers want to experience ideas, validate designs, rehearse procedures, and visualize problems, interacting with them naturally and at scale. This talk covers how NVIDIA's new Pascal-based Quadro solutions are built to meet these demands.

Allen Bourgoyne (Senior Product Marketing Manager, NVIDIA)

Allen Bourgoyne is a Senior Product Marketing Manager on NVIDIA's Professional Solutions team, focused on NVIDIA's Quadro professional graphics solutions. Allen has over 20 years of experience in the computer hardware and software industry, with a strong emphasis on visual computing, driving engineering and marketing efforts to deliver the unique solutions required by professional customers in the visual computing markets.

SIG1659: Leveraging GPU Technology to Visualize Next-Generation Products and Ideas
THURSDAY, JULY 28 | BOOTH #509 | 2:00pm – 2:25pm

While CAD real-time visualization solutions and 3D content creation software have been available for decades, practical workflow barriers have inhibited their efficient integration into an agency's creative and production process. Using the latest GPU technology from NVIDIA, Saatchi & Saatchi LA is pioneering the breaking of these barriers. 3D artists work with creative directors and clients to rapidly visualize ideas and products, and real-time visualization is integrated seamlessly into the production workflow, making rapid visualization both inspiring and cost-saving. We'll provide a top-level overview of how Saatchi is leveraging NVIDIA GPU technologies, including the NVIDIA VCA, to create powerful virtual creative collaborations.

Michael Wilken (Director of 3D, Saatchi & Saatchi, LA)

Michael Wilken leads Saatchi & Saatchi LA's growing 3D capabilities. He grew the agency's 3D capability from a single role into a team of more than 30 serving some of the world's largest brands. His team has successfully realized an industry-leading integration of 3D production ability with creative collaboration within an advertising agency.

Perceptually-Based Foveated Virtual Reality
SUNDAY-THURSDAY, July 24-28 | Hall C

Humans have two distinct vision systems: foveal and peripheral vision. Foveal vision is sharp and detailed, while peripheral vision lacks fidelity. The difference in characteristics between the two systems enables recently popular foveated rendering systems, which seek to increase rendering performance by lowering image quality in the periphery.

http://s2016.siggraph.org/content/emerging-technologies

Anjul Patney, Joohwan Kim, Marco Salvi, Anton Kaplanyan, Chris Wyman, Nir Benty, and Aaron Lefohn (NVIDIA)

We present a set of perceptually-based methods for improving foveated rendering running on a prototype virtual reality headset with an integrated eye tracker. Foveated rendering has previously been demonstrated in conventional displays, but has recently become an especially attractive prospect in virtual reality (VR) and augmented reality (AR) display settings with a large field-of-view (FOV) and high frame rate requirements. Investigating prior work on foveated rendering, we find that some previous quality-reduction techniques can create objectionable artifacts like temporal instability and contrast loss. Our emerging technologies installation demonstrates these techniques running live in a head-mounted display and we will compare them against our new perceptually-based foveated techniques. Our new foveation techniques enable significant reduction in rendering cost but have no discernible difference in visual quality. We show how combinations of object-space shading, temporal anti-aliasing, and contrast-preserving texturing techniques can fulfill these requirements with potentially large reductions in rendering cost.
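
A minimal sketch of the core foveation control, assuming invented falloff constants: shading rate stays full inside a foveal radius around the tracked gaze point and decreases with eccentricity, with a floor so the far periphery is never dropped entirely.

```python
import numpy as np

# Illustrative gaze-dependent shading-rate falloff. The foveal radius,
# slope, and floor are made-up constants, not the installation's values.
def shading_rate(pixel_deg, gaze_deg, fovea_deg=5.0, slope=0.045):
    """Relative shading rate in [0.1, 1.0] at a given eccentricity."""
    ecc = np.abs(pixel_deg - gaze_deg)   # angular distance from gaze
    return np.clip(1.0 - slope * np.maximum(ecc - fovea_deg, 0.0), 0.1, 1.0)

ecc = np.linspace(0, 55, 12)             # degrees from the gaze point
print(np.round(shading_rate(ecc, 0.0), 2))  # full rate in fovea, reduced outward
```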


http://s2016.siggraph.org/content/emerging-technologies

David Luebke (NVIDIA)
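
The installation's falloff parameters are not published in this abstract, so purely as a minimal illustration of the core idea, the hypothetical Python sketch below maps a pixel's angular distance from the tracked gaze point (its eccentricity) to a coarse shading rate; the thresholds are invented for the example.

```python
import math

# Hypothetical eccentricity thresholds in degrees of visual angle; the talk
# does not publish its falloff, so these numbers are purely illustrative.
FOVEA_DEG = 5.0   # full quality inside the fovea
MID_DEG = 15.0    # reduced quality in the near periphery

def shading_rate(pixel, gaze, pixels_per_degree):
    """Return a shading-rate divisor (1 = full rate) for a pixel, based on
    its angular distance (eccentricity) from the tracked gaze point."""
    eccentricity = math.hypot(pixel[0] - gaze[0], pixel[1] - gaze[1]) / pixels_per_degree
    if eccentricity < FOVEA_DEG:
        return 1   # shade every pixel
    if eccentricity < MID_DEG:
        return 2   # shade one of every 2x2 pixels
    return 4       # shade one of every 4x4 pixels

# A pixel 600 px from the gaze point on a 20 px/degree display sits at
# 30 degrees of eccentricity, deep in the periphery:
print(shading_rate((1400, 800), (800, 800), 20.0))  # -> 4
```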

NVIDIA Research: Streaming Subdivision Surfaces for Efficient GPU Rendering
Wednesday, July 27 | Ballroom C, D & E | 3:45pm - 5:55pm

Wade Brainerd (Activision), Tim Foley (NVIDIA), Manuel Kraemer (NVIDIA), Henry Moreton (NVIDIA), Matthias Niessner (Stanford University)

We’ll present a novel method for real-time rendering of subdivision surfaces whose goal is to make subdivision faces as easy to render as triangles, points, or lines. Our approach uses the GPU tessellation hardware and processes each face of a base mesh independently and in a streaming fashion, allowing an entire model to be rendered in a single pass. The key idea of our method is to subdivide the u, v domain of each face ahead of time, generating a quadtree structure, and then to submit one tessellated primitive per input face. By traversing the quadtree for each tessellated vertex, we are able to accurately and efficiently evaluate the limit surface. This yields both a more uniform tessellation pattern on the surface and faster rendering, as fewer primitives are processed by the hardware pipeline. We evaluate our method on a variety of assets and find that it can be up to three times faster than state-of-the-art methods. In addition, our streaming formulation makes it easier to integrate subdivision surfaces with application and shader code written for polygonal models. We demonstrate integration of our technique into a full-featured video game engine.

http://www.graphics.stanford.edu/~niessner/brainerd2016efficient.html
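
The paper's GPU data structure is described at the link above; as a rough sketch of the lookup it enables (with a hypothetical node layout, not the paper's), each tessellated vertex walks its face's precomputed quadtree to find the subpatch containing its (u, v) coordinate:

```python
# Hypothetical quadtree node: a leaf stores the index of a precomputed
# subpatch; an interior node stores four children, one per (u, v) quadrant.
class Node:
    def __init__(self, patch=None, children=None):
        self.patch = patch        # leaf: subpatch index
        self.children = children  # interior: [lower-left, lower-right, upper-left, upper-right]

def locate(node, u, v):
    """Descend to the leaf subpatch containing (u, v), remapping (u, v)
    into each child's local [0,1)^2 domain along the way."""
    while node.children is not None:
        ix, iy = int(u >= 0.5), int(v >= 0.5)
        u, v = 2.0 * u - ix, 2.0 * v - iy
        node = node.children[iy * 2 + ix]
    return node.patch, (u, v)

# Example: a face subdivided once, e.g. around an extraordinary vertex.
tree = Node(children=[Node(patch=0), Node(patch=1), Node(patch=2), Node(patch=3)])
print(locate(tree, 0.3, 0.8))  # -> (2, (0.6, 0.6)): the upper-left subpatch
```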

NVIDIA Research: A System for Rapid Exploration of Shader Optimization Choices
Wednesday, July 27 | Ballroom C, D & E | 3:45pm - 5:55pm

Yong He (CMU), Kayvon Fatahalian (CMU), Tim Foley (NVIDIA)

We’ll present a shading language and compiler framework that facilitates rapid exploration of the shader optimization choices afforded by modern real-time graphics engines. Our design extends concepts from prior work in rate-based shader programming with new language features that expand the scope of shader execution beyond traditional GPU hardware pipelines and enable a diverse set of shader optimizations to be described by a single mechanism: the placement of overloaded shader terms at the various spatio-temporal computation rates provided by the pipeline. Importantly, and in contrast to prior work, neither the shading language’s design nor our compiler framework’s implementation is specific to the capabilities of any one rendering pipeline; our system thus establishes an architectural separation between the shading system and the implementation of modern rendering engines, allowing different rendering pipelines to use its services. We demonstrate the use of this language and compiler framework to author complex shaders for different rendering pipelines and to rapidly explore shader optimization decisions that affect logic spanning CPU, GPU, and preprocessing computations. We further demonstrate the utility of the proposed system by developing a shader level-of-detail library and a shader auto-tuning system on top of its abstractions, and we demonstrate rapid re-optimization of shaders for different target hardware platforms.

http://graphics.cs.cmu.edu/projects/spire/
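
The actual language and syntax are documented at the project page above. Purely as an illustration of what "rate placement" means, the hypothetical sketch below evaluates the same shading term at two different rates, trading accuracy for evaluation count; nothing here is the paper's API.

```python
# Illustrative only: the same fog term placed at two computation rates.
def fog(depth):
    return min(1.0, depth / 100.0)

def shade_object(pixel_depths, fog_rate):
    """Return (per-pixel fog values, number of fog evaluations)."""
    if fog_rate == "per_object":
        # Evaluate once at a representative depth and reuse it for all pixels.
        f = fog(sum(pixel_depths) / len(pixel_depths))
        return [f] * len(pixel_depths), 1
    # Evaluate for every covered pixel: accurate, but more work.
    return [fog(d) for d in pixel_depths], len(pixel_depths)

depths = [10.0, 50.0, 90.0]
print(shade_object(depths, "per_object"))  # ([0.5, 0.5, 0.5], 1)
print(shade_object(depths, "per_pixel"))   # ([0.1, 0.5, 0.9], 3)
```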

 

NVIDIA Research: Reflectance Modeling by Neural Texture Synthesis
Tuesday, July 26 | Ballroom C | 2:00pm-3:30pm

Miika Aittala (Aalto University), Jaakko Lehtinen (NVIDIA, Aalto University), Timo Aila (NVIDIA)

This work synthesizes an SVBRDF of a textured material from a single input image. In the spirit of parametric texture synthesis, it drives the model fit using a statistical image comparison, without explicit matches between model predictions and the input. Given the softness of the objective, surprisingly good results are obtained.

http://s2016.siggraph.org/technical-papers/sessions/materials
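
For readers unfamiliar with parametric texture synthesis, the hypothetical sketch below illustrates one common form of "statistical image comparison": Gram-matrix feature statistics, which ignore exact pixel alignment. The random feature maps are stand-ins for the pretrained-network features such methods typically use; this is background, not the paper's implementation.

```python
import numpy as np

def gram(features):
    """features: (H, W, C) feature map -> (C, C) channel co-occurrence statistics."""
    h, w, c = features.shape
    f = features.reshape(h * w, c)
    return f.T @ f / (h * w)

def statistical_loss(pred_feats, target_feats):
    """Distance between two images' texture statistics, with no
    point-for-point matching between the images themselves."""
    return float(np.sum((gram(pred_feats) - gram(target_feats)) ** 2))

rng = np.random.default_rng(0)
a = rng.standard_normal((32, 32, 8))
print(statistical_loss(a, np.roll(a, 5, axis=1)))             # 0.0: same statistics, shifted content
print(statistical_loss(a, rng.standard_normal((32, 32, 8))))  # > 0: different statistics
```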

 

The Quest for The Ray Tracing API
TUESDAY, July 26 | Ballroom A & B | 9:00am - 12:15pm

Alexander Keller (Director of Graphics Research, NVIDIA), Ralf Karrenberg (NVIDIA)

Intended Audience: Graphics developers and researchers seeking insight into current and future ray tracing technologies across vendors and platforms.

Ray tracing has become a commodity in rendering, and the first ray tracing hardware is emerging. Hence, the quest for an API is on. The course reviews current efforts and abstractions, especially the interaction of rasterization and ray tracing, cross-platform challenges, real-time constraints, and enabling applications beyond image synthesis.

http://s2016.siggraph.org/courses/events/quest-ray-tracing-api

 

Open Problems in Real-Time Rendering
TUESDAY, July 26 | Ballroom A & B | 2:00pm - 5:15pm

Aaron Lefohn (NVIDIA), N. Tatarchuk (Bungie Games)

Intended Audience: This course is targeted at game developers, computer graphics researchers, and other real-time graphics practitioners interested in understanding the limits of current rendering technology and the types of innovation needed to advance the field.

This course brings together leading industry experts and researchers in real-time rendering to distill the top unsolved problems in real-time rendering. While it is impossible to cover all of the open problems in the field, we pick the problems most important this year and expect to cover different topics each year the course is held. Each topic includes both the researcher's and the industry practitioner's perspective, and covers the state of the art for the topic today, why current solutions don't work in practice, the desired ideal solution, and the problems that need to be solved to work toward that ideal.

http://s2016.siggraph.org/courses/events/open-problems-real-time-rendering

 

HFTS: Hybrid Frustum-Traced Shadows in "The Division"
SUNDAY, July 24 | Ballroom B | 3:45pm - 5:35pm

Jon Story (NVIDIA), Chris Wyman (Senior Research Scientist, NVIDIA)

We present a hybrid irregular z-buffer shadow algorithm, building on work by Story [2015] and Wyman et al. [2015], that allows soft shadows and is fast enough for use in shipping games such as The Division. Key novelties include an improved light-space partitioning scheme that speeds up best- and average-case running times compared to using multiple cascades. We also extract a per-pixel distance to the nearest occluder to enable transitioning between irregular z-buffers and filtered shadow maps.

http://s2016.siggraph.org/talks/sessions/dark-hides-light

Chris Wyman is a Senior Research Scientist in NVIDIA's real-time rendering research group in Redmond, WA. Prior to joining NVIDIA, he was an Associate Professor at the University of Iowa. He has published research on many aspects of real-time lighting and rendering, including shadows, transparency, global illumination, and material properties.
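
The talk does not publish its transition function; as a minimal sketch of the idea (with made-up distance thresholds), the per-pixel occluder distance can drive a blend between the exact frustum-traced term near contact and the filtered shadow-map term farther away:

```python
# Hypothetical thresholds in scene units; the shipped game's values differ.
HARD_RANGE = 0.5   # below this occluder distance, use the exact hard shadow
SOFT_RANGE = 2.0   # above this, use the filtered (soft) shadow map

def shadow(hard_visibility, soft_visibility, occluder_distance):
    """Blend exact and filtered shadow terms by distance to the occluder."""
    t = (occluder_distance - HARD_RANGE) / (SOFT_RANGE - HARD_RANGE)
    t = max(0.0, min(1.0, t))
    return (1.0 - t) * hard_visibility + t * soft_visibility

print(shadow(0.0, 0.4, 0.2))  # contact: fully hard -> 0.0 (crisp)
print(shadow(0.0, 0.4, 3.0))  # distant occluder: fully filtered -> 0.4 (soft)
```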

Stochastic Layered Alpha Blending
TUESDAY, July 26 | ROOM 303 A-C | 9:00am - 10:30am

Chris Wyman (Senior Research Scientist, NVIDIA)

Researchers have long sought efficient techniques for order-independent transparency (OIT) in a rasterization pipeline, to avoid sorting geometry prior to rendering. Techniques like A-buffers, k-buffers, stochastic transparency, hybrid transparency, adaptive transparency, and multi-layer alpha blending all approach the problem slightly differently, with different tradeoffs.

These OIT algorithms have many similarities, and our investigations allowed us to construct a continuum on which they lie. During this categorization, we identified various new algorithms, including stochastic layered alpha blending (SLAB), which combines stochastic transparency's consistent and (optionally) unbiased convergence with the smaller memory footprint of k-buffers. Our approach can be seen as a stratified sampling technique for stochastic transparency, generating quality better than 32x samples per pixel for roughly the cost and memory of 8x stochastic samples. As with stochastic transparency, we can exchange noise for added bias; our algorithm provides an explicit parameter to trade noise for bias. At one end of this parameter's range, the results are identical to stochastic transparency; at the other, they are identical to k-buffering.

http://s2016.siggraph.org/talks/sessions/roll-dice
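
SLAB is defined relative to stochastic transparency, so as background only, here is a toy sketch of the noisy end of that continuum: each fragment survives a depth-test lottery with probability equal to its alpha, the nearest survivor wins each sample, and averaging converges to the sorted alpha-blended result. SLAB's stratification into k retained layers is not shown.

```python
import random

def stochastic_transparency(fragments, samples, rng):
    """fragments: unsorted list of (depth, alpha, color). Monte Carlo
    estimate of order-independent transparency for one pixel."""
    total = 0.0
    for _ in range(samples):
        # Each fragment passes the lottery with probability alpha...
        survivors = [(d, c) for d, a, c in fragments if rng.random() < a]
        if survivors:
            total += min(survivors)[1]  # ...and the nearest survivor wins.
    return total / samples

# A far bright fragment behind a near dark one, both at 50% alpha.
frags = [(2.0, 0.5, 0.9), (1.0, 0.5, 0.1)]
print(stochastic_transparency(frags, 100_000, random.Random(1)))
# -> ~0.275, the sorted alpha-blend result: 0.5*0.1 + 0.5*0.5*0.9
```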

GI Next: Global Illumination for Production Rendering on the GPU
THURSDAY, July 28 | Ballroom B | 3:45pm – 5:15pm

E. Catalano (NVIDIA), R. Yasui-Schoeffel (NVIDIA), K. Dahm (NVIDIA), N. Binder (NVIDIA), A. Keller (NVIDIA)

The sheer size of texture data and custom shaders in production rendering were the two major hurdles in the way of GPU acceleration. Requiring only tiny modifications to an existing production renderer, we are able to accelerate the computation of global illumination by more than an order of magnitude.

http://s2016.siggraph.org/talks/sessions/roll-dice

 

Estimating Local Beckmann Roughness for Complex BSDFs
THURSDAY, July 28 | Ballroom B | 3:45pm – 5:15pm

Nicolas Holzschuch (NVIDIA), Anton Kaplanyan (NVIDIA), Johannes Hanika (NVIDIA), Carsten Dachsbacher (NVIDIA)

Many light-transport techniques require an analysis of the blur width of light scattering at a path vertex, for instance as a Beckmann roughness. Use cases include the analysis of expected variance (and potentially biased countermeasures in production rendering), radiance caching, directionally dependent virtual point light sources, and the determination of step sizes in the path-space Metropolis light transport framework: recent advanced mutation strategies for Metropolis light transport, such as Manifold Exploration and Half Vector Space Light Transport, employ the local curvature of the BSDFs (such as an average Beckmann roughness) at all interactions along the path in order to determine an optimal mutation step size. A single average Beckmann roughness, however, can be a bad fit for complex measured materials (such as those in the MERL library), and such curvature is completely undefined for layered materials, as it depends on the active scattering layer. We propose a robust estimation of local curvature for BSDFs of any complexity using local Beckmann approximations, taking into account additional factors such as both the incident and the outgoing direction.

https://research.nvidia.com/publication/estimating-local-beckmann-roughness-complex-bsdfs
http://s2016.siggraph.org/talks/sessions/scratching-surface

Anton Kaplanyan is a research scientist at NVIDIA specializing in light transport and shading, including artistic illumination editing, light transport in half-vector space, and specular antialiasing. Anton previously worked at Crytek as lead researcher on CryENGINE 3 and the AAA game Crysis 2, including the development of Light Propagation Volumes.
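
The paper's estimator accounts for the full incident/outgoing configuration; the sketch below shows only a simplified moment-matching flavor of the idea, under our own assumption of a single isotropic lobe. For a Beckmann lobe, half-vector slopes are Gaussian with E[slope_x^2 + slope_y^2] = alpha^2, so a local roughness can be read off sampled half-vectors around a vertex:

```python
import numpy as np

def estimate_beckmann_alpha(half_vectors):
    """half_vectors: (N, 3) unit vectors in the local frame (z = normal).
    Moment-matched isotropic Beckmann roughness; a simplified stand-in
    for the paper's estimator."""
    h = np.asarray(half_vectors)
    slopes_sq = (h[:, 0] ** 2 + h[:, 1] ** 2) / h[:, 2] ** 2  # tan^2(theta_h)
    return float(np.sqrt(slopes_sq.mean()))

# Self-check: Beckmann slopes are Gaussian with std alpha/sqrt(2) per axis.
rng = np.random.default_rng(0)
alpha = 0.2
xy = rng.normal(0.0, alpha / np.sqrt(2.0), size=(100_000, 2))
h = np.column_stack([xy, np.ones(len(xy))])
h /= np.linalg.norm(h, axis=1, keepdims=True)
print(estimate_beckmann_alpha(h))  # -> approximately 0.2
```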

 

SIG1663: Machine Learning and the Making of Things
TUESDAY, July 26 | Room 210D | 12:00pm – 1:00pm

We live in a world where everything around us is designed by someone. The pace of innovation is escalating, and with new methods of manufacturing, such as 3D printing, the demands placed on designers and design technology are increasing. What if there were a better way to organize all of this information and allow ideas and creations to emerge more organically? We will explore how the design software of the future will help designers rise to the challenge through the application of machine learning to 3D data. We introduce a geometric shape analysis and machine learning technology we call the Design Graph. By learning from millions of 3D models and then assembling a knowledge graph, it is able to react to a constantly evolving world, guiding the designs of the future.

Mike Haley (Senior Director of Machine Intelligence, Autodesk)

Mike Haley leads the Machine Intelligence group at Autodesk, focused on groundbreaking machine learning technologies for the future of making things, which includes everything from 3D digital design to how physical creation or assembly occurs. His team develops the strategies for applying machine learning and performs research and development on techniques unique to designing and making. For the last several years, Mike's team has focused on bringing geometric shape analysis and large-scale machine learning techniques to 3D design information, with the intent of making software a true partner in the design process.

SIG1664: Rendering Sparse Volumes with NVIDIA GVDB in DesignWorks
TUESDAY, July 26 | Room 210D | 1:15pm – 2:15pm

We introduce GVDB Sparse Volumes as a new offering within NVIDIA DesignWorks, focused on high-quality raytracing of sparse volumetric data for motion pictures. Based on the VDB topology of Museth, with a novel GPU-based data structure and API, GVDB is designed for efficient compute and raytracing on a sparse hierarchy of grids. Raytracing on the GPU is accelerated with indexed memory pooling, 3D texture atlas storage, and a new hierarchical traversal algorithm. GVDB integrates with NVIDIA OptiX and is developed as an open-source library as a part of DesignWorks.

Rama Hoetzlein (Research Engineer, NVIDIA)

Rama Hoetzlein works in the areas of computer graphics and media arts. His thesis at Cornell University in 2001 focused on robotic and mechanical sculpture. From 2001 to 2004, he co-founded the Game Design Initiative at Cornell. In 2010, Rama’s dissertation at the University of California Santa Barbara explored creative support tools for procedural modeling. His current work investigates developer technologies and graphics research on sparse volumes at NVIDIA.
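
GVDB's actual node pools and texture-atlas layout are documented with the library itself; the toy Python sketch below only illustrates the indirection that makes such a structure sparse: a coarse index maps occupied regions to bricks packed densely in an atlas, so empty space costs nothing but a missing entry.

```python
import numpy as np

BRICK = 8    # voxels per brick side (illustrative, not GVDB's layout)
index = {}   # (bx, by, bz) -> slot of the brick in the atlas
atlas = np.zeros((0, BRICK, BRICK, BRICK), dtype=np.float32)

def write_voxel(x, y, z, value):
    """Allocate a brick on first touch, then write into the atlas."""
    global atlas
    key = (x // BRICK, y // BRICK, z // BRICK)
    if key not in index:
        index[key] = len(atlas)
        atlas = np.concatenate([atlas, np.zeros((1, BRICK, BRICK, BRICK), np.float32)])
    atlas[index[key], x % BRICK, y % BRICK, z % BRICK] = value

def read_voxel(x, y, z):
    """Unoccupied regions have no brick and read as empty."""
    slot = index.get((x // BRICK, y // BRICK, z // BRICK))
    return 0.0 if slot is None else float(atlas[slot, x % BRICK, y % BRICK, z % BRICK])

write_voxel(100, 3, 250, 1.5)
print(read_voxel(100, 3, 250), read_voxel(0, 0, 0))  # 1.5 0.0
```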

NVIDIA Research: SIG1665: Massive Time-lapse Point Cloud Rendering in Virtual Reality
TUESDAY, July 26 | Room 210D | 2:30pm – 3:00pm

We’ll present a system that allows us to render and play through time-slices of large point-cloud scans while meeting the high performance and quality requirements of virtual reality systems. Our viewer is capable of rendering our currently available time-slices of a building site: 200 time-slices captured daily by drones using photogrammetry and consisting of roughly 40 million points each, as well as 10 high-density laser scans with roughly 800 million points each. The viewer is also built to handle the additional and larger scans that will be produced by ongoing scan operations in the future. We will discuss the challenges of rendering point clouds and the methods used to meet the increased performance and quality requirements of VR.

Markus Schuetz (Research Engineer, NVIDIA)

Markus Schuetz is a Visual Computing student at the Vienna University of Technology and responsible for the development of the WebGL point cloud viewer Potree. He is continuing point cloud rendering research and development in his current position as an intern at NVIDIA.

NVIDIA Research: SIG1666: Rendering Highly Specular Materials
TUESDAY, July 26 | Room 210D | 3:45pm – 4:15pm

High-frequency illumination effects, such as sparkling and highly glossy highlights on curved surfaces, are challenging to render in a stable manner. In this talk, we will discuss two methods for rendering glints and filtering highly glossy highlights. We provide practical solutions applicable to real-time rendering. Our real-time methods are GPU-friendly, temporally stable, and compatible with deferred shading and normal maps, as well as with filtering methods for normal maps.

http://s2016.siggraph.org/talks/sessions/scratching-surface

Anton S. Kaplanyan (Research Scientist, NVIDIA)

Anton Kaplanyan is a research scientist at NVIDIA specializing in light transport and shading, including artistic illumination editing, light transport in half-vector space, and specular antialiasing. Anton previously worked at Crytek as lead researcher on CryENGINE 3 and the AAA game Crysis 2, including the development of Light Propagation Volumes.
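
The talk's own methods are in the linked session; as background only, one classic ingredient in this problem space (not necessarily what the speakers present) is Toksvig-style normal-map filtering: mipmapped normals shorten as they average over bumpy regions, and that shortening can be converted into extra roughness so distant highlights stay stable instead of sparkling.

```python
import numpy as np

def toksvig_power(avg_normal, specular_power):
    """avg_normal: averaged (unnormalized) normal from a mip level.
    Returns the effective (reduced) Blinn-Phong specular power."""
    na = np.linalg.norm(avg_normal)
    ft = na / (na + specular_power * (1.0 - na))  # Toksvig factor
    return ft * specular_power

flat = np.array([0.0, 0.0, 1.0])    # no normal variation under the filter
bumpy = np.array([0.0, 0.0, 0.7])   # strong variation: shortened average
print(toksvig_power(flat, 64.0))    # -> 64.0 (highlight unchanged)
print(toksvig_power(bumpy, 64.0))   # -> ~2.3 (much broader, stabler highlight)
```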

SIG1667: Bringing Pascal to Professionals
TUESDAY, July 26 | Room 210D | 4:30pm – 5:00pm

Designs are becoming more complex. Media is becoming richer, with higher fidelity combining greater resolutions and complex visual effects. Scientific visualization and compute problems are larger than ever. VR is changing all facets of entertainment, design, engineering, architecture, and medicine. Customers want to experience ideas, validate designs, rehearse procedures, and visualize problems, interacting with them naturally and at scale.

Allen Bourgoyne (Senior Product Marketing Manager, NVIDIA)

Allen Bourgoyne is a Sr. Product Marketing Manager on NVIDIA's Professional Solutions team, focused on NVIDIA's Quadro professional graphics solutions. Allen has over 20 years of experience in the computer hardware and software industry, with a strong emphasis on visual computing, driving engineering and marketing efforts to bring the unique solutions required by professional customers in the visual computing markets.

SIG1668: Using Deep Learning to Improve Texture and Other Graphics
TUESDAY, July 26 | Room 210D | 5:15pm-6:15pm

Description Coming soon!

Speaker TBA (Artomatix)


SIG1669: Mars 2030
TUESDAY, July 26 | Room 210D | 3:00pm – 3:30pm

Mars 2030 is an interactive virtual reality project that offers a breathtaking look into the life of an astronaut hard at work studying and exploring the Martian landscape. Produced in conjunction with NASA and Fusion Media Network (a joint venture between ABC and Disney), Mars 2030 aims to be the most photorealistic and scientifically accurate depiction of the Red Planet to date. We'll expound on the project's scope and technical capacities, in addition to showcasing a full VR demo of the game itself. Those in attendance will be among the first to glimpse the results of this exciting and wholly unprecedented multimedia collaboration.

Julian Reyes (VR Producer, Fusion)

Julian Reyes is a VR Producer for Fusion Media Network, an ABC-Disney joint venture. His primary focus is on creating interactive VR experiences using Unreal Engine 4. Among his works, he teamed up with the Canadian company General Fusion to produce a VR simulation of a nuclear fusion reactor concept, and he produced an interactive project on illegal gold mining in Colombia called Blood Gold. He graduated from SAE Institute with a focus on Sound Engineering and Music Production and picked up game development by watching YouTube and other online tutorials. In his free time, he produces interactive VR music experiences for music festivals and has performed at III Points, Miami's interactive festival. He is also scheduled to be a panel member at this year's SXSW VR track. He is currently working on an upcoming VR project based on a future mission to Mars with help from Disney Interactive, NASA, and MIT's Space Systems Laboratory.

Justin Sonnekalb (Technical Designer, Consultant)

Justin Sonnekalb is a Technical Designer with four years' experience creating prototypes for major, back-of-the-box gameplay systems, a producer with three years' experience working with world-class story and art teams, and the voiceover editor for BioShock and BioShock Infinite. His specialty is tackling technical challenges and pulling off impossible scripting or shader hacks, working to establish the "voice" of a game, and generally getting into a flow where the many disparate disciplines that make up a game come together. Because gameplay prototyping rarely overlaps with voiceover editing in a typical production cycle, Justin has the opportunity to fully develop both sets of skills and to enjoy using each as a respite from the other.

Dave Flamburis (Senior Lead Artist, Creative Consultant)

Dave has worked in the industry as a hands-on Art Director, Lead Artist, and Senior Artist. His goal is to continue to develop and craft unique experiences, share insight and knowledge, build momentum both within and across teams, and have an amazing time doing so.