BRINGING AI TO THE GRAPHICS INDUSTRY
SIGGRAPH 2017 | July 30 – August 3, 2017 | Los Angeles, CA
Bringing Deep Learning and AI to the Graphics Industry

Creativity, fueled by NVIDIA research

 

At NVIDIA, AI research is the core of our business, and we’re sharing our breakthroughs to empower the graphics industry. Join us at SIGGRAPH to see how we're revolutionizing professional graphics with deep learning and AI. From product design using collaborative VR to making in-scene creative decisions with AI-assisted rendering, we’re developing innovations to change the way you design, iterate, and collaborate.

Come find out more at booth #403.

 

Check out the NVIDIA Newsroom for announcements, downloadable assets, and links to our social media channels.

Featured Talks

Xiaoguang Han
University of Hong Kong

Create a 3D Caricature in Minutes with Deep Learning

Paul Kruszewski
CEO, wrnch

Real-Time 3D Motion Capture Using Webcams and GPUs

Neil Trevett
VP, NVIDIA

Khronos API Ecosystem Update, Including Vulkan and OpenXR for Cross-Platform Virtual Reality

 

NVIDIA Spotlights

SPONSORED SESSION ROOM: THE BEST OF GTC AND NVIDIA RESEARCH

Explore technical deep dives covering today’s most exciting applications and get a glimpse of the next generation of groundbreaking advancements.

  • Sunday, July 30 and Monday, July 31, open from 9 a.m. to 6 p.m.
  • Room #404 AB
NVIDIA DEEP LEARNING INSTITUTE INSTRUCTOR-LED LABS

Get instructor-led training on the latest techniques for designing, training, and deploying neural networks for digital content creation and the basics of creating and optimizing VR projects.

  • Free to all attendees
  • Monday, July 31 through Wednesday, August 2 from 9 a.m. to 5 p.m.
  • Pre-registration required. For more details, visit Eventbrite.
NVIDIA INNOVATION THEATER PRESENTED BY HP

The Innovation Theater features talks on a wide range of topics and is open to all attendees.

  • Tuesday, August 1 and Wednesday, August 2 from 9:30 a.m. to 6 p.m., and Thursday, August 3 from 9:30 a.m. to 3 p.m.
  • NVIDIA Booth #403
 

View Schedule

 

NVIDIA is everywhere at SIGGRAPH

Check out our other exciting activities taking place throughout the conference.

Create a 3D Caricature in Minutes with Deep Learning
Tuesday, Aug 1 | Booth #403 | 11:00am - 11:25am

Face modeling has received much attention in the field of visual computing. In many scenarios, including cartoon characters, avatars for social media, 3D face caricatures, and face-related art and design, low-cost interactive face modeling is a popular approach, especially among amateur users. In this talk, I will introduce a deep learning-based sketching system for 3D face and caricature modeling. The system has a labor-efficient sketching interface that allows the user to draw freehand, imprecise, yet expressive 2D lines representing the contours of facial features. A novel CNN-based deep regression network is designed to infer 3D face models from 2D sketches. The proposed system also supports gesture-based interactions for users to further manipulate initial face models. Both user studies and numerical results indicate that our sketching system can help users create face models quickly and effectively. A significantly expanded face database with diverse identities, expressions, and levels of exaggeration has been constructed to promote further research and evaluation of face modeling techniques.
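To make the idea in the abstract more concrete, here is a minimal, hypothetical PyTorch sketch of a sketch-to-model regressor: a small CNN encodes a rasterized 2D contour drawing and a fully connected head regresses the vertex positions of a fixed-topology face mesh. The layer sizes, image resolution, and vertex count below are illustrative assumptions, not details of the system presented in the talk.

```python
# Minimal, hypothetical sketch-to-3D-face regressor (illustrative only;
# not the network described in the talk). Assumes a 256x256 single-channel
# sketch image and a fixed-topology face mesh with N_VERTS vertices.
import torch
import torch.nn as nn

N_VERTS = 5000  # assumed mesh resolution

class SketchToFace(nn.Module):
    def __init__(self, n_verts: int = N_VERTS):
        super().__init__()
        self.n_verts = n_verts
        # Convolutional encoder: rasterized sketch -> feature vector
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: feature vector -> (x, y, z) per mesh vertex
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128, 1024), nn.ReLU(),
            nn.Linear(1024, n_verts * 3),
        )

    def forward(self, sketch: torch.Tensor) -> torch.Tensor:
        # sketch: (batch, 1, 256, 256) grayscale contour drawing
        feats = self.encoder(sketch)
        verts = self.head(feats)
        return verts.view(-1, self.n_verts, 3)  # per-vertex 3D positions

# Usage sketch: push one random "drawing" through an untrained model.
model = SketchToFace()
dummy_sketch = torch.rand(1, 1, 256, 256)
mesh = model(dummy_sketch)
print(mesh.shape)  # torch.Size([1, 5000, 3])
```

In practice, the regression target would be tied to a specific face mesh or parametric face model, and the network would be trained on a sketch/face database such as the one described in the abstract.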

Xiaoguang Han, Ph.D.

Xiaoguang Han is a final-year Ph.D. student in the Department of Computer Science at the University of Hong Kong, where he has studied since 2013. He received his M.Sc. in Applied Mathematics (2011) from Zhejiang University and his B.S. in Information and Computer Science (2009) from Nanjing University of Aeronautics and Astronautics, China. He was also a Research Associate in the School of Creative Media at City University of Hong Kong from 2011 to 2013. His research interests include computer graphics, computer vision, and computational geometry, especially image/video editing, 3D reconstruction, and discrete geodesic computation. His current research focuses on high-quality 3D modeling and reconstruction using deep neural networks.

Real-Time 3D Motion Capture Using Webcams and GPUs
Tuesday, Aug 1 | Booth #403 | 12:00pm - 12:25pm

We'll provide a brief overview of how to apply GPU-based deep learning techniques to extract 3D human motion capture from standard 2D RGB video. We'll describe in detail the stages of our CUDA-based pipeline, from training to cloud-based deployment. Our training system is a novel mix of real-world data collected with Kinect cameras and synthetic data based on rendering thousands of virtual humans generated in the Unity game engine. Our execution pipeline is a series of connected models, including 2D video to 2D pose estimation and 2D pose to 3D pose estimation. We'll describe how this system can be integrated into a variety of mobile applications, ranging from social media to sports training. We'll present a live demo using a mobile phone connected to an AWS GPU cluster.
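As a rough illustration of the two-stage pipeline described above (2D video to 2D pose, then 2D pose to 3D pose), the PyTorch sketch below chains a per-frame 2D joint regressor with a small 2D-to-3D lifting network. The joint count, layer sizes, and input resolution are assumptions made for illustration; this is not wrnch's implementation.

```python
# Minimal, hypothetical two-stage pose pipeline (illustrative only; not
# wrnch's system). Stage 1 maps each RGB frame to 2D joint coordinates;
# stage 2 lifts the 2D joints to 3D.
import torch
import torch.nn as nn

N_JOINTS = 17  # assumed skeleton size

class Frame2DPose(nn.Module):
    """Stage 1: RGB frame -> normalized 2D joint coordinates."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, N_JOINTS * 2)

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        # frame: (batch, 3, 256, 256) webcam image
        return self.head(self.backbone(frame)).view(-1, N_JOINTS, 2)

class Pose2DTo3D(nn.Module):
    """Stage 2: 2D joints -> 3D joints via a small MLP."""
    def __init__(self):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(N_JOINTS * 2, 256), nn.ReLU(),
            nn.Linear(256, N_JOINTS * 3),
        )

    def forward(self, joints_2d: torch.Tensor) -> torch.Tensor:
        flat = joints_2d.view(joints_2d.size(0), -1)
        return self.mlp(flat).view(-1, N_JOINTS, 3)

# Chained usage on a dummy 256x256 webcam frame.
stage1, stage2 = Frame2DPose(), Pose2DTo3D()
frame = torch.rand(1, 3, 256, 256)
pose3d = stage2(stage1(frame))
print(pose3d.shape)  # torch.Size([1, 17, 3])
```

A real deployment would use a dedicated 2D pose-estimation network per frame and smooth the lifted 3D poses over time before handing them to the application, with inference hosted on GPUs in the cloud as the abstract describes.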

Paul Kruszewski, CEO, wrnch

As a serial entrepreneur, Paul Kruszewski has been hustling and hacking in visual computing since he was 12, when he leveraged a $250 livestock sale into a $1,000 TRS-80 Color Computer. Soon after, he wrote his first video game. Paul went on to obtain a Ph.D. in probabilistic algorithmic analysis from McGill University. In 2000, he founded AI.implant and developed the world's first real-time navigation middleware for 3D humans. AI.implant was acquired in 2005 by Presagis, the world's leading developer of software tools for military simulation and training. In 2007, he founded GRIP and developed the world's first brain authoring system for video game characters. GRIP was acquired in 2011 by Autodesk, the world's leading developer of software tools for digital entertainment. In 2014, he founded wrnch to democratize computer vision technology.

Khronos API Ecosystem Update – Including Vulkan and OpenXR for Cross-Platform Virtual Reality
Monday, July 31 | Booth #404A & 404B | 01:00pm - 01:50pm

Discover how over 100 companies cooperate at the Khronos Group to create open, royalty-free standards that enable developers to access the power of the GPU to accelerate demanding compute, graphics, and vision applications. Learn the very latest updates on a number of Khronos cross-platform standards, including the newly announced OpenXR for portable AR and VR, Vulkan, SPIR-V, OpenVX, OpenGL, and OpenCL. We'll also provide insights into how these open standards APIs are supported across NVIDIA's product families.

Neil Trevett, VP, NVIDIA

Neil Trevett has spent over 30 years in the 3D graphics industry. At NVIDIA, he works to enable applications to leverage advanced silicon acceleration, with a particular focus on augmented reality. Neil is also the elected president of the Khronos Group standards consortium, where he initiated the OpenGL ES API, now used on billions of mobile phones, and helped catalyze the WebGL project to bring interactive 3D graphics to the web. Neil also chairs the OpenCL working group defining the open standard for heterogeneous parallel computation and has helped establish and launch the OpenVX vision API, the new-generation Vulkan GPU API, and the OpenXR standard for portable AR and VR.