GTC 2017 will bring together experts from Silicon Valley automotive research labs, Tier-1 suppliers, and startup companies who are all using artificial intelligence to develop self-driving cars, trucks, and shuttles. Their innovations in GPU-based supercomputing enable deep learning, natural language processing, and gesture control that will change how people drive cars—and even enable cars to drive people. In addition to AI Car sessions, this track will showcase the future of automotive design, engineering simulation, virtual showrooms, and in-vehicle infotainment.
FORD MOTOR COMPANY, Technical Expert Autonomous Vehicles
Mark Crawford is a Technical Expert in autonomous vehicles at Ford Motor Company, where he works on an advanced engineering team developing Ford's autonomous vehicle. He holds B.S. and M.S. degrees in Mechanical Engineering from the Missouri University of Science and Technology and is currently pursuing a Ph.D. in Information Systems Engineering at the University of Michigan–Dearborn.
VOLVO CAR CORPORATION, Technical Expert
Henrik Lind holds a master's degree in Electrical Engineering from Chalmers University of Technology. He has worked on advanced driver assistance technologies and technology research at Volvo Technological Development since 1997, leading research on sensors and functions. In 2001, he moved to Volvo Cars, where he was responsible for introducing radar- and vision-based functions aimed at increasing driver safety and comfort. He introduced forward collision warning with emergency braking and adaptive cruise control in 2006, followed by further safety innovations. Since 2013, he has worked on bringing highly automated driving technologies to Volvo Cars. He is an appointed technical specialist.
AUDI AG, Research Engineer
Christoph is a Research Engineer at AUDI AG in Ingolstadt. He holds a diploma degree in Computer Science from the University of Erlangen-Nuremberg and a postgraduate diploma in Computer and Information Sciences from the Auckland University of Technology. In 2011, he received his PhD (Dr.-Ing.) from the University of Erlangen-Nuremberg for his work on the model-based design of embedded safety control units.
HERE, Sr. Director of Engineering
Vlad Zhukov is a Senior Director of Engineering, leading the development of Cloud Platform assets at HERE. In his current role as Head of HD Map Engineering, he focuses on building highly precise mapping assets that support vehicle automation. Vlad has spent more than 12 years with NAVTEQ/Nokia/HERE in various engineering and leadership roles. He started as a research engineer building map automation tools and later led a technical team supporting a joint venture in China. He has also held product development roles, leading teams that supported the development of ADAS applications. Before joining NAVTEQ, Vlad spent several years at Oracle.
AUDI AG, Project Lead Audi VR Experience
Marcus Kuehne is a progressive mind who has brought several innovations to the car industry during his career. He studied Interface Design and began his career in Audi product marketing. He then moved to electronics development, where he was responsible for the development and market introduction of MMI touch, the first fully integrated touchpad-based car HMI. In 2013, Marcus returned to the marketing and sales department and took over the project lead for the Audi VR Experience. For a true VR enthusiast like him, realizing one of the most ambitious and complex VR industry applications is a dream come true.
DFKI, Senior Researcher
Richard is a senior researcher at the German Research Center for Artificial Intelligence (DFKI). He holds a diploma degree in Computer Science from the University of Erlangen-Nuremberg and a postgraduate diploma in Computer and Information Sciences from the Auckland University of Technology. In 2013, he received his PhD (Dr.-Ing.) from the University of Erlangen-Nuremberg for his work on automatic code generation for GPU accelerators from a domain-specific language for medical imaging.
The Self-Driving Car and Automotive track at GTC featured over 40 sessions from experts using GPU-based innovations and artificial intelligence to redefine how cars of the future will drive us.
Deep Learning Based Driver Monitoring: A Close Look At Driver Face Analytics and Emotion Recognition
Developing Software Architectures for Autonomous Vehicles
A Single Forward Propagation of Neural Network for End-to-End Image Detection System
We'll introduce you to ultra-lightweight vision software that reads facial micro-expressions in real time for use in driver-monitoring systems in next-generation vehicles. Using deep learning-based convolutional neural networks (CNNs) powered by GPUs, vision algorithms for embedded systems can now allow vehicles to constantly monitor drivers' inattention, cognitive awareness, and emotional distraction in 1/30 of a second through a number of face-analytics and emotion-recognition technologies. We'll also reveal the five most common applications in the automotive space, ranging from invisible reactive support systems to semi-autonomous driving. Plus, we'll present a brief live demo toward the end of the session.
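The session does not disclose its network architecture, but the general pipeline it describes — one forward pass over a cropped face image producing a distribution over driver states — can be sketched in plain numpy. Everything here (the kernel count, the label set, the conv → ReLU → global-average-pool → softmax layout) is an illustrative assumption, not the speakers' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, kernel):
    """Valid 2-D convolution of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * kernel)
    return out

def classify_expression(face, kernels, weights, labels):
    """One forward pass: conv -> ReLU -> global average pool -> softmax."""
    features = np.array([np.maximum(conv2d(face, k), 0).mean() for k in kernels])
    logits = features @ weights
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    return labels[int(np.argmax(probs))], probs

# Random (untrained) parameters and input, purely to exercise the pipeline;
# a real system would load learned weights and a camera frame.
kernels = rng.standard_normal((4, 3, 3))
weights = rng.standard_normal((4, 3))
labels = ["attentive", "distracted", "drowsy"]
face = rng.standard_normal((32, 32))        # stand-in for a cropped face image
label, probs = classify_expression(face, kernels, weights, labels)
```

At the frame rates quoted in the abstract (1/30 of a second), the convolution loop above would of course be replaced by a GPU kernel; the sketch only shows the data flow.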
Senior Manager, Platform for Autonomous Driving
Developing Software Architectures for Autonomous Vehicles
Learn how the GPU's real-time graphics capabilities can be used to interactively visualize and enhance the camera system in modern cars. The GPU simplifies design, enhances the interactive calibration and testing of the car's computer vision systems, and even allows for creating simulated environments in which the car's computer vision behavior can be tested against standard safety tests and street navigation scenarios.
Hollywood Under the Hood: Inside the Mercedes-Benz Concept Car
Developing the “Audi Virtual Cockpit” Fully Digital Instrument Cluster
High-Performance Pedestrian Detection on NVIDIA® Tegra®
PANASONIC SILICON VALLEY LABORATORY, Senior Research Engineer
A Single Forward Propagation of Neural Network for End-to-End Image Detection System
This talk will describe how a single forward propagation of a neural network can give us the locations of objects of interest in an image frame. There are no proposal generation steps before running the neural network and no post-processing steps after. The speaker will describe a fully neural detection system—implemented by deep learning research teams at Panasonic—that achieves real-time speed and state-of-the-art performance. The talk also includes a live demonstration of the system on a notebook with an NVIDIA® GeForce® GTX 970M and on a tablet featuring an NVIDIA Tegra® K1 GPU.
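Single-shot detectors of this kind typically have the network emit a grid of predictions in one pass, which is then decoded directly into boxes. Panasonic's exact output format is not described in the abstract, so the grid layout below (each cell predicting a cell-relative box, an objectness score, and class scores) is an assumed, YOLO-style convention used only to illustrate the "one forward pass, no proposals" idea:

```python
import numpy as np

def decode_grid(output, conf_threshold=0.5):
    """Decode a single-shot detector's output grid into detections.

    output: array of shape (S, S, 5 + C), where each cell predicts
    (x, y, w, h, objectness, class scores...), with x, y relative to
    the cell and w, h relative to the image. Returns a list of
    (x, y, w, h, confidence, class_id) tuples in image-relative units.
    """
    S = output.shape[0]
    detections = []
    for row in range(S):
        for col in range(S):
            x, y, w, h, obj = output[row, col, :5]
            class_scores = output[row, col, 5:]
            cls = int(np.argmax(class_scores))
            conf = obj * class_scores[cls]
            if conf >= conf_threshold:
                # Shift the cell-relative center into image coordinates.
                detections.append(((col + x) / S, (row + y) / S,
                                   w, h, float(conf), cls))
    return detections

# One confident prediction in the bottom-left cell of a 2x2 grid, 2 classes.
out = np.zeros((2, 2, 7))
out[1, 0, :5] = [0.5, 0.5, 0.2, 0.4, 0.9]
out[1, 0, 5:] = [0.1, 0.95]
dets = decode_grid(out)
```

The decode step is the only thing that happens after the forward pass, which is what makes the end-to-end latency low enough for notebook- and Tegra-class hardware.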
Senior Software Engineer
Hollywood Under the Hood: Inside the Mercedes-Benz Concept Car
Mercedes-Benz's history is defined by moments of innovation that have dramatically shaped the automotive industry. When the company sought a solution to support the efficient development and delivery of next-generation digital user experiences (UX), it engaged The Foundry. The solution, code-named Project Dash, leverages proven 3D content and digital visualization technology from The Foundry, existing Mercedes solutions, and custom software development. Working closely with Mercedes, The Foundry created a fully bespoke solution for real-time UI/UX design. With this solution, Mercedes UX designers can explore, create, and iterate faster, with high-quality content.
Manager, Cluster Instruments and Graphics Framework
Get an overview of the techniques used for Audi's NVIDIA® Tegra® 3-powered virtual cockpit, focusing on (1) reduction of start-up time, (2) instrument display at 60 fps, and (3) synchronization with the infotainment main unit. You'll also get to know the overall software structure and see how the graphical effects were implemented. The virtual cockpit is available in two configurations. The single-display configuration is used for sport models—like the Audi TT and R8—where the output of the infotainment main unit is integrated into the instrument cluster. In contrast, the dual-display configuration also features a 'standard' main unit display.
High-Performance Pedestrian Detection on NVIDIA® Tegra®
This study presents an innovative approach to efficiently mapping a popular pedestrian detection algorithm, the histogram of oriented gradients (HOG), onto an NVIDIA® Tegra® GPU. Attendees will learn new techniques for optimizing a real computer vision application on the Tegra X1, as well as several new architectural features of the GPU.
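The core computation being mapped to the GPU — per-cell gradient-orientation histograms — can be sketched in numpy to show where the parallelism comes from (every pixel's gradient and every cell's histogram are independent). This is a textbook HOG sketch under common defaults (8×8 cells, 9 unsigned orientation bins), not the speakers' Tegra implementation, and it omits the block-normalization and SVM stages of the full detector:

```python
import numpy as np

def hog_cell_histograms(image, cell_size=8, n_bins=9):
    """Per-cell gradient-orientation histograms (the core of HOG).

    image: 2-D float array (grayscale). Returns an array of shape
    (rows // cell_size, cols // cell_size, n_bins) holding
    magnitude-weighted histograms over unsigned orientations
    in [0, 180) degrees.
    """
    # Central-difference gradients (the [-1, 0, 1] kernel typical of HOG).
    gx = np.zeros_like(image)
    gy = np.zeros_like(image)
    gx[:, 1:-1] = image[:, 2:] - image[:, :-2]
    gy[1:-1, :] = image[2:, :] - image[:-2, :]

    magnitude = np.hypot(gx, gy)
    orientation = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned

    h, w = image.shape
    cy, cx = h // cell_size, w // cell_size
    hist = np.zeros((cy, cx, n_bins))
    bin_width = 180.0 / n_bins
    for i in range(cy):
        for j in range(cx):
            sl = (slice(i * cell_size, (i + 1) * cell_size),
                  slice(j * cell_size, (j + 1) * cell_size))
            bins = (orientation[sl] / bin_width).astype(int) % n_bins
            # Magnitude-weighted vote into the cell's histogram.
            np.add.at(hist[i, j], bins.ravel(), magnitude[sl].ravel())
    return hist

# A vertical edge: all gradient energy is horizontal (orientation 0 deg).
img = np.zeros((16, 16))
img[:, 8:] = 1.0
h = hog_cell_histograms(img)
```

On a GPU, the per-cell loop becomes one thread block per cell, with the histogram votes accumulated in shared memory — which is exactly the kind of mapping decision this session examines on the Tegra X1.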