GTC 2017 will bring together experts from Silicon Valley automotive research labs, Tier-1 suppliers, and startup companies who are all using artificial intelligence to develop self-driving cars, trucks, and shuttles. Their innovations in GPU-based supercomputing enable deep learning, natural language processing, and gesture control that will change how people drive cars—and even enable cars to drive people. In addition to AI Car sessions, this track will showcase the future of automotive design, engineering simulation, virtual showrooms, and in-vehicle infotainment.
FORD MOTOR COMPANY, Technical Expert Autonomous Vehicles
Mark Crawford is a Technical Expert in autonomous vehicles at Ford Motor Company, where he works on an advanced engineering team developing Ford's autonomous vehicle. He holds B.S. and M.S. degrees in Mechanical Engineering from the Missouri University of Science and Technology and is currently pursuing his Ph.D. in Information Systems Engineering at the University of Michigan – Dearborn.
VOLVO CAR CORPORATION, Technical Expert
Henrik Lind holds a master's degree in Electrical Engineering from Chalmers University of Technology. He has worked on advanced driver assistance technologies and technology research since 1997, leading research on sensors and functions at Volvo Technological Development. In 2001, Henrik moved to Volvo Car Corporation, where he was responsible for introducing radar- and vision-based functions aimed at increasing safety and comfort for drivers. He introduced forward collision warning with emergency braking and adaptive cruise control in 2006, followed by further safety innovations. Since 2013, Henrik has worked on bringing highly automated driving technologies to Volvo Cars. He is an appointed technical specialist.
AUDI AG, Research Engineer
Christoph is a Research Engineer at AUDI AG in Ingolstadt. He holds a diploma degree in Computer Science from the University of Erlangen-Nuremberg and a postgraduate diploma in Computer and Information Sciences from Auckland University of Technology. In 2011, he received his Ph.D. (Dr.-Ing.) from the University of Erlangen-Nuremberg for work on the model-based design of embedded safety control units.
HERE, Sr. Director of Engineering
Vlad Zhukov is a Senior Director of Engineering, leading development of Cloud Platform assets within HERE. In his current role as Head of HD Map Engineering, he is focused on building highly precise mapping assets supporting vehicle automation. Vlad has spent more than 12 years with NAVTEQ / Nokia / HERE in various engineering and leadership roles. He started as a research engineer building map automation tools, and later moved on to lead a technical team supporting a joint venture in China. He has also held product development roles, leading teams supporting the development of ADAS applications. Prior to joining NAVTEQ, Vlad spent a number of years at Oracle.
AUDI AG, Project Lead Audi VR Experience
Marcus Kuehne is a progressive mind who has brought several innovations to the car industry during his career. He studied interface design and began his career in Audi product marketing. He then moved to electronics development, where he was responsible for the development and market introduction of MMI touch, the first fully integrated touchpad-based car HMI. In 2013, Marcus returned to the marketing and sales department and took over the project lead for the Audi VR Experience. For a true VR enthusiast like him, realizing one of the most ambitious and complex industrial VR applications is a dream fulfilled.
DFKI, Senior Researcher
Richard is a senior researcher at the German Research Center for Artificial Intelligence (DFKI). He holds a diploma degree in Computer Science from the University of Erlangen-Nuremberg and a postgraduate diploma in Computer and Information Sciences from Auckland University of Technology. In 2013, he received his Ph.D. (Dr.-Ing.) from the University of Erlangen-Nuremberg for work on automatic code generation for GPU accelerators from a domain-specific language for medical imaging.
The Self-Driving Car and AI track at GTC features three full days of sessions covering deep learning, AI, HD mapping, and supercomputing solutions that will transform the transportation industry—from the cockpit to autonomous driving.
Using NVIDIA DRIVE™ PX 2 to Drive a Vehicle Autonomously
We'll discuss the process of installing NVIDIA DRIVE PX 2 in a car, including data acquisition, data annotation, neural network training, and in-vehicle inference. We'll focus on the types of sensors required for perception, and on how to log and annotate data, train a neural network with that data, and run inference with that network on DRIVE PX 2 to create an occupancy grid and drive the car.
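The last step of the pipeline above turns perception output into an occupancy grid the planner can act on. As a rough illustration only (the function and data below are hypothetical, not the DRIVE PX 2 or DriveWorks API), a minimal sketch of thresholding per-cell occupancy probabilities into a binary grid might look like:

```python
# Hypothetical sketch: converting per-cell occupancy probabilities produced
# by a perception network into a binary occupancy grid. Names and the
# threshold value are illustrative assumptions, not a real SDK interface.

def to_occupancy_grid(probs, threshold=0.5):
    """Map each cell's occupancy probability to 0 (free) or 1 (occupied)."""
    return [[1 if p >= threshold else 0 for p in row] for row in probs]

# Toy 2x3 grid of probabilities (e.g., from the network's output layer).
probs = [
    [0.10, 0.90, 0.20],
    [0.05, 0.40, 0.95],
]
grid = to_occupancy_grid(probs)
print(grid)  # [[0, 1, 0], [0, 0, 1]]
```

In a real system the grid would be ego-centered, updated at sensor rate, and fused across sensors; the thresholding step shown here is the simplest possible reduction.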
Deep Learning in Ford's Autonomous Vehicles
We'll provide an overview of some of the models Ford is using to fuse sensor information, and give examples of performance optimization. Ford is using deep learning for autonomous vehicle perception across a multitude of sensors. It is important that these models have optimized performance so they can process high-resolution images, lidar point clouds, and other sensor inputs in a timely fashion. Ford is exploring a variety of methods to push run-time performance to new limits and maximize the use of the available resources, including modifying the underlying models, the data structures, and the inference engine itself.
Deep Unconstrained Gaze Estimation with Synthetic Data
Gaze tracking in unconstrained conditions, including inside cars, is challenging, and traditional gaze trackers often fail there. We've developed a CNN-based algorithm for unconstrained, head-pose- and subject-independent gaze tracking, which requires only consumer-quality color images of the eyes to determine the gaze direction and points along the boundaries of the eye, pupil, and iris. We'll describe how we successfully trained the CNN with millions of synthetic photorealistic eye images, which we rendered on the NVIDIA GPU for a wide range of head poses, gaze directions, subjects, and illumination conditions. Among appearance-based gaze estimation techniques, our algorithm has best-in-class accuracy.
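Gaze direction is commonly parameterized as a pitch/yaw angle pair that a network regresses and which is then converted to a 3D unit vector. The conversion below uses one common convention; the session's exact parameterization is not specified, so treat this as an illustrative sketch:

```python
# Illustrative sketch of one common pitch/yaw -> 3D gaze vector convention.
# The actual parameterization used by the system described in the session
# may differ; this is an assumption for illustration.
import math

def angles_to_gaze_vector(pitch, yaw):
    """Convert pitch and yaw (radians) to a unit 3D gaze direction,
    with +z pointing straight ahead out of the eye."""
    x = math.cos(pitch) * math.sin(yaw)
    y = math.sin(pitch)
    z = math.cos(pitch) * math.cos(yaw)
    return (x, y, z)

v = angles_to_gaze_vector(0.0, 0.0)
print(v)  # (0.0, 0.0, 1.0) -- straight-ahead gaze
```

Working in angle space keeps the regression target two-dimensional, while the unit-vector form is convenient for computing angular error between predicted and ground-truth gaze.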
The Self-Healing Map for Automated Driving
Self-driving vehicles require a high-definition, real-time "self-healing" map to help the vehicle operate safely and comfortably. The vehicle needs to understand "where am I?", "what lies ahead?", and "how do I get there comfortably?" It needs real-time, accurate, and semantically rich data to pinpoint its lane-level position and to make proactive maneuvers in response to changes or incidents that affect driving conditions. These are critical issues that HERE is addressing with its HD Live Map: providing precise positioning on the road, enabling accurate planning of vehicle control maneuvers beyond sensor visibility, and increasing consumer trust through a more comfortable experience. HERE is closing the data loop from vehicle to cloud to vehicle. By utilizing real-time sensor data, HERE turns commercial vehicles into data collectors, providing its backend with the data needed to effectively "self-heal" the map in real time.
Multilayer and Multimodal Fusion of Deep Neural Networks for Video Classification
We'll present a novel framework to combine multiple layers and modalities of deep neural networks for video classification, which is fundamental to intelligent video analytics, including automatic categorizing, searching, indexing, segmentation, and retrieval of videos. We'll first propose a multilayer strategy to simultaneously capture a variety of levels of abstraction and invariance in a network, where the convolutional and fully connected layers are effectively represented by the proposed feature aggregation methods. We'll further introduce a multimodal scheme that includes four highly complementary modalities to extract diverse static and dynamic cues at multiple temporal scales. In particular, for modeling long-term temporal information, we propose a new structure, FC-RNN, to effectively transform the pre-trained fully connected layers into recurrent layers. A robust boosting model is then introduced to optimize the fusion of multiple layers and modalities in a unified way. In extensive experiments, we achieve state-of-the-art results on benchmark datasets.
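The FC-RNN idea of turning a pre-trained fully connected layer into a recurrent one can be sketched as reusing the FC layer's weights for the input transform and adding a new recurrent weight matrix for the hidden state. The step below, h_t = sigmoid(W x_t + U h_{t-1} + b), is a minimal illustrative version under that assumption; the session's actual formulation, nonlinearity, and initialization may differ:

```python
# Illustrative sketch of an FC-RNN-style step: W and b are taken from a
# pre-trained fully connected layer; U is a newly added recurrent weight
# matrix. This is an assumption-based sketch, not the authors' exact model.
import math

def fc_rnn_step(W, U, b, x, h_prev):
    """One recurrent step: h_t = sigmoid(W x_t + U h_{t-1} + b)."""
    def matvec(M, v):
        return [sum(m * vi for m, vi in zip(row, v)) for row in M]
    z = [wx + uh + bi for wx, uh, bi in zip(matvec(W, x), matvec(U, h_prev), b)]
    return [1.0 / (1.0 + math.exp(-zi)) for zi in z]

# With U initialized to zero, the step reduces to the original FC layer,
# which is what makes reusing the pre-trained weights attractive.
W = [[1.0, 0.0], [0.0, 1.0]]
U = [[0.0, 0.0], [0.0, 0.0]]
b = [0.0, 0.0]
h = fc_rnn_step(W, U, b, x=[0.0, 0.0], h_prev=[5.0, 5.0])
print(h)  # [0.5, 0.5]
```

The design choice this illustrates is that the recurrent layer starts out behaving exactly like the pre-trained FC layer, so training only needs to learn the temporal coupling U rather than relearning the input transform from scratch.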
Deep Learning for Self-Driving Cars
We'll talk about deep learning for self-driving cars, including sensing, perception, localization, and mapping. It'll be non-technical and a summary of my group's latest results in the field.