NVIDIA DRIVE Labs

Inside look at autonomous vehicle software

The DRIVE Labs video series takes an engineering-focused look at a range of self-driving challenges, from perceiving paths to handling intersections. These short clips illustrate how the NVIDIA DRIVE AV Software team is creating safe and robust self-driving systems.

How AI Reads the Writing on the Road

Watch how we evolved our LaneNet DNN into the high-precision MapNet DNN. MapNet expands the set of detection classes beyond lane lines to include road markings and vertical landmarks (e.g. poles), and uses end-to-end detection for faster in-car inference.

AIs on the Road: Surround Camera Radar Fusion Eliminates Blind Spots for Self-Driving Cars

The ability to detect and react to objects all around the vehicle makes it possible to deliver a comfortable and safe driving experience. In this DRIVE Labs video, we explain why a sensor fusion pipeline that combines camera and radar inputs is essential for robust surround perception.
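The video itself doesn't show code, but a minimal late-fusion sketch helps illustrate the idea: project each radar return into the camera image and attach its range to the detection box that contains it. The camera intrinsics, coordinate convention, and association rule below are simplified assumptions, not the production pipeline.

```python
import numpy as np

# Hypothetical late-fusion sketch: associate radar returns with camera
# bounding boxes by projecting each radar point into the image with a
# pinhole camera model, then matching it to the box that contains it.

def project_radar_to_image(radar_xyz, K):
    """Project radar points (N, 3), in camera coordinates, to pixel coordinates."""
    pts = radar_xyz @ K.T                  # (N, 3) homogeneous pixel coordinates
    return pts[:, :2] / pts[:, 2:3]        # divide by depth -> (N, 2)

def associate(radar_xyz, boxes, K):
    """Attach the nearest radar range to each camera box that contains a projection."""
    uv = project_radar_to_image(radar_xyz, K)
    fused = []
    for (x1, y1, x2, y2) in boxes:
        inside = ((uv[:, 0] >= x1) & (uv[:, 0] <= x2) &
                  (uv[:, 1] >= y1) & (uv[:, 1] <= y2))
        if inside.any():
            rng = np.linalg.norm(radar_xyz[inside], axis=1).min()
            fused.append(((x1, y1, x2, y2), float(rng)))
    return fused

# Assumed intrinsics and toy data, for illustration only.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
radar_xyz = np.array([[2.0, 0.0, 20.0], [-5.0, 0.5, 35.0]])  # x right, y down, z forward
boxes = [(1000.0, 400.0, 1200.0, 700.0)]                      # camera detections (pixels)
print(associate(radar_xyz, boxes, K))                          # box fused with ~20 m range
```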

Pixel-Perfect Perception: How AI Helps Autonomous Vehicles See Outside the Box

For highly complex driving scenarios, it’s helpful for the autonomous vehicle’s perception system to provide a more detailed understanding of its surroundings. With our panoptic segmentation DNN approach, we can obtain such fine-grained results by segmenting image content with pixel-level accuracy.
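As a rough illustration of what pixel-level output means here (not NVIDIA's actual format), a panoptic result can be encoded so that every pixel carries both a semantic class and an instance id; the sketch below uses the common class_id * 1000 + instance_id packing convention on a toy scene.

```python
import numpy as np

# Hypothetical panoptic output encoding: each pixel stores a semantic class
# and an instance id, packed into a single integer per pixel.

CLASSES = {0: "road", 1: "car", 2: "person"}

def pack(class_map, instance_map):
    return class_map.astype(np.int64) * 1000 + instance_map

def unpack(panoptic_map):
    return panoptic_map // 1000, panoptic_map % 1000

# Toy 4x4 scene: road background plus two distinct car instances.
class_map = np.array([[0, 0, 1, 1],
                      [0, 0, 1, 1],
                      [1, 1, 0, 0],
                      [1, 1, 0, 0]])
instance_map = np.array([[0, 0, 1, 1],
                         [0, 0, 1, 1],
                         [2, 2, 0, 0],
                         [2, 2, 0, 0]])

panoptic = pack(class_map, instance_map)
classes, instances = unpack(panoptic)
assert np.array_equal(classes, class_map) and np.array_equal(instances, instance_map)
# Each unique packed value identifies one "stuff" region or "thing" instance.
print({int(v): CLASSES[int(v) // 1000] for v in np.unique(panoptic)})
```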

Blinded by the Light? How AI Avoids High Beam Glare for Other Vehicles

High beams can significantly increase the night-time visibility range of standard headlights; however, they can also create hazardous glare for other drivers. We've trained a camera-based deep neural network (DNN), called AutoHighBeamNet, to automatically generate control outputs for the vehicle's high beam system, increasing night-time driving visibility and safety.
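To make the idea of control outputs concrete, here is a hypothetical post-processing sketch (not AutoHighBeamNet itself): a per-frame glare-risk score, as a camera DNN might produce, is turned into a stable on/off high-beam command using simple hysteresis so the beams don't flicker. The thresholds and hold time are illustrative assumptions.

```python
# Hypothetical hysteresis controller over a per-frame glare-risk score.

class HighBeamController:
    def __init__(self, off_threshold=0.6, on_threshold=0.2, hold_frames=10):
        self.off_threshold = off_threshold   # switch high beams off above this score
        self.on_threshold = on_threshold     # allow high beams back on below this score
        self.hold_frames = hold_frames       # frames the score must stay low first
        self.high_beam_on = True
        self.low_score_streak = 0

    def update(self, glare_risk_score):
        if self.high_beam_on:
            if glare_risk_score > self.off_threshold:
                self.high_beam_on = False    # oncoming road user: dip the beams
                self.low_score_streak = 0
        else:
            if glare_risk_score < self.on_threshold:
                self.low_score_streak += 1
                if self.low_score_streak >= self.hold_frames:
                    self.high_beam_on = True # road clear long enough: restore high beams
            else:
                self.low_score_streak = 0
        return self.high_beam_on

controller = HighBeamController()
scores = [0.05] * 5 + [0.9] * 5 + [0.1] * 15   # oncoming vehicle appears, then passes
print([controller.update(s) for s in scores])
```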

Right On Track: Feature Tracking for Robust Self-Driving

Feature tracking estimates pixel-level correspondences and changes between adjacent video frames, providing critical temporal and geometric information for object motion and velocity estimation, camera self-calibration, and visual odometry.
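For readers who want to experiment, the classical version of this idea can be sketched in a few lines with OpenCV: detect corners in one frame and track them into the next with pyramidal Lucas-Kanade optical flow. This is a generic baseline, not the tracker discussed in the video, and the synthetic frames are just a stand-in for real camera data.

```python
import numpy as np
import cv2

# Classical feature-tracking sketch: corner detection plus Lucas-Kanade flow.
# The per-feature displacements are the raw signal that motion estimation,
# self-calibration, and visual odometry build on.

def make_frame(offset):
    frame = np.zeros((240, 320), dtype=np.uint8)
    frame[60 + offset:120 + offset, 100 + offset:160 + offset] = 255  # moving square
    return frame

prev_frame, next_frame = make_frame(0), make_frame(4)   # square shifts 4 px right and down

prev_pts = cv2.goodFeaturesToTrack(prev_frame, maxCorners=50,
                                   qualityLevel=0.01, minDistance=5)
next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_frame, next_frame, prev_pts, None)

tracked = status.ravel() == 1
flow = (next_pts - prev_pts)[tracked].reshape(-1, 2)
print("mean displacement (dx, dy):", flow.mean(axis=0))  # roughly (4, 4)
```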

Searching for a Parking Spot? AI Got It

Our ParkNet deep neural network can detect an open parking spot under a variety of conditions. Watch how it handles both indoor and outdoor spaces separated by single, double, or faded lane markings, and how it differentiates between occupied, unoccupied, and partially obscured spots.

Ride in NVIDIA's Self-Driving Car

This special edition DRIVE Labs episode shows how NVIDIA DRIVE AV Software combines the essential building blocks of perception, localization, and planning/control to drive autonomously on public roads around our headquarters in Santa Clara, Calif.

Classifying Traffic Signs and Traffic Lights with AI

NVIDIA DRIVE AV software uses a combination of DNNs to classify traffic signs and lights. Watch how our LightNet DNN classifies traffic light shape (e.g. solid versus arrow) and state (i.e. color), while the SignNet DNN identifies traffic sign type.
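The pairing of shape and state classification can be pictured as a single backbone with two output heads. The sketch below is a hypothetical, untrained stand-in for that idea, not the production LightNet or SignNet architecture; the class lists are illustrative.

```python
import torch
import torch.nn as nn

# Hypothetical two-head classifier: one head for traffic light shape,
# one for state, sharing a small convolutional backbone.

SHAPES = ["solid", "left_arrow", "right_arrow"]
STATES = ["red", "yellow", "green"]

class TrafficLightClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.shape_head = nn.Linear(32, len(SHAPES))
        self.state_head = nn.Linear(32, len(STATES))

    def forward(self, x):
        features = self.backbone(x)
        return self.shape_head(features), self.state_head(features)

model = TrafficLightClassifier().eval()
crop = torch.rand(1, 3, 64, 64)            # a detected traffic light crop (random stand-in)
with torch.no_grad():
    shape_logits, state_logits = model(crop)
# Untrained weights, so the labels below only demonstrate the interface.
print(SHAPES[shape_logits.argmax().item()], STATES[state_logits.argmax().item()])
```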

Eliminating Collisions with Safety Force Field

Our Safety Force Field (SFF) collision avoidance software acts as an independent supervisor on the actions of the vehicle's primary planning and control system. SFF double-checks the controls chosen by the primary system and, if it deems them unsafe, vetoes and corrects the decision.
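To illustrate only the supervisor pattern (this toy example is not the SFF safety model), the sketch below checks whether the primary controller's proposed acceleration still leaves enough stopping distance to a lead vehicle, and vetoes it with hard braking if not. The braking capability, safety margin, and simulation step are assumed values.

```python
# Toy safety-supervisor sketch: accept the primary command if it still
# leaves room to stop behind the lead vehicle, otherwise veto and brake.

MAX_BRAKE = -6.0   # m/s^2, assumed braking capability
MARGIN = 5.0       # m, assumed safety margin

def stopping_distance(speed, accel, dt=0.1, horizon=10.0):
    """Distance covered until standstill: one control step of accel, then full braking."""
    distance, t = 0.0, 0.0
    while speed > 0.0 and t < horizon:
        a = accel if t < dt else MAX_BRAKE
        speed = max(0.0, speed + a * dt)
        distance += speed * dt
        t += dt
    return distance

def supervise(speed, gap_to_lead, proposed_accel):
    """Return the proposed command if safe, otherwise veto and substitute braking."""
    if stopping_distance(speed, proposed_accel) + MARGIN < gap_to_lead:
        return proposed_accel
    return MAX_BRAKE   # veto: the primary command would not leave room to stop

print(supervise(speed=20.0, gap_to_lead=80.0, proposed_accel=1.0))   # accepted
print(supervise(speed=20.0, gap_to_lead=30.0, proposed_accel=1.0))   # vetoed
```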

High-Precision Lane Detection

Deep neural network (DNN) processing has emerged as an important AI-based technique for lane detection. Our LaneNet DNN increases lane detection range, lane edge recall, and lane detection robustness with pixel-level precision.

Perceiving a New Dimension

Computing the distance to objects from a single camera's image data is challenging on hilly terrain. With the help of deep neural networks, autonomous vehicles can predict 3D distances from 2D images.
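As a hypothetical sketch of a learned distance predictor (not the DNN described in the video), the model below maps 2D bounding-box geometry to a positive distance estimate, the kind of mapping that can absorb non-flat road geometry where a flat-ground assumption breaks down. The inputs, architecture, and untrained output are illustrative only.

```python
import torch
import torch.nn as nn

# Hypothetical regressor from 2D box geometry to object distance.

class DistanceRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, 32), nn.ReLU(),       # box: (x_center, y_bottom, width, height)
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Softplus(),   # distance in meters, constrained positive
        )

    def forward(self, box):
        return self.net(box)

model = DistanceRegressor().eval()
box = torch.tensor([[0.5, 0.8, 0.1, 0.15]])    # normalized image coordinates
with torch.no_grad():
    # Untrained weights, so the number only demonstrates the interface.
    print("predicted distance (m):", model(box).item())
```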

Surround Camera Vision

See how we use our six-camera setup to perceive 360 degrees around the car and track objects as they move through the surrounding environment.

Predicting the Future with RNNs

Autonomous vehicles must use computational methods and sensor data, such as a sequence of images, to determine how an object is moving over time.
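Here is a minimal sketch of the recurrent idea, under the assumption that perception has already turned the image sequence into object positions: an LSTM consumes a short track and predicts the position at the next time step. This is a toy PyTorch example, not the RNN from the video.

```python
import torch
import torch.nn as nn

# Toy recurrent motion predictor: past (x, y) positions in, next position out.

class MotionPredictor(nn.Module):
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)    # predicted (x, y) at the next step

    def forward(self, track):                    # track: (batch, time, 2)
        output, _ = self.lstm(track)
        return self.head(output[:, -1])          # use the last hidden state

model = MotionPredictor()
# Toy track: an object moving 1 m per step along x.
track = torch.tensor([[[0.0, 0.0], [1.0, 0.0], [2.0, 0.0], [3.0, 0.0]]])
target = torch.tensor([[4.0, 0.0]])

optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):                             # overfit the single toy example
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(track), target)
    loss.backward()
    optimizer.step()
print(model(track).detach())                     # approaches [4.0, 0.0]
```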

ClearSightNet Deep Neural Network

ClearSightNet DNN is trained to evaluate cameras’ ability to see clearly and determine causes of occlusions, blockages and reductions in visibility.

WaitNet Deep Neural Network

Learn how the WaitNet DNN is able to detect intersections without using a map.

Path Perception Ensemble

This trio of DNNs builds and evaluates confidence for center path and lane line predictions, as well as lane changes/splits/merges.
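One simple way to picture the ensemble idea (a hypothetical sketch, not the production logic) is confidence-weighted fusion of center-path predictions: each predictor contributes a polyline and a confidence, and the fused path plus an overall confidence score are handed downstream.

```python
import numpy as np

# Hypothetical confidence-weighted fusion of center-path predictions.

def fuse_center_paths(paths, confidences):
    """paths: list of (N, 2) polylines; confidences: list of scalars in [0, 1]."""
    weights = np.asarray(confidences, dtype=float)
    weights = weights / weights.sum()
    fused = sum(w * np.asarray(p) for w, p in zip(weights, paths))
    # Illustrative overall confidence: the confidence-weighted mean of the inputs.
    overall = float(np.average(confidences, weights=weights))
    return fused, overall

# Two predictors agreeing closely, plus one low-confidence outlier.
paths = [
    np.array([[0.0, 0.0], [0.10, 10.0], [0.20, 20.0]]),
    np.array([[0.0, 0.0], [0.12, 10.0], [0.22, 20.0]]),
    np.array([[0.0, 0.0], [0.60, 10.0], [1.20, 20.0]]),
]
confidences = [0.9, 0.85, 0.2]

fused_path, confidence = fuse_center_paths(paths, confidences)
print(fused_path)
print("fused confidence:", confidence)
```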

Get the latest DRIVE Labs delivered straight to your inbox with the NVIDIA Automotive newsletter.