NVIDIA DRIVE VIDEOS

The NVIDIA DRIVE Software team is constantly innovating, developing redundant and diverse deep neural networks for safe and robust self-driving systems that are transforming the industry.

Experience Our Latest AV Innovations

Select a tab below for an inside look at the process.

  • NVIDIA DRIVE Labs
  • NVIDIA DRIVE Dispatch

Short-form videos that dive into specific self-driving algorithms.


NVIDIA DRIVE IX AI Algorithms Perform Intuitive In-Cabin Perception

In this DRIVE Labs episode, we show how DRIVE IX perceives driver attention, activity, emotion, behavior, posture, speech, gesture and mood. Driver perception is a key aspect of the platform that enables the AV system to ensure a driver is alert and paying attention to the road. It also enables the AI system to perform cockpit functions that are more intuitive and intelligent.


Optimizing Light Source Perception with Software-Defined AI

In this DRIVE Labs episode, we show how software-defined AI techniques can be used to significantly improve performance and functionality of our light source perception deep neural network (DNN) — increasing range, adding classification capabilities and more — in a matter of weeks.


All the Right Moves: How AI Helps Self-Driving Cars Predict the Future

Self-driving cars rely on AI to anticipate traffic patterns and safely maneuver in a complex environment. In this DRIVE Labs episode, we demonstrate how our PredictionNet deep neural network can predict future paths of other road users using live perception and map data.
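
The episode doesn't detail PredictionNet's architecture. As a rough illustration of the general pattern it names, a DNN that consumes a rasterized bird's-eye-view image of map and perception data and regresses future waypoints, here is a minimal PyTorch sketch; the raster channels, image size, and prediction horizon are all assumptions, not DRIVE's actual settings.

    # Illustrative sketch only: a tiny CNN that maps a rasterized bird's-eye-view
    # image (assumed channels: map lanes, drivable area, past agent positions)
    # to future (x, y) waypoints for one agent. PredictionNet's real
    # architecture, inputs, and outputs are not specified in the episode.
    import torch
    import torch.nn as nn

    HORIZON = 10  # number of future waypoints to predict (assumption)

    class PathPredictor(nn.Module):
        def __init__(self, in_channels=3):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.head = nn.Linear(32, HORIZON * 2)  # (x, y) per future step

        def forward(self, bev_raster):               # (N, C, H, W)
            return self.head(self.backbone(bev_raster)).view(-1, HORIZON, 2)

    # One 128x128 raster in -> ten predicted waypoints out.
    waypoints = PathPredictor()(torch.zeros(1, 3, 128, 128))
    print(waypoints.shape)  # torch.Size([1, 10, 2])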


How AI Helps Autonomous Vehicles Perceive Intersection Structure

Handling intersections autonomously presents a complex set of challenges for self-driving cars. Earlier in the DRIVE Labs series, we demonstrated how we detect intersections, traffic lights and traffic signs with the WaitNet DNN, and how we classify traffic light state and traffic sign type with the LightNet and SignNet DNNs. In this episode, we go further to show how NVIDIA uses AI to perceive the variety of intersection structures that an autonomous vehicle could encounter on a daily drive.


How Active Learning Improves Nighttime Pedestrian Detection

Active learning makes it possible for AI to automatically choose the right training data. An ensemble of dedicated DNNs goes through a pool of image frames, flagging frames that it finds to be confusing. These frames are then labeled and added to the training dataset. This process can improve DNN perception in difficult conditions, such as nighttime pedestrian detection.
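
The episode doesn't publish the exact selection metric, so the sketch below illustrates the general idea with a common heuristic: score each unlabeled frame by how much the ensemble members disagree, then send the highest-scoring frames for labeling. Every name here (disagreement, select_confusing_frames, ensemble) is illustrative, not a DRIVE API.

    # Minimal active-learning sketch: rank unlabeled frames by ensemble
    # disagreement and pick the most confusing ones for human labeling.
    # The scoring heuristic and every name here are illustrative assumptions.
    import numpy as np

    def disagreement(prob_maps):
        """Variance across ensemble members' per-pixel pedestrian
        probabilities, averaged over the frame; higher = more confusing."""
        return float(np.stack(prob_maps).var(axis=0).mean())

    def select_confusing_frames(frames, ensemble, budget):
        """Return the `budget` frames the ensemble disagrees on most."""
        scores = [disagreement([model(f) for model in ensemble]) for f in frames]
        order = np.argsort(scores)[::-1]  # most confusing first
        return [frames[i] for i in order[:budget]]

The selected frames would then be labeled, added to the training dataset, and the perception DNN retrained, closing the loop described above.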


Laser Focused: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars

Traditional methods for processing lidar data pose significant challenges, such as difficulty detecting and classifying different types of objects, scenes and weather conditions, as well as limitations in performance and robustness. Our multi-view LidarNet deep neural network uses multiple perspectives, or views, of the scene around the car to address these lidar processing challenges.
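
The episode's central idea is processing the same point cloud in more than one view. The sketch below builds two such views with NumPy, a perspective (spherical) range image and a top-down bird's-eye-view grid; the resolutions, fields of view, and ranges are illustrative assumptions, not DRIVE's settings.

    # Two complementary lidar 'views' of one point cloud, as the episode
    # describes: a perspective (spherical) range image and a top-down grid.
    import numpy as np

    def range_image(points, h=32, w=1024,
                    fov_up=np.deg2rad(15), fov_down=np.deg2rad(-25)):
        """Project (N, 3) lidar points into an h x w spherical depth image."""
        x, y, z = points.T
        r = np.linalg.norm(points, axis=1)
        yaw = np.arctan2(y, x)                            # [-pi, pi]
        pitch = np.arcsin(z / np.maximum(r, 1e-6))
        u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
        v = ((fov_up - pitch) / (fov_up - fov_down) * h).astype(int).clip(0, h - 1)
        img = np.zeros((h, w))
        img[v, u] = r
        return img

    def bev_grid(points, size=256, extent=50.0):
        """Rasterize points into a top-down occupancy grid (+/- extent meters)."""
        i = ((points[:, 0] + extent) / (2 * extent) * size).astype(int).clip(0, size - 1)
        j = ((points[:, 1] + extent) / (2 * extent) * size).astype(int).clip(0, size - 1)
        grid = np.zeros((size, size))
        grid[i, j] = 1.0
        return grid

    pts = np.random.randn(1000, 3) * 10        # stand-in point cloud
    views = range_image(pts), bev_grid(pts)    # two inputs for a multi-view DNN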


Lost in Space? Localization Helps Self-Driving Cars Find Their Way

Localization is a critical capability for autonomous vehicles: computing their three-dimensional (3D) location within a map, including 3D position, 3D orientation, and the uncertainties in those position and orientation values. In this DRIVE Labs episode, we show how our localization algorithms make it possible to achieve high accuracy and robustness using mass-market sensors and HD maps.
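
As a concrete picture of the quantities listed above, a localization result can be represented as a 3D position, an orientation, and a covariance over both. The field names and conventions in this sketch are illustrative assumptions, not DRIVE's data structures.

    # Minimal sketch of a localization result: 3D position, 3D orientation,
    # and uncertainty over both. Names and the 6x6 covariance convention
    # are illustrative assumptions.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class LocalizationResult:
        position: np.ndarray     # (3,) x, y, z in the map frame, meters
        orientation: np.ndarray  # (4,) unit quaternion (w, x, y, z)
        covariance: np.ndarray   # (6, 6) over [x, y, z, roll, pitch, yaw]

        def position_std(self):
            """Per-axis 1-sigma position uncertainty in meters."""
            return np.sqrt(np.diag(self.covariance)[:3])

    pose = LocalizationResult(
        position=np.array([120.4, -35.2, 1.1]),
        orientation=np.array([1.0, 0.0, 0.0, 0.0]),  # identity rotation
        covariance=np.diag([0.05, 0.05, 0.2, 1e-4, 1e-4, 5e-4]),
    )
    print(pose.position_std())  # roughly [0.22 0.22 0.45] meters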


How AI Reads the Writing on the Road

Watch how we evolved our LaneNet DNN into our high-precision MapNet DNN. This evolution includes an increase in detection classes to cover road markings and vertical landmarks (e.g., poles) in addition to lane lines. It also leverages end-to-end detection, which provides faster in-car inference.


AIs on the Road: Surround Camera Radar Fusion Eliminates Blind Spots for Self-Driving Cars

The ability to detect and react to objects all around the vehicle makes it possible to deliver a comfortable and safe driving experience. In this DRIVE Labs video, we explain why it is essential to have a sensor fusion pipeline that can combine camera and radar inputs for robust surround perception.
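
The video doesn't specify the fusion algorithm itself. One common late-fusion step, sketched below with hypothetical names and thresholds, associates each camera detection with the nearest radar return by azimuth, so the fused object carries the camera's object class and the radar's range and radial velocity.

    # Sketch of one common camera-radar late-fusion step (not necessarily the
    # DRIVE pipeline): match camera detections to radar returns by azimuth.
    # All names and thresholds are illustrative.
    import math

    def fuse(camera_dets, radar_returns, max_azimuth_gap=math.radians(2.0)):
        """camera_dets: list of (azimuth_rad, class_label)
        radar_returns: list of (azimuth_rad, range_m, radial_velocity_mps)"""
        fused = []
        for cam_az, label in camera_dets:
            best = min(radar_returns, key=lambda r: abs(r[0] - cam_az), default=None)
            if best is not None and abs(best[0] - cam_az) <= max_azimuth_gap:
                fused.append({"class": label, "range_m": best[1],
                              "velocity_mps": best[2]})
        return fused

    objects = fuse(
        camera_dets=[(math.radians(5.0), "car")],
        radar_returns=[(math.radians(5.4), 42.0, -3.1)],
    )
    print(objects)  # [{'class': 'car', 'range_m': 42.0, 'velocity_mps': -3.1}]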

Brief updates from our AV fleet, highlighting new breakthroughs.


NVIDIA DRIVE Dispatch - S1E3

See the latest advances in DepthNet, road marking detection, multi-radar egomotion estimation, cross-camera feature tracking, and more.


NVIDIA DRIVE Dispatch - S1E2

Explore progress in parking spot detection, 3D location in landmark detection, our first autonomous drive using an automatically generated MyRoute map, and road plane and suspension estimation.


NVIDIA DRIVE Dispatch - S1E1

Check out advances in scooter classification and avoidance, traffic light detection, 2D cuboid stability, 3D freespace from camera annotations, the lidar perception pipeline, and headlight/tail light/street light perception.

Get the latest DRIVE videos delivered straight to your inbox with the NVIDIA Automotive newsletter.