NVIDIA DRIVE Videos

The NVIDIA DRIVE Team is constantly innovating, developing end-to-end autonomous driving solutions that are transforming the industry. 

Experience Our Latest AV Innovations

Select a tab below for an inside look at the process.

  • NVIDIA DRIVE Labs
  • NVIDIA DRIVE Dispatch

Short-form videos highlighting the building blocks of our autonomous vehicle technology.

 

Self-Supervised Reconstruction of Dynamic Driving Scenarios

Autonomous vehicle simulation is effective only if it can accurately reproduce the real world. The need for fidelity increases, and becomes harder to achieve, as scenarios grow more dynamic and complex. In this episode, learn about EmerNeRF, a method for reconstructing dynamic driving scenarios.

 

Ensuring Precision with Dynamic View Synthesis

As automakers integrate autonomy into their fleets, challenges may emerge when extending autonomous vehicle technology to different types of vehicles. In this edition of NVIDIA DRIVE Labs, we dive into viewpoint robustness and explore how recent advancements provide a solution using dynamic view synthesis.

 

Pruning AI Models for Peak Performance

HALP (Hardware-Aware Latency Pruning) is a new method designed to adapt convolutional neural networks (CNNs) and transformer-based architectures for real-time performance. In this video, learn how HALP optimizes pre-trained models to maximize compute utilization.

 

Taking Autonomous Vehicle Occupancy Prediction Into the Third Dimension

The concept of "3D occupancy prediction" is critical to the development of safe and robust self-driving systems. In this episode, we go beyond the traditional bird's eye view approach and showcase NVIDIA's 3D perception technology, which won the 3D Occupancy Prediction Challenge at CVPR 2023.

 

Enhanced Obstacle Avoidance for Autonomous Parking in Tight Spaces

Early Grid Fusion (EGF) is a new technique that enhances near-field obstacle avoidance in automatic parking assist. EGF fuses machine-learned camera detections with ultrasonic sensor data to accurately detect and perceive surrounding obstacles, providing a 360-degree surround view.
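The fusion idea described above can be sketched in a few lines. This is an illustrative toy, not NVIDIA's implementation: the grid size, the value range, and the per-cell maximum fusion rule are all assumptions.

```python
# Illustrative sketch only (the real EGF method is NVIDIA's). Two
# hypothetical near-field occupancy grids with cells in [0, 1], one from
# machine-learned camera detections and one from ultrasonic echoes, are
# fused with a per-cell maximum, a conservative "either sensor sees it" rule.
def fuse_grids(camera_grid, ultrasonic_grid):
    return [
        [max(c, u) for c, u in zip(cam_row, ultra_row)]
        for cam_row, ultra_row in zip(camera_grid, ultrasonic_grid)
    ]

camera = [[0.0] * 4 for _ in range(4)]
camera[1][2] = 0.9        # obstacle the camera DNN sees
ultrasonic = [[0.0] * 4 for _ in range(4)]
ultrasonic[3][0] = 0.7    # low curb only the ultrasonics detect
fused = fuse_grids(camera, ultrasonic)
```

Taking the maximum keeps the estimate conservative: an obstacle reported by either sensor survives fusion, which suits near-field parking scenarios where a miss is costlier than a false positive.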

 

Enhancing AI Segmentation Models for Autonomous Vehicle Safety

Precise environmental perception is critical for autonomous vehicle (AV) safety, especially when handling unseen conditions. In this episode of DRIVE Labs, we discuss SegFormer, a Vision Transformer model that generates robust semantic segmentation while maintaining high efficiency, and introduce the mechanism behind that robustness and efficiency.

 

Generating Potential Accident Scenarios for Autonomous Vehicles Using AI

Testing autonomous vehicles (AVs) in potential near-accident scenarios is critical for evaluating safety, but is difficult and unsafe to do in the real world. In this episode of DRIVE Labs, we discuss a new method from NVIDIA researchers called STRIVE (Stress-Test Drive), which automatically generates potential accident scenarios in simulation for AVs.

 

Helping AVs Better Understand Speed Limit Signs

Understanding speed limit signs may seem like a straightforward task, but it can quickly become more complex in situations in which different restrictions apply to different lanes, or when driving in a new country. This episode of DRIVE Labs shows how AI-based live perception can help AVs better understand the complexities of speed limit signs, using both explicit and implicit cues.

 

How AI Improves Radar Perception for Autonomous Vehicles

Diverse and redundant sensors, such as camera and radar, are necessary for AV perception. However, radar sensors that leverage only traditional processing may not be up to the task. In this DRIVE Labs video, we show how AI can address the shortcomings of traditional radar signal processing in distinguishing moving and stationary objects to bolster AV perception.

 

NVIDIA DRIVE IX AI Algorithms Perform Intuitive In-Cabin Perception

In this DRIVE Labs episode, we show how DRIVE IX perceives driver attention, activity, emotion, behavior, posture, speech, gesture and mood. Driver perception is a key aspect of the platform that enables the AV system to ensure a driver is alert and paying attention to the road. It also enables the AI system to perform cockpit functions that are more intuitive and intelligent.

 

How AI Helps Autonomous Vehicles Perceive Intersection Structure

Handling intersections autonomously presents a complex set of challenges for self-driving cars. Earlier in the DRIVE Labs series, we demonstrated how we detect intersections, traffic lights and traffic signs with the WaitNet DNN, and how we classify traffic light state and traffic sign type with the LightNet and SignNet DNNs. In this episode, we go further to show how NVIDIA uses AI to perceive the variety of intersection structures that an autonomous vehicle could encounter on a daily drive.

 

How Active Learning Improves Nighttime Pedestrian Detection

Active learning makes it possible for AI to automatically choose the right training data. An ensemble of dedicated DNNs goes through a pool of image frames, flagging frames that it finds to be confusing. These frames are then labeled and added to the training dataset. This process can improve DNN perception in difficult conditions, such as nighttime pedestrian detection.
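The selection loop described above can be sketched as follows. The frame names, ensemble scores, and variance-based disagreement measure are illustrative assumptions, not NVIDIA's implementation.

```python
from statistics import pvariance

# Hedged sketch of ensemble-based frame selection (not NVIDIA's code):
# frames where the ensemble members disagree most are flagged as
# "confusing" and sent for labeling.
def select_confusing(frame_scores, top_k=2):
    """frame_scores maps frame id -> list of per-model confidence scores."""
    disagreement = {f: pvariance(s) for f, s in frame_scores.items()}
    return sorted(disagreement, key=disagreement.get, reverse=True)[:top_k]

pool = {
    "day_0001":   [0.97, 0.96, 0.98],  # ensemble agrees: easy frame
    "night_0017": [0.20, 0.85, 0.55],  # members disagree: hard night frame
    "night_0042": [0.10, 0.90, 0.40],  # members disagree
}
confusing = select_confusing(pool)     # the two night frames are selected
```

Using disagreement as the selection signal is what makes the loop "active": labeling effort concentrates on frames, such as nighttime pedestrian scenes, where the current models are least certain.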

 

Laser Focused: How Multi-View LidarNet Presents Rich Perspective for Self-Driving Cars

Traditional methods for processing lidar data pose significant challenges, such as a limited ability to detect and classify different types of objects, scenes and weather conditions, as well as limitations in performance and robustness. Our multi-view LidarNet deep neural network uses multiple perspectives, or views, of the scene around the car to address these lidar processing challenges.

 

Lost in Space? Localization Helps Self-Driving Cars Find Their Way

Localization is a critical capability for autonomous vehicles: computing their three-dimensional (3D) pose within a map, including 3D position, 3D orientation, and the uncertainties in those values. In this DRIVE Labs episode, we show how our localization algorithms achieve high accuracy and robustness using mass-market sensors and HD maps.

 

How AI Reads the Writing on the Road

Watch how we evolved our LaneNet DNN into our high-precision MapNet DNN. The evolution expands the detection classes beyond lane lines to cover road markings and vertical landmarks (e.g., poles), and leverages end-to-end detection for faster in-car inference.

 

AIs on the Road: Surround Camera Radar Fusion Eliminates Blind Spots for Self-Driving Cars

The ability to detect and react to objects all around the vehicle makes it possible to deliver a comfortable and safe driving experience. In this DRIVE Labs video, we explain why it is essential to have a sensor fusion pipeline which can combine camera and radar inputs for robust surround perception.

 

Pixel-Perfect Perception: How AI Helps Autonomous Vehicles See Outside the Box

For highly complex driving scenarios, it’s helpful for the autonomous vehicle’s perception system to provide a more detailed understanding of its surroundings. With our panoptic segmentation DNN approach, we can obtain such fine-grained results by segmenting image content with pixel-level accuracy.

 

Blinded by the Light? How AI Avoids High Beam Glare for Other Vehicles

High beam lights can significantly increase the nighttime visibility range of standard headlights; however, they can create hazardous glare for other drivers. We've trained a camera-based deep neural network (DNN), called AutoHighBeamNet, to automatically generate control outputs for the vehicle's high beam light system, increasing nighttime driving visibility and safety.

 

Right On Track: Feature Tracking for Robust Self-Driving

Feature tracking estimates pixel-level correspondences and changes across adjacent video frames, providing critical temporal and geometric information for object motion and velocity estimation, camera self-calibration, and visual odometry.

 

Searching for a Parking Spot? AI Got It

Our ParkNet deep neural network can detect an open parking spot under a variety of conditions. Watch how it handles both indoor and outdoor spaces, separated by single, double or faded lane markings, as well as differentiates between occupied, unoccupied and partially obscured spots.

 

Ride in NVIDIA's Self-Driving Car

This special edition DRIVE Labs episode shows how NVIDIA DRIVE AV Software combines the essential building blocks of perception, localization, and planning/control to drive autonomously on public roads around our headquarters in Santa Clara, Calif.

 

Classifying Traffic Signs and Traffic Lights with AI

NVIDIA DRIVE AV software uses a combination of DNNs to classify traffic signs and lights. Watch how our LightNet DNN classifies traffic light shape (e.g. solid versus arrow) and state (i.e. color), while the SignNet DNN identifies traffic sign type.

 

Eliminating Collisions with Safety Force Field

Our Safety Force Field (SFF) collision avoidance software acts as an independent supervisor on the actions of the vehicle’s primary planning and control system. SFF double-checks controls that were chosen by the primary system, and if it deems them to be unsafe, it will veto and correct the primary system’s decision.
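The veto-and-correct pattern described above can be sketched as a simple supervisor. The stopping-distance check, the deceleration values, and the command units below are hypothetical; the real SFF safety procedure is far more sophisticated.

```python
# Toy supervisor sketch (not the actual SFF algorithm): accept the primary
# planner's command unless a simple safety check rejects it, in which case
# the supervisor vetoes the command and substitutes a safe one.
def stopping_distance(speed_mps, decel_mps2=6.0):
    # distance needed to stop from speed_mps at constant deceleration
    return speed_mps ** 2 / (2.0 * decel_mps2)

def supervise(primary_accel, speed_mps, gap_m):
    """Veto the primary acceleration command and brake if the gap is unsafe."""
    if stopping_distance(speed_mps) >= gap_m:
        return -6.0                # corrected command: brake hard
    return primary_accel           # primary system's choice passes the check

safe_cmd = supervise(primary_accel=1.0, speed_mps=20.0, gap_m=50.0)  # passes
veto_cmd = supervise(primary_accel=1.0, speed_mps=20.0, gap_m=20.0)  # vetoed
```

The key design point the text describes is independence: the supervisor never plans a trajectory itself, it only double-checks and, when necessary, overrides the primary system's output.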

 

High-Precision Lane Detection

Deep neural network (DNN) processing has emerged as an important AI-based technique for lane detection. Our LaneNet DNN increases lane detection range, lane edge recall, and lane detection robustness with pixel-level precision.

 

Perceiving a New Dimension

Computing distance to objects using image data from a single camera can create challenges when it comes to hilly terrain. With the help of deep neural networks, autonomous vehicles can predict 3D distances from 2D images.

 

Surround Camera Vision

See how we use our six-camera setup to see 360 degrees around the car and track objects as they move in the surrounding environment.

 

Predicting the Future with RNNs

Autonomous vehicles must use computational methods and sensor data, such as a sequence of images, to figure out how an object is moving in time.

 

ClearSightNet Deep Neural Network

ClearSightNet DNN is trained to evaluate cameras’ ability to see clearly and determine causes of occlusions, blockages and reductions in visibility.

 

WaitNet Deep Neural Network

Learn how the WaitNet DNN is able to detect intersections without using a map.

 

Path Perception Ensemble

This trio of DNNs builds and evaluates confidence for center path and lane line predictions, as well as lane changes/splits/merges.

Brief updates from our AV fleet, highlighting new breakthroughs.

 

November 2023

In the latest edition of NVIDIA DRIVE Dispatch, learn about generating 4D reconstruction from a single drive as well as PredictionNet, a deep neural network (DNN) that can be used for predicting future behavior and trajectories of road agents in autonomous vehicle applications. We also take a look at testing for the New Car Assessment Program (NCAP) with NVIDIA DRIVE Sim.

 

January 2023

See the latest advances in autonomous vehicle perception from NVIDIA DRIVE. In this dispatch, we use ultrasonic sensors to detect the height of surrounding objects in low-speed areas such as parking lots. RadarNet DNN detects drivable free space, while the Stereo Depth DNN estimates the environment geometry.

 

February 2022

DRIVE Dispatch returns for Season 2. In this episode, we show advances in end-to-end radar DNN-based clustering, Real2Sim, driver and occupant monitoring, and more.

 

July 2021

In this episode of NVIDIA DRIVE Dispatch, we show advances in traffic motion prediction, road marking detection, 3D synthetic data visualization and more.

 

June 2021

In this episode of NVIDIA DRIVE Dispatch, we show advances in drivable path perception, camera and radar localization, parking space detection and more.

 

March 2021

In this episode of NVIDIA DRIVE Dispatch, we show advances in synthetic data for improved DNN training, radar-only perception to predict future motion, MapStream creation for crowdsourced HD maps and more.

 

February 2021

See the latest advances in DepthNet, road marking detection, multi-radar egomotion estimation, cross-camera feature tracking, and more.

 

January 2021

Explore progress in parking spot detection, 3D location in landmark detection, our first autonomous drive using an automatically generated MyRoute map, and road plane and suspension estimation.

 

December 2020

Check out advances in scooter classification and avoidance, traffic light detection, 2D cuboid stability, 3D freespace from camera annotations, lidar perception pipeline, and headlight/tail light/street light perception.

Get the latest DRIVE videos delivered straight to your inbox with the NVIDIA Automotive newsletter.