Virtual Vehicles, Real-World Results

DRIVE Sim uses high-fidelity, physically accurate simulation to create a safe, scalable, and cost-effective way to bring self-driving vehicles to our roads. It taps into NVIDIA’s core technologies—including NVIDIA RTX™, Omniverse™, and AI—to deliver a powerful, cloud-based computing platform capable of generating a wide range of real-world scenarios for AV development and validation. DRIVE Sim creates digital twins of real-world environments using precision map data, and can generate datasets to train the vehicle’s perception systems or test its decision-making processes. It can also be connected to the AV stack in software-in-the-loop (SIL) or hardware-in-the-loop (HIL) configurations to validate the complete integrated system.

See DRIVE Sim in action
Expanding Capabilities with Omniverse

DRIVE Sim is built on NVIDIA Omniverse, which provides the core simulation and rendering engines. Autonomous vehicle simulation has extremely tight timing, repeatability, and real-time performance requirements, and must operate at scale. Additionally, generating data from AV sensor sets in physically based virtual worlds imposes tremendous compute loads. NVIDIA Omniverse is architected from the ground up with multi-GPU support to provide large-scale, multi-sensor simulation for autonomous vehicles. It also enables physically accurate, real-time sensor simulation with NVIDIA RTX.

Learn more about DRIVE Sim on Omniverse
Synthetic Data Generation for Training Autonomous Vehicles

NVIDIA DRIVE Replicator generates physically based sensor data for camera, radar, lidar, and ultrasonics, along with the corresponding ground truth. This data is then used to train AI perception networks for AVs. Synthetic data reduces time and cost, carries exact labels by construction, and provides ground truth that humans can’t label, such as depth, velocity, and occluded objects. DRIVE Replicator can also generate training data for rare and dangerous scenes, augmenting real data for a targeted approach to some of AVs’ biggest challenges.

Learn more about synthetic data generation
See DRIVE Replicator in action
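To make the ground-truth point concrete, here is a minimal sketch of what a per-frame synthetic training record might look like. The class and field names are illustrative assumptions, not DRIVE Replicator’s actual output schema; the point is that channels like exact depth, 3D velocity, and occlusion fraction come for free from the simulator but cannot be hand-labeled from a real image.

```python
from dataclasses import dataclass, field

# Hypothetical record -- names are illustrative, not DRIVE Replicator's API.
@dataclass
class GroundTruthLabel:
    class_name: str
    bbox_2d: tuple            # (x_min, y_min, x_max, y_max) in pixels
    depth_m: float            # exact distance, known to the simulator
    velocity_mps: tuple       # (vx, vy, vz) -- unobtainable from one real image
    occluded_fraction: float  # 0.0 = fully visible, 1.0 = fully hidden

@dataclass
class SyntheticFrame:
    sensor: str       # e.g. "camera_front_wide"
    timestamp_us: int
    labels: list = field(default_factory=list)

frame = SyntheticFrame(sensor="camera_front_wide", timestamp_us=1_000_000)
frame.labels.append(GroundTruthLabel(
    class_name="pedestrian",
    bbox_2d=(412, 220, 466, 380),
    depth_m=17.3,
    velocity_mps=(1.2, 0.0, 0.0),
    occluded_fraction=0.6,  # partially hidden -- a human labeler could only guess
))
```

A perception network trained on such records can learn depth or velocity estimation directly, with no manual annotation pass.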
DRIVE Constellation

NVIDIA DRIVE Constellation™ provides a dedicated hardware-in-the-loop platform powered by two servers. The first is a GPU server that runs DRIVE Sim and generates synthetic sensor data from the simulated environment. The second is a vehicle server containing the target vehicle computer, which receives that data and responds as if it were operating in the real world. DRIVE Constellation is designed to run at scale in a data center, accelerating autonomous vehicle development and validation by testing the self-driving software on its target hardware with bit and timing accuracy. It’s fully compatible with the NVIDIA DRIVE AGX™ platform or can be customized with third-party hardware. DRIVE Sim can run on a local workstation or scale to multiple GPUs on one or more nodes. The NVIDIA OVX Server is designed to run complex graphics workloads like DRIVE Sim. Simulation can be run as SIL, or additional hardware can be added to the server to run HIL using DRIVE Constellation.
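The two-server arrangement is, at its core, a closed loop: the simulator produces sensor data, the AV stack (in-process for SIL, on the target computer for HIL) returns driving commands, and the simulator advances the world by one fixed timestep. The sketch below shows that loop shape only; the function and dictionary names are assumptions for illustration, not the DRIVE Constellation protocol.

```python
# Minimal closed-loop sketch (assumed structure, not NVIDIA's protocol):
# fixed timesteps give the repeatability that AV validation requires.
def simulate(av_stack, steps=100, dt_s=0.01):
    world = {"t": 0.0, "speed_mps": 0.0}
    for _ in range(steps):
        # 1. Simulator renders a sensor frame from the current world state.
        sensor_frame = {"t": world["t"], "speed_mps": world["speed_mps"]}
        # 2. AV stack responds -- in-process (SIL) or on target hardware (HIL).
        command = av_stack(sensor_frame)  # e.g. {"accel_mps2": 0.5}
        # 3. Simulator advances the world by one deterministic step.
        world["speed_mps"] += command["accel_mps2"] * dt_s
        world["t"] += dt_s
    return world

# Toy stack: accelerate at 1 m/s^2 until reaching 10 m/s, then hold speed.
def toy_stack(frame):
    return {"accel_mps2": 1.0 if frame["speed_mps"] < 10.0 else 0.0}

final = simulate(toy_stack, steps=500)
```

In a HIL configuration the call into the stack would cross a physical link to the vehicle server, but the loop structure stays the same.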
Open and Extensible

DRIVE Sim is open, modular, and extensible. It uses the Omniverse Kit SDK to let developers build compatible models, 3D content, and validation tools. Users can create their own extensions or choose from a rich library of vehicle, sensor, and traffic plug-ins provided by DRIVE Sim ecosystem partners. DRIVE Sim also lets developers configure custom sensor models and supports the full DRIVE Hyperion 8 sensor set. This flexibility lets users tailor DRIVE Sim to their unique use case and to their development and validation needs.

Learn more about the DRIVE Sim ecosystem
Learn more about DRIVE Hyperion