Lightwheel, a simulation-first robotics solutions provider, addresses two critical physical AI barriers: data scarcity from expensive real-world data collection and the sim-to-real gap, where policies trained in simulation fail to translate to physical hardware. To help robot makers overcome these challenges, Lightwheel leveraged NVIDIA Omniverse™, Isaac Sim™, Isaac™ Lab, and the Isaac GR00T N1.5 vision-language-action foundation model to create the Lightwheel Simulation Platform—a simulation-first workflow that bridges the gap between research and real-world robotic deployment.
The embodied AI and robotics industry faces two fundamental barriers: data scarcity and the sim-to-real gap. Real-world data collection remains slow and expensive, constraining development speed and limiting the training datasets needed for intelligent autonomous systems. The sim-to-real gap presents an equally significant challenge, where AI policies trained in simulation environments fail to translate to reliable performance on physical hardware.
For robotics researchers, developers, and industries deploying autonomous systems in manufacturing, healthcare, logistics, and agriculture, these challenges create substantial operational limitations. Traditional approaches require extensive physical prototyping, costly real-world testing, and time-intensive data collection—severely limiting innovation speed and increasing development costs while constraining the deployment of advanced AI systems in practical applications.
To overcome these challenges, Lightwheel developed the Lightwheel Simulation Platform—a comprehensive solution built on NVIDIA Omniverse, Isaac Sim, and Isaac Lab that addresses embodied AI development through three core components.
Lightwheel’s SimReady assets are engineered for real-world physics, designed with precise geometry and validated physical properties. These assets enable users to quickly assemble accurate digital twins, accelerating workflows for teleoperation data collection and reinforcement learning. With built-in support for Universal Scene Description (USD) format and MJCF, teams can seamlessly integrate assets into Omniverse and Isaac Sim, unlocking robust, interoperable simulation environments.
Leveraging OpenUSD, Lightwheel creates high-quality, physically accurate simulations to drive modern physical AI at scale. Lightwheel uses NVIDIA USD Search to streamline asset discovery, making it easy to locate the right SimReady assets for each simulation task and assemble scenes in minutes. This flexibility allows teams to accelerate development in their preferred simulation environment. Underpinning this asset library is Lightwheel’s simulation framework, which integrates seamlessly with Isaac Sim.
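As a rough illustration of what "validated physical properties" means in practice, a SimReady asset can be thought of as a manifest pairing its USD and MJCF representations with checked physical parameters. The `SimReadyAsset` class, field names, paths, and values below are hypothetical, sketched for this article, and not Lightwheel's actual schema:

```python
from dataclasses import dataclass

@dataclass
class SimReadyAsset:
    """Hypothetical manifest for one SimReady asset (illustrative only)."""
    name: str
    usd_path: str           # OpenUSD representation for Omniverse / Isaac Sim
    mjcf_path: str          # MJCF representation for MuJoCo-based pipelines
    mass_kg: float          # validated physical property
    static_friction: float
    dynamic_friction: float

    def validate(self) -> bool:
        # Minimal physical-plausibility checks before the asset enters a scene.
        return (
            self.mass_kg > 0
            and 0.0 <= self.dynamic_friction <= self.static_friction
        )

tray = SimReadyAsset(
    name="inspection_tray",
    usd_path="assets/inspection_tray.usd",
    mjcf_path="assets/inspection_tray.xml",
    mass_kg=1.8,
    static_friction=0.9,
    dynamic_friction=0.6,
)
print(tray.validate())  # True
```

Dual USD/MJCF paths in one record reflect the interoperability point above: the same validated asset can be referenced from Omniverse, Isaac Sim, or a MuJoCo pipeline without re-authoring its physics.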
Lightwheel enables high-quality teleoperation data collection through VR headsets (Apple Vision Pro, Meta Quest), SpaceMouse devices, and exoskeleton solutions, all with robust quality assurance. The platform combines MimicGen and DexMimicGen with Lightwheel’s SimReady assets, environments, and Isaac Sim to generalize teleoperated demonstrations, multiplying the resulting synthetic data by 100 to 1,000 times.
To generate this diverse training dataset, operators control simulated Unitree H1 humanoid robots through complex industrial tasks, including cylindrical component manipulation with Dex Hand and dual-arm coordination for heavy tray lifting in automotive environments.
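The MimicGen-style multiplication of demonstrations can be sketched in a deliberately simplified form: record waypoints relative to the manipulated object, then re-anchor them at randomized object poses to mint new trajectories. Everything below, including `retarget_demo` and the 2D poses, is an illustrative reduction; the real pipeline operates on full robot trajectories inside Isaac Sim:

```python
import math
import random

def retarget_demo(waypoints, src_obj, dst_obj):
    """Replay one demo by expressing its waypoints in the source object's
    frame (x, y, theta) and re-anchoring them at a new object pose.
    A simplified 2D sketch of MimicGen-style data generation."""
    sx, sy, st = src_obj
    dx, dy, dt = dst_obj
    out = []
    for wx, wy in waypoints:
        # Waypoint relative to the source object, rotated into its frame.
        rx, ry = wx - sx, wy - sy
        lx = math.cos(-st) * rx - math.sin(-st) * ry
        ly = math.sin(-st) * rx + math.cos(-st) * ry
        # Re-anchor the local waypoint at the target object's pose.
        out.append((
            dx + math.cos(dt) * lx - math.sin(dt) * ly,
            dy + math.sin(dt) * lx + math.cos(dt) * ly,
        ))
    return out

# One teleoperated demo becomes many by randomizing object placement.
demo = [(0.0, 0.0), (0.1, 0.0), (0.1, 0.05)]
source_pose = (0.0, 0.0, 0.0)
augmented = [
    retarget_demo(demo, source_pose,
                  (random.uniform(-0.3, 0.3), random.uniform(-0.3, 0.3),
                   random.uniform(-math.pi, math.pi)))
    for _ in range(100)
]
print(len(augmented))  # 100
```

Generating 100 retargeted variants from a single demonstration is what makes the 100-to-1,000x scaling figure plausible: the expensive human teleoperation happens once, and the simulator does the rest.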
Lightwheel fine-tuned the GR00T N1.5 vision-language-action (VLA) foundation model using simulation-generated synthetic data from its SimReady environments. This data included RGB images, joint states, GPT-generated task descriptions, and scene metadata. The training process delivered strong downstream performance, validating the effectiveness of simulation-based pipelines for embodied AI.
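A single fine-tuning sample bundling those four modalities might look like the sketch below. The `TrainingSample` schema, field names, and values are assumptions made for illustration, not the actual GR00T N1.5 data format:

```python
from dataclasses import dataclass, asdict

@dataclass
class TrainingSample:
    """Hypothetical schema for one synthetic fine-tuning sample;
    the real GR00T N1.5 format may differ."""
    rgb_frame: str         # path to a rendered camera image
    joint_states: list     # robot joint positions at this timestep
    task_description: str  # language instruction (GPT-generated in the pipeline)
    scene_metadata: dict   # asset IDs, poses, lighting, and so on

sample = TrainingSample(
    rgb_frame="episode_0001/frame_0042.png",
    joint_states=[0.0, -0.5, 1.2, 0.3],
    task_description="Place the cylindrical part on the inspection tray.",
    scene_metadata={"robot": "unitree_h1", "scene": "factory_cell_03"},
)
record = asdict(sample)  # plain dict, ready for JSON/Parquet serialization
```

Keeping vision, proprioception, language, and scene context in one record is what lets a VLA model learn the mapping from instruction plus observation to action.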
The Lightwheel Simulation Platform applies rigorous two-phase quality assurance: automated validation for visual realism and annotation completeness, followed by manual review for realistic behavior under physical constraints.
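A minimal sketch of such a two-phase pipeline follows; the check names, thresholds, and sample fields are invented for illustration, not Lightwheel's actual QA criteria:

```python
def automated_checks(sample):
    """Phase 1: automated validation (illustrative checks only)."""
    issues = []
    if not sample.get("rgb_frames"):
        issues.append("missing RGB frames")
    if sample.get("task_description", "") == "":
        issues.append("missing task annotation")
    if any(abs(q) > sample.get("joint_limit", 3.14)
           for q in sample.get("joint_states", [])):
        issues.append("joint state outside limits")
    return issues

def qa_pipeline(samples):
    """Auto-reject failing samples; queue the rest for phase-2 manual review."""
    rejected, review_queue = [], []
    for s in samples:
        issues = automated_checks(s)
        (rejected if issues else review_queue).append((s["id"], issues))
    return rejected, review_queue

good = {"id": 1, "rgb_frames": ["f0.png"],
        "task_description": "lift tray", "joint_states": [0.1, -0.2]}
bad = {"id": 2, "rgb_frames": [],
       "task_description": "", "joint_states": [9.0]}
rejected, review_queue = qa_pipeline([good, bad])
print(rejected)  # sample 2 fails all three automated checks
```

Running cheap automated checks first means human reviewers only ever see samples that are already structurally valid, which is what makes the manual phase affordable at synthetic-data scale.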
For the Geely automotive deployment, the team further tailored GR00T N1.5 to the Unitree H1 robot’s specific morphology, customizing the vision-language planner with factory-optimized prompts. Using Isaac Sim and DexMimicGen data augmentation techniques, they expanded training diversity across varied lighting, materials, and object placements, enabling reliable performance in dynamic factory conditions.
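A domain-randomization sweep of the kind described can be sketched as sampling a scene configuration per episode. All parameter names and ranges below are illustrative assumptions, not Lightwheel's or Geely's actual settings:

```python
import random

def sample_scene_variation(rng):
    """Sample one randomized scene configuration for data generation.
    Parameter names and ranges are illustrative only."""
    return {
        "light_intensity": rng.uniform(300.0, 1500.0),
        "light_temperature_k": rng.uniform(3000.0, 6500.0),
        "surface_material": rng.choice(["steel", "rubber_mat", "painted_wood"]),
        "part_offset_m": (rng.uniform(-0.05, 0.05), rng.uniform(-0.05, 0.05)),
        "part_yaw_rad": rng.uniform(-0.35, 0.35),
    }

# A seeded generator keeps the randomization sweep reproducible across runs.
rng = random.Random(0)
variations = [sample_scene_variation(rng) for _ in range(1000)]
```

Training across such variations is what pushes a policy to ignore lighting and material changes and attend to task-relevant geometry, which is the core of reliable sim-to-real transfer in dynamic factory conditions.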
During prototyping, the system runs on NVIDIA GeForce RTX™ 4090 GPUs, providing compute capacity for embodiment adaptation and task optimization before deployment.
Lightwheel's NVIDIA-powered simulation platform delivers transformative improvements across development speed, deployment success, and real-world performance, establishing new benchmarks for embodied AI development in industrial applications.
The simulation-first approach reduced development cycles from months to weeks by enabling rapid iteration in virtual environments. The 100:1 simulated-to-real data ratio sharply reduced the need for expensive real-world data collection while maintaining the physical accuracy necessary for reliable sim-to-real transfer, generating scalable, high-quality synthetic data with minimal manual intervention.
Lightwheel successfully deployed GR00T N1.5 foundation models in Unitree H1 humanoid robots at Geely's live automotive factory. The robots autonomously perform component transportation between workstations, precise part placement on inspection trays, and coordinated dual-arm manipulation for heavy components while maintaining balance in dynamic environments with human workers. These deployments demonstrate meaningful progress toward robust, factory-grade autonomy capable of scaling across diverse workflows.
Major technology partners including Google DeepMind, Figure, AgiBot, ByteDance, Geely, and BYD leverage the Lightwheel Simulation Platform assets and synthetic datasets to improve embodied AI performance across robotics and automation applications. The platform's integration with NVIDIA's broader ecosystem completes the end-to-end service chain for synthetic data generation while opening new revenue streams from the robotics industry.
Ongoing development focuses on expanding platform capabilities for deformable object modeling, building SimReady Assets for general-purpose tasks, and scaling data generation pipelines by leveraging GR00T N1.5 as a semi-autonomous demonstrator for initial task demonstrations at scale.
Lightwheel's collaboration with NVIDIA demonstrates how advanced simulation platforms and foundation models can transform embodied AI development, turning theoretical research into practical, deployable robotic solutions. Their successful deployment of GR00T N1.5-powered humanoid robots in live automotive manufacturing environments showcases how simulation-first strategies can deliver robust, scalable automation to the factory floor.
This comprehensive approach showcases how companies can leverage NVIDIA's AI ecosystem to overcome traditional barriers in robotics development, achieving unprecedented speed, cost efficiency, and deployment success across industries from automotive manufacturing to next-generation robot development.
“By leveraging NVIDIA AI technologies, we successfully fine-tuned our vision-language-action foundation model with our own high-quality synthetic and real-world data and deployed it on real robots. Using GR00T N1.5, we enabled robots to understand complex instructions and perform versatile tasks in dynamic, real-world environments—capabilities that were not possible before.”
Jay Yang, Chief Architect, Lightwheel
Learn how NVIDIA Isaac Sim can accelerate your own sim-to-real robotics development with photorealistic simulation environments.