Robot Learning

Train robot policies in simulation.


Workloads

Accelerated Computing Tools & Techniques
Data Center / Cloud
Robotics
Simulation / Modeling / Design

Industries

Healthcare and Life Sciences
Manufacturing
Retail / Consumer Packaged Goods
Smart Cities/Spaces

Business Goal

Innovation
Return on Investment

Products

NVIDIA Isaac GR00T
NVIDIA Isaac Lab
NVIDIA Isaac Sim
NVIDIA Jetson AGX
NVIDIA Omniverse

Build Generalist Robot Policies

Preprogrammed robots operate using fixed instructions within set environments, which limits their adaptability to unexpected changes.

AI-driven robots address these limitations through simulation-based learning, allowing them to autonomously perceive, plan, and act in dynamic conditions. With robot learning, they can acquire and refine new skills by using learned policies—sets of behaviors for navigation, manipulation, and more—to improve their decision-making across various situations.

Benefits of Simulation-Based Robot Learning

Flexibility and Scalability

Iterate, refine, and deploy robot policies for real-world scenarios using a variety of data sources, combining data captured from your real robot with synthetic data generated in simulation. This works for any robot embodiment, including autonomous mobile robots (AMRs), robotic arms, and humanoid robots. A sim-first approach also lets you quickly train hundreds or thousands of robot instances in parallel.
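As a rough illustration of why parallel simulation scales, the sketch below steps a batch of simulated robot instances with a single vectorized call. The dynamics, dimensions, and reward here are all toy placeholders, not Isaac Lab code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative batched simulation: all N environments advance in one
# vectorized array operation, which is what lets a "sim-first" workflow
# train hundreds or thousands of robot instances in parallel.
N_ENVS = 1024          # number of parallel robot instances (illustrative)
OBS_DIM, ACT_DIM = 8, 2

MIX = rng.standard_normal((ACT_DIM, OBS_DIM))  # fixed toy dynamics matrix

def step_batch(obs, actions):
    """Toy batched dynamics: one call steps every environment at once."""
    next_obs = obs + 0.01 * np.tanh(actions) @ MIX
    rewards = -np.linalg.norm(next_obs, axis=1)  # toy reward: stay near origin
    return next_obs, rewards

obs = rng.standard_normal((N_ENVS, OBS_DIM))
actions = rng.standard_normal((N_ENVS, ACT_DIM))
obs, rewards = step_batch(obs, actions)
print(obs.shape, rewards.shape)  # (1024, 8) (1024,)
```

A real GPU-based simulator applies the same idea with physically accurate dynamics, keeping the whole batch on the accelerator.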

Accelerated Skill Development

Train robots in simulated environments to adapt to new task variations without the need for reprogramming physical robot hardware. 

Physically Accurate Environments

Model physical factors such as object interactions (rigid and deformable bodies) and friction to significantly reduce the sim-to-real gap.

Safe Proving Environment

Test potentially hazardous scenarios without risking human safety or damaging equipment.

Reduced Costs

Reduce real-world data collection and labeling costs by generating large amounts of synthetic data, validating trained robot policies in simulation, and deploying to robots faster.

Robot Learning Algorithms

Robot learning algorithms—such as imitation learning or reinforcement learning—can help robots generalize learned skills and improve their performance in changing or novel environments. There are several learning techniques, including:

  • Reinforcement learning: A trial-and-error approach in which the robot receives a reward or a penalty based on the actions it takes. 
  • Imitation learning: The robot can learn from human demonstrations of tasks. 
  • Supervised learning: The robot can be trained using labeled data to learn specific tasks.
  • Diffusion policy: The robot uses generative models to create and optimize robot actions for desired outcomes.
  • Self-supervised learning: When there are limited labeled datasets, robots can generate their own training labels from unlabeled data to extract meaningful information.
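The reinforcement-learning idea above, trial and error driven by rewards and penalties, can be shown with a minimal tabular Q-learning sketch on a toy one-dimensional corridor. All names and values are illustrative, not from any NVIDIA library.

```python
import numpy as np

# Minimal tabular Q-learning: the robot tries actions, receives a reward
# at the goal and a small penalty otherwise, and updates its value table.
N_STATES, N_ACTIONS = 5, 2          # states 0..4, goal at 4; actions: 0=left, 1=right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1   # learning rate, discount, exploration rate

rng = np.random.default_rng(0)
Q = np.zeros((N_STATES, N_ACTIONS))

for episode in range(200):
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current policy, sometimes explore
        a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
        s_next = min(max(s + (1 if a == 1 else -1), 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else -0.01  # reward at goal, else penalty
        Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next

# After training, the greedy policy moves right toward the goal from every state.
print(Q.argmax(axis=1)[:-1])  # [1 1 1 1]
```

The learned policy is exactly the "set of behaviors" described above: a mapping from situations to actions, improved by experience rather than programmed by hand.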

Teach Robots to Learn and Adapt

A typical end-to-end robot workflow involves data processing, model training, validation in simulation, and deploying on a real robot.

Data Processing: To bridge data gaps, combine a diverse set of high-quality data sources: internet-scale data, synthetic data, and live robot data.

Training and Validating in Simulation: Robots need to be trained for task-specific scenarios, which requires accurate virtual representations of real-world conditions. The NVIDIA Isaac™ Lab open-source framework can help train robot policies by using reinforcement learning and imitation learning techniques in a modular approach. Isaac Lab can also be used with the NVIDIA Isaac Sim™ or MuJoCo developer simulation platforms for rapid prototyping and deployment of robot policies.

Once the robot has been trained, its performance can be validated in Isaac Sim, a reference robotic simulation application built on NVIDIA Omniverse™.

Deploying Onto the Real Robot: The trained robot policies and AI models can be deployed on NVIDIA Jetson™ on-robot computers that deliver the necessary performance and functional safety for autonomous operation.
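The end-to-end workflow above can be sketched as a schematic pipeline. Every function body here is a placeholder standing in for real tooling (data pipelines, Isaac Lab training, Isaac Sim validation, Jetson deployment); only the ordering of the stages comes from the text.

```python
# Schematic sketch of the workflow: data processing -> training ->
# validation in simulation -> deployment. All bodies are placeholders.
def process_data(internet_scale, synthetic, live_robot):
    """Combine heterogeneous sources into one training dataset."""
    return internet_scale + synthetic + live_robot

def train_policy(dataset):
    """Stand-in for RL/imitation-learning training (e.g., in Isaac Lab)."""
    return {"trained_on": len(dataset)}

def validate_in_sim(policy):
    """Stand-in for closed-loop evaluation (e.g., in Isaac Sim)."""
    return policy["trained_on"] > 0

def deploy(policy):
    """Stand-in for exporting the policy to an on-robot computer."""
    return f"deployed policy trained on {policy['trained_on']} samples"

dataset = process_data(["web"], ["sim"], ["teleop"])
policy = train_policy(dataset)
if validate_in_sim(policy):
    print(deploy(policy))  # deployed policy trained on 3 samples
```

The key design point is the gate between validation and deployment: a policy only reaches the physical robot after it has passed simulation checks.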

NVIDIA Isaac GR00T for Humanoid Robot Developers

Imitation learning, a subset of robot learning, lets humanoids acquire new skills by observing and mimicking expert human demonstrations. But collecting these extensive, high-quality datasets in the real world is tedious, time-consuming, and prohibitively expensive.
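In its simplest form, the imitation-learning setup above is supervised regression on (observation, expert action) pairs, often called behavior cloning. The sketch below recovers a linear expert from synthetic demonstrations; real humanoid pipelines use far richer models and data, so treat this purely as an illustration.

```python
import numpy as np

# Minimal behavior-cloning sketch: fit a policy to expert demonstrations
# by least-squares regression. All data here is synthetic and illustrative.
rng = np.random.default_rng(0)

# "Expert demonstrations": the expert's action is a fixed linear map of the
# observation, unknown to the learner.
W_expert = rng.standard_normal((4, 2))
obs = rng.standard_normal((500, 4))
expert_actions = obs @ W_expert

# Behavior cloning = supervised fit of a policy to the demonstration pairs.
W_learned, *_ = np.linalg.lstsq(obs, expert_actions, rcond=None)

# The cloned policy reproduces the expert on held-out observations.
test_obs = rng.standard_normal((10, 4))
err = np.abs(test_obs @ W_learned - test_obs @ W_expert).max()
print(err < 1e-8)  # True
```

The catch, as the text notes, is the left-hand side of those pairs: gathering enough high-quality demonstrations is the expensive part, which is what synthetic data generation targets.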

NVIDIA Isaac GR00T helps tackle these challenges by providing humanoid robot developers with robot foundation models, data pipelines, and simulation frameworks.

Foundation Models

Isaac GR00T N open foundation models are ideal for generalized humanoid robot reasoning and skills. This cross-embodiment solution takes multimodal input—including language and images—to perform manipulation tasks in diverse environments.

Synthetic Data Generation Pipelines

NVIDIA Isaac GR00T-Dreams is a blueprint that helps generate vast amounts of synthetic motion to teach robots new behaviors and how to adapt to changing environments.

Developers can first post-train Cosmos Predict 2 world foundation models (WFMs) for their robot. Then, using a single image as input, GR00T-Dreams can help generate multiple videos of the robot performing new tasks in new environments. The blueprint then extracts action tokens — compressed, digestible pieces of data that are used to teach robots how to perform these new tasks.
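The blueprint's exact tokenizer isn't specified here, but a common generic way to compress continuous robot actions into discrete tokens is uniform binning per action dimension. The vocabulary size and action range below are assumptions for illustration only.

```python
import numpy as np

# Generic illustration of action tokenization: map continuous actions into
# a small integer vocabulary by uniform binning (not GR00T-Dreams code).
N_BINS = 256            # assumed token vocabulary size
LOW, HIGH = -1.0, 1.0   # assumed normalized action range

def tokenize(actions):
    """Map continuous actions in [LOW, HIGH] to integer tokens 0..N_BINS-1."""
    clipped = np.clip(actions, LOW, HIGH)
    return ((clipped - LOW) / (HIGH - LOW) * (N_BINS - 1)).round().astype(int)

def detokenize(tokens):
    """Recover approximate continuous actions from tokens."""
    return tokens / (N_BINS - 1) * (HIGH - LOW) + LOW

a = np.array([-1.0, -0.25, 0.0, 0.5, 1.0])
t = tokenize(a)
print(t, np.abs(detokenize(t) - a).max() < 0.005)  # round-trip error is small
```

Discretizing actions this way turns policy learning into a sequence-prediction problem over a fixed vocabulary, which is what makes the data "digestible" for transformer-style models.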

The GR00T-Dreams blueprint complements the Isaac GR00T-Mimic blueprint. While GR00T-Mimic uses NVIDIA Omniverse and Cosmos to augment existing data, GR00T-Dreams uses Cosmos to generate entirely new data.


Get Started

Build adaptable robots with robust, perception-enabled, simulation-trained policies using the NVIDIA Isaac Lab open-source modular framework for robot learning.

Accelerate Physical AI Workloads With NVIDIA RTX PRO 6000 Blackwell Series GPUs

NVIDIA RTX PRO™ 6000 Blackwell Series GPUs accelerate physical AI by running every robot development workload across training, synthetic data generation, robot learning, and simulation.

Resources

Synthetic Data

Close the sim-to-real gap by creating physically accurate virtual scenes and objects to train AI models while saving on training time and costs. 

Reinforcement Learning

Apply reinforcement learning (RL) techniques to any type of robot embodiment and build robot policies.

Simulation

Use the Isaac Sim robot simulation framework, built on NVIDIA Omniverse, for high-fidelity, photorealistic simulations to train humanoid robots.

Humanoid Robots

Accelerate humanoid robot development with NVIDIA GR00T, a research initiative and development platform for general-purpose robot foundation models and data pipelines.
