Isaac Lab

Isaac Lab is NVIDIA’s open-source, GPU-accelerated framework for robot learning built on Isaac Sim. It enables training reinforcement learning and imitation learning policies at massive scale—4096+ parallel environments on a single GPU—making sophisticated robot training practical without supercomputer access.

Why Isaac Lab Matters

Challenge | Isaac Lab Solution
Training is slow on CPU | GPU-parallel: 4096+ environments simultaneously
Need expensive hardware | Train on a single RTX GPU at ~90,000 FPS
Sim-to-real gap | Domain randomization + photorealistic rendering
Complex sensor simulation | Full sensor suite: cameras, LiDAR, IMU, contact
Fragmented tools | Unified framework for RL, IL, and motion planning

Architecture

┌─────────────────────────────────────────────────────────────────┐
│ Isaac Lab │
├─────────────────────────────────────────────────────────────────┤
│ ┌──────────────────┐ ┌──────────────────┐ ┌───────────────┐ │
│ │ RL Frameworks │ │ Imitation │ │ Motion │ │
│ │ RSL-RL, SKRL │ │ Learning │ │ Planning │ │
│ │ RL-Games, SB3 │ │ (Mimic) │ │ (cuRobo) │ │
│ └────────┬─────────┘ └────────┬─────────┘ └───────┬───────┘ │
│ │ │ │ │
│ ┌────────▼─────────────────────▼─────────────────────▼───────┐ │
│ │ Environment API (Manager/Direct) │ │
│ │ ┌─────────────┐ ┌─────────────┐ ┌─────────────────────┐ │ │
│ │ │ Robots │ │ Sensors │ │ Domain │ │ │
│ │ │ Actuators │ │ Cameras,IMU │ │ Randomization │ │ │
│ │ │ Controllers │ │ Contact,Ray │ │ ADR, PBT │ │ │
│ │ └─────────────┘ └─────────────┘ └─────────────────────┘ │ │
│ └────────────────────────────┬────────────────────────────────┘ │
├───────────────────────────────┼───────────────────────────────────┤
│ Isaac Sim │
│ ┌─────────────┐ ┌─────────────┐ ┌────────────────────────┐ │
│ │ PhysX 5 │ │ RTX Render │ │ USD Scene Graph │ │
│ │ (GPU) │ │ (Tiled) │ │ (OpenUSD) │ │
│ └─────────────┘ └─────────────┘ └────────────────────────┘ │
├───────────────────────────────────────────────────────────────────┤
│ NVIDIA Omniverse / GPU │
└───────────────────────────────────────────────────────────────────┘
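In code, these layers compose through Gymnasium: launch the Isaac Sim app, import the task package to register the Isaac-* environments, then step the vectorized environment from any of the RL frameworks above. A minimal sketch, assuming the omni.isaac.lab 1.x namespace and the random-action pattern from the official examples (exact helper signatures have shifted slightly across releases):

from omni.isaac.lab.app import AppLauncher

# Launch the Isaac Sim / Omniverse app first; every other Isaac Lab import needs it
app_launcher = AppLauncher(headless=True)
simulation_app = app_launcher.app

import gymnasium as gym
import torch

import omni.isaac.lab_tasks  # noqa: F401  (importing this registers the Isaac-* tasks)
from omni.isaac.lab_tasks.utils import parse_env_cfg

# Build the task config and create a vectorized, GPU-resident environment
env_cfg = parse_env_cfg("Isaac-Velocity-Flat-Anymal-D-v0", num_envs=64)
env = gym.make("Isaac-Velocity-Flat-Anymal-D-v0", cfg=env_cfg)

obs, _ = env.reset()
with torch.inference_mode():
    for _ in range(100):
        # random actions, batched across all 64 environments
        actions = 2.0 * torch.rand(env.action_space.shape, device=env.unwrapped.device) - 1.0
        obs, rew, terminated, truncated, info = env.step(actions)

env.close()
simulation_app.close()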

Getting Started

System Requirements

Component | Minimum | Recommended
GPU | Volta+ (CC 7.0+) | RTX 4090 / A6000
VRAM | 16 GB | 24 GB+
RAM | 32 GB | 64 GB+
OS | Ubuntu 22.04 | Ubuntu 22.04 / 24.04
Python | 3.10 (Isaac Sim 4.x) | 3.11 (Isaac Sim 5.x)

Installation

# Install Isaac Sim first
pip install isaacsim==5.1.0 --extra-index-url https://pypi.nvidia.com
# Clone Isaac Lab
git clone https://github.com/isaac-sim/IsaacLab.git
cd IsaacLab
# Install the Isaac Lab extensions and supported learning frameworks into the active environment
./isaaclab.sh --install
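To verify the install, Isaac Lab ships a small script that prints every registered task; the path below follows the repository layout used elsewhere in this guide and may differ in newer releases:

# List all registered Isaac-* environments using the bundled Python interpreter
./isaaclab.sh -p source/standalone/environments/list_envs.py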

Training Your First Policy

Train a quadruped locomotion policy:

# Train ANYmal-D on flat terrain with RSL-RL
python source/standalone/workflows/rsl_rl/train.py \
    --task Isaac-Velocity-Flat-Anymal-D-v0 \
    --num_envs 4096 \
    --headless

This trains a policy at ~90,000 FPS on an RTX A6000.
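After training, the same workflow directory provides a play script that loads the latest checkpoint and runs it with rendering so you can inspect the learned gait (flags mirror the train command above):

# Visualize the most recent RSL-RL checkpoint with a handful of environments
python source/standalone/workflows/rsl_rl/play.py \
    --task Isaac-Velocity-Flat-Anymal-D-v0 \
    --num_envs 32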

Environment Workflows

Isaac Lab provides two workflow styles for creating environments:

Manager-Based (Modular)

A modular, structured approach with separate managers for observations, rewards, and actions:

from omni.isaac.lab.envs import ManagerBasedRLEnv

# Create parallel environments (num_envs lives in the scene config)
env_cfg = MyRobotEnvCfg()
env_cfg.scene.num_envs = 4096
env = ManagerBasedRLEnv(cfg=env_cfg)

# Training loop
obs, _ = env.reset()
for _ in range(1_000_000):
    actions = policy(obs["policy"])  # observations come back per group; "policy" is the default group
    obs, rewards, terminated, truncated, infos = env.step(actions)
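Each manager is configured declaratively. Rewards, for example, are a weighted sum of terms, and each term is just a function of the environment that returns one value per parallel env. A short sketch: RewardTermCfg is the real config class, while upright_bonus is a hypothetical custom term written here for illustration.

import torch

from omni.isaac.lab.managers import RewardTermCfg as RewTerm
from omni.isaac.lab.utils import configclass


def upright_bonus(env) -> torch.Tensor:
    # Hypothetical custom term: reward keeping the base upright.
    # Reward functions receive the env and return one value per parallel environment.
    gravity_z = env.scene["robot"].data.projected_gravity_b[:, 2]  # ~ -1 when perfectly upright
    return torch.clamp(-gravity_z, min=0.0)


@configclass
class RewardsCfg:
    # each term is evaluated in a single batched call across all environments
    upright = RewTerm(func=upright_bonus, weight=1.0)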

Direct (Performance-Critical)

Lower-level API for maximum control, similar to Isaac Gym:

from omni.isaac.lab.envs import DirectRLEnvCfg
from omni.isaac.lab.utils import configclass


@configclass
class MyRobotEnvCfg(DirectRLEnvCfg):
    # scene cloned across 4096 parallel environments
    scene: MySceneCfg = MySceneCfg(num_envs=4096)
    # flat observation/action sizes and episode length
    observation_space = 48
    action_space = 12
    episode_length_s = 20.0
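In the direct workflow you subclass DirectRLEnv and write the batched observation, reward, and termination logic yourself as torch operations over all environments at once. A sketch of the main hooks, assuming the method names used by the direct-workflow examples; the bodies are placeholders, and self._robot would be created in a _setup_scene method omitted here:

import torch

from omni.isaac.lab.envs import DirectRLEnv


class MyRobotEnv(DirectRLEnv):
    cfg: MyRobotEnvCfg

    def _pre_physics_step(self, actions: torch.Tensor):
        # cache the batched action tensor, shape (num_envs, action_space)
        self._actions = actions.clone()

    def _apply_action(self):
        # write effort targets for every environment in one call
        self._robot.set_joint_effort_target(self._actions)

    def _get_observations(self) -> dict:
        obs = torch.cat([self._robot.data.joint_pos, self._robot.data.joint_vel], dim=-1)
        return {"policy": obs}

    def _get_rewards(self) -> torch.Tensor:
        # one scalar per environment
        return -torch.sum(self._actions**2, dim=-1)

    def _get_dones(self) -> tuple[torch.Tensor, torch.Tensor]:
        time_out = self.episode_length_buf >= self.max_episode_length - 1
        terminated = torch.zeros_like(time_out)
        return terminated, time_out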

Supported RL Frameworks

Framework | Features
RSL-RL | JIT/ONNX export, fast training
SKRL | PyTorch + JAX, multi-agent (MAPPO, IPPO)
RL-Games | Vectorized training
Stable-Baselines3 | Extensive docs, NumPy-based
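Each framework has its own train and play scripts under the same workflows directory, so switching libraries is mostly a matter of changing the path (layout as used above; some flag names differ per framework):

# Same task, different RL libraries
python source/standalone/workflows/rsl_rl/train.py   --task Isaac-Velocity-Flat-Anymal-D-v0 --headless
python source/standalone/workflows/skrl/train.py     --task Isaac-Velocity-Flat-Anymal-D-v0 --headless
python source/standalone/workflows/rl_games/train.py --task Isaac-Velocity-Flat-Anymal-D-v0 --headless
python source/standalone/workflows/sb3/train.py      --task Isaac-Velocity-Flat-Anymal-D-v0 --headless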

Pre-Built Environments (30+)

Locomotion

  • Isaac-Velocity-Flat-Anymal-D-v0 — Quadruped flat terrain
  • Isaac-Velocity-Rough-Anymal-C-v0 — Quadruped rough terrain
  • Isaac-Humanoid-v0 — Humanoid walking

Manipulation

  • Isaac-Lift-Cube-Franka-v0 — Pick and lift with Franka
  • Isaac-Stack-Cube-Franka-v0 — Stack cubes

Dexterous (DexSuite v2.3+)

  • Isaac-Dexsuite-Kuka-Allegro-Lift-v0 — Dexterous lifting
  • Isaac-Dexsuite-Kuka-Allegro-Reorientation-v0 — Object reorientation

Supported Robots

Quadrupeds: ANYmal-B/C/D, Unitree A1/Go1/Go2, Boston Dynamics Spot

Humanoids: Unitree H1, G1

Manipulators: Franka Panda, Kuka arms, Universal Robots

Hands: Allegro Hand, Unitree three-finger, Inspire five-finger

Policy Export and Deployment

Export trained policies for deployment:

from isaaclab_rl.rsl_rl import export_policy_as_jit

# Export the trained actor-critic to TorchScript for deployment
# (`runner` is the RSL-RL OnPolicyRunner from training; `path` is an output directory)
export_policy_as_jit(runner.alg.actor_critic, runner.obs_normalizer, path="exported", filename="policy.pt")

Deploy on Jetson Orin or other edge devices via Isaac ROS.
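On the target device the exported TorchScript file loads with plain PyTorch, so inference needs neither Isaac Lab nor the simulator. A minimal sketch; the 48-input / 12-output layout simply mirrors the config example above and must match whatever the training environment produced:

import torch

# Load the TorchScript policy exported above; no Isaac Lab install needed on the robot
policy = torch.jit.load("exported/policy.pt", map_location="cpu")
policy.eval()

obs = torch.zeros(1, 48)  # observation layout must match the training environment
with torch.inference_mode():
    actions = policy(obs)
print(actions.shape)  # e.g. torch.Size([1, 12])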

Performance Metrics

Metric | Value
State-based training | 1.6M+ FPS (multi-GPU)
Vision-based training | 60K+ FPS (multi-GPU)
Locomotion training | ~90,000 FPS (RTX A6000)
Parallel environments | 4096+ (single GPU)

Isaac Gym Migration

Migrating from Isaac Gym? Key differences:

Aspect | Isaac Gym | Isaac Lab
Config format | YAML | Python configclass
Scene creation | Manual loop | Cloner API (automatic)
Rendering | Basic | RTX photorealistic
ROS support | No | Yes (ROS 2 bridge)
Soft body physics | No | Yes (PhysX 5)
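The scene-creation row is the largest structural change: instead of a manual Python loop spawning one actor per environment, Isaac Lab clones a declaratively defined scene across all environments. A minimal sketch, assuming the bundled ANYMAL_D_CFG robot config and the omni.isaac.lab 1.x namespace:

from omni.isaac.lab.assets import ArticulationCfg
from omni.isaac.lab.scene import InteractiveSceneCfg
from omni.isaac.lab.utils import configclass
from omni.isaac.lab_assets import ANYMAL_D_CFG


@configclass
class MySceneCfg(InteractiveSceneCfg):
    # assets declared once here (robot, terrain, sensors) are replicated per environment;
    # {ENV_REGEX_NS} expands to each environment's namespace during cloning
    robot: ArticulationCfg = ANYMAL_D_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")


# 4096 environments laid out on a grid, 2.5 m apart, from a single declaration
scene_cfg = MySceneCfg(num_envs=4096, env_spacing=2.5)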
