DGX Spark is NVIDIA’s desktop-class AI supercomputer, announced at CES 2025. It brings datacenter AI performance to developers’ desks, making it ideal for robotics development, simulation, and foundation model training without cloud dependencies.
DGX Spark fills a critical gap in the robotics development workflow. Key specifications:
| Spec | DGX Spark |
|---|---|
| GPU | NVIDIA GB10 Grace Blackwell Superchip |
| AI Performance | 1,000 TOPS (INT8), 1 PFLOP (FP4 sparse) |
| CUDA Cores | 6,144 |
| Tensor Cores | 192 (5th gen) |
| RT Cores | 48 (4th gen) |
| Memory | 128GB unified LPDDR5X |
| Memory Bandwidth | 273 GB/s |
| CPU | 20-core ARM (10x Cortex-X925 + 10x Cortex-A725) |
| Storage | 4TB NVMe (Founders), 1TB+ (partners) |
| Connectivity | 1x 10GbE, 2x QSFP (200Gbps), WiFi 7, BT 5.3, 4x USB 4.0 |
| Power | 240W PSU, 140W TDP |
| Dimensions | 150 × 150 × 50.5 mm (1.2 kg) |
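
The 128GB pool is shared between the Grace CPU and the Blackwell GPU, so a single process can address far more GPU-visible memory than a typical discrete card exposes. A minimal sketch to see what the CUDA runtime reports (assuming a CUDA-enabled PyTorch install; the printed values are whatever the runtime returns, not the spec sheet):

```python
# Query what the CUDA runtime reports for the GB10 device.
# Assumes a CUDA-enabled PyTorch build is installed; output is illustrative.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"Device:         {props.name}")
    print(f"Total memory:   {props.total_memory / 1024**3:.0f} GiB")
    print(f"SM count:       {props.multi_processor_count}")
    print(f"CUDA (runtime): {torch.version.cuda}")
else:
    print("No CUDA device visible - check drivers / container runtime.")
```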
```
┌───────────────────────────────────────────────────────────┐
│                         DGX Spark                          │
├───────────────────────────────────────────────────────────┤
│  ┌─────────────────────────────────────────────────────┐  │
│  │        NVIDIA GB10 Grace Blackwell Superchip        │  │
│  │  ┌───────────────┐  ┌───────────────────────────┐   │  │
│  │  │  CUDA Cores   │  │    Transformer Engine     │   │  │
│  │  │    6,144      │  │     FP4 / FP8 / BF16      │   │  │
│  │  └───────────────┘  └───────────────────────────┘   │  │
│  │  ┌───────────────┐  ┌───────────────────────────┐   │  │
│  │  │  Tensor Cores │  │     RT Cores (Gen 4)      │   │  │
│  │  │  192 (Gen 5)  │  │         48 cores          │   │  │
│  │  └───────────────┘  └───────────────────────────┘   │  │
│  └─────────────────────────────────────────────────────┘  │
│  ┌─────────────────────────────────────────────────────┐  │
│  │          128GB Unified Memory (CPU + GPU)           │  │
│  └─────────────────────────────────────────────────────┘  │
│  ┌──────────────┐  ┌──────────────┐  ┌───────────────┐    │
│  │   ARM CPU    │  │   4TB NVMe   │  │  10GbE + QSFP │    │
│  │   20 cores   │  │   Storage    │  │    WiFi 7     │    │
│  └──────────────┘  └──────────────┘  └───────────────┘    │
└───────────────────────────────────────────────────────────┘
```

1. Simulate
Run Isaac Sim locally with full ray-tracing. Create digital twins of your robots and environments.
2. Train
Fine-tune foundation models (GR00T, VLA) on your own data without cloud uploads.
3. Validate
Test perception and control stacks in simulation before real hardware.
4. Deploy
Export to Jetson Thor/Orin with an identical software stack (see the export sketch after this list).
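
A sketch of what step 4 can look like in practice: exporting a trained policy to ONNX so the same artifact can be consumed on Jetson Thor/Orin. `PolicyNet`, the observation size, and the file name are placeholders for illustration, not part of any NVIDIA API.

```python
# Hypothetical export sketch: serialize a small policy network to ONNX so the
# same artifact can be deployed on Jetson Thor/Orin. PolicyNet and the
# observation/action sizes are placeholders.
import torch
import torch.nn as nn

class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int = 64, act_dim: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 256), nn.ELU(),
            nn.Linear(256, 256), nn.ELU(),
            nn.Linear(256, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

policy = PolicyNet().eval()
dummy_obs = torch.zeros(1, 64)  # example observation batch for tracing
torch.onnx.export(policy, dummy_obs, "robot_policy.onnx", opset_version=17)
```

On the Jetson side, an ONNX file like this is typically built into a TensorRT engine (for example with `trtexec`) before being wired into the runtime.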
DGX Spark runs the same software as NVIDIA’s datacenter DGX systems:
```
┌─────────────────────────────────────────────────────────────┐
│                      Your Development                        │
├─────────────────────────────────────────────────────────────┤
│  Isaac Sim 5.x │ Isaac Lab │ Omniverse Kit │ USD Composer    │
├─────────────────────────────────────────────────────────────┤
│  PyTorch 2.9 │ TensorRT │ Triton │ NeMo │ cuML │ RAPIDS      │
├─────────────────────────────────────────────────────────────┤
│  CUDA 13 │ cuDNN │ Transformer Engine │ NCCL                 │
├─────────────────────────────────────────────────────────────┤
│  DGX OS (Ubuntu 24.04, HWE kernel 6.14)                      │
└─────────────────────────────────────────────────────────────┘
```

```bash
# Isaac Sim 5.x runs natively on DGX Spark
# Full ray-tracing, PhysX 5, domain randomization

# Launch Isaac Sim
~/.local/share/ov/pkg/isaac-sim-5.1.0/isaac-sim.sh

# Or via Omniverse Launcher
omniverse-launcher
```

DGX Spark can run complex warehouse scenes at 30+ FPS with ray-tracing enabled.
Note: Some Isaac Lab features are not supported on aarch64: SkillGen, OpenXR, SKRL/JAX training.
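
Stack parity also extends to NGC containers: the training examples below can run inside the same PyTorch containers published for datacenter DGX systems. A hedged sketch; the container tag is an assumption, so pick a current aarch64-enabled release from the NGC catalog:

```bash
# Pull and run an NGC PyTorch container (tag is an assumption - check NGC for
# a current multi-arch release), mounting a local workspace for your code.
docker run --gpus all -it --rm \
  -v "$HOME/robot-ws":/workspace \
  nvcr.io/nvidia/pytorch:25.01-py3 \
  python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```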
```python
# Fine-tune GR00T or custom VLA models locally
import torch
from pytorch_lightning import Trainer
from nemo.collections.multimodal import VLAModel

# Load base model (fits in 128GB unified memory)
model = VLAModel.from_pretrained("nvidia/gr00t-base")

# Fine-tune on your robot's data
trainer = Trainer(
    accelerator="gpu",
    precision="bf16-mixed",
    max_epochs=10,
)
trainer.fit(model, your_dataloader)

# Export for Jetson Thor
model.export("robot_policy.onnx", target="jetson-thor")
```

```python
# Reinforcement learning for robotics
from omni.isaac.lab import SimulationApp
from omni.isaac.lab_tasks.manager_based import ManagerBasedRLEnv

# DGX Spark can run 4096+ parallel environments
app = SimulationApp({"headless": False})

env = ManagerBasedRLEnv(
    cfg=my_robot_cfg,
    num_envs=4096,  # Massive parallelism
)

# Train with stable-baselines3 or rl_games
from stable_baselines3 import PPO

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000_000)
```

```python
# Train a 7B parameter VLA model entirely on DGX Spark
# 128GB unified memory eliminates GPU memory constraints
import torch
from transformers import AutoModelForVision2Seq, Trainer, TrainingArguments

# Load large model - fits in unified memory
model = AutoModelForVision2Seq.from_pretrained(
    "nvidia/gr00t-7b",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Fine-tune on robot demonstration data
trainer = Trainer(
    model=model,
    train_dataset=robot_demos,
    args=TrainingArguments(
        output_dir="gr00t-finetune",
        bf16=True,
        per_device_train_batch_size=8,  # Large batches possible
        gradient_accumulation_steps=4,
    ),
)
trainer.train()
```

Run photorealistic simulations for:
```bash
# Simulate Jetson Thor performance on DGX Spark
# Use power/performance profiles
nvidia-smi --power-limit=150  # Match Thor power envelope

# Run same JetPack 7.0 containers as production
docker run --gpus all nvcr.io/nvidia/jetson-thor:jp7.0-runtime
```

| Aspect | DGX Spark | Gaming PC | Cloud GPU | DGX Station |
|---|---|---|---|---|
| Price | $2,999+ | $2,000+ | $/hour | $50,000+ |
| AI TOPS | 1,000 | ~300 | Varies | 2,000+ |
| Unified Memory | 128GB | 16-24GB | 80GB (A100) | 256GB+ |
| Isaac Sim | Full | Limited | Network latency | Full |
| Transformer Engine | Yes | No | Yes (H100) | Yes |
| Form Factor | NUC-style | Tower | N/A | Workstation |
| Air-gapped | Yes | Yes | No | Yes |
```bash
# DGX Spark comes with DGX OS pre-installed
# On first boot, create user account

# Verify GPU
nvidia-smi

# Check Transformer Engine
python -c "import transformer_engine; print(transformer_engine.__version__)"

# Install Isaac Sim via Omniverse Launcher
# Or command line:
./omniverse-launcher-linux.AppImage

# Install Isaac Lab
pip install omni-isaac-lab

# Install Isaac ROS development tools
sudo apt install ros-jazzy-isaac-ros-dev-tools
```
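
After the installs above, a quick sanity pass helps confirm the GPU, the Python stack, and the ROS packages are all visible. A minimal sketch (the `grep` pattern is an assumption about how the Isaac ROS packages are named):

```bash
# Post-install sanity checks
nvidia-smi --query-gpu=name,memory.total --format=csv
python -c "import torch; print(torch.__version__, torch.cuda.get_device_name(0))"
ros2 pkg list | grep -i isaac || echo "No Isaac ROS packages found yet"
```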