Jetson Thor is NVIDIA’s most powerful edge AI computing platform, purpose-built for humanoid robots, autonomous vehicles, and other advanced robotics workloads that need datacenter-class AI at the edge. Announced at GTC 2024, Thor delivers up to 2,070 FP4 TFLOPS with native Transformer Engine support.
Thor represents a generational leap designed specifically for the foundation model era:
| Spec | T5000 | T4000 |
|---|---|---|
| AI Performance | 2,070 FP4 TFLOPS (sparse) | 1,200 FP4 TFLOPS |
| GPU Architecture | NVIDIA Blackwell | NVIDIA Blackwell |
| GPU Cores | 2,560 CUDA, 96 Tensor (5th gen) | 1,536 CUDA, 64 Tensor |
| Transformer Engine | Yes (FP4/FP8) | Yes (FP4/FP8) |
| CPU | 14-core Arm Neoverse V3AE @ 2.6 GHz | 12-core Arm Neoverse V3AE |
| Memory | 128GB LPDDR5X unified | 64GB LPDDR5X unified |
| Memory Bandwidth | 273 GB/s | 273 GB/s |
| Power | 40W - 130W (configurable) | 40W - 70W |
| Process | 4nm | 4nm |
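The headline numbers in the table imply an unusually high compute-to-bandwidth ratio. A quick back-of-the-envelope check, using only the figures above, shows how many FLOPs a kernel must perform per byte moved before it becomes compute-bound rather than memory-bound:

```python
# Roofline "ridge point" for the T5000, from the spec table above.
peak_flops = 2070e12   # 2,070 FP4 TFLOPS (sparse peak)
mem_bw = 273e9         # 273 GB/s unified memory bandwidth

# FLOPs per byte a kernel needs to saturate compute
ridge_point = peak_flops / mem_bw
print(f"{ridge_point:.0f} FLOP/byte")  # ≈ 7582
```

In practice this means bandwidth-bound workloads (e.g. batch-1 LLM decoding) see far less than peak TFLOPS, and low-precision formats such as FP4 help precisely because they shrink the bytes moved per weight.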
```
┌──────────────────────────────────────────────────────────────────┐
│                     Jetson Thor SoC (T5000)                      │
├──────────────────────────────────────────────────────────────────┤
│  ┌────────────────────┐  ┌────────────────────────────────────┐  │
│  │      Arm CPU       │  │       NVIDIA Blackwell GPU         │  │
│  │   Neoverse V3AE    │  │  ┌──────────┐  ┌───────────────┐   │  │
│  │  14 cores @2.6GHz  │  │  │  CUDA    │  │ Transformer   │   │  │
│  │                    │  │  │  Cores   │  │   Engine      │   │  │
│  └────────────────────┘  │  │  2,560   │  │  FP4/FP8      │   │  │
│                          │  └──────────┘  └───────────────┘   │  │
│  ┌────────────────────┐  │  ┌──────────┐  ┌───────────────┐   │  │
│  │   Safety Island    │  │  │ Tensor   │  │  RT Cores     │   │  │
│  │   Lockstep cores   │  │  │ Cores    │  │  Ray tracing  │   │  │
│  │   ASIL-D capable   │  │  │ 96 (5th) │  │               │   │  │
│  └────────────────────┘  └──┴──────────┴──┴───────────────┴──┘  │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │         128GB LPDDR5X Unified Memory (273 GB/s)          │   │
│  └──────────────────────────────────────────────────────────┘   │
│  ┌─────────────┐  ┌─────────────┐  ┌─────────────────────────┐  │
│  │  NVDLA v3   │  │   PVA v3    │  │   Video: 8K60 decode    │  │
│  │    (2x)     │  │    (2x)     │  │          4K120 encode   │  │
│  └─────────────┘  └─────────────┘  └─────────────────────────┘  │
└──────────────────────────────────────────────────────────────────┘
```

- **Humanoid Robots**: Full-body control, real-time VLA models, multi-camera perception at 800+ TOPS
- **Autonomous Vehicles**: Level 4/5 autonomy with functional safety, sensor fusion, redundant compute
- **Industrial Manipulation**: High-DOF arms, force feedback, real-time path planning with foundation models
- **Medical Robotics**: Surgical assistance, diagnostic AI, safety-critical applications
```
┌─────────────────────────────────────────────────────────────┐
│                      Your Application                       │
├─────────────────────────────────────────────────────────────┤
│  Isaac Lab │ Isaac ROS 4.0 │ Omniverse │ cuMotion │ OSMO    │
├─────────────────────────────────────────────────────────────┤
│  TensorRT 10.13 │ cuDNN 9.12 │ CUDA 13.0 │ Triton Server   │
├─────────────────────────────────────────────────────────────┤
│                      JetPack 7.1 SDK                        │
├─────────────────────────────────────────────────────────────┤
│         Linux Kernel 6.8 LTS + Ubuntu 24.04 LTS             │
└─────────────────────────────────────────────────────────────┘
```

```bash
# JetPack 7.1 - Latest Thor SDK (Jetson Linux 38.4)
# Ubuntu 24.04, CUDA 13.0, TensorRT 10.13
# Transformer Engine support included

# Flash Thor developer kit
sudo ./flash.sh jetson-thor-devkit internal

# Install full SDK
sudo apt update
sudo apt install nvidia-jetpack
```

```bash
# Isaac ROS 4.0 generally available for Thor
# Includes GR00T foundation model support

# Add Isaac ROS apt repository
sudo apt-add-repository ppa:nvidia/isaac-ros
sudo apt update

# Install Isaac ROS packages
sudo apt install ros-jazzy-isaac-ros-core
sudo apt install ros-jazzy-isaac-ros-visual-slam
sudo apt install ros-jazzy-isaac-ros-gr00t
```

Thor’s Transformer Engine enables running foundation models at the edge:
```python
import torch
import transformer_engine.pytorch as te

# Run GR00T-style VLA model on Thor
class RobotPolicy(torch.nn.Module):
    def __init__(self):
        super().__init__()
        # FP8 automatic mixed precision
        self.vision_encoder = te.Linear(768, 1024)
        self.transformer = te.TransformerLayer(
            hidden_size=1024,
            ffn_hidden_size=4096,
            num_attention_heads=16,
        )
        self.action_head = te.Linear(1024, 32)  # Joint commands

    def forward(self, images, proprioception):
        # Runs in FP8 automatically on Thor
        x = self.vision_encoder(images)
        x = self.transformer(x)
        return self.action_head(x)
```

| Aspect | Jetson Thor (T5000) | Jetson AGX Orin |
|---|---|---|
| AI Performance | 2,070 FP4 TFLOPS | 275 TOPS |
| GPU Architecture | Blackwell | Ampere |
| Transformer Engine | Yes (FP4/FP8) | No |
| Max Memory | 128GB | 64GB |
| Memory Bandwidth | 273 GB/s | 204 GB/s |
| Power Range | 40-130W | 15-60W |
| Foundation Models | Native support | Limited |
| Target | Humanoids, L4/5 AV | AMRs, drones, industrial |
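The 128GB of unified memory combined with FP4 support translates directly into on-device model capacity. A rough sizing sketch (weights only; it ignores activations, KV cache, and runtime overhead, and the 7B figure is just an illustrative model size):

```python
def weight_gb(params_billions, bits):
    """Approximate weight footprint in GB for a model with the
    given number of parameters (in billions) at a given precision."""
    return params_billions * 1e9 * bits / 8 / 1e9

# A 7B-parameter policy at different precisions
print(weight_gb(7, 16))  # FP16: 14.0 GB
print(weight_gb(7, 8))   # FP8:   7.0 GB
print(weight_gb(7, 4))   # FP4:   3.5 GB
```

At FP4, even multi-tens-of-billions-parameter models fit in Thor's memory with room left for activations, which is what makes edge foundation models practical on this class of hardware.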
Thor developer kits are available through NVIDIA partners. To flash a kit and verify the install:
```bash
# Download JetPack 7.1 from NVIDIA
# Use SDK Manager or command line
sudo ./flash.sh jetson-thor-devkit internal

# After boot, verify
tegrastats
nvidia-smi
```

```python
import time
import torch
import transformer_engine.pytorch as te

# Verify Transformer Engine FP8 execution
device = torch.device('cuda')
x = torch.randn(32, 1024, 4096, device=device, dtype=torch.float16)
linear = te.Linear(4096, 4096, params_dtype=torch.float16)

# Measure FP8 inference throughput. Note that torch.cuda.amp.autocast
# has no FP8 mode; Transformer Engine's fp8_autocast is the
# supported FP8 path on Thor.
with torch.no_grad(), te.fp8_autocast(enabled=True):
    start = time.time()
    for _ in range(100):
        y = linear(x)
    torch.cuda.synchronize()
print(f"FP8 throughput: {100/(time.time()-start):.1f} iter/s")
```
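The iter/s figure from the benchmark maps directly onto a robot control budget. A small helper (the 250 iter/s and 100 Hz numbers below are illustrative, not measured on hardware) converts throughput into per-inference latency and checks it against a target loop rate:

```python
def latency_ms(iters_per_s):
    """Per-inference latency implied by measured throughput."""
    return 1000.0 / iters_per_s

def sustains_rate(iters_per_s, target_hz):
    """True if measured throughput can keep up with a control
    loop running at target_hz (one inference per tick)."""
    return iters_per_s >= target_hz

# e.g. 250 iter/s -> 4 ms per inference, enough for a 100 Hz loop
print(latency_ms(250.0))          # 4.0
print(sustains_rate(250.0, 100))  # True
```

This is the kind of check that matters for the humanoid use case above: a full-body VLA policy only closes the loop if its inference latency fits inside the controller's tick.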