Jetson Orin
Jetson Orin is NVIDIA’s edge AI computing platform launched in 2022, designed for robotics, autonomous machines, and AI applications requiring real-time performance at the edge. All Orin modules remain in active production with support extended through January 2032.
The Orin Family
Jetson AGX Orin — The flagship module
| Spec | 64GB | 32GB |
|---|---|---|
| AI Performance | 275 TOPS (INT8) | 200 TOPS (INT8) |
| GPU | 2048 CUDA cores, 64 Tensor cores | 1792 CUDA cores, 56 Tensor cores |
| CPU | 12-core Arm Cortex-A78AE | 8-core Arm Cortex-A78AE |
| Memory | 64GB LPDDR5 | 32GB LPDDR5 |
| Power | 15W - 60W | 15W - 40W |
| Status | Long-term support | Long-term support |
Best for: Production deployments, multi-sensor fusion, existing Orin-based designs
Jetson Orin NX — High performance, compact form factor
| Spec | 16GB | 8GB |
|---|---|---|
| AI Performance | 100 TOPS (157 w/ Super Mode) | 70 TOPS (117 w/ Super Mode) |
| GPU | 1024 CUDA cores | 1024 CUDA cores |
| CPU | 8-core Arm Cortex-A78AE | 6-core Arm Cortex-A78AE |
| Memory | 16GB LPDDR5 | 8GB LPDDR5 |
| Power | 10W - 25W (40W Super Mode) | 10W - 20W |
| Status | Active production | Active production |
Best for: Delivery robots, drones, industrial AMRs, cost-sensitive applications
Jetson Orin Nano — Entry-level AI at the edge
| Spec | 8GB | 4GB |
|---|---|---|
| AI Performance | 40 TOPS (67 w/ Super Mode) | 20 TOPS (32 w/ Super Mode) |
| GPU | 1024 CUDA cores | 512 CUDA cores |
| CPU | 6-core Arm Cortex-A78AE | 6-core Arm Cortex-A78AE |
| Memory | 8GB LPDDR5 | 4GB LPDDR5 |
| Power | 7W - 15W (25W Super Mode) | 5W - 10W |
| Status | Active production | Active production |
Best for: Entry-level robots, education, prototyping, high-volume deployments
Orin vs Thor: Which to Choose?
| Consideration | Choose Orin | Choose Thor |
|---|---|---|
| AI Performance needed | Up to 275 TOPS (INT8) | Up to 2,070 TFLOPS (FP4) |
| Budget | Cost-sensitive | Performance-critical ($3,499 dev kit) |
| Existing design | Migrating from Xavier | New humanoid/advanced robot |
| Transformer workloads | Limited | Native Transformer Engine |
| Memory | Up to 64GB | 128GB LPDDR5X |
Architecture Overview
```
┌─────────────────────────────────────────────────────────────┐
│                       Jetson Orin SoC                       │
├─────────────────────────────────────────────────────────────┤
│  ┌─────────────────┐   ┌─────────────────────────────────┐  │
│  │    Arm CPU      │   │       NVIDIA Ampere GPU         │  │
│  │  Cortex-A78AE   │   │  ┌─────────┐   ┌─────────────┐  │  │
│  │ Up to 12 cores  │   │  │  CUDA   │   │   Tensor    │  │  │
│  │                 │   │  │  Cores  │   │   Cores     │  │  │
│  └─────────────────┘   │  └─────────┘   └─────────────┘  │  │
│                        └─────────────────────────────────┘  │
│  ┌─────────────────┐   ┌─────────────────────────────────┐  │
│  │ Deep Learning   │   │          Video Engines          │  │
│  │  Accelerator    │   │   NVENC │ NVDEC │ JPEG │ OFA    │  │
│  │   (DLA x2)      │   │                                 │  │
│  └─────────────────┘   └─────────────────────────────────┘  │
│  ┌─────────────────────────────────────────────────────┐    │
│  │              Memory: LPDDR5 (256-bit)               │    │
│  └─────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────┘
```
Key Components
- Ampere GPU: CUDA and Tensor cores for parallel compute and AI inference
- DLA (Deep Learning Accelerator): Dedicated AI inference engines (2x on AGX Orin)
- PVA (Programmable Vision Accelerator): Computer vision preprocessing
- Video Engines: Hardware encode/decode for camera streams
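Models can be steered onto the DLA engines through TensorRT's `trtexec` tool rather than the GPU. Below is a hedged sketch, not a definitive recipe: `model.onnx` is a placeholder path, the flags assume a JetPack 6-era TensorRT, and the script falls back to a message on machines without `trtexec`:

```shell
# Sketch: run an ONNX model on DLA core 0, letting layers the DLA
# cannot execute fall back to the GPU. model.onnx is a placeholder.
if command -v trtexec >/dev/null 2>&1; then
    trtexec --onnx=model.onnx \
            --useDLACore=0 \
            --allowGPUFallback \
            --int8
else
    echo "trtexec not found - run this on a Jetson with JetPack installed"
fi
```

`--allowGPUFallback` matters in practice: the DLA supports a subset of layer types, so most real networks run partly on DLA and partly on the GPU.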
Software Stack
```
# JetPack 6.2.1 - Long Term Support (Recommended for Orin)
# Ubuntu 22.04, CUDA 12.6, TensorRT 10.3, cuDNN 9.3
sudo apt update
sudo apt install nvidia-jetpack
```
Recommended for production Orin deployments requiring stability.
```
# JetPack 7.x - Latest features (Primary target: Thor)
# Ubuntu 24.04, CUDA 13.0, Linux Kernel 6.8
# Orin support expected in JetPack 7.2 (Q1 2026)
```
Primary target is Jetson Thor; Orin support expected in JetPack 7.2.
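To see which release a board is actually running, the L4T version string maps to a JetPack version (R36.x corresponds to JetPack 6.x). A guarded sketch; the `/etc/nv_tegra_release` file exists only on flashed Jetson devices, so the fallback message is for other machines:

```shell
# Print the installed L4T release string, which identifies the
# JetPack generation (e.g. an R36.x line means JetPack 6.x).
if [ -f /etc/nv_tegra_release ]; then
    head -n 1 /etc/nv_tegra_release
else
    echo "not a Jetson: /etc/nv_tegra_release missing"
fi
```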
```
┌─────────────────────────────────────────┐
│            Your Application             │
├─────────────────────────────────────────┤
│  Isaac ROS │ ROS 2 │ DeepStream │ TAO   │
├─────────────────────────────────────────┤
│  TensorRT │ cuDNN │ CUDA │ OpenCV       │
├─────────────────────────────────────────┤
│           JetPack SDK (L4T)             │
├─────────────────────────────────────────┤
│        Linux Kernel + Drivers           │
└─────────────────────────────────────────┘
```
Getting Started
1. Flash the Device
```
# Using NVIDIA SDK Manager (recommended)
# Or command line:
sudo ./flash.sh jetson-agx-orin-devkit internal
```
2. Install JetPack Components
```
sudo apt update
sudo apt install nvidia-jetpack
```
3. Verify Installation
```
# Check CUDA
nvcc --version

# Check TensorRT
dpkg -l | grep tensorrt

# Monitor system
tegrastats
```
Power Management
Orin supports multiple power modes via nvpmodel:
```
# List available modes
sudo nvpmodel -q --verbose

# Set to max performance (AGX Orin)
sudo nvpmodel -m 0   # MAXN: 60W

# Set to power-efficient mode
sudo nvpmodel -m 3   # 30W

# Maximize clocks (for benchmarking)
sudo jetson_clocks
```
| Mode | AGX Orin Power | Use Case |
|---|---|---|
| MAXN | 60W | Maximum performance |
| 50W | 50W | High performance |
| 30W | 30W | Balanced |
| 15W | 15W | Power-constrained |
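Whichever mode is active, `tegrastats` emits one status line per interval, and fields such as `RAM` and `GR3D_FREQ` (GPU load) can be pulled out with standard text tools. A sketch using a sample line in place of live output; the field layout is assumed from a JetPack 6-era `tegrastats`:

```shell
# Extract RAM usage and GPU load from a tegrastats-style line.
# On a real board, replace the sample with: tegrastats | head -n 1
sample='RAM 3162/30536MB (lfb 6x4MB) CPU [1%@2201,0%@2201] GR3D_FREQ 37%'
echo "$sample" | grep -o 'RAM [0-9]*/[0-9]*MB'
echo "$sample" | grep -o 'GR3D_FREQ [0-9]*%'
```

This prints `RAM 3162/30536MB` and `GR3D_FREQ 37%`, which is handy for logging thermals and utilization while comparing power modes.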
Learn More
- Jetson Orin Official Page
- JetPack 6.x LTS Documentation
- Orin → Thor Migration Guide
- Jetson Developer Forums
Sources
- NVIDIA Jetson Orin — Official specifications
- NVIDIA AGX Orin Technical Brief — AGX Orin 64GB/32GB variant specifications
- NVIDIA JetPack SDK — Current JetPack 6.2.1 versions
- JetPack 6.2 Super Mode Blog — Super Mode feature details
- Jetson Product Lifecycle — LTS extended through January 2032
- NVIDIA Newsroom - Jetson Thor — Thor availability and specs