Jetson Orin

Jetson Orin is NVIDIA’s edge AI computing platform launched in 2022, designed for robotics, autonomous machines, and AI applications requiring real-time performance at the edge. All Orin modules remain in active production with support extended through January 2032.

The Orin Family

Jetson AGX Orin — The flagship module

| Spec | 64GB | 32GB |
| --- | --- | --- |
| AI Performance | 275 TOPS (INT8) | 200 TOPS (INT8) |
| GPU | 2048 CUDA cores, 64 Tensor cores | 1792 CUDA cores, 56 Tensor cores |
| CPU | 12-core Arm Cortex-A78AE | 8-core Arm Cortex-A78AE |
| Memory | 64GB LPDDR5 | 32GB LPDDR5 |
| Power | 15W - 60W | 15W - 40W |
| Status | Long-term support | Long-term support |

Best for: Production deployments, multi-sensor fusion, existing Orin-based designs

Orin vs Thor: Which to Choose?

| Consideration | Choose Orin | Choose Thor |
| --- | --- | --- |
| AI performance needed | <300 TOPS | 2,070 TFLOPS (FP4) |
| Budget | Cost-sensitive | Performance-critical ($3,499 dev kit) |
| Existing design | Migrating from Xavier | New humanoid/advanced robot |
| Transformer workloads | Limited | Native Transformer Engine |
| Memory | Up to 64GB | 128GB LPDDR5X |

Architecture Overview

┌─────────────────────────────────────────────────────────┐
│                    Jetson Orin SoC                      │
├─────────────────────────────────────────────────────────┤
│  ┌──────────────────┐  ┌─────────────────────────────┐  │
│  │  Arm CPU         │  │  NVIDIA Ampere GPU          │  │
│  │  Cortex-A78AE    │  │  ┌────────┐  ┌───────────┐  │  │
│  │  Up to 12 cores  │  │  │ CUDA   │  │ Tensor    │  │  │
│  │                  │  │  │ Cores  │  │ Cores     │  │  │
│  └──────────────────┘  │  └────────┘  └───────────┘  │  │
│                        └─────────────────────────────┘  │
│  ┌──────────────────┐  ┌─────────────────────────────┐  │
│  │  Deep Learning   │  │  Video Engines              │  │
│  │  Accelerator     │  │  NVENC │ NVDEC │ JPEG │ OFA │  │
│  │  (DLA x2)        │  │                             │  │
│  └──────────────────┘  └─────────────────────────────┘  │
│  ┌───────────────────────────────────────────────────┐  │
│  │  Memory: LPDDR5 (256-bit)                         │  │
│  └───────────────────────────────────────────────────┘  │
└─────────────────────────────────────────────────────────┘

Key Components

  • Ampere GPU: CUDA and Tensor cores for parallel compute and AI inference
  • DLA (Deep Learning Accelerator): Dedicated AI inference engines (2x on AGX Orin)
  • PVA (Programmable Vision Accelerator): Computer vision preprocessing
  • Video Engines: Hardware encode/decode for camera streams
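A common way to use the DLAs is through TensorRT's `trtexec` tool: `--useDLACore` places the engine on a DLA, and `--allowGPUFallback` lets layers the DLA cannot run fall back to the GPU. A minimal sketch, assuming a hypothetical `model.onnx` in the current directory:

```shell
# Sketch: build an INT8 engine on DLA core 0 with TensorRT's trtexec.
# "model.onnx" is a placeholder filename; --useDLACore selects the DLA,
# --allowGPUFallback lets unsupported layers run on the GPU instead.
TRT_CMD="trtexec --onnx=model.onnx --int8 --useDLACore=0 --allowGPUFallback --saveEngine=model_dla.engine"

# Only attempt the build where trtexec exists (i.e. on a flashed Jetson)
if command -v trtexec >/dev/null 2>&1; then
  $TRT_CMD
else
  echo "trtexec not found; run this on a Jetson with JetPack installed"
fi
```

Offloading steady inference to a DLA this way frees GPU capacity for other kernels, at the cost of a more restricted layer set.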

Software Stack

```shell
# JetPack 6.2.1 - Long Term Support (Recommended for Orin)
# Ubuntu 22.04, CUDA 12.6, TensorRT 10.3, cuDNN 9.3
sudo apt update
sudo apt install nvidia-jetpack
```

Recommended for production Orin deployments requiring stability.
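To confirm which L4T release (and hence JetPack generation) a board is running, `/etc/nv_tegra_release` carries the version string. A sketch of parsing it; the sample line below is illustrative (JetPack 6.x images ship L4T r36), not output from a specific board:

```shell
# Sample first line of /etc/nv_tegra_release (illustrative values)
REL_LINE="# R36 (release), REVISION: 4.3, BOARD: generic, EABI: aarch64"
# On a real board, read it instead:
# REL_LINE=$(head -n1 /etc/nv_tegra_release)

# Extract the major release and revision
MAJOR=$(printf '%s\n' "$REL_LINE" | sed -n 's/^# R\([0-9]*\).*/\1/p')
REVISION=$(printf '%s\n' "$REL_LINE" | sed -n 's/.*REVISION: \([0-9.]*\).*/\1/p')
echo "L4T r${MAJOR}.${REVISION}"
```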

┌─────────────────────────────────────────┐
│            Your Application             │
├─────────────────────────────────────────┤
│  Isaac ROS │ ROS 2 │ DeepStream │ TAO   │
├─────────────────────────────────────────┤
│    TensorRT │ cuDNN │ CUDA │ OpenCV     │
├─────────────────────────────────────────┤
│            JetPack SDK (L4T)            │
├─────────────────────────────────────────┤
│         Linux Kernel + Drivers          │
└─────────────────────────────────────────┘

Getting Started

1. Flash the Device

```shell
# Using NVIDIA SDK Manager (recommended)
# Or from the command line:
sudo ./flash.sh jetson-agx-orin-devkit internal
```

2. Install JetPack Components

```shell
sudo apt update
sudo apt install nvidia-jetpack
```

3. Verify Installation

```shell
# Check CUDA
nvcc --version
# Check TensorRT
dpkg -l | grep tensorrt
# Monitor system
tegrastats
```
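`tegrastats` prints one status line per interval. As a rough sketch of pulling memory usage out of a captured line (the sample line and its field layout are assumptions; the format varies across JetPack releases):

```shell
# Sample tegrastats line (illustrative; layout varies by JetPack release)
LINE="RAM 4722/62841MB (lfb 4x4MB) CPU [1%@2201,0%@2201] GR3D_FREQ 0%"

# Find the "RAM used/totalMB" field and split it
RAM_FIELD=$(echo "$LINE" | awk '{for (i=1;i<=NF;i++) if ($i=="RAM") print $(i+1)}')
RAM_USED=${RAM_FIELD%%/*}
RAM_TOTAL=${RAM_FIELD#*/}
RAM_TOTAL=${RAM_TOTAL%MB}
echo "RAM: ${RAM_USED} MB used of ${RAM_TOTAL} MB"
```

The same pattern extends to the CPU and GR3D (GPU) fields for lightweight logging without extra tooling.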

Power Management

Orin supports multiple power modes via nvpmodel:

```shell
# List available modes
sudo nvpmodel -q --verbose
# Set to max performance (AGX Orin)
sudo nvpmodel -m 0   # MAXN: 60W
# Set to a power-efficient mode
sudo nvpmodel -m 3   # 30W
# Maximize clocks (for benchmarking)
sudo jetson_clocks
```

| Mode | AGX Orin Power | Use Case |
| --- | --- | --- |
| MAXN | 60W | Maximum performance |
| 50W | 50W | High performance |
| 30W | 30W | Balanced |
| 15W | 15W | Power-constrained |
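After switching modes, `nvpmodel -q` reports the active mode name and ID. A sketch of extracting the name from captured output; the sample text is assumed from typical L4T releases and the exact wording can vary:

```shell
# Sample output of `sudo nvpmodel -q` (wording may vary by L4T release)
NVP_OUT="NV Power Mode: MAXN
0"

# Pull the mode name out of the "NV Power Mode:" line
MODE=$(printf '%s\n' "$NVP_OUT" | sed -n 's/^NV Power Mode: //p')
echo "Active mode: $MODE"
```

Checking the reported mode in a startup script is a cheap guard against deploying a power-constrained configuration by accident.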
