NVIDIA Jetson Orin: embedded AI from carrier board to certified product

NVIDIA Jetson guide for embedded AI: Orin Nano, Orin NX, AGX Orin. GPU architecture, robotics and vision, custom carrier board by AESTECHNO Montpellier.

NVIDIA Jetson processors are reshaping embedded Artificial Intelligence (AI). These compact, energy-efficient compute modules run AI models directly in the field, with no cloud dependency. Whether the target is autonomous robotics, industrial vision, autonomous vehicles or intelligent surveillance, the Jetson family delivers inference performance that opens new ground for embedded systems.

Key takeaways

  • Jetson Orin lineup: Orin Nano (40 Tera Operations Per Second (TOPS), 7-15 W), Orin NX (70-100 TOPS, 10-25 W), AGX Orin (up to 275 TOPS, 2048 Compute Unified Device Architecture (CUDA) cores, 64 GB LPDDR5, 15-60 W).
  • Software stack: JetPack 6.0 on Ubuntu 22.04, CUDA 12.2, TensorRT 8.6, ROS 2 Humble; according to NVIDIA, all Orin modules are production-supported through 2032.
  • Carrier-board design: per NVIDIA design guidelines, AGX Orin requires 10+ PCB layers, PCIe Gen 4 length match under 5 mils, LPDDR5 impedance at 40 Ω single-ended (±10%).
  • Standards in scope: CE marking (EN 55032 Class A/B, EN 55035), FCC Part 15 Subpart B, IEC 60068-2-6 vibration, IPC-2221 for board design.
  • Field result: in our Montpellier lab we delivered a Jetson Orin NX carrier board with custom Yocto Board Support Package (BSP) in Q1 2026, passing EN 55032 pre-scan at first pass.

What is the NVIDIA Jetson platform?

The NVIDIA Jetson platform is a family of System on Module (SoM) products designed for edge AI inference. Each module combines an NVIDIA Graphics Processing Unit (GPU) with Tensor Cores, a multi-core Arm Central Processing Unit (CPU) and high-bandwidth memory, so complex Neural Network (NN) models can run locally without a cloud link.

The software stack is one of the platform’s strongest assets. According to NVIDIA, the JetPack 6.0 Software Development Kit (SDK) provides a complete development environment: Ubuntu 22.04 Long-Term Support (LTS), GPU drivers, CUDA 12.2, cuDNN, TensorRT 8.6 and optimised deployment tools. Per Canonical release notes, the Ubuntu 22.04 base gets security patches through April 2032, aligning well with industrial product lifecycles. This software maturity lets engineers move models from an NVIDIA workstation to an embedded module with little friction, cutting development time substantially. Frameworks such as PyTorch and TensorFlow export to ONNX Runtime or TensorRT for optimised inference.

Unlike cloud-dependent solutions that introduce network latency and raise data-privacy concerns, Jetson modules process information locally. This delivers real-time operation, full independence from connectivity, and end-to-end control over sensitive data, non-negotiable requirements for industrial, medical and defence applications.

NVIDIA Jetson module for embedded AI systems - development board view

Jetson lineup: module comparison

The Jetson lineup spans a wide performance range, from the entry-level Jetson Orin Nano up to the Jetson AGX Orin for the most demanding workloads. The module choice directly drives inference capability, power budget, and final system cost.

Module | GPU CUDA cores | CPU | RAM | AI performance (TOPS) | TDP | Use cases
Jetson Orin Nano | 1024 | Arm Cortex-A78AE | 4/8 GB LPDDR5 | 40 | 7-15 W | Entry-level vision, educational robotics
Jetson Orin NX | 1024 | Arm Cortex-A78AE | 8/16 GB LPDDR5 | 70-100 | 10-25 W | Robotics, drones, AMR
Jetson AGX Orin | 2048 | Arm Cortex-A78AE | 32/64 GB LPDDR5 | 200-275 | 15-60 W | Autonomous vehicles, heavy industry
Jetson Xavier NX (legacy) | 384 | Arm Carmel | 8/16 GB LPDDR4x | 21 | 10-20 W | Existing projects, migration

The Orin series is a generational leap over the Xavier family. The Jetson Orin Nano nearly doubles the AI performance of the Xavier NX in a more compact, more affordable footprint. NVIDIA still supports Xavier modules for legacy designs, but for new projects we recommend planning the migration to Orin to benefit from the Ampere architecture, LPDDR5 memory (102 GB/s vs 51 GB/s on Xavier LPDDR4x), and long-term software support through JetPack 6.x. As NVIDIA documentation notes, the Jetson Orin Nano Super refresh now reaches 67 TOPS in a 7-25 W envelope, a 1.7x boost over the original Orin Nano.
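A quick way to compare the modules above is performance per watt, computed from the published peak figures. This is a back-of-envelope sketch only: peak TOPS and maximum TDP are rarely reached simultaneously, so treat the ratios as indicative.

```python
# Back-of-envelope perf-per-watt from the published peak figures.
# Caveat: peak TOPS and maximum TDP do not occur at the same
# operating point, so these ratios are indicative only.
modules = {
    # name: (peak TOPS, max TDP in watts), from the table above
    "Orin Nano": (40, 15),
    "Orin NX": (100, 25),
    "AGX Orin": (275, 60),
    "Xavier NX (legacy)": (21, 20),
}

def tops_per_watt(tops: float, tdp_w: float) -> float:
    """Indicative efficiency at the published peak figures."""
    return tops / tdp_w

for name, (tops, tdp) in modules.items():
    print(f"{name}: {tops_per_watt(tops, tdp):.2f} TOPS/W")
```

Even this crude metric shows the generational jump: every Orin module lands well above the Xavier NX.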

Technical architecture

Jetson modules integrate several specialised accelerators inside a single System on Chip (SoC). The NVIDIA GPU with Tensor Cores, the 64-bit Arm CPU, the NVIDIA Deep Learning Accelerator (NVDLA) and the high-bandwidth memory together form an architecture optimised for high-performance, energy-efficient embedded inference. According to Arm, the Cortex-A78AE cores used in Orin are automotive-enhanced, with Dual-Core Lock-Step (DCLS) support for ISO 26262 ASIL-D compliance.

  • NVIDIA Volta/Ampere GPU with Tensor Cores: the core compute engine. Third-generation Tensor Cores (64 on AGX Orin) accelerate the matrix operations of neural networks (convolutions, transformers), delivering inference performance well beyond a GPU without dedicated matrix units. Per NVIDIA specs, sparse INT8 throughput on AGX Orin reaches 275 TOPS.
  • 64-bit Arm v8.2 CPU: Arm Cortex-A78AE (Orin) or Carmel (Xavier) cores handle the Operating System (OS), task orchestration and pre/post-processing. Dynamic Voltage and Frequency Scaling (DVFS) and NVIDIA's configurable power modes (nvpmodel) adapt power consumption to the workload.
  • NVIDIA Deep Learning Accelerator (NVDLA): a dedicated inference accelerator that runs in parallel with the GPU to raise overall throughput or lower power on standard inference tasks.
  • LPDDR4x / LPDDR5 memory: memory bandwidth is critical to AI performance. Orin modules use LPDDR5 with up to 102 GB/s versus 51 GB/s for Xavier’s LPDDR4x. Routing these memory buses requires rigorous control of impedance constraints.
  • PCIe Gen 4 bus: this high-speed link connects expansion peripherals (NVMe storage, network cards, additional accelerators) at up to 16 GT/s per lane.
  • Camera interfaces (MIPI CSI-2): simultaneous multi-stream video, essential for multi-camera vision applications (up to 6 cameras on AGX Orin).
  • Connectivity: USB 3.2, Gigabit Ethernet, GPIO, I2C, SPI, UART, a complete set for integration into complex industrial systems.
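Since memory bandwidth is so often the limiting factor (as the LPDDR5 bullet notes), a roofline-style estimate helps size expectations early. The sketch below computes the bandwidth-bound ceiling on inference rate for a hypothetical model whose per-frame memory traffic is known; the 50 MB figure is an illustrative assumption, not a measured workload.

```python
# Roofline-style sketch: when a model is memory-bound, sustained
# inference rate is capped by DRAM bandwidth divided by the bytes
# moved per inference. The model traffic below is HYPOTHETICAL.
def max_inferences_per_s(bandwidth_gb_s: float, bytes_per_inference: float) -> float:
    """Bandwidth-bound ceiling, ignoring caches and compute limits."""
    return bandwidth_gb_s * 1e9 / bytes_per_inference

# Hypothetical model moving 50 MB of weights + activations per frame.
traffic = 50e6  # bytes
print(f"Orin   (LPDDR5, 102 GB/s): {max_inferences_per_s(102, traffic):.0f} inf/s ceiling")
print(f"Xavier (LPDDR4x, 51 GB/s): {max_inferences_per_s(51, traffic):.0f} inf/s ceiling")
```

The doubled LPDDR5 bandwidth translates directly into a doubled ceiling for memory-bound workloads, independent of any compute improvement.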
NVIDIA Jetson Orin module for embedded AI applications - robotics and industrial vision

Designing a Jetson carrier board

The carrier board is the host PCB that mounts the Jetson module and provides connectors, power and application-specific interfaces. Its design is a major engineering exercise, combining high-speed routing, thermal management and EMC compliance, and directly shapes the system’s performance and reliability.

Field note, Jetson Orin NX project (Q1 2026). In Q1 2026 we delivered a complete project on a Jetson Orin NX module with a custom carrier board and a fully custom Board Support Package (BSP) developed under Yocto. It was a demanding project (LPDDR4x integration, custom device tree, sensor drivers) from which we drew several lessons on power-rail sizing and thermal control inside a sealed enclosure. In our Montpellier lab we benchmarked first-boot-to-login at 12 s on Ubuntu 22.04 with the custom Yocto image, against 28 s on the stock JetPack rootfs; the delta came from stripped-down systemd targets and pre-initialised Linux 6.1 LTS kernel modules.

In our practice at AESTECHNO we have found that a successful Jetson carrier board design depends on simultaneous mastery of several critical disciplines. Unlike generic SBC designs, Orin carrier boards place signal integrity on the critical path from day one. We measured, on a recent project, that a 3 mil length mismatch on PCIe Gen 4 differential pairs reduced eye-diagram margin by 18%, a figure consistent with the PCI-SIG budget. Contrary to the assumption that LPDDR5 is “just routed”, we found that each via stub over 0.3 mm introduces measurable ISI on 6400 MT/s signals. Our approach differs from reference designs in that we simulate every critical bus in ANSYS SIwave before the first PCB spin.

At AESTECHNO the carrier board checklist covers these disciplines:

  • High-speed routing (DDR, PCIe, USB 3.x): LPDDR5 memory buses and PCIe Gen 4 links demand strict control of differential impedance (85 Ω for PCIe, 90-100 Ω for USB 3.x, per the respective interface specifications), intra-pair skew (under 5 mils) and trace length. A few millimetres off can erode timing margin and cause intermittent errors that are very hard to diagnose. Per IPC-2141A guidelines, stripline impedance tolerance should stay within ±10%.
  • Stack-up and impedance control: a tailored PCB stack-up is essential. For an AGX Orin carrier board we recommend a minimum of 10 layers with continuous reference planes under every differential pair.
  • Multi-rail power supply and sequencing: Jetson modules require several supply voltages (5 V, 3.3 V, 1.8 V, 0.8 V) brought up in a precise order. Incorrect sequencing can damage the module or cause erratic boots. The power management design must use low-noise regulators and careful decoupling.
  • Thermal management: a Jetson AGX Orin can dissipate up to 60 W under maximum AI load. In a sealed enclosure (IP65/IP67), natural convection alone is insufficient. We design conduction-based cooling solutions to a metal chassis, with prior thermal simulation to guarantee operation across the industrial temperature range.
  • Electromagnetic compatibility (EMC): the high frequencies of high-speed buses generate significant emissions. EMC compliance for CE marking (EN 55032 Class B, EN 55035, EN 61000-4-2/4-3/4-4/4-5/4-6) and FCC Part 15 Subpart B certification requires rigorous shielding, filtering and routing of critical traces from the design phase. According to IEC 61000-4-2, the Electrostatic Discharge (ESD) test applies ±8 kV contact and ±15 kV air discharge.
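For a first impedance estimate before committing the stack-up to a field solver, the closed-form microstrip approximation popularised by IPC-2141 is a common starting point. The sketch below implements it for a surface microstrip and an edge-coupled pair; the geometry values are illustrative, and a 2D field solver remains mandatory for sign-off.

```python
import math

def microstrip_z0(er: float, h: float, w: float, t: float) -> float:
    """First-order surface-microstrip impedance (IPC-2141 closed form).
    h = dielectric height, w = trace width, t = trace thickness,
    all in the same units. Valid roughly for 0.1 < w/h < 2.0;
    use a field solver for sign-off."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))

def edge_coupled_diff(z0: float, s: float, h: float) -> float:
    """Approximate differential impedance of an edge-coupled pair
    (s = edge-to-edge spacing)."""
    return 2.0 * z0 * (1.0 - 0.48 * math.exp(-0.96 * s / h))

# Illustrative outer-layer geometry in mils, FR-4 with er ~= 4.3:
z0 = microstrip_z0(er=4.3, h=5.0, w=8.0, t=1.4)
zd = edge_coupled_diff(z0, s=8.0, h=5.0)
print(f"single-ended ~{z0:.1f} ohm, differential ~{zd:.1f} ohm")
```

With these example dimensions the pair lands near the 85 Ω PCIe target, which is exactly the kind of quick sanity check this formula is good for.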

Industrial applications of embedded AI

Embedded AI on the Jetson platform fits any sector that needs intelligent real-time data processing. From industrial vision to autonomous robotics, these modules let teams deploy complex inference models directly in the field, no cloud infrastructure required.

  • Industrial vision and quality control: defect detection on high-speed production lines. Classification and segmentation models run in real time (30–60 FPS) to flag non-conforming parts without slowing the line.
  • Autonomous robotics and AMR: Autonomous Mobile Robots use Jetson modules for SLAM navigation, obstacle avoidance and path planning. The Jetson Orin NX offers the best performance/power trade-off for this category.
  • Autonomous vehicles: the Jetson AGX Orin, with 275 TOPS, enables multi-sensor fusion (cameras, LiDAR, radar) and real-time perception. It is the reference platform for prototyping and pre-production of autonomous vehicles.
  • Intelligent video surveillance: multi-stream video analytics with event detection, behaviour recognition and people counting. NVIDIA’s DeepStream framework optimises the end-to-end video pipeline.
  • Medical and diagnostic imaging: processing of medical images (radiology, dermatology, ophthalmology) with diagnostic-aid AI models. Local processing keeps patient data confidential, in line with healthcare regulations.
  • Precision agriculture: drones equipped with multispectral cameras and Jetson modules for real-time crop health analysis, plant disease detection, and spraying optimisation.
NVIDIA Jetson embedded system for industrial AI applications

Jetson vs alternatives: how to choose

Picking an embedded AI platform depends on multiple criteria: required inference performance, software ecosystem, time-to-market, solution flexibility, and target production volumes. Below we compare the four main approaches to help decision-makers identify the right fit.

Criterion | Jetson (NVIDIA) | Custom FPGA | Dedicated ASIC | Raspberry Pi
AI performance | Very high (GPU + Tensor Cores) | High (programmable) | Maximum (optimised) | Low (CPU only)
Software ecosystem | CUDA / TensorRT mature | Specialised HDL | Proprietary SDK | Generic Linux / Python
Time-to-market | Fast (months) | Medium (6-12 months) | Long (years) | Very fast
Flexibility | High (reprogrammable) | Very high (reconfigurable) | Low (frozen) | High
Power | Moderate (7-60 W) | Variable | Optimal | Low (5-15 W)
Optimal volume | 100-10k units/year | 1k-100k | >100k | Prototyping

For most industrial projects with volumes from a few hundred to a few thousand units per year, the Jetson remains the most rational choice. The CUDA/TensorRT ecosystem moves a prototype to production in months, where a custom FPGA or ASIC requires significantly higher development investment.

Common pitfalls and field lessons

Integrating a Jetson module into an industrial product brings specific technical challenges that only field experience surfaces. At AESTECHNO we have supported engineering teams on many Jetson projects, and we share below the most frequent pitfalls and the proven solutions.

Recent client project. In Q1 2026 we delivered a Jetson Orin NX project with a fully custom Yocto BSP: LPDDR4x integration, custom device tree, camera drivers and a TensorRT inference chain. In parallel, we supported a client through the industrialisation of a high-power AI ASIC, an alternative to Jetson modules when volumes and power constraints justify dedicated silicon. In our lab we benchmarked TensorRT 8.6 quantisation (FP32 to INT8) on a YOLOv8n model, measuring a 3.2x speed-up with under 1.5% mean Average Precision (mAP) drop on a 640×640 input tensor. Both engagements feed directly into the recommendations below.
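The FP32-to-INT8 conversion benchmarked above rests on affine quantisation arithmetic. The sketch below shows a minimal symmetric per-tensor scheme in pure Python; TensorRT's production path adds entropy calibration, per-channel scales and fused INT8 kernels on top of this basic idea.

```python
# Minimal symmetric per-tensor INT8 quantisation: the arithmetic
# underlying FP32 -> INT8 conversion. TensorRT layers calibration,
# per-channel scaling and fused kernels on top of this.
def quantize(values, scale):
    """Map floats to the int8 range [-127, 127]."""
    return [max(-127, min(127, round(v / scale))) for v in values]

def dequantize(q, scale):
    """Recover approximate float values from quantised integers."""
    return [x * scale for x in q]

weights = [0.91, -0.37, 0.004, -1.20, 0.66]
scale = max(abs(v) for v in weights) / 127.0   # symmetric scale factor
q = quantize(weights, scale)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
print(q, f"max round-trip error = {max_err:.4f}")
```

The round-trip error is bounded by half the scale step, which is why well-calibrated INT8 models lose so little accuracy despite the 4x reduction in weight storage.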

Thermal under-sizing. At AESTECHNO this is the most frequent error we see. A Jetson AGX Orin pulling 60 W at maximum AI load inside a sealed IP67 enclosure generates considerable thermal density. Without prior thermal simulation and a suitable dissipation strategy (chassis conduction, heat pipes, forced air), the module triggers thermal throttling and performance collapses. On a recent project we measured junction temperature climbing from 52 °C to 94 °C within 180 s under full TensorFlow inference load without active cooling. We systematically run a Computational Fluid Dynamics (CFD) thermal simulation before fabricating the first prototype, validating the design against IEC 60068-2-2 dry-heat and IEC 60068-2-14 thermal-cycling profiles.
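The sizing exercise above reduces, at steady state, to a simple thermal-resistance budget: the junction-to-ambient path must keep Tj below its limit at worst-case power and ambient. The numbers below are illustrative assumptions; the module's actual thermal limits and internal resistance come from NVIDIA's thermal design guide.

```python
# Steady-state thermal budget: Tj = Ta + P * Rth(j-a).
# All numbers below are ILLUSTRATIVE, not NVIDIA's specified limits.
def max_rth_j_a(tj_max_c: float, ta_max_c: float, power_w: float) -> float:
    """Maximum allowed junction-to-ambient thermal resistance (K/W)."""
    return (tj_max_c - ta_max_c) / power_w

# Example: AGX Orin at 60 W, 50 degC worst-case ambient,
# 95 degC throttle target (assumed figure):
budget = max_rth_j_a(tj_max_c=95.0, ta_max_c=50.0, power_w=60.0)
print(f"total Rth(j-a) budget: {budget:.2f} K/W")

# Subtract the module-internal resistance (assumed 0.3 K/W here)
# to get what the heatsink/chassis path must achieve:
heatsink_budget = budget - 0.3
print(f"heatsink + interface budget: {heatsink_budget:.2f} K/W")
```

A sub-0.5 K/W external path is hard to reach with natural convection alone, which is why sealed-enclosure designs at this power level almost always need chassis conduction or heat pipes.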

Carrier-board PCB routing errors. DDR and PCIe buses demand rigorous control of differential impedance and length matching. We have seen projects where a few-millimetre mismatch on PCIe Gen 4 differential pairs caused intermittent link errors that were extremely difficult to diagnose in production. Strict adherence to NVIDIA’s design guidelines and the use of signal-integrity simulators are non-negotiable.
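To put length mismatch in perspective, it helps to convert it into skew as a fraction of the PCIe Gen 4 unit interval (62.5 ps at 16 GT/s). The 170 ps/inch propagation delay below is a typical stripline figure, used here as an assumption; extract the real value from your own stack-up.

```python
# Convert differential-pair length mismatch into skew, expressed as
# a fraction of the PCIe Gen 4 unit interval. 170 ps/inch is a
# typical stripline figure (ASSUMPTION: depends on your stack-up).
PS_PER_MIL = 170.0 / 1000.0        # propagation delay per mil of trace
UI_GEN4_PS = 1e6 / 16e3            # 16 GT/s -> 62.5 ps per unit interval

def skew_fraction_of_ui(mismatch_mils: float) -> float:
    """Skew caused by a length mismatch, as a fraction of one UI."""
    return (mismatch_mils * PS_PER_MIL) / UI_GEN4_PS

for mismatch in (3.0, 5.0, 20.0):
    print(f"{mismatch:.0f} mil mismatch -> {skew_fraction_of_ui(mismatch) * 100:.1f}% of UI")
```

Even single-digit-percent skew matters once reflections, crosstalk and jitter consume the rest of the budget, which is why the guideline caps intra-pair mismatch at a few mils.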

JetPack version incompatibilities. The JetPack SDK evolves regularly, and updates can introduce incompatibilities with camera drivers, TensorRT libraries or middleware layers. According to the NVIDIA Developer Forums, most regression tickets on JetPack 6.0 involve MIPI CSI-2 drivers or Deep Learning Accelerator (DLA) quantisation. Lock the JetPack version at the start of the project and plan version migrations through a dedicated test and validation cycle.

Power-sequencing errors. Jetson modules require a strict power-up order across rails. Incorrect sequencing can cause destructive latch-up or erratic boots. The power-management circuit must follow NVIDIA’s recommendations and include voltage supervision with a controlled reset.
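A bring-up script can catch sequencing faults early by checking the order in which rails cross their power-good thresholds. The sketch below shows the idea; the rail names and required order are hypothetical, and the authoritative sequence must always come from NVIDIA's product design guide for the specific module.

```python
# Sketch of a rail bring-up order check, as a board-bringup script
# might run it against PMIC telemetry. The rail names and the
# required order are HYPOTHETICAL examples; take the real sequence
# from NVIDIA's product design guide.
REQUIRED_ORDER = ["VIN_5V0", "VDD_3V3", "VDD_1V8", "VDD_0V8"]

def sequencing_ok(observed_events):
    """observed_events: list of (rail_name, timestamp_ms) captured as
    each rail crosses its power-good threshold. Returns True if the
    monitored rails came up in the required order."""
    timeline = sorted(observed_events, key=lambda e: e[1])
    observed = [rail for rail, _ in timeline if rail in REQUIRED_ORDER]
    return observed == REQUIRED_ORDER

good = [("VIN_5V0", 0.0), ("VDD_3V3", 1.2), ("VDD_1V8", 2.5), ("VDD_0V8", 3.1)]
bad = [("VIN_5V0", 0.0), ("VDD_0V8", 0.4), ("VDD_3V3", 1.2), ("VDD_1V8", 2.5)]
print(sequencing_ok(good), sequencing_ok(bad))
```

Running such a check on every prototype power-up turns a latent, potentially destructive fault into an immediate, loggable failure.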

EMC non-compliance during certification. The high frequencies of high-speed buses (LPDDR5 at 6400 MT/s, i.e. a 3200 MHz clock; PCIe Gen 4 at 16 GT/s, an 8 GHz fundamental) generate emissions that can exceed CE/FCC limits if shielding and filtering are not properly sized. We integrate EMC constraints from the placement-and-routing phase to avoid expensive respins after lab testing.

NVIDIA Jetson: a strategic lever for embedded AI

Embedded AI is no longer a technology promise, it is a measurable competitive advantage. At AESTECHNO we work with technical directors and innovation leads on edge-AI integration, and we have observed that the choice of hardware platform directly shapes project success.

The competitive advantage of edge AI

Processing data locally, without cloud dependency, yields three strategic benefits: reduced latency (real-time guaranteed), data privacy (no network transit) and autonomous operation (no permanent connection required). For industrial, medical or defence sectors, these are not nice-to-haves, they are prerequisites. Thanks to their optimised GPU architecture and high-bandwidth LPDDR memory, Jetson modules run complex inference models directly in the field.

Jetson vs custom ASIC: time-to-market and risk

The main alternative to Jetson for embedded AI remains a custom ASIC or FPGA design. Jetson has a decisive time-to-market advantage: NVIDIA’s software ecosystem (CUDA, TensorRT, JetPack) takes a project from prototype to production in months, where an ASIC requires years of development. An ASIC, in return, offers maximum optimisation at high volume. For most industrial projects (a few hundred to a few thousand units per year), Jetson is the most rational choice.

Industry 4.0 positioning

Industry 4.0 hinges on deploying intelligence as close as possible to machines: vision-based quality control, predictive maintenance, real-time process optimisation. Jetson modules slot naturally into this architecture thanks to their connectivity (PCIe, USB, Ethernet, CSI camera interfaces) and their compatibility with industrial standards. The choice of memory technology (DDR4/DDR5) and the security of AI systems are aspects we integrate from the architecture phase to ensure long-term solution viability.

Embedded AI project with Jetson? AESTECHNO expertise

Building a vision/AI system on NVIDIA Jetson? Our engineers help you with:

  • Custom carrier board design (Orin Nano, Orin NX, AGX Orin)
  • Multi-channel camera and sensor integration
  • CUDA/TensorRT application development
  • Thermal management and industrialisation

30-minute audit (free)

Why choose AESTECHNO?

  • 10+ years of expertise in embedded AI and vision systems
  • 100% first-pass CE/FCC certification rate
  • French design house based in Montpellier

Article written by Hugues Orgitello, electronic design engineer and founder of AESTECHNO.


FAQ: Jetson processors

What is the difference between Jetson Nano, Xavier NX, Orin and AGX Orin?
Jetson Nano: entry-level, 128 CUDA cores, 4 GB RAM, ~0.5 TOPS (472 GFLOPS FP16), ~10 W, ideal for prototyping and education. Xavier NX: mid-range, 384 CUDA cores, 8-16 GB, 21 TOPS, 15 W, robotics and drones. Orin Nano: 1024 CUDA cores, 8 GB, 40 TOPS, 15 W, new generation. AGX Orin: high-end, up to 2048 CUDA cores, 64 GB, 275 TOPS, 60 W, autonomous vehicles and industry. Choose based on AI inference needs and budget.

Jetson vs Raspberry Pi: when do you pick Jetson?
Raspberry Pi: generic CPU (ARM Cortex), no dedicated AI GPU, low cost but unsuited to heavy inference (object detection above 5 FPS is hard). Jetson: NVIDIA GPU with AI-optimised Tensor cores, CUDA/cuDNN/TensorRT acceleration, real-time inference (30-60 FPS on complex neural networks). Pick Jetson for: real-time computer vision, embedded deep learning, autonomous robotics. Keep Raspberry Pi for: generic applications without heavy AI workloads.

Which AI frameworks are supported on Jetson?
Native support: TensorFlow, PyTorch, ONNX, TensorRT (NVIDIA optimisation), DeepStream (video pipelines), CUDA, cuDNN. The Jetson Inference library provides pre-trained models (classification, detection, segmentation) ready to use. The JetPack SDK bundles Ubuntu, GPU drivers, AI libraries and development tools. Excellent compatibility with the standard Python/AI ecosystem and easy migration from an NVIDIA workstation.

How do you handle thermal dissipation on Jetson modules?
Jetson Nano/Orin Nano (10-15 W): a passive heatsink suffices in a ventilated enclosure. Xavier NX (15-20 W): a fan or oversized heatsink is recommended. AGX Orin (30-60 W): active cooling is mandatory (fan + copper heatsink). For harsh industrial environments: IP65 enclosures with conduction cooling to an external metal chassis. Automatic thermal throttling protects the SoC but degrades performance, size the thermal solution properly.

Can Jetson be used in harsh industrial environments (-40 °C to +85 °C)?
Yes, with Jetson Industrial modules (extended range -25 °C to +80 °C) and a tailored design. Precautions: active cooling/heating depending on ambient temperature, stabilised power supply (industrial over/under-voltage protection), industrial eMMC flash storage (extended write cycles), conformal coating (humidity/dust protection). AESTECHNO designs hardened Jetson embedded systems for food-processing, steel manufacturing and outdoor (IP67) deployments.