Our educational resources are designed to give you hands-on, practical instruction about using the Jetson platform, including the NVIDIA Jetson AGX Xavier, Jetson TX2, Jetson TX1 and Jetson Nano Developer Kits. With step-by-step videos from our in-house experts, you will be up and running with your next project in no time.


Hello AI World — Meet Jetson Nano

Find out more about the hardware and software behind Jetson Nano. See how you can create and deploy your own deep learning models, and build autonomous robots and smart devices powered by AI.

AI for Makers — Learn with JetBot

Want to take your next project to a whole new level with AI? JetBot is an open source DIY robotics kit that demonstrates how easy it is to use Jetson Nano to build new AI projects.

Creating Intelligent Machines with the Isaac SDK

Learn to program a basic Isaac codelet to control a robot, create a robotics application using the Isaac compute-graph model, test and evaluate your application in simulation, and deploy it to a robot equipped with an NVIDIA Jetson.

Use NVIDIA’s DeepStream and Transfer Learning Toolkit to Deploy Streaming Analytics at Scale

Learn about the latest tools for overcoming the biggest challenges in developing streaming analytics applications for video understanding at scale. NVIDIA’s DeepStream SDK framework frees developers to focus on the core deep learning networks and IP…

Jetson AGX Xavier and the New Era of Autonomous Machines

Learn about the Jetson AGX Xavier architecture and how to get started developing cutting-edge applications with the Jetson AGX Xavier Developer Kit and JetPack SDK. You’ll also explore the latest advances in autonomy for robotics and intelligent devices.

Streamline Deep Learning for Video Analytics with DeepStream SDK 2.0

Learn how AI-based video analytics applications using DeepStream SDK 2.0 for Tesla can transform video into valuable insights for smart cities. Our latest version offers a modular plugin architecture and a scalable framework for application development. It comes with the most frequently used plugins for multi-stream decoding/encoding, scaling, color space conversion, tracking…

Deep Reinforcement Learning in Robotics with NVIDIA Jetson

Discover the creation of autonomous reinforcement learning agents for robotics in this NVIDIA Jetson webinar. Learn about modern approaches in deep reinforcement learning for implementing flexible tasks and behaviors like pick-and-place and path planning in robots.

TensorFlow Models Accelerated for NVIDIA Jetson

The TensorFlow models repository offers a streamlined procedure for training image classification and object detection models. In this tutorial we will discuss TensorRT integration in TensorFlow, and how it may be used to accelerate models sourced from the TensorFlow models repository for use on NVIDIA Jetson.

TensorFlow to TensorRT on Jetson

NVIDIA GPUs already provide the platform of choice for Deep Learning Training today. This whitepaper investigates Deep Learning Inference on a GeForce Titan X and Tegra TX1 SoC. The results show that GPUs …

Develop and Deploy Deep Learning Services at the Edge with IBM

IBM's edge solution enables developers to securely and autonomously deploy Deep Learning services on many Linux edge devices including GPU-enabled platforms such as the Jetson TX2. Leveraging JetPack 3.2's Docker support, developers can easily build, test, and deploy complex cognitive services with GPU access for vision and audio inference, analytics, and other deep learning services.

Building Advanced Multi-Camera Products with Jetson

NVIDIA Jetson is the fastest computing platform for AI at the edge. With powerful imaging capabilities, it can capture up to six images and offers real-time processing for Intelligent Video Analytics (IVA). Learn how our camera partners provide product development support in addition to image tuning services for other advanced solutions such as frame-synchronized multi-images.

Deep Learning in MATLAB

Learn how you can use MATLAB to build your computer vision and deep learning applications and deploy them on NVIDIA Jetson.

Get Started with the JetPack Camera API

Learn about the new JetPack Camera API and start developing camera applications using the CSI and ISP imaging components available with the Jetson platform.

Embedded Deep Learning with NVIDIA Jetson

Watch this free webinar to get started developing applications with advanced AI and computer vision using NVIDIA's deep learning tools, including TensorRT and DIGITS.

Build Better Autonomous Machines with NVIDIA Jetson

Watch this free webinar to learn how to prototype, research, and develop a product using Jetson. The Jetson platform enables rapid prototyping and experimentation with performant computer vision, neural networks, imaging peripherals, and complete autonomous systems.

Breaking New Frontiers in Robotics and Edge Computing with AI

Watch Dustin Franklin, GPGPU developer and systems architect from NVIDIA’s Autonomous Machines team, cover the latest tools and techniques to deploy advanced AI at the edge in this webinar replay. Get up to speed on recent developments in robotics and deep learning.

Two Days to a Demo

Two Days to a Demo is our introductory series of deep learning tutorials for deploying AI and computer vision to the field with NVIDIA Jetson AGX Xavier, Jetson TX1 and Jetson TX2.


Get Started with NVIDIA Jetson Nano Developer Kit

NVIDIA® Jetson Nano™ Developer Kit is a small, powerful computer that lets you run multiple neural networks in parallel for applications like image classification, object detection, segmentation, and speech processing.

Jetson AGX Xavier Developer Kit - Introduction

The NVIDIA Jetson AGX Xavier Developer Kit is the latest addition to the Jetson platform. It’s an AI computer for autonomous machines, delivering the performance of a GPU workstation in an embedded module under 30W. Jetson AGX Xavier is designed for robots, drones and other autonomous machines.

Jetson AGX Xavier Developer Kit Initial Setup

This video will quickly help you configure your NVIDIA Jetson AGX Xavier Developer Kit, so you can get started developing with it right away. In addition to this video, please see the user guide (linked below) for full details about developer kit interfaces and the NVIDIA JetPack SDK.

NVIDIA System Profiler - Introduction

An introduction to the latest NVIDIA Tegra System Profiler. Includes a UI walkthrough and setup details for Tegra System Profiler on the NVIDIA Jetson platform. Download and learn more here.

Multimedia API Overview

This video gives an overview of the Jetson multimedia software architecture, with emphasis on camera, multimedia codec, and scaling functionality to jump start flexible yet powerful application development.

Develop a V4L2 Sensor Driver

The video covers the camera software architecture and discusses what it takes to develop a clean, bug-free sensor driver that conforms to the V4L2 media controller framework.

Jetson Security and Secure Boot

This video gives an overview of security features for the Jetson product family and explains in detailed steps the secure boot process, fusing, and deployment aspects.

V4L2 Sensor Driver Development Tutorial

This video will dive deep into the steps of writing a complete V4L2-compliant driver for an image sensor that connects to the NVIDIA Jetson platform over MIPI CSI-2. It describes the MIPI CSI-2 video input, implementing the driver registers, and tools for conducting verification.

Episode 0: Introduction to OpenCV

Learn to write your first ‘Hello World’ program on Jetson with OpenCV. You’ll learn a simple compilation pipeline with Midnight Commander, CMake, and OpenCV4Tegra’s Mat library as you build for the first time.

Episode 1: CV Mat Container

Learn to work with Mat, OpenCV’s primary image container. You’ll learn memory allocation for a basic image matrix, then test a CUDA image copy with sample grayscale and color images.
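As a rough sketch of the episode’s core idea, the snippet below uses a NumPy array as a stand-in for OpenCV’s Mat (the tutorial itself uses the cv::Mat and CUDA APIs); the dimensions and values here are illustrative only.

```python
import numpy as np

# A stand-in for OpenCV's Mat: an 8-bit grayscale image is a 2-D array,
# and a color image adds a 3-channel dimension (BGR order in OpenCV).
gray = np.zeros((480, 640), dtype=np.uint8)      # rows x cols
color = np.zeros((480, 640, 3), dtype=np.uint8)  # rows x cols x channels

# Plain assignment shares the underlying buffer; an explicit copy allocates
# new memory (analogous to cv::Mat::clone(), or uploading the data for a
# CUDA-side copy as in the episode).
view = gray          # shares memory with gray
clone = gray.copy()  # independent deep copy
gray[0, 0] = 255
assert view[0, 0] == 255   # the view sees the write...
assert clone[0, 0] == 0    # ...the deep copy does not
```

The shared-buffer vs. deep-copy distinction is the main point: Mat assignment in OpenCV is a shallow, reference-counted copy, just like the `view` above.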

Episode 2: Multimedia I/O

Learn to manipulate images from various sources: JPG and PNG files, and USB webcams. Run standard filters such as Sobel, then learn to display and output back to file. Implement a rudimentary video playback mechanism for processing and saving sequential frames.
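A minimal NumPy sketch of the Sobel filter the episode runs (the tutorial itself calls OpenCV’s filter API); the test image is made up for illustration.

```python
import numpy as np

def sobel_x(img):
    """Apply the horizontal Sobel kernel to a grayscale image."""
    k = np.array([[-1, 0, 1],
                  [-2, 0, 2],
                  [-1, 0, 1]], dtype=np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    # Naive 3x3 convolution, skipping the 1-pixel border.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.sum(k * img[y - 1:y + 2, x - 1:x + 2])
    return out

# A vertical step edge produces a strong horizontal-gradient response.
img = np.zeros((5, 6))
img[:, 3:] = 255.0
resp = sobel_x(img)
assert resp[2, 2] > 0      # strong response at the edge
assert resp[2, 0] == 0     # untouched border stays zero
```

OpenCV’s own `Sobel` does the same convolution, just vectorized and with configurable border handling.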

Episode 3: Basic Operations

Start with an app that displays an image as a Mat object, then resize or rotate it, or detect Canny edges, and display the result. Then, to ignore the high-frequency edges of the image’s feather, blur the image and run the edge detector again. With larger blur window sizes, the feather’s edges disappear, leaving behind only the more significant edges present in the input image.
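The blur-then-detect step can be sketched in NumPy as below (the episode uses OpenCV’s GaussianBlur; the kernel construction here is a separable Gaussian, and the image is random noise standing in for fine feather texture).

```python
import numpy as np

def gaussian_blur(img, sigma=1.0, radius=2):
    """Separable Gaussian blur: the smoothing applied before re-running
    the edge detector, so high-frequency detail stops triggering it."""
    xs = np.arange(-radius, radius + 1)
    k = np.exp(-xs**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="edge")
    # Convolve rows, then columns (separability of the Gaussian kernel).
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, tmp)

rng = np.random.default_rng(0)
noisy = rng.normal(size=(32, 32))   # stand-in for fine, high-frequency texture
smooth = gaussian_blur(noisy)
# The image gradient -- what an edge detector thresholds -- shrinks after
# blurring, which is why the feather's edges disappear in the episode.
assert np.abs(np.diff(smooth, axis=1)).mean() < np.abs(np.diff(noisy, axis=1)).mean()
```

A larger `radius`/`sigma` suppresses progressively coarser detail, matching the “larger window size” behavior described above.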

Episode 4: Feature Detection and Optical Flow

Take an input MP4 video file (footage from a vehicle crossing the Golden Gate Bridge) and detect corners in a series of sequential frames, then draw small marker circles around the identified features. Watch as these demarcated features are tracked from frame to frame. Then, color the feature markers depending on how far they move frame to frame. This simple analysis lets points distant from the camera, which move less, be marked as such.
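The per-feature coloring step reduces to classifying displacement magnitudes. A hedged sketch, with made-up coordinates in place of what the episode gets from OpenCV’s corner detection and Lucas–Kanade optical flow:

```python
import numpy as np

# Corner locations in two consecutive frames (illustrative values; the
# episode obtains these from feature detection + pyramidal LK tracking).
prev_pts = np.array([[100.0, 50.0], [300.0, 120.0], [40.0, 200.0]])
next_pts = np.array([[101.0, 50.5], [312.0, 128.0], [40.2, 200.1]])

# Euclidean displacement of each tracked feature between frames.
disp = np.linalg.norm(next_pts - prev_pts, axis=1)

# Color markers by motion: under forward camera motion, small displacement
# suggests a point far from the camera (2.0 px is an arbitrary threshold).
labels = np.where(disp < 2.0, "far", "near")
```

In the tutorial this label simply selects the marker color drawn on each frame.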

Episode 5: Descriptor Matching and Object Detection

Use features and descriptors to track the car from the first frame as it moves from frame to frame. Store (ORB) descriptors in a Mat and match the features with those of the reference image as the video plays. Learn to filter out extraneous matches with the RANSAC algorithm. Then multiply points by a homography matrix to create a bounding box around the identified object. The result isn’t perfect, but try different filtering techniques and apply optical flow to improve on the sample implementation. Getting good at computer vision requires both parameter-tweaking and experimentation.
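The final step, multiplying points by the homography matrix, can be sketched as below. The homography here is a pure translation so the result is easy to verify; in the episode it comes from RANSAC-filtered ORB matches.

```python
import numpy as np

def project(H, pts):
    """Map 2-D points through a 3x3 homography using homogeneous coordinates."""
    ones = np.ones((len(pts), 1))
    homog = np.hstack([pts, ones]) @ H.T   # lift to homogeneous, transform
    return homog[:, :2] / homog[:, 2:3]    # divide out the projective scale

# Reference-image bounding-box corners, and an illustrative homography
# (translation by (10, 5); a real one also encodes rotation/perspective).
corners = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
box = project(H, corners)
assert np.allclose(box, corners + [10.0, 5.0])
```

Drawing lines between the four transformed corners yields the bounding box around the tracked object.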

Episode 6: Face Detection

Use cascade classifiers to detect objects in an image. Implement a high-dimensional function and store evaluated parameters in order to detect faces using a pre-trained Haar classifier. Then, to avoid false positives, apply a normalization function and retry the detector. Classifier experimentation and creating your own set of evaluated parameters are discussed via the OpenCV online documentation.
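The trick that makes Haar cascades fast is the integral image, which lets any rectangular feature be summed in four lookups. A minimal sketch of that data structure (the cascade logic itself lives inside OpenCV’s pre-trained classifier):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[0:y+1, 0:x+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] via four integral-image lookups, so the cost
    is constant regardless of rectangle size -- the key to fast cascades."""
    total = ii[y1 - 1, x1 - 1]
    if y0 > 0:
        total -= ii[y0 - 1, x1 - 1]
    if x0 > 0:
        total -= ii[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total += ii[y0 - 1, x0 - 1]
    return total

img = np.arange(16.0).reshape(4, 4)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 3) == img[1:3, 1:3].sum()
```

A Haar feature is just the difference of two or three such rectangle sums, thresholded at each stage of the cascade.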

Episode 7: Detecting Simple Shapes Using Hough Transform

Use Hough transforms to detect lines and circles in a video stream. Call the Canny edge detector, then use the HoughLines function to try various points on the output image to detect line segments and closed loops. These lines and circles are returned in a vector and then drawn on top of the input image. Adjust the parameters of the circle detector to avoid false positives; begin by applying a Gaussian blur, similar to a step in Episode 3.
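Behind HoughLines is a voting accumulator in (rho, theta) space; every edge pixel votes for all lines through it, and peaks mark detected lines. A hedged, minimal re-implementation of that idea (OpenCV’s version is far more optimized):

```python
import numpy as np

def hough_lines(edge_pts, shape, n_theta=180):
    """Accumulate votes in (rho, theta) space for a list of (y, x) edge pixels."""
    h, w = shape
    diag = int(np.ceil(np.hypot(h, w)))          # max possible |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for y, x in edge_pts:
        # Line through (x, y) with normal angle theta: rho = x cos + y sin.
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # shift rho to be non-negative
    return acc, thetas, diag

# Edge pixels along the horizontal line y = 5 all vote for the same bin.
pts = [(5, x) for x in range(40)]
acc, thetas, diag = hough_lines(pts, (10, 45))
rho_i, th_i = np.unravel_index(acc.argmax(), acc.shape)
# Peak at theta = 90 degrees (the line's normal is vertical) and rho = 5.
assert np.isclose(thetas[th_i], np.pi / 2)
assert rho_i - diag == 5
```

HoughCircles works the same way with a three-parameter (center, radius) accumulator, which is why blurring first matters even more there.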

Episode 8: Monocular Camera Calibration

Learn how to calibrate a camera to eliminate radial distortions for accurate computer vision and visual odometry. Using the concept of a pinhole camera, model the majority of inexpensive consumer cameras. Using several images with a chessboard pattern, detect the features of the calibration pattern, and store the corners of the pattern. Using this series of images, solve for the variables of the non-linear relationship between world space and image space. Then apply rotation, translation, and distortion coefficients to modify the input image such that the input camera feed matches the pinhole camera model to less than a pixel of error. Lastly, review tips for accurate monocular calibration.
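The world-to-image relationship being fitted can be sketched directly. Below is a hedged NumPy version of pinhole projection with two radial distortion terms (k1, k2); the intrinsics and pose values are illustrative, and calibration is the inverse problem of solving for them from chessboard corners.

```python
import numpy as np

def project_point(X, K, R, t, dist):
    """Project a 3-D world point to pixel coordinates with the pinhole model
    plus radial distortion (k1, k2) -- the relationship calibration fits."""
    Xc = R @ X + t                        # world frame -> camera frame
    x, y = Xc[0] / Xc[2], Xc[1] / Xc[2]   # perspective divide
    r2 = x * x + y * y
    k1, k2 = dist
    d = 1 + k1 * r2 + k2 * r2 * r2        # radial distortion factor
    u = K[0, 0] * d * x + K[0, 2]         # apply focal length and
    v = K[1, 1] * d * y + K[1, 2]         # principal point offset
    return np.array([u, v])

# Illustrative intrinsics: 800 px focal length, principal point (320, 240).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 4.0])

# With zero distortion, a point on the optical axis lands on the principal point.
uv = project_point(np.array([0.0, 0.0, 0.0]), K, R, t, (0.0, 0.0))
assert np.allclose(uv, [320.0, 240.0])
```

Undistorting the camera feed, as in the video, amounts to inverting the `d` factor per pixel once k1 and k2 are known.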