What is DeepStream SDK? Why use it?
The DeepStream SDK is a general-purpose streaming analytics SDK that enables system software engineers and developers to build high-performance intelligent video analytics applications on NVIDIA Jetson or NVIDIA Tesla platforms.
What is GStreamer and how do I get started with it?
The DeepStream SDK uses the open-source GStreamer framework to deliver high throughput with low latency. GStreamer is a library for constructing graphs of media-handling components. You can build applications ranging from simple video streaming and playback to complex graphs for processing AI.
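As a small taste of how such a graph is composed, the sketch below links a few stock GStreamer elements on the command line (no DeepStream plugins involved; it assumes the gst-launch-1.0 tool and base plugins are installed):

```shell
# A minimal GStreamer graph: a synthetic test-video source, a format
# converter, and a sink that silently discards buffers. Each "!" links
# the output pad of one element to the input pad of the next.
gst-launch-1.0 videotestsrc num-buffers=100 ! videoconvert ! fakesink
```

Real applications replace these elements with decoders, inference plugins, and renderers while keeping the same graph-linking model.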
What applications are deployable using the DeepStream SDK?
The DeepStream SDK can be used to build end-to-end AI-powered applications that analyze video and sensor data. Popular use cases include retail analytics, parking management, logistics, optical inspection, and operations management.
Where can I find the DeepStream SDK developer guide?
Starting with DeepStream 3.0, the documentation is packaged separately and updated regularly. The full documentation package, including SDK download links, is available from the DeepStream section of the NVIDIA developer site.
DeepStream Version / Platform / OS Compatibility
Jetson platforms:

| | DS 1.0 | DS 1.5 | DS 2.0 | DS 3.0 |
|---|---|---|---|---|
| Platform | Jetson TX1 / TX2 | Jetson TX1 / TX2 | Not supported | Jetson Xavier |
| OS | L4T Ubuntu 16.04 | L4T Ubuntu 16.04 | — | L4T Ubuntu 18.04/16.04 |
| CUDA | CUDA 8.0 | CUDA 9.0 | — | CUDA 10.0 |
| cuDNN | cuDNN 6.0 | cuDNN 7.0.5 | — | cuDNN 7.3 |
| TRT | TRT 2.1 | TRT 3.0 | — | TRT 5.0 |
| OpenCV | OpenCV 2.4.13 | OpenCV 3.3.1 | — | OpenCV 3.3.1 |
| VisionWorks | VisionWorks 1.6 | VisionWorks 1.6 | — | VisionWorks 1.6 |
Tesla (dGPU) platforms:

| | DS 1.0 | DS 1.5 | DS 2.0 | DS 3.0 |
|---|---|---|---|---|
| Platform | P4, P40 | P4, P40 | P4, P40 | P4, P40, Volta, Turing T4 |
| OS | Ubuntu 16.04 LTS | Ubuntu 16.04 LTS | Ubuntu 16.04 LTS | Ubuntu 16.04 LTS |
| GCC | GCC 5.4 | GCC 5.4 | GCC 5.4 | GCC 5.4 |
| CUDA | CUDA 8.0 | CUDA 9.0 | CUDA 9.2 | CUDA 10.0 |
| cuDNN | cuDNN 6.0 | cuDNN 7.0 | cuDNN 7.1 | cuDNN 7.3 |
| TRT | TRT 2.1 | TRT 3.0 | TRT 4.0 | TRT 5.0 |
| Display Driver | ver. 375 | R384 | ver. 396+ | ver. 410+ |
| Video SDK | SDK 7.1 | SDK 7.9 | SDK 7.9 | SDK 8.2 |
| GStreamer | N/A (not included) | GStreamer 1.8.3 | GStreamer 1.8.3 | GStreamer 1.8.3 |
| OpenCV | N/A | OpenCV 2.4.13 | OpenCV 3.4.x | OpenCV 3.4.x |
| Docker Image | Not available | Not available | Not available | Yes |
What are some of the main differences and new DeepStream features across all versions?
| DeepStream 1.0 | DeepStream 2.0 | DeepStream 3.0 |
|---|---|---|
| Platform-specific APIs | Unified APIs across platforms | Container support for Tesla |
| Multi-stream on Tesla | Multi-stream / multi-DNN | Integration with Transfer Learning Toolkit (TLT) |
| Single stream on Jetson | Custom graphs | Dynamic stream management |
| — | — | Support for Kafka for IoT services |
| — | — | Support for 360-degree cameras |
How do I verify if all the DeepStream dependencies are correctly installed?
Check your version of CUDA:
> nvcc --version
Check your version of TensorRT:
> nm -D /usr/lib/<ARCH>/libnvinfer.so.<VERSION> | grep tensorrt_version

For example, on Jetson (aarch64) with TensorRT 5.0.3:

> nm -D /usr/lib/aarch64-linux-gnu/libnvinfer.so.5.0.3 | grep tensorrt_version
What are the built-in plugins?
| Plugin | Description |
|---|---|
| gst-nvvideocodecs | Accelerated video decoders |
| gst-nvstreammux | Stream aggregator: muxing and batching |
| gst-nvinfer | TensorRT-based inference for detection and classification |
| gst-nvtracker | Reference KLT tracker implementation |
| gst-nvosd | On-screen display API to draw bounding boxes and text overlays |
| gst-tiler | Composites frames from multiple sources into a 2D grid |
| gst-eglglesink | Accelerated X11 / EGL-based rendering plugin |
| gst-nvvidconv | Scaling, format conversion, rotation |
| gst-nvdewarp | Dewarping for 360-degree camera input |
| gst-nvmsgbroker | Messaging to the cloud |
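To see the pads, capabilities, and properties that any of these plugins expose, the standard gst-inspect-1.0 tool can be used (shown here for the nvinfer element; this assumes a DeepStream installation with its plugins on the GStreamer plugin path):

```shell
# Print the pad templates, capabilities, and configurable properties
# of the nvinfer element registered by the gst-nvinfer plugin.
gst-inspect-1.0 nvinfer
```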
What are the supported video codecs?
The supported codecs are H.264, H.265, VP8, VP9, and MPEG2/4.
What’s the max number of streams that can be processed with DeepStream?
How about max resolution and fps?
The maximum number of streams that can be processed depends on the application itself: the total stream count may be limited by decode capacity, compute capacity, memory, or memory bandwidth.
The reference examples made available with the DeepStream releases can achieve the following stream counts:
- Jetson AGX
30 streams @1080p 30fps with H.265 format
- Tesla T4
52 streams @1080p 30fps with H.265 format
Where can I find all the DeepStream sample apps?
Directory for built-in sample configs:
<DS installation dir>/samples/configs/deepstream-app/
Built-in sample apps:
DeepStream Sample App
Directory:
<DS installation dir>/sources/apps/sample_apps/deepstream-app
Description: End-to-end example to demonstrate multi-camera streams with 4 cascaded neural networks (1 primary detector and 3 secondary classifiers) with tiled output to display.
DeepStream Test 1
Directory:
<DS installation dir>/sources/apps/sample_apps/deepstream-test1
Description: Simple example of how to use DeepStream elements for single H.264 stream - filesrc, decode, nvstreammux, nvinfer (primary detector), nvosd, renderer.
DeepStream Test 2
Directory:
<DS installation dir>/sources/apps/sample_apps/deepstream-test2
Description: Simple app that builds on top of test1 to show additional attributes like tracking and secondary classification attributes.
DeepStream Test 3
Directory:
<DS installation dir>/sources/apps/sample_apps/deepstream-test3
Description: Simple app that builds on top of test1 to show multiple input sources and batching using nvstreammuxer.
DeepStream Test 4
Directory:
<DS installation dir>/sources/apps/sample_apps/deepstream-test4
Description: This builds on the deepstream-test1 sample for a single H.264 stream (filesrc, decode, nvstreammux, nvinfer, nvosd, renderer) to demonstrate the "nvmsgconv" and "nvmsgbroker" plugins in the pipeline for IoT connectivity. For test4, users must modify the Kafka broker connection string for a successful connection, and the analytics server Docker container must be set up before running the app. The DeepStream Analytics Documentation has more information on setting up analytics servers.
FasterRCNN Object Detector
Directory:
<DS installation dir>/sources/objectDetector_FasterRCNN
Description: FasterRCNN object detector sample.
SSD Object Detector
Directory:
<DS installation dir>/sources/objectDetector_SSD
Description: SSD object detector sample.
Additional apps from Github:
360 Degree Smart Parking App
Sample application for aisle tracking and parking occupancy detection. Demonstrates dewarper functionality for single or multiple 360-degree streams, reads camera calibration parameters from a CSV file, and renders aisle and spot surfaces on screen.
DeepStream SDK for Redaction
Sample application that shows how to use DeepStream to redact faces and license plates in video streams simultaneously. Offers the option to dynamically add or delete channels while the pipeline is running.
Yolo Reference App
Sample application that uses the NvYolo inference plugin, which plays a role similar to nvinfer.
Anomaly Detection reference App
Sample application that uses parallel pipelines to process 2 streams.
What’s the official DeepStream docker image and where do I get it from?
The official DeepStream docker image for NVIDIA Tesla can be downloaded from NVIDIA's container registry (NGC). Note that DeepStream in Docker is currently supported only on NVIDIA Tesla platforms; support for Jetson platforms is coming soon.
Can I make my own Docker image recipes?
You can use the DeepStream container as the base image and add your own custom layers on top of it using a standard technique in Docker.
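A custom recipe might look like the following sketch. The base image tag, application paths, and command are all placeholders to be replaced with the ones for your DeepStream release and application:

```dockerfile
# Hypothetical Dockerfile sketch: layer a custom application on top of
# the DeepStream base image. Replace the image tag and paths as needed.
FROM nvcr.io/nvidia/deepstream:<version>

# Copy the application and its config into the image (placeholder paths).
COPY my-app/ /opt/my-app/
WORKDIR /opt/my-app

# Run the application against its pipeline config on container start.
CMD ["./my-app", "-c", "my_config.txt"]
```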
How can I display graphical output remotely over VNC?
How to verify if X11 is running?
If the host machine is already running X, connecting over VNC is straightforward; otherwise, you must start X first and then the VNC server. To verify whether X is running, check the DISPLAY environment variable.
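A quick shell check (a sketch; adapt the fallback advice to your own config):

```shell
# Check whether an X display is available. If not, DeepStream output
# should go to a fakesink (type=1) or a file (type=3) instead of a window.
x_running() {
    [ -n "$DISPLAY" ]
}

if x_running; then
    echo "X display available at $DISPLAY"
else
    echo "No X display; use type=1 or type=3 in the [sink] config"
fi
```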
Can I run DeepStream if I don't have X running?
If the system is not running X, go to the [sink*] group in your DeepStream config file and set type=1 to select fakesink, or type=3 to save the output to a file. If you are using a V100 or P100, which are compute-only cards without display outputs, you must use type=1 or type=3.
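For example, a sink group writing the output to an MP4 file might look like this (a sketch modeled on the sample configs; the codec, container, and file name values are illustrative):

```ini
[sink0]
enable=1
# type: 1=fakesink, 2=on-screen (EGL), 3=file, 4=RTSP
type=3
# container: 1=mp4, 2=mkv
container=1
# codec: 1=H.264, 2=H.265
codec=1
output-file=out.mp4
```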
How can I run the DeepStream sample app in debug mode?
> deepstream-app -c <config> --gst-debug=<debug #>

For more information on GStreamer's debugging capabilities, see the GStreamer debugging documentation.
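GStreamer debug output can be set globally or per category. The examples below are sketches; the config file name is a placeholder, and higher numbers mean more verbose output:

```shell
# Global debug level 3 for all categories (the config path is a placeholder).
deepstream-app -c my_config.txt --gst-debug=3

# Category-specific: verbose logging for the nvinfer element only.
deepstream-app -c my_config.txt --gst-debug=nvinfer:5
```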