VisionWorks Toolkit Reference | December 18, 2015 | 1.2 Release
Before installing NVIDIA® VisionWorks™, ensure you have met the following prerequisites:
$ sudo apt-get install g++
$ sudo apt-get install make
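As a quick sanity check, you can confirm that both tools are available before proceeding (an optional step; the reported versions will vary by distribution):

$ g++ --version
$ make --version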
The installation of VisionWorks consists of two main steps: installing CUDA for your platform, and then installing the VisionWorks packages.
The following sections contain detailed instructions for the installation of these packages. These instructions use the following variables:

- <distro> specifies the OS version (e.g., ubuntu1404).
- <architecture> specifies the architecture (e.g., amd64, armhf, arm64).
- <ocv_version> specifies the OpenCV package release version (e.g., 2.4.12.3).
- <cuda_version> specifies the CUDA package release version (e.g., 7.0).
- <visworks_ver> specifies the VisionWorks version (e.g., 1.2).
- <platform> specifies the supported device.

CUDA, a technology invented by NVIDIA, provides a parallel computing platform and programming model that enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU). Follow the installation instructions based on your platform:
Download the CUDA Debian package from NVIDIA CUDA Downloads.
To install the package, execute the following commands:
$ sudo dpkg -i cuda-repo-<distro>_<cuda_version>_amd64.deb
$ sudo apt-get update
$ sudo apt-get install cuda
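For example, with <distro> set to ubuntu1404 and <cuda_version> set to 7.0 (illustrative values only; use the actual name of the package you downloaded), the first command would expand to:

$ sudo dpkg -i cuda-repo-ubuntu1404_7.0_amd64.deb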
CUDA packages are installed to the following location on your host system:
/usr/local/cuda-<cuda_version>
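To verify the toolkit installed correctly (an optional check; nvcc is not yet on the PATH at this stage), invoke the compiler by its full path:

$ /usr/local/cuda-<cuda_version>/bin/nvcc --version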
Download the CUDA version defined in the corresponding Platforms section for the target operating system from NVIDIA CUDA Downloads.
The armhf foreign architecture must be enabled before the cross-armhf toolkit can be installed. To enable armhf as a foreign architecture, execute the following commands:
$ sudo sh -c 'echo "foreign-architecture armhf" >> /etc/dpkg/dpkg.cfg.d/multiarch'
$ sudo dpkg --add-architecture armhf
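You can confirm that armhf was registered as a foreign architecture (an optional check); the output should include armhf:

$ dpkg --print-foreign-architectures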
To complete the installation, execute the following:
$ sudo dpkg -i cuda-repo-<distro>_<cuda_version>_amd64.deb
$ sudo apt-get update
$ sudo apt-get install cuda cuda-cross-armhf
The cross-build installation of CUDA includes the standalone development files, which are installed under the same location on your host system:
/usr/local/cuda-<cuda_version>
CUDA installation on Jetson TK1 Pro, NVIDIA® DRIVE™ PX, NVIDIA® DRIVE™ CX and E2580 Vibrante Systems is part of the Vibrante PDK installation and must be carried out on the host before rootfs setup and first boot of the target using:
$ bash vibrante-<platform|ver>-cuda-<cuda_version>.run
The CUDA toolkit is then installed by the following self-deleting shell script, which is intended to be run once from the target console:
$ vibrante-setup.sh install-run-once-pkgs
This installs all deferred run-once scripts in the target's /etc/rcS.d directory, which is why it is important to run the CUDA installer before flashing and first boot of the target.
The following warning message may appear when launching a CUDA application and can safely be ignored:

libkmod: ERROR ../libkmod/libkmod.c:554 kmod_search_moddep: could not open moddep file '/lib/modules/3.10.17-g3850ea5/modules.dep.bin'
Execute the following commands:
$ sudo dpkg -i cuda-repo-<distro>_<cuda_version>_<architecture>.deb
$ sudo apt-get update
$ sudo apt-get install cuda-toolkit-<cuda_version>
To access the CUDA binaries and libraries from any folder on Linux, add the CUDA location to your system's PATH and LD_LIBRARY_PATH environment variables. The exact steps depend on whether your target device has a 32-bit or 64-bit CPU.
For 64-bit systems (e.g., desktop PCs, DRIVE PX, DRIVE CX and E2580 boards):
$ echo "# Add 64-bit CUDA library & binary paths:" >> ~/.bashrc $ echo "export PATH=/usr/local/cuda-<cuda_version>/bin:$PATH" >> ~/.bashrc $ echo "export LD_LIBRARY_PATH=/usr/local/cuda-<cuda_version>/lib64:$LD_LIBRARY_PATH" >> ~/.bashrc $ source ~/.bashrc
For 32-bit systems (e.g., most ARM systems):
$ echo "# Add 32-bit CUDA library & binary paths:" >> ~/.bashrc $ echo "export PATH=/usr/local/cuda-<cuda_version>/bin:$PATH" >> ~/.bashrc $ echo "export LD_LIBRARY_PATH=/usr/local/cuda-<cuda_version>/lib:$LD_LIBRARY_PATH" >> ~/.bashrc $ source ~/.bashrc
NVIDIA provides VisionWorks as a local repository Debian package named libvisionworks-repo_<visworks_ver>_<architecture>.deb. Installing it creates a local repository of all the VisionWorks Debian packages and points the OS software manager to that new repository. You then install the VisionWorks components using apt-get or any other Debian package manager; all dependent libraries are installed automatically. Follow the directions for installing VisionWorks based on your platform:
$ sudo dpkg -i libvisionworks-repo_<visworks_ver>_<architecture>.deb
$ sudo apt-get update
$ sudo apt-get install libvisionworks libvisionworks-dev libvisionworks-docs
$ sudo add-apt-repository universe
$ sudo dpkg -i libvisionworks-repo_<visworks_ver>_<architecture>.deb
$ sudo apt-get update
$ sudo apt-get install libvisionworks libvisionworks-dev libvisionworks-docs
At this point, the VisionWorks libraries, headers, samples, documentation, and supporting files are installed on your Jetson or desktop device.
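To confirm the packages were installed, you can query dpkg (an optional check):

$ dpkg -l | grep visionworks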
The following table shows various directories where the VisionWorks packages are located after installation on Jetson Pro, Jetson TK1 and Desktop platforms.
| Package Name | Description | Installed Location |
|---|---|---|
| libvisionworks | Main package with pre-built shared libraries. | /usr/lib |
| libvisionworks-dev | Development package with headers, documentation, and source code for demos. | /usr/include, /usr/lib, /usr/lib/pkgconfig, /usr/share/visionworks |
| libvisionworks-docs | Documentation package for this release of VisionWorks. | /usr/share/visionworks/docs |
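Because the development package installs pkg-config metadata under /usr/lib/pkgconfig, applications can be compiled against VisionWorks via pkg-config. The following is a minimal sketch that assumes the module is named visionworks; check the actual .pc file name under /usr/lib/pkgconfig on your system:

$ pkg-config --cflags --libs visionworks
$ g++ my_app.cpp $(pkg-config --cflags --libs visionworks) -o my_app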
After VisionWorks installation, use the installed script to copy the samples from /usr/share/visionworks/sources to a directory with write access, and compile them using make:
$ /usr/share/visionworks/sources/install-samples.sh ~/
$ cd ~/VisionWorks-<visworks_ver>-Samples/
$ make
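Once the build finishes, the compiled sample executables can be located from the samples directory. The exact output layout may vary between releases, so a generic way to list them (an illustrative check, not part of the official instructions) is:

$ find ~/VisionWorks-<visworks_ver>-Samples -type f -executable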
This section describes how to uninstall VisionWorks and CUDA.
Uninstall VisionWorks packages with the following command:
$ sudo apt-get remove --purge libvisionworks-dev libvisionworks-docs libvisionworks
Then uninstall the local repository package:
$ sudo apt-get purge libvisionworks-repo
Update apt cache after removing the repository:
$ sudo apt-get update
Uninstall the CUDA repository package and the CUDA toolkit package, run the autoremove command, and update the apt cache using the following commands:
$ sudo apt-get remove --purge cuda-repo-<distro>
$ sudo apt-get remove --purge cuda-toolkit-<cuda_version>
$ sudo apt-get autoremove
$ sudo apt-get update
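To confirm that no VisionWorks or CUDA packages remain installed (an optional check), query dpkg; no output indicates a clean removal:

$ dpkg -l | grep -E 'visionworks|cuda'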
The autoremove command removes all abandoned packages from your system. An abandoned package is one that was installed as a dependency of another package that has since been removed.

There are optional VisionWorks samples and demos that leverage the OpenCV for Tegra packages. If the OpenCV for Tegra packages are installed prior to VisionWorks installation, all samples will be available. If OpenCV for Tegra is installed after VisionWorks, you must rebuild the samples before the OpenCV for Tegra NPP Interop sample becomes available.
If the target system has another OpenCV version pre-installed, it may conflict with the newer version.

If the old OpenCV library was installed from a public Ubuntu repository, it is automatically replaced by the package manager.

If the old OpenCV library was installed via the make install command, it must be removed manually:

- Remove the /usr/local/include/opencv* folders.
- Remove the /usr/local/lib/libopencv* files.
- Remove the /usr/local/bin/opencv* files.
- Remove the /usr/local/share/OpenCV folder.
- Remove the /usr/local/lib/pkgconfig/opencv.pc file.
More information on getting started with CUDA is available in the NVIDIA CUDA Getting Started Guide for Linux.

Instructions on cross-compiling are available in the NVIDIA CUDA Compiler Driver NVCC documentation.

More information on CUDA is also available on the NVIDIA CUDA website.