Deep learning is a subset of AI and machine learning that uses multi-layered artificial neural networks to deliver state-of-the-art accuracy in tasks such as object detection, speech recognition, language translation and others.
Deep learning differs from traditional machine learning techniques in that deep neural networks can automatically learn representations from data such as images, video, or text without hand-coded rules or human domain knowledge. Their highly flexible architectures learn directly from raw data, and their predictive accuracy tends to increase as they are given more data.
Deep learning is responsible for many of the recent breakthroughs in AI, such as Google DeepMind's AlphaGo, self-driving cars, intelligent voice assistants, and many more. With NVIDIA GPU-accelerated deep learning frameworks, researchers and data scientists can cut training runs that would otherwise take days or weeks down to hours or days. When models are ready for deployment, developers can rely on GPU-accelerated inference platforms for the cloud, embedded devices, or self-driving cars to deliver high-performance, low-latency inference for the most computationally intensive deep neural networks.
NVIDIA AI Platform for Developers
Developing AI applications starts with training deep neural networks on large datasets. GPU-accelerated deep learning frameworks offer the flexibility to design and train custom deep neural networks and provide interfaces to commonly used programming languages such as Python and C/C++. Every major deep learning framework, such as TensorFlow and PyTorch, is already GPU-accelerated, so data scientists and researchers can become productive in minutes without any GPU programming.
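As a minimal illustration of "productive without any GPU programming", the sketch below (assuming PyTorch is installed) runs one training step of a small network; the only device-specific code is moving the model and tensors to the chosen device, and it falls back to the CPU when no GPU is present:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# A small fully connected classifier, purely for illustration.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on random data; .to(device) is the only
# device-specific call in the whole loop.
inputs = torch.randn(8, 16).to(device)
targets = torch.randint(0, 4, (8,)).to(device)

optimizer.zero_grad()
loss = loss_fn(model(inputs), targets)
loss.backward()
optimizer.step()
print(f"training-step loss: {loss.item():.4f}")
```

The same loop runs unchanged on a laptop CPU, a desktop GPU, or a multi-GPU server; the framework dispatches to GPU-accelerated kernels whenever the tensors live on a CUDA device.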
For AI researchers and application developers, NVIDIA Volta and Turing GPUs powered by Tensor Cores give you an immediate path to faster training and greater deep learning performance. With Tensor Cores enabled, mixed-precision FP32/FP16 matrix multiplies dramatically increase throughput and reduce AI training times.
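A hedged sketch of how mixed precision looks to a framework user, using PyTorch automatic mixed precision (AMP): operations inside the autocast region run in FP16 where numerically safe (and on Tensor Cores when a Volta/Turing-class GPU is present), while a gradient scaler guards against FP16 underflow. The snippet disables both features and runs in plain FP32 when no GPU is available:

```python
import torch
import torch.nn as nn

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

model = nn.Linear(64, 64).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
# Loss scaling keeps small FP16 gradients from flushing to zero.
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

inputs = torch.randn(32, 64, device=device)
targets = torch.randn(32, 64, device=device)

optimizer.zero_grad()
# Inside autocast, matrix multiplies run in FP16 where safe, FP32 elsewhere.
with torch.cuda.amp.autocast(enabled=use_cuda):
    loss = nn.functional.mse_loss(model(inputs), targets)
scaler.scale(loss).backward()  # backprop through the scaled loss
scaler.step(optimizer)         # unscales gradients, then steps
scaler.update()
```

The key point is that enabling mixed precision is a few lines around an existing training loop, not a rewrite of the model.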
For developers integrating deep neural networks into their cloud-based or embedded applications, the Deep Learning SDK includes high-performance libraries that implement building-block APIs for bringing training and inference directly into their apps. With a single programming model for all GPU platforms, from desktop to data center to embedded devices, developers can start development on their desktop, scale up in the cloud, and deploy to their edge devices with minimal to no code changes.
NVIDIA provides optimized software stacks to accelerate the training and inference phases of the deep learning workflow. Learn more at the links below.
Every AI Framework - Accelerated
Deep learning frameworks offer building blocks for designing, training, and validating deep neural networks through a high-level programming interface. Every major deep learning framework, such as Caffe2, Chainer, Microsoft Cognitive Toolkit, MXNet, PaddlePaddle, PyTorch, and TensorFlow, relies on Deep Learning SDK libraries to deliver high-performance, multi-GPU-accelerated training. As a framework user, it is as simple as downloading a framework and instructing it to use GPUs for training. Learn more about deep learning frameworks and explore these examples to get started quickly:
- Deep Learning Frameworks
- Tensor Core Optimized Model Scripts
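To make "instructing it to use GPUs" concrete, the short sketch below (PyTorch shown as one example; other frameworks expose similar switches) queries what the framework sees. Note that cuDNN, one of the Deep Learning SDK libraries the frameworks build on, is what actually accelerates operations such as convolutions when a GPU is present:

```python
import torch

# What acceleration the framework can see on this machine.
print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
# cuDNN is the Deep Learning SDK library the framework calls into
# for fast convolutions and related primitives on NVIDIA GPUs.
print("cuDNN enabled:", torch.backends.cudnn.enabled)
```

No kernel code is written by the user; selecting a GPU device is the entire opt-in.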
Unified Developer Platform - Development to Deployment
Deep learning frameworks are optimized for every GPU platform, from the Titan V desktop developer GPU to data-center-grade Tesla GPUs. This allows researchers and data science teams to start small and scale out as the data, number of experiments, models, and team size grow. Since Deep Learning SDK libraries are API-compatible across all NVIDIA GPU platforms, when a model is ready to be integrated into an application, developers can test and validate locally on the desktop and, with minimal to no code changes, validate and deploy to the Tesla data center platform, the Jetson embedded platform, or the DRIVE autonomous driving platform. This improves developer productivity and reduces the chance of introducing bugs when going from prototype to production.
- Deep Learning SDK for Desktop and Data Center GPUs
- DriveWorks SDK for DRIVE Platform for Autonomous Vehicles
- Jetpack SDK for Jetson Embedded Systems
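One common way the desktop-to-deployment handoff works in practice is to freeze a trained model into a portable artifact that no longer depends on the Python class definition. The sketch below uses TorchScript as one illustrative path (ONNX and TensorRT are other common routes onto the Tesla, Jetson, and DRIVE platforms); the same serialized file validated on a desktop can then be loaded on the target:

```python
import torch
import torch.nn as nn

# A toy vision model in eval mode, for illustration only.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU()).eval()

# Trace the model with an example input to capture its graph,
# then serialize it to a self-contained artifact.
example = torch.randn(1, 3, 32, 32)
traced = torch.jit.trace(model, example)
traced.save("model_traced.pt")

# On the deployment target, load and run the same artifact
# without the original model-definition code.
reloaded = torch.jit.load("model_traced.pt")
with torch.no_grad():
    out = reloaded(example)
print(out.shape)
```

Because the artifact is self-contained, the validation done on the desktop carries over to the deployment platform with no model-code changes.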
Get Started With Hands-On Training
The NVIDIA Deep Learning Institute (DLI) offers hands-on training for developers, data scientists, and researchers in AI and accelerated computing. Get certified in the fundamentals of computer vision through a hands-on, self-paced online course. Plus, check out two-hour electives on Digital Content Creation, Healthcare, and Intelligent Video Analytics.