Accelerate Applications on GPUs with OpenACC Directives
OpenACC allows parallel programmers to provide simple hints, known as “directives,” to the compiler, identifying which areas of code to accelerate, without requiring programmers to modify or adapt the underlying code itself. By exposing parallelism to the compiler, directives allow the compiler to do the detailed work of mapping the computation onto the accelerator.
- Open standard targeting accelerators, such as NVIDIA GPUs or Intel Xeon Phi coprocessors.
- Code compiled with OpenACC compilers for NVIDIA GPUs can be profiled using the NVIDIA Visual Profiler or the command-line version, nvprof.
- OpenACC directives can be used in conjunction with OpenMP directives, GPU libraries, MPI, and CUDA C or Fortran code.
Watch the following CUDACast to see what OpenACC is all about:
- Look at the OpenACC Resources.
- Try OpenACC by taking a self-paced lab on nvidia.qwiklab.com. These labs require only a supported web browser and a network that allows Web Sockets. Click here to verify that your network and system support Web Sockets: in the "Web Sockets (Port 80)" section, all check marks should be green.
- Install the 30-day trial compiler from PGI, or see if your institution already has an OpenACC compiler available.
- Browse through educational content at the OpenACC website.
- Browse and ask questions about OpenACC at stackoverflow.com or at NVIDIA’s DevTalk forums.
- Look for and attend an OpenACC workshop such as the ones hosted by the Pittsburgh Supercomputing Center.
- Watch & learn about the recently announced OpenACC 2.0 specification from PGI’s Michael Wolfe.
As reported on openacc.org, customers typically see a 3-6x speed-up when running OpenACC-accelerated code on NVIDIA GPUs versus CPU-only systems. In most cases, the number of OpenACC directives added is tiny relative to the size of the code base they are applied to.