WHAT IS GPU COMPUTING?

GPGPU, CUDA and Kepler explained

WHAT IS GPU-ACCELERATED COMPUTING?

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate scientific, engineering, and enterprise applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient datacenters in government labs, universities, enterprises, and small and medium-sized businesses around the world.

How Applications Accelerate with GPUs

GPU-accelerated computing offers unprecedented application performance by offloading compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user's perspective, applications simply run significantly faster.
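
As a rough illustration of this offloading model, here is a minimal CUDA C sketch (not taken from this page; the kernel and variable names are made up for illustration) in which one compute-intensive loop is moved onto the GPU while the surrounding program logic stays on the CPU:

  #include <cuda_runtime.h>
  #include <stdio.h>
  #include <stdlib.h>

  // Compute-intensive portion: each GPU thread handles one array element.
  __global__ void vectorAdd(const float *a, const float *b, float *c, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n)
          c[i] = a[i] + b[i];
  }

  int main(void)
  {
      const int n = 1 << 20;
      size_t bytes = n * sizeof(float);

      // The remainder of the code runs on the CPU as usual.
      float *a = (float *)malloc(bytes);
      float *b = (float *)malloc(bytes);
      float *c = (float *)malloc(bytes);
      for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

      // Offload: copy inputs to GPU memory, launch the kernel, copy results back.
      float *d_a, *d_b, *d_c;
      cudaMalloc(&d_a, bytes);
      cudaMalloc(&d_b, bytes);
      cudaMalloc(&d_c, bytes);
      cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
      cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

      int threads = 256;
      int blocks = (n + threads - 1) / threads;
      vectorAdd<<<blocks, threads>>>(d_a, d_b, d_c, n);
      cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);

      printf("c[0] = %f\n", c[0]);  // back on the CPU

      cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
      free(a); free(b); free(c);
      return 0;
  }

Only the vectorAdd kernel executes on the GPU; initialization, cleanup, and the final printf all run on the CPU, which is exactly the division of work described above.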

How GPU Acceleration Works
 

CPU VERSUS GPU

A simple way to understand the difference between a CPU and GPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing while a GPU consists of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.

 

GPUs have thousands of cores to process parallel workloads efficiently

Check out the video clip below for an entertaining CPU-versus-GPU demonstration.
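
To make that contrast concrete, here is a short CUDA C sketch (illustrative names, not from this page) of the same array operation written as a serial loop for the CPU and as a kernel in which each of many thousands of GPU threads handles a single element:

  // CPU: a few powerful cores step through the elements one after another.
  void scale_cpu(float *data, float factor, int n)
  {
      for (int i = 0; i < n; ++i)
          data[i] *= factor;
  }

  // GPU: the loop disappears; each thread scales one element, and the
  // hardware runs large numbers of these threads at the same time.
  __global__ void scale_gpu(float *data, float factor, int n)
  {
      int i = blockIdx.x * blockDim.x + threadIdx.x;
      if (i < n)
          data[i] *= factor;
  }

  // Example launch covering n elements with 256 threads per block:
  // scale_gpu<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);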


Hundreds of industry-leading applications are already GPU-accelerated. Find out if the applications you use are GPU-accelerated by looking in our application catalog.

GET STARTED TODAY

There are three basic approaches to adding GPU acceleration to your applications:
  • Dropping in GPU-optimized libraries (see the sketch after this list)
  • Adding compiler “hints” to auto-parallelize your code
  • Using extensions to standard languages like C and Fortran
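
As an example of the first approach, the sketch below is a rough illustration (not an official sample) that assumes the cuBLAS library shipped with the CUDA Toolkit; it replaces a hand-written SAXPY loop with a single call to a GPU-optimized library routine:

  #include <cublas_v2.h>
  #include <cuda_runtime.h>

  // Computes y = alpha * x + y on the GPU via cuBLAS.
  void saxpy_with_cublas(int n, float alpha, const float *x, float *y)
  {
      float *d_x, *d_y;
      cudaMalloc(&d_x, n * sizeof(float));
      cudaMalloc(&d_y, n * sizeof(float));

      cublasHandle_t handle;
      cublasCreate(&handle);

      // Copy the vectors to the GPU, call the library routine, copy the result back.
      cublasSetVector(n, sizeof(float), x, 1, d_x, 1);
      cublasSetVector(n, sizeof(float), y, 1, d_y, 1);
      cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);
      cublasGetVector(n, sizeof(float), d_y, 1, y, 1);

      cublasDestroy(handle);
      cudaFree(d_x);
      cudaFree(d_y);
  }

A program like this would typically be built with nvcc and linked against cuBLAS (for example, with -lcublas); the library then decides how to run the computation efficiently on the GPU.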

Learning how to use GPUs with the CUDA parallel programming model is easy.

For free online classes and developer resources, visit the CUDA Zone.

VISIT CUDA ZONE