Accelerated Computing
Solving the World's Most Important Challenges

WHAT IS GPU-ACCELERATED COMPUTING?

GPU-accelerated computing is the use of a graphics processing unit (GPU) together with a CPU to accelerate deep learning, analytics, and engineering applications. Pioneered in 2007 by NVIDIA, GPU accelerators now power energy-efficient data centers in government labs, universities, enterprises, and small and medium-sized businesses around the world. They play a huge role in accelerating applications in platforms ranging from artificial intelligence to cars, drones, and robots.

HOW GPUs ACCELERATE SOFTWARE APPLICATIONS

GPU-accelerated computing offloads compute-intensive portions of the application to the GPU, while the remainder of the code still runs on the CPU. From a user's perspective, applications simply run much faster.

How GPU Acceleration Works

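Below is a minimal sketch of this offload pattern in CUDA C: the data-parallel vector addition runs on the GPU, while setup and the final printout stay on the CPU. The array size, names, and values are illustrative assumptions rather than anything from this page; it can be built with the CUDA compiler, for example nvcc offload.cu -o offload.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

/* Compute-intensive portion: each GPU thread adds one pair of elements. */
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;              /* one million elements (illustrative) */
    size_t bytes = n * sizeof(float);

    /* Sequential setup stays on the CPU. */
    float *a = (float *)malloc(bytes);
    float *b = (float *)malloc(bytes);
    float *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Copy inputs to GPU memory, launch the kernel, copy the result back. */
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);

    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);

    /* The remainder of the code still runs on the CPU. */
    printf("c[0] = %f\n", c[0]);        /* expect 3.0 */

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(a); free(b); free(c);
    return 0;
}
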
GPU vs CPU Performance

A simple way to understand the difference between a GPU and a CPU is to compare how they process tasks. A CPU consists of a few cores optimized for sequential serial processing, while a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed for handling multiple tasks simultaneously.

 

GPUs have thousands of cores to process parallel workloads efficiently

GPU vs CPU: Which is better?

Check out the video clip below for an entertaining GPU versus CPU comparison.

Video: Mythbusters Demo: GPU vs CPU (01:34)

With over 400 HPC applications accelerated, including 9 of the top 10, all GPU users can experience a dramatic throughput boost for their workloads. Find out if the applications you use are GPU-accelerated in our application catalog (PDF 1.9 MB).

GET STARTED TODAY

There are three basic approaches to adding GPU acceleration to your applications:
  • Dropping in GPU-optimized libraries (see the sketch after this list)
  • Adding compiler "hints" to auto-parallelize your code
  • Using extensions to standard languages like C and Fortran
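
As a sketch of the first approach, the example below drops in NVIDIA's cuBLAS library: a single SAXPY call (y = alpha * x + y) runs on the GPU in place of a hand-written loop. The array size, values, and file name are illustrative assumptions, not taken from this page; build with something like nvcc saxpy.cu -lcublas -o saxpy.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main(void) {
    const int n = 1 << 20;              /* one million elements (illustrative) */
    size_t bytes = n * sizeof(float);
    float alpha = 2.0f;

    /* Host data; setup stays on the CPU. */
    float *x = (float *)malloc(bytes);
    float *y = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 3.0f; }

    /* Device copies of the vectors. */
    float *d_x, *d_y;
    cudaMalloc(&d_x, bytes);
    cudaMalloc(&d_y, bytes);
    cudaMemcpy(d_x, x, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, y, bytes, cudaMemcpyHostToDevice);

    /* The GPU-optimized library call replaces a hand-written loop:
       y = alpha * x + y is computed on the GPU by cuBLAS. */
    cublasHandle_t handle;
    cublasCreate(&handle);
    cublasSaxpy(handle, n, &alpha, d_x, 1, d_y, 1);

    cudaMemcpy(y, d_y, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f\n", y[0]);        /* expect 5.0 */

    cublasDestroy(handle);
    cudaFree(d_x);
    cudaFree(d_y);
    free(x);
    free(y);
    return 0;
}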

Learning how to use GPUs with the CUDA parallel programming model is easy.

For free online classes and developer resources, visit CUDA Zone.

VISIT CUDA ZONE