Today, CIOs running enterprise data centers are looking to drive down energy costs and increase performance — goals seemingly at odds. For researchers, scientists and engineers, the power consumption of high-performance computing (HPC) systems can impose crippling limits on their work. On a smaller scale, limited battery life can pull the plug on the access, pleasure, and productivity gains enjoyed by the billions of people participating in the mobile computing revolution. In short, the energy efficiency of computing products affects nearly everyone and the environment in which we live.
GPUs — which have grown from their computer-gaming roots to enhance everything from medical imaging to oil exploration — have recently been redesigned to be the most energy-efficient processors on the market. On a per-instruction basis, GPUs are dramatically more power-efficient than CPUs, which traditionally have handled the bulk of the computations that make computers work.
"GPUs are inherently more energy efficient than other ways of computation because they are optimized for throughput and performance per watt and not absolute performance," said Bill Dally, chief scientist and vice president of research at NVIDIA.
"Most people run out of power before they run out of floor space. This happens at every level, in the machine room, at the rack level, and it happens at the component level," said Dally. "As a company, we are very focused, at each design step, on accounting for every joule of energy dissipated and optimizing our designs to be energy efficient."
NVIDIA researchers are exploring efficiency all the way down to the level of logic gates — the basic building blocks of chips that convert the "0s" and "1s" of computer language into decisions — and all the way up to the power supply.
The work is paying off. The "Fermi" generation of GPUs from NVIDIA requires about 200 picojoules of energy to execute one instruction (that is, a single computing task, like adding two numbers together). By comparison, the most efficient x86 CPUs require 10 times more energy, or 2 nanojoules, to do the same thing. Researchers are working to advance the GPU by another order of magnitude in the coming years.
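The gap is easy to see with a back-of-envelope calculation. The sketch below uses the per-instruction energy figures quoted above; the one-tera-instruction-per-second rate is an illustrative assumption, not a figure from the article.

```python
# Energy per instruction, using the figures quoted above.
gpu_pj_per_instr = 200.0     # "Fermi" GPU: ~200 picojoules per instruction
cpu_pj_per_instr = 2000.0    # efficient x86 CPU: 2 nanojoules = 2,000 pJ

ratio = cpu_pj_per_instr / gpu_pj_per_instr   # the 10x gap stated above

# Power needed to sustain an assumed rate of 10**12 instructions per second.
# Since 1 pJ * 10**12 per second = 1 W, pJ-per-instruction maps directly to watts.
instr_per_sec = 1e12  # illustrative assumption, not from the article
gpu_watts = gpu_pj_per_instr * 1e-12 * instr_per_sec   # ~200 W
cpu_watts = cpu_pj_per_instr * 1e-12 * instr_per_sec   # ~2,000 W
print(ratio, gpu_watts, cpu_watts)
```

At the same instruction rate, the CPU's tenfold energy-per-instruction cost shows up directly as tenfold power draw.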
For smartphone and tablet users, the focus on performance per watt will mean longer battery life and a better overall experience. In the HPC world, it will save energy, costs, and space, and unleash the imaginations of people across disciplines to tackle their biggest challenges.
On the Top500 Supercomputers list — a biannual ranking of supercomputing sites around the world — the number of GPU-powered systems is rapidly growing. Today, three of the five fastest supercomputers in the world are NVIDIA GPU-powered. And these systems are much more energy efficient.
One of the world's fastest supercomputers, China's Tianhe-1A, which uses more than 7,000 NVIDIA Tesla GPUs, uses about half as much power as the CPU-powered Jaguar, number three on the list. The GPU-powered Tsubame 2.0, the fourth fastest supercomputer, is also the second most energy-efficient supercomputer in the world, according to the latest Green500 list. Located at the Tokyo Institute of Technology, Tsubame nearly achieves the performance of Jaguar, but uses 92 percent fewer servers and consumes only 1/7th the power. Tsubame is helping scientists tackle such varied and complex subjects as pulmonary airflow and typhoon simulation.
GPU-powered supercomputers are setting the benchmark in efficiency. And a growing number of research sites are following suit, including the Lincoln cluster at the University of Illinois at Urbana-Champaign, TeraDRE at Purdue University, the Keeneland Project at Georgia Tech, and Nautilus at the National Institute for Computational Sciences in Oak Ridge, Tenn.
In finance, HPC systems handle the complex transactions behind global markets. To reduce costs and save energy, Bloomberg shifted one bond-pricing application from 2,000 CPUs to a single rack of 48 NVIDIA Tesla GPUs. The CPU system cost $4 million to build and $1.2 million a year to power; the GPU rack cost under $150,000, with an annual energy bill of about $30,000. Similarly, the French bank BNP Paribas swapped out a 64-CPU system for a pair of NVIDIA Tesla S1070 systems — just eight GPUs — and cut energy use from 44 kilowatts to 2.9 kilowatts.
In the oil and gas industry, exploring for new energy resources requires taking a sonic image of very large areas and then processing the seismic data. The massive amount of data gathered requires lots of computing horsepower. HESS, a major oil and gas firm in the United States, replaced a 2,000 CPU cluster with 32 Tesla S1070 servers. The GPU-based system consumes just 47 kilowatts, versus 1.34 megawatts drawn by the old cluster. The annual energy bill dropped from $2.3 million to $82,000. Today, more than 20 energy firms are in the process of migrating to GPU-based processing, including Chevron, Schlumberger and BR Petrobras.
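Those figures can be sanity-checked with simple arithmetic. The sketch below assumes a constant power draw and an electricity rate of roughly $0.20 per kWh — the rate is an assumption, not stated in the article — which reproduces both the ~$82,000 and ~$2.3 million annual bills.

```python
HOURS_PER_YEAR = 24 * 365      # 8,760 hours
USD_PER_KWH = 0.20             # assumed electricity rate (not from the article)

def annual_energy_cost(kilowatts: float) -> float:
    """Yearly electricity cost for a constant power draw."""
    return kilowatts * HOURS_PER_YEAR * USD_PER_KWH

old_cpu_cluster = annual_energy_cost(1340.0)   # 1.34 MW  -> ~$2.35M per year
gpu_servers = annual_energy_cost(47.0)         # 47 kW    -> ~$82,000 per year
print(round(old_cpu_cluster), round(gpu_servers))
```

The close match suggests the quoted bills are straightforward power-times-hours figures at a typical commercial electricity rate.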
GPUs are even making research in the packaged-goods industry more energy efficient. Working with Procter & Gamble, researchers at Temple University ran molecular dynamics simulations to find better shampoos and detergents. To increase efficiency, they replaced 32 CPU servers with a single tower server running NVIDIA Tesla C2050 GPUs. Power consumption dropped from 21 kilowatts to 1 kilowatt, and annual energy costs fell from $37,000 to just $2,000.
Battery life poses a similar challenge in mobile computing. NVIDIA's Optimus technology addresses it by optimizing battery life and performance in notebook PCs. Optimus works by automatically directing routine tasks to the integrated GPU and tapping the discrete GPU for multimedia and other tasks that require heavy lifting. This ensures the best notebook experience, from playing the latest games to editing HD video, at the lowest power. As experiences improve across mobile devices, people's expectations rise.
Learn how Optimus works: http://www.nvidia.com/object/optimus_technology.html
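The routing idea behind Optimus can be sketched in a few lines. This is purely an illustrative model of the policy described above — the workload names and the function are hypothetical, not NVIDIA's actual driver logic.

```python
# Hypothetical sketch of Optimus-style GPU routing: demanding workloads go
# to the discrete GPU; routine tasks stay on the low-power integrated GPU.
HEAVY_WORKLOADS = {"3d_game", "hd_video_editing", "gpu_compute"}

def select_gpu(workload: str) -> str:
    """Return which GPU an Optimus-style policy would pick for a task."""
    return "discrete" if workload in HEAVY_WORKLOADS else "integrated"

print(select_gpu("web_browsing"))   # routine task: integrated GPU saves battery
print(select_gpu("3d_game"))        # heavy task: discrete GPU for performance
```

The design point is that the decision is automatic and per-task, so the power-hungry discrete GPU is energized only when its performance is actually needed.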
Today, people want phones and tablets that combine rich multimedia capabilities with long battery life. NVIDIA's Tegra mobile chip does just that. Tegra is a system-on-a-chip the size of a thumbnail. It integrates eight specialized processors, including the world's first dual-core CPU for mobile applications, which turn on only when needed. The result: a new class of mobile devices that can deliver beautiful graphics, HD video, and a full Web experience with all-day battery life.
Whether it's increasing the performance of supercomputing sites around the world, reducing the energy consumption of HPC systems across industries, or expanding the boundaries of mobile computing, GPUs are driving energy efficiency across the computing industry for the betterment of all.