Power Efficiency

Power efficiency refers to a compute resource’s ability to convert electrical power into useful work with minimal waste or loss. It’s typically measured as useful work per unit of energy—for example, tasks per watt or, inversely, watts per task—and is increasingly important for coping with power-limited data centers and achieving sustainable computing.

What Is Power Efficiency?

The more useful work a computing environment can accomplish for a given rate of electricity, the better the power efficiency. Increasing the energy efficiency of the compute equipment—so it accomplishes more work per unit of energy consumed—also improves overall power efficiency.

2014 Typical Power Shares (PUE=1.75)

Typical U.S. data center energy use breakdown in 2014, with 57 percent of power used for IT equipment and 43 percent used for cooling, power distribution, lighting, and other purposes.

Power efficiency can be improved by decreasing the power usage effectiveness (PUE) ratio, so more of the electricity going into the data center is used for computing and less is used for cooling or lost in the power distribution infrastructure. It can also be improved by making servers more energy-efficient with purpose-built accelerators such as GPUs and DPUs, which accomplish specific tasks more efficiently than general-purpose CPUs.
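PUE is defined as total facility power divided by the power delivered to IT equipment, so a lower ratio means less overhead. A minimal sketch of the arithmetic, using illustrative figures chosen to match the 2014 breakdown above (the specific kilowatt values are assumptions, not measured data):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility drawing 1,750 kW in total,
# 1,000 kW of which reaches the IT equipment.
ratio = pue(1750, 1000)       # 1.75
it_share = 1000 / 1750        # ~0.57, i.e., 57% of power does computing work
```

An ideal (unreachable) PUE of 1.0 would mean every watt entering the facility powers IT equipment.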

Why Is Power Efficiency Important?

Growing computing clusters demand more electrical power to run and cool equipment, power that generates additional greenhouse gases (GHG), increases costs, and often exceeds what’s available in data centers.

  1. Lower operating expenses: Improving power efficiency reduces operating expenses, meaning more useful work can be accomplished for the same amount of money spent on electricity. 
  2. Overcome power constraints: Many existing data centers can’t allocate additional electricity, and many new data centers have a hard limit on how much power they can consume. The only way to expand the amount of work they can do is to improve power efficiency. 
  3. Protect the environment: Traditional power generation produces GHG, accelerating climate change. Increasing power efficiency reduces power consumption and GHG production. Data centers can also switch to power from renewable sources to further reduce the amount of GHG produced per unit of electricity used. 
  4. Reduce cooling costs: Every watt of power requires cooling. Reducing power consumption at the server and networking levels—along with finding alternative ways to manage heat without electricity—reduces the power needed for cooling.

Using accelerator technology to improve server efficiency and increase power distribution and cooling efficiencies can significantly reduce power consumption for data centers. This lowers operating costs, allows more compute power in data centers, and lowers GHG emissions. 

How Does Power Efficiency Work?

Power efficiency gains come from making servers and networking more energy efficient and from improving the PUE of data centers.

  1. Accelerated computing: GPUs perform specific types of computing faster and more efficiently than general-purpose CPUs, letting servers accomplish more work in less time while consuming less electricity. 
  2. Infrastructure offload: DPUs such as NVIDIA® BlueField® take care of networking, security, monitoring, and management tasks more quickly than CPUs, often reducing the power needs of each server and reducing the number of servers needed to run an application. 
  3. More efficient CPUs: For many popular AI and machine learning workloads, Arm®-based CPUs such as NVIDIA Grace™ can accomplish up to 2X more work per watt than x86 CPUs. 
  4. Server interconnects and networking: Using innovative interconnects such as NVIDIA NVLink® and NVSwitch™ between CPUs and GPUs speeds up computing so tasks consume less energy. Using higher-bandwidth, higher-radix switches and more efficient network transceivers improves the network’s power efficiency. 
  5. Cooling and power distribution: Implementing more efficient power distribution, such as uninterruptible power supplies and power distribution units (PDUs), along with more efficient cooling solutions, such as hot aisle and cold aisle separation and free air cooling, reduces the amount of power lost before reaching the compute and networking equipment. This improves the PUE ratio.

Combining these solutions greatly decreases the amount of electricity consumed for each application or computing task, increasing power efficiency.
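The mechanism behind acceleration can be made concrete: the energy a task consumes is its average power draw multiplied by its runtime, so a system that draws more power but finishes much sooner can still use far less energy per task. A minimal sketch with hypothetical numbers (the power draws and speedup below are illustrative assumptions, not benchmarks):

```python
def energy_per_task_wh(avg_power_w: float, runtime_hours: float) -> float:
    """Energy one task consumes: average power draw x runtime."""
    return avg_power_w * runtime_hours

# Hypothetical comparison: an accelerated server draws more power
# than a CPU-only server but completes the same task 10x faster.
cpu_energy = energy_per_task_wh(avg_power_w=400, runtime_hours=10)  # 4,000 Wh
gpu_energy = energy_per_task_wh(avg_power_w=1000, runtime_hours=1)  # 1,000 Wh
```

Under these assumed numbers, the accelerated server uses a quarter of the energy per task despite drawing 2.5X the power—which is why speedup, not instantaneous wattage, drives power efficiency.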

What Can Help Boost Server Power Efficiency?

GPU Acceleration

NVIDIA GPUs can process hundreds of threads in parallel and perform many math and graphics tasks much more efficiently than general-purpose CPUs. Shifting highly parallel or math- and graphics-intensive workloads to GPUs can run them an order of magnitude faster, completing them sooner and with less total energy. In addition, NVIDIA AI frameworks improve energy efficiency even further when shifting workloads from CPU to GPU. The combination of NVIDIA GPUs and AI, high-performance computing (HPC), or visualization software delivers huge gains in power efficiency to data centers.

DPU Acceleration

The NVIDIA BlueField DPU offloads, accelerates, and isolates infrastructure workloads from the CPU, improving performance and power efficiency. BlueField shifts networking, storage, security, and management tasks to purpose-built silicon, performing them more efficiently than general-purpose CPUs and freeing up CPU cores to run business and scientific applications.

CPU Efficiencies

The NVIDIA Grace CPU delivers superior power efficiency for AI and scientific computing tasks. It uses LPDDR5X memory to deliver up to 2X more bandwidth and 10X better energy efficiency than the previous generation of server memory. For traditional computing tasks, newer x86 CPUs from AMD and Intel are also more energy efficient than older x86 CPUs.

Interconnect and Networking Efficiencies

Using more efficient interconnects between CPUs, GPUs, and memory significantly improves power efficiency within the server. NVIDIA NVLink and NVSwitch connect GPUs with up to 7X higher bandwidth and several times better energy efficiency than PCIe Gen 5. NVIDIA Quantum-2 InfiniBand with in-network computing connects AI and HPC clusters with the best possible performance and efficiency by performing compute tasks in the network and reducing the number of switches required. NVIDIA Spectrum™ switches deliver the most efficient 200G/400G/800G Ethernet networks for AI. NVIDIA LinkX® cables and transceivers with ConnectX® adapters and BlueField DPUs support direct drive to reduce power consumption on each transceiver.

Examples of NVIDIA Power Efficiency

The NVIDIA H100 Tensor Core GPU demonstrates almost 2X the energy efficiency of the previous NVIDIA A100 Tensor Core GPU. 

NVIDIA DGX™ A100 systems deliver a nearly 5X improvement in energy efficiency for AI training applications compared to the previous generation of DGX. 

As of November 2022, NVIDIA GPU and networking technologies power 23 of the top 30 supercomputing systems on the Green500 list, including the #1 Green500 system.  

The NVIDIA Grace CPU delivers up to 2X better energy efficiency than x86 CPUs for selected applications.

NVIDIA BlueField DPUs can help servers consume up to 30 percent less power per unit of work.

When running the Redis in-memory caching service on VMware vSphere 8, offloading networking to a BlueField DPU can reduce power consumption per task by up to 34 percent. 

NVIDIA GeForce RTX™ 40 series laptops, with the NVIDIA Ada Lovelace GPU architecture and fifth-generation Max-Q technology, are up to 3X more power efficient than the previous generation.

How Can You Get Started With Power Efficiency?

Here are some ways you can start improving power efficiency in your data center:

  1. Discover which of your workloads can be accelerated by NVIDIA GPUs and frameworks. These are typically AI, HPC, scientific computing, visualization, and digital twin applications. 
  2. Upgrade to the latest CPUs, GPUs, and acceleration frameworks to increase performance and efficiency.
  3. Learn about the abilities of DPUs to offload and accelerate data center infrastructure, including networking (SDN, firewalls, load balancers, packet inspection, etc.), encryption, telemetry, and management. 
  4. Evaluate which AI and HPC workloads can run more efficiently on NVIDIA Grace CPUs or on the NVIDIA Grace Hopper superchip.
  5. Estimate how many fewer switches and cables you would need and how much power you could save by upgrading your network with NVIDIA 200G/400G Quantum InfiniBand or 200G/400G/800G Spectrum Ethernet switches.
  6. Calculate the PUE ratio for your data center and/or cloud service provider, then determine how much it can be improved. 
  7. Increase the percentage of your energy that comes from renewable sources, and consider locating your next data center or colocation center where there’s more renewable electricity and more frequent free-air cooling.
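To size the opportunity in step 6, you can estimate how much facility power (and money) a lower PUE would save at a constant IT load. A rough sketch, where the IT load, PUE values, and electricity rate are all hypothetical inputs you would replace with your own:

```python
def annual_savings_usd(it_power_kw: float, pue_before: float,
                       pue_after: float, usd_per_kwh: float = 0.10) -> float:
    """Electricity cost saved per year by lowering PUE at a constant IT load.

    Facility power = IT power x PUE, so the power saved is
    IT power x (PUE_before - PUE_after), run for 8,760 hours/year.
    """
    saved_kw = it_power_kw * (pue_before - pue_after)
    return saved_kw * 24 * 365 * usd_per_kwh

# Hypothetical example: a 1,000 kW IT load, improving PUE from 1.75 to 1.4
# at an assumed $0.10/kWh, saves 350 kW of continuous facility power.
savings = annual_savings_usd(1000, 1.75, 1.4)
```

This simple model assumes the IT load runs flat out year-round; real estimates should account for load variation and seasonal cooling differences.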

Explore More Resources

Energy Efficiency Explained

Want more information on energy efficiency? Check out the NVIDIA Energy Efficiency Glossary page.

NVIDIA BlueField DPUs Drive Data Center Efficiency

Learn how DPUs reduced power consumption in testing with key NVIDIA partners.

DPU Power-Efficiency Research

Learn how DPUs can reduce power consumption by 30 percent, saving $56 million for large data centers.

Creating a Power-Efficient Data Center With DPUs

Check out research on BlueField DPU power savings and get answers to data center efficiency questions.

Next Steps

Explore NVIDIA’s resource library for sustainable computing.