Whether you’re a scientist working on climate modeling, an engineer designing new products, or a data analyst making sense of large datasets, NVIDIA’s solutions can help you do your life’s work better and more efficiently.

Use this simple tool to compare the costs and energy use of a workload running on an x86 CPU-based server versus an NVIDIA GPU-accelerated server. You’ll see:

  • A node count comparison for equal throughput
  • The annual energy and cost savings for each system at equal throughput
  • Estimates of CO2-equivalent savings

To use it, you’ll need to know:

  • The type of NVIDIA GPU you have
  • The number of GPUs
  • The software application
  • The dataset (model of interest)

If you’d like to estimate the savings for an application not on our list, you’ll need to calculate the Node Replacement Factor (NRF): the number of CPU-only servers replaced by a single GPU-accelerated server, or equivalently, the number of CPU servers required to deliver the same throughput as one GPU server. NRF varies by application; a rough sketch of the calculation follows.
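As a minimal sketch, assuming you have measured per-node throughput and average power draw for both systems, the Python snippet below shows how NRF and the three outputs listed above (node count, annual energy and cost savings, and CO2e savings) might be estimated. Every numeric value here is a hypothetical placeholder, not NVIDIA benchmark data; substitute your own measurements, electricity price, and grid emission factor.

```python
# Sketch: estimating NRF and annual savings at equal throughput.
# All numbers are hypothetical placeholders; substitute your own data.

cpu_node_throughput = 1.0   # workload units/hour on one dual-socket CPU node
gpu_node_throughput = 8.0   # workload units/hour on one GPU-accelerated node

# Node Replacement Factor: CPU nodes replaced by one GPU node.
nrf = gpu_node_throughput / cpu_node_throughput

cpu_node_power_kw = 1.0     # assumed average draw per CPU node (kW)
gpu_node_power_kw = 3.2     # assumed average draw per GPU node (kW)
hours_per_year = 24 * 365

# Annual energy for equal throughput: NRF CPU nodes vs. one GPU node.
cpu_energy_kwh = nrf * cpu_node_power_kw * hours_per_year
gpu_energy_kwh = gpu_node_power_kw * hours_per_year
energy_saved_kwh = cpu_energy_kwh - gpu_energy_kwh

price_per_kwh = 0.12        # assumed electricity price ($/kWh)
co2e_kg_per_kwh = 0.4       # assumed grid emission factor (kg CO2e/kWh)

print(f"NRF: {nrf:.1f} CPU nodes replaced per GPU node")
print(f"Annual energy saved: {energy_saved_kwh:,.0f} kWh")
print(f"Annual cost saved:   ${energy_saved_kwh * price_per_kwh:,.0f}")
print(f"Annual CO2e avoided: {energy_saved_kwh * co2e_kg_per_kwh:,.0f} kg")
```

Because NRF depends on both the application and the dataset, the throughput figures should be measured for your specific workload rather than reused across applications.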

For additional comparisons, see the AI and high-performance computing (HPC) performance pages for Training, Inference, and HPC.

Discover Cost and Energy Savings—GPU Versus CPU

The performance and energy comparison is for a full dual-socket Intel Xeon Platinum 8480+ CPU node versus a four- or eight-way GPU node.


These estimates are approximate and should not be used for emission inventories or formal carbon emissions analysis.
