Built for the AI factory.
NVIDIA CPU platforms provide the foundation for modern AI infrastructure and AI factories. As the host for accelerated computing, they orchestrate the data movement, analytics, storage, and system execution that keep infrastructure running efficiently. NVIDIA Grace™ delivers performance and energy efficiency for hyperscale cloud, edge, and telco environments, while NVIDIA Vera is designed for agentic AI systems, accelerating the data and execution pipelines that support large-scale AI factories.
The NVIDIA Vera CPU is purpose-built for next-generation agentic AI systems, pairing seamlessly with NVIDIA GPUs for AI factories or operating independently across post-training and agentic pipelines, analytics, cloud, orchestration, and storage workloads.
The second-generation NVIDIA NVLink™ Chip-to-Chip (C2C) interconnect offers 1.8 terabytes per second (TB/s) of bidirectional bandwidth, 7x faster than PCIe Gen 6, enabling developers to focus on their applications instead of memory management.
NVIDIA CPUs use LPDDR5X memory with error-correction code (ECC) for server-class reliability, while offering up to 5x better energy efficiency than conventional DDR5 memory—ideal for cloud, enterprise, and high-performance computing (HPC) workloads.
NVIDIA Vera features custom Olympus Arm®-compatible CPU cores and the NVIDIA Scalable Coherency Fabric (SCF) to deliver high single-thread performance and predictable scaling. NVIDIA Grace uses high-performance Arm-based CPU cores with SCF to provide efficient performance across cloud, edge, and data center workloads.
NVIDIA CPU platforms support a wide range of system designs, from tightly integrated CPU–GPU architectures like NVIDIA Vera Rubin and Grace Blackwell for accelerated computing to standalone single- and dual-socket CPU servers for cloud, enterprise, HPC, and edge deployments.
Breakthrough CPU performance and efficiency for the modern data center.
Connecting the NVIDIA CPU and GPU for large-scale AI and HPC applications.