NVIDIA Converged Accelerators

Delivering maximum performance and enhanced security for accelerated workloads from data center to edge.

Networking and Compute, Unified

NVIDIA converged accelerators combine the powerful performance of the NVIDIA Ampere architecture with the enhanced security and latency-reduction capabilities of the NVIDIA® BlueField®-2 data processing unit (DPU). Enterprises can use converged accelerators to create faster, more efficient, and more secure AI systems in data centers and at the edge.

NVIDIA Ampere Architecture

Unprecedented GPU Performance

The NVIDIA Ampere architecture delivers the largest-ever generational leap in GPU performance across a wide range of compute-intensive workloads, securing and accelerating enterprise and edge infrastructure.

NVIDIA BlueField®-2 DPU

Enhanced Security

The NVIDIA BlueField-2 DPU provides innovative acceleration, security, and efficiency in every host. BlueField-2 combines the power of the NVIDIA ConnectX®-6 Dx with programmable Arm® cores and hardware offloads for software-defined storage, networking, security, and management workloads.

NVIDIA Converged Accelerators

Faster Data Speeds

NVIDIA converged accelerators include an integrated PCIe Gen4 switch. This allows data to travel between the GPU and DPU without flowing across the server PCIe system. Even in systems with PCIe Gen3 on the host, communication occurs at the full PCIe Gen4 speed. This enables a new level of data center efficiency and security for GPU-accelerated workloads, including AI-based security, 5G telecommunications, and other edge applications.
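As a rough illustration of how this topology looks from the host, the sketch below shells out to the standard nvidia-smi tool (assumed to be installed with the NVIDIA driver) to print the GPU/NIC connectivity matrix. On a converged accelerator, the GPU and the BlueField-2 network ports would typically appear behind the same on-board PCIe switch, though the exact matrix depends on the system.

# Minimal sketch: print the GPU/NIC connectivity matrix reported by nvidia-smi.
# Assumes nvidia-smi is on PATH; the output format and contents are system-dependent.
import subprocess

topology = subprocess.run(
    ["nvidia-smi", "topo", "-m"],   # topology matrix for GPUs and NICs
    capture_output=True, text=True, check=True,
)
print(topology.stdout)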

A More Powerful, Secure Enterprise

Faster 5G

NVIDIA Aerial is an application framework for building high-performance, software-defined, cloud-native 5G applications to address increasing consumer demand. It enables GPU-accelerated signal and data processing for 5G radio access networks (RANs). NVIDIA converged accelerators provide the highest-performing platform for running 5G applications. Because data doesn’t need to go through the host PCIe system, processing latency is greatly reduced. The resulting higher throughput also allows for a greater subscriber density per server.


AI-Based Cybersecurity

Converged accelerators open up a new range of possibilities for AI-based cybersecurity and networking. The DPU’s Arm cores can be programmed using the NVIDIA Morpheus application framework to perform GPU-accelerated advanced network functions, such as threat detection, data leak prevention, and anomalous behavior profiling. GPU processing can be applied directly to network traffic at a high data rate, and data travels on a direct path between the GPU and DPU, providing better isolation.
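To make the idea of high-rate, GPU-side traffic scoring concrete, here is a toy sketch. It is not the Morpheus API; it only illustrates the kind of anomalous-behavior profiling such a pipeline performs, using CuPy (assumed to be installed) to flag flows whose byte counts deviate sharply from the norm. The flow data here is synthetic.

# Toy GPU-accelerated anomaly check on per-flow byte counts using CuPy.
# NOT the Morpheus API -- just a minimal illustration of GPU-side feature scoring.
import cupy as cp

def flag_anomalous_flows(bytes_per_flow, threshold=3.0):
    """Return a boolean mask of flows whose byte count deviates more than
    `threshold` standard deviations from the mean (simple z-score test)."""
    x = cp.asarray(bytes_per_flow, dtype=cp.float64)
    z = (x - x.mean()) / (x.std() + 1e-9)   # guard against zero variance
    return cp.abs(z) > threshold

# Example: one million synthetic flow sizes with a few oversized outliers injected.
flows = cp.random.lognormal(mean=8.0, sigma=1.0, size=1_000_000)
flows[:5] *= 1_000                           # simulate exfiltration-sized flows
suspects = flag_anomalous_flows(flows)
print(int(suspects.sum()), "flows flagged for review")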

Accelerating Edge AI-on-5G

NVIDIA AI-on-5G is made up of the NVIDIA EGX platform, the NVIDIA Aerial SDK for software-defined 5G virtual RANs (vRANs), and enterprise AI frameworks, including SDKs such as NVIDIA Isaac and NVIDIA Metropolis. This platform enables edge devices such as video cameras, industrial sensors, and robots to use AI and communicate with the data center over 5G. Converged cards make it possible to provide all this functionality in a single enterprise server, without having to deploy more costly purpose-built systems. The same converged card used to accelerate 5G signal processing can also be used for edge AI, with NVIDIA’s Multi-Instance GPU (MIG) technology making it possible to share the GPU among several different applications.


Balanced, Optimized Design

Integrating a GPU, DPU, and PCIe switch into a single device creates a balanced architecture by design. In systems where multiple GPUs and DPUs are desired, a converged accelerator card avoids contention on the server’s PCIe system, so the performance scales linearly with additional devices. In addition, a converged card provides much more predictable performance. Having these components on one physical card also improves space and energy efficiency. Converged cards significantly simplify deployment and ongoing maintenance, particularly when installing in volume servers at scale.

Meet NVIDIA’s Converged Accelerators

These devices enable data-intensive edge and data center workloads to run with maximum security and performance.


A30X

The A30X combines the NVIDIA A30 Tensor Core GPU with the BlueField-2 DPU. With MIG, the GPU can be partitioned into as many as four GPU instances, each running a separate service. The design of this card provides a good balance of compute and input/output (IO) performance for use cases such as 5G vRAN and AI-based cybersecurity. Multiple services can run on the GPU, with the low latency and predictable performance provided by the onboard PCIe switch.

A100X

The A100X brings together the power of the NVIDIA A100 Tensor Core GPU with the BlueField-2 DPU. With MIG, each A100 can be partitioned into as many as seven GPU instances, allowing even more services to run simultaneously. The A100X is ideal for use cases where the compute demands are more intensive. Examples include 5G with massive multiple-input and multiple-output (MIMO) capabilities, AI-on-5G deployments, and specialized workloads such as signal processing and multi-node training.
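For a sense of how the MIG partitioning described above looks in software, the sketch below enumerates MIG instances from Python. It assumes the nvidia-ml-py (pynvml) bindings are installed and that an administrator has already enabled MIG mode and created GPU instances on the card; it is a minimal illustration, not a deployment recipe.

# Minimal sketch: list the MIG instances visible on the first GPU via pynvml.
# Assumes pynvml (nvidia-ml-py) is installed and MIG instances already exist.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)             # first GPU in the system
    current_mode, _pending_mode = pynvml.nvmlDeviceGetMigMode(gpu)
    print("MIG enabled:", current_mode == 1)                # 1 == MIG mode on

    # Walk the possible MIG slots and report the instances that are populated.
    for slot in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, slot)
        except pynvml.NVMLError:
            continue                                         # slot not populated
        name = pynvml.nvmlDeviceGetName(mig)
        if isinstance(name, bytes):                          # older pynvml returns bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG slot {slot}: {name}, {mem.total // (1024 ** 2)} MiB")
finally:
    pynvml.nvmlShutdown()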

Sign Up for the NVIDIA Converged Accelerator Developer Kit

Interested in building the next generation of edge AI and cybersecurity applications? Want to be one of the first people to get hands-on experience with the new converged accelerators? Sign up to receive information about the Converged Accelerator Developer Kit and to get early access to the hardware and software components.

Stay Up To Date

Sign up to get the latest information and resources on NVIDIA EGX converged accelerators, straight to your inbox.