NVIDIA ConnectX InfiniBand Adapters

Driving efficiency and innovation in supercomputing, AI, and cloud data centers.

Leveraging faster speeds and innovative In-Network Computing, NVIDIA® ConnectX® InfiniBand smart adapters achieve extreme performance and scale. NVIDIA ConnectX lowers cost per operation, increasing ROI for high-performance computing (HPC), AI and machine learning (ML), accelerated data platforms and clustered databases, and more.

Products

ConnectX-9

NVIDIA ConnectX-9 SuperNIC™ delivers up to 1.6 terabits per second (Tb/s) throughput with breakthrough networking, optimized connectivity, and accelerated performance to power gigascale AI factories.

ConnectX-8

The ConnectX-8 InfiniBand SuperNIC provides up to 800 gigabits per second (Gb/s) of data throughput with support for NVIDIA In-Network Computing acceleration engines to deliver the performance and robust feature set needed to power trillion-parameter-scale AI factories and scientific computing workloads.

ConnectX-7

The ConnectX-7 smart host channel adapter (HCA) provides ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing acceleration engines. ConnectX-7 delivers the scalability and feature-rich technology needed for supercomputers, artificial intelligence, and hyperscale cloud data centers.
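
As host channel adapters, ConnectX devices are typically driven through the standard RDMA verbs interface (libibverbs from rdma-core). The following is a minimal, generic verbs sketch rather than a ConnectX-specific API: it enumerates the installed HCAs and reports each device's first port state and active link width/speed. Build with gcc example.c -libverbs on a host with rdma-core installed.

    /* Minimal libibverbs sketch: enumerate HCAs and query port 1. */
    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (!ctx)
                continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0) {
                /* Width and speed are encoded values; see ibv_query_port(3). */
                printf("%-12s state=%d active_width=%u active_speed=%u\n",
                       ibv_get_device_name(devs[i]),
                       port.state, port.active_width, port.active_speed);
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }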


ConnectX-6

The ConnectX-6 smart host channel adapter (HCA), featuring the NVIDIA Quantum InfiniBand architecture, delivers high performance and NVIDIA In-Network Computing acceleration engines to maximize efficiency in HPC, artificial intelligence, cloud, hyperscale, and storage platforms.


ConnectX-5

The ConnectX-5 smart host channel adapter (HCA) with intelligent acceleration engines enhances HPC, ML, and data analytics, as well as cloud and storage platforms. With support for two ports of 100Gb/s InfiniBand and Ethernet network connectivity, PCIe Gen3 and Gen4 server connectivity, very high message rates, PCIe switches, and NVMe over Fabrics offloads, ConnectX-5 is a high-performance and cost-effective solution for a wide range of applications and markets.

ConnectX-4 VPI EDR/100GbE

ConnectX-4 Virtual Protocol Interconnect (VPI) smart adapters support EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity. Providing data centers with high-performance, flexible solutions for HPC, cloud, database, and storage platforms, ConnectX-4 smart adapters combine 100Gb/s bandwidth in a single port with the lowest available latency, a message rate of 150 million messages per second, and application hardware offloads.

ConnectX-3 Pro VPI FDR and 40/56GbE

ConnectX-3 Pro smart adapters with Virtual Protocol Interconnect (VPI) support InfiniBand and Ethernet connectivity with hardware offload engines for overlay networks ("tunneling"). ConnectX-3 Pro provides high performance and flexibility for PCI Express Gen3 servers deployed in public and private clouds, enterprise data centers, and high-performance computing environments.


OCP Adapters

The Open Compute Project (OCP) defines a mezzanine form factor that features best-in-class efficiency to enable the highest data center performance.

Multi-Host Solutions

The innovative NVIDIA Multi-Host® technology allows multiple compute or storage hosts to connect to a single adapter.

Socket-Direct Adapters

NVIDIA Socket Direct® technology enables direct PCIe access to multiple CPU sockets, eliminating the need for network traffic to traverse the inter-processor bus.
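
In practice, applications pair this with NUMA-aware device selection, placing traffic on the HCA (or Socket Direct PCIe function) attached to the socket they run on. The sketch below assumes only generic libibverbs and the standard Linux sysfs numa_node attribute, not any Socket Direct-specific API; it lists each InfiniBand device's NUMA node so a caller can favor the locally attached one.

    /* Sketch: report which NUMA node each InfiniBand device sits on,
     * so an application can favor the HCA local to its CPU socket. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    /* NUMA node of an HCA, read from the standard sysfs attribute
     * <ibdev_path>/device/numa_node, or -1 if it cannot be read. */
    static int hca_numa_node(struct ibv_device *dev)
    {
        char path[512];
        snprintf(path, sizeof(path), "%s/device/numa_node", dev->ibdev_path);

        FILE *f = fopen(path, "r");
        if (!f)
            return -1;

        int node = -1;
        if (fscanf(f, "%d", &node) != 1)
            node = -1;
        fclose(f);
        return node;
    }

    int main(int argc, char **argv)
    {
        /* NUMA node the caller wants to stay local to (example input). */
        int want = (argc > 1) ? atoi(argv[1]) : 0;

        int num = 0;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs) {
            perror("ibv_get_device_list");
            return 1;
        }

        for (int i = 0; i < num; i++) {
            int node = hca_numa_node(devs[i]);
            printf("%-12s numa_node=%d%s\n",
                   ibv_get_device_name(devs[i]), node,
                   node == want ? "  <- local to requested socket" : "");
        }

        ibv_free_device_list(devs);
        return 0;
    }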