Leveraging faster speeds and innovative In-Network Computing, NVIDIA ConnectX InfiniBand smart adapters achieve extreme performance and scale. NVIDIA ConnectX lowers cost per operation, increasing ROI for high-performance computing (HPC), machine learning, advanced storage, clustered databases, low-latency embedded I/O applications, and more.
The ConnectX-7 smart host channel adapter (HCA), featuring the NVIDIA Quantum-2 InfiniBand architecture, provides the highest networking performance available to take on the world’s most challenging workloads. ConnectX-7 delivers ultra-low latency, 400Gb/s throughput, and innovative NVIDIA In-Network Computing acceleration engines, providing the scalability and feature-rich technology needed for supercomputers, artificial intelligence, and hyperscale cloud data centers.
The ConnectX-6 smart host channel adapter (HCA), featuring the NVIDIA Quantum InfiniBand architecture, delivers high performance and NVIDIA In-Network Computing acceleration engines for maximizing efficiency in HPC, artificial intelligence, cloud, hyperscale, and storage platforms.
The ConnectX-5 smart host channel adapter (HCA) with intelligent acceleration engines enhances HPC, machine learning, data analytics, cloud, and storage platforms. With support for two ports of 100Gb/s InfiniBand and Ethernet connectivity, PCIe Gen3 and Gen4 server connectivity, a very high message rate, an integrated PCIe switch, and NVMe over Fabrics offloads, ConnectX-5 is a high-performance and cost-effective solution for a wide range of applications and markets.
ConnectX-4 Virtual Protocol Interconnect (VPI) smart adapters support EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity. Providing data centers with high-performance, flexible solutions for HPC, cloud, database, and storage platforms, ConnectX-4 smart adapters combine 100Gb/s bandwidth in a single port with the lowest available latency, 150 million messages per second, and application hardware offloads.
ConnectX-3 Pro smart adapters with Virtual Protocol Interconnect (VPI) support InfiniBand and Ethernet connectivity with hardware offload engines for Overlay Networks ("Tunneling"). ConnectX-3 Pro provides great performance and flexibility for PCI Express Gen3 servers deployed in public and private clouds, enterprise data centers, and high-performance computing.
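Applications typically reach these adapters through the standard RDMA verbs interface (libibverbs) rather than a ConnectX-specific API. The sketch below, offered only as an illustration and assuming libibverbs is installed and an RDMA-capable adapter is present, enumerates the local devices and reports each one's first-port state and active link width/speed; it is not taken from NVIDIA documentation, and device names will vary by system.

/* Minimal sketch: enumerate RDMA devices with libibverbs and report
 * port 1 attributes. Assumes libibverbs is installed and an
 * RDMA-capable adapter (e.g., a ConnectX HCA) is present.
 * Build: gcc verbs_probe.c -o verbs_probe -libverbs
 */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; ++i) {
        struct ibv_context *ctx = ibv_open_device(dev_list[i]);
        if (!ctx)
            continue;

        struct ibv_port_attr port;
        /* Query the first port; multi-port HCAs expose ports 1..N. */
        if (ibv_query_port(ctx, 1, &port) == 0) {
            printf("%-12s state=%d active_width=%d active_speed=%d\n",
                   ibv_get_device_name(dev_list[i]),
                   port.state, port.active_width, port.active_speed);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(dev_list);
    return 0;
}

The same verbs context is the starting point for registering memory regions and posting RDMA operations, which is where the adapters' hardware offloads come into play.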
The Open Compute Project (OCP) defines a mezzanine form factor that delivers best-in-class efficiency to enable the highest data center performance.
The innovative NVIDIA Multi-Host® technology allows multiple compute or storage hosts to connect to a single network adapter.
NVIDIA Socket Direct® technology gives multiple CPU sockets direct PCIe access to the adapter, eliminating the need for network traffic to traverse the inter-processor bus.
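The motivation behind Socket Direct is PCIe locality: latency-sensitive applications generally run best on the CPU socket closest to the adapter. As an illustrative sketch only (not an NVIDIA API), the snippet below reads an adapter's NUMA node from Linux sysfs so a launcher can pin threads to the local socket; the device name mlx5_0 is a placeholder that varies by system.

/* Illustrative sketch: find which NUMA node a given RDMA adapter sits on
 * by reading Linux sysfs, so threads can be pinned to the local socket.
 * The device name "mlx5_0" is a placeholder; adjust for your system.
 */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/class/infiniband/mlx5_0/device/numa_node";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    int node = -1;
    if (fscanf(f, "%d", &node) != 1)
        node = -1;
    fclose(f);

    /* -1 means the kernel reports no NUMA affinity for the device. */
    printf("Adapter NUMA node: %d\n", node);
    return 0;
}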
ConnectX-6: The World's First 200G Adapter
NVMe over Fabrics Solutions
Socket Direct
ConnectX-7 InfiniBand
ConnectX-6 InfiniBand
ConnectX-5 InfiniBand
ConnectX InfiniBand Adapters Portfolio
HPC and Bioscience
NVIDIA GPUDirect RDMA
NVIDIA Quantum-2 InfiniBand Platform
See how you can build the most efficient, high-performance network.