NVIDIA Mellanox ConnectX-6 VPI

The World’s First HDR 200Gb/s InfiniBand Host Channel Adapter

Intelligent NVIDIA® Mellanox® ConnectX®-6 adapter cards deliver high performance and NVIDIA In-Network Computing acceleration engines for maximizing efficiency in high-performance computing (HPC), artificial intelligence (AI), cloud, hyperscale, and storage platforms.

Single/Dual-Port Adapter Supporting 200Gb/s

ConnectX-6 Virtual Protocol Interconnect® (VPI) adapter cards offer up to two ports of 200Gb/s throughput for InfiniBand and Ethernet connectivity, provide ultra-low latency, deliver 215 million messages per second, and feature innovative smart offloads and in-network computing accelerations that drive performance and efficiency.

ConnectX-6 is a groundbreaking addition to the ConnectX series of industry-leading adapter cards, providing innovative features such as in-network memory capabilities, message passing interface (MPI) tag matching hardware acceleration, out-of-order RDMA write and read operations, and congestion control over HDR, HDR100, EDR, and FDR InfiniBand speeds.



Use Cases


High-Performance Computing (HPC)

ConnectX-6 delivers the highest throughput and message rate in the industry and is the perfect product to lead HPC data centers toward exascale levels of performance and scalability.

ConnectX-6 offers enhancements to HPC infrastructures by providing MPI acceleration and offloading, as well as support for network atomic and PCIe atomic operations.


Machine Learning and AI

Machine learning relies on high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200Gb/s throughput with support for the NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), ConnectX-6, together with NVIDIA Quantum switches, provides machine learning applications with the performance and scalability they need.
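To illustrate the idea behind hierarchical aggregation, the sketch below sums per-node gradients up a reduction tree, one level at a time, so each value crosses each level only once. This is a conceptual illustration of tree-based reduction in plain Python, not the SHARP protocol itself (SHARP performs the aggregation inside the switch fabric); the function name and data are hypothetical.

```python
# Conceptual sketch of hierarchical aggregation: partial sums are combined
# at each level of a tree rather than every node exchanging full data with
# every other node. Illustrative only -- not the SHARP wire protocol.

def tree_reduce(values, fanout=2):
    """Sum a list of per-node values by reducing one tree level at a time."""
    level = list(values)
    while len(level) > 1:
        # Each group of `fanout` children collapses into one partial sum.
        level = [sum(level[i:i + fanout]) for i in range(0, len(level), fanout)]
    return level[0]

gradients = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(tree_reduce(gradients))  # 36.0
```

With a fan-out of 2 and 8 nodes, the reduction completes in 3 levels instead of requiring all-to-all exchange, which is the scaling property in-network aggregation exploits.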

Storage


NVMe storage devices are gaining momentum, offering exceptionally fast access to storage media over remote direct memory access (RDMA). With its NVMe over Fabrics (NVMe-oF) target and initiator offloads, ConnectX-6 brings further optimizations to NVMe-oF, enhancing CPU utilization and scalability.
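As a sketch of what the initiator side looks like in practice, a host can discover and attach an NVMe-oF namespace over RDMA with the standard `nvme-cli` tool; the IP address, port, and subsystem NQN below are placeholders, and the actual values depend on the target configuration.

```shell
# Discover NVMe-oF subsystems exported by a target over RDMA
# (address, port, and NQN here are hypothetical placeholders).
nvme discover -t rdma -a 192.168.1.10 -s 4420

# Connect to one of the discovered subsystems; the namespace then
# appears as a local block device (e.g., /dev/nvme1n1).
nvme connect -t rdma -a 192.168.1.10 -s 4420 \
    -n nqn.2014-08.org.nvmexpress:example-subsystem
```

With the adapter's NVMe-oF offloads, the data path for such a connection is handled in hardware rather than consuming host CPU cycles.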

Key Features

  • HDR / HDR100 / EDR / FDR / QDR / SDR InfiniBand connectivity
  • Up to 200Gb/s bandwidth 
  • Up to 215 million messages per second
  • Low latency
  • RDMA, send/receive semantics
  • Hardware-based congestion control
  • Atomic operations
  • Collective operations offloads
  • Support for NVIDIA Mellanox Multi-Host® and NVIDIA Mellanox Socket Direct® configurations
  • Embedded PCIe switch
  • Available in a variety of form factors: PCIe stand-up, NVIDIA Socket Direct, OCP 3.0, NVIDIA Multi-Host, and standalone integrated circuit (IC)


Key Benefits

  • Delivers the highest throughput and message rate in the industry
  • Offloads computation to save CPU cycles and increase network efficiency
  • Highest performance and most intelligent fabric for compute and storage infrastructures
  • Support for x86, Power, Arm, and GPU-based compute and storage platforms

