The World’s First HDR 200Gb/s InfiniBand Host Channel Adapter
Intelligent NVIDIA® Mellanox® ConnectX®-6 adapter cards deliver high performance and NVIDIA In-Network Computing acceleration engines for maximizing efficiency in high-performance computing (HPC), artificial intelligence (AI), cloud, hyperscale, and storage platforms.
ConnectX-6 Virtual Protocol Interconnect® (VPI) adapter cards offer up to two ports of 200Gb/s throughput for InfiniBand and Ethernet connectivity, provide ultra-low latency, deliver 215 million messages per second, and feature innovative smart offloads and in-network computing accelerations that drive performance and efficiency.
ConnectX-6 is a groundbreaking addition to the ConnectX series of industry-leading adapter cards, providing innovative features such as in-network memory capabilities, message passing interface (MPI) tag matching hardware acceleration, out-of-order RDMA write and read operations, and congestion control over HDR, HDR100, EDR, and FDR InfiniBand speeds.
PORT SPEEDS: 2x200Gb/s
TOTAL BANDWIDTH: 200Gb/s
MESSAGE RATE: 215 million msgs/sec
PCIe LANES: 32x Gen3/Gen4
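Applications reach these capabilities through the standard RDMA verbs interface. The sketch below is a minimal, vendor-neutral illustration that uses libibverbs to enumerate the host channel adapters in a system and print a few of their reported attributes; the output naturally depends on the adapters installed.

```c
#include <stdio.h>
#include <infiniband/verbs.h>

/* Minimal sketch: list the RDMA devices visible to libibverbs and print
 * a couple of attributes per device. Build with: gcc list_hcas.c -libverbs */
int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0)
            printf("%s: %u physical port(s), max QPs %d\n",
                   ibv_get_device_name(devices[i]),
                   attr.phys_port_cnt, attr.max_qp);

        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}
```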
ConnectX-6 delivers the highest throughput and message rate in the industry and is the perfect product to lead HPC data centers toward exascale levels of performance and scalability.
ConnectX-6 enhances HPC infrastructures with MPI acceleration and offloads, as well as support for network atomic and PCIe atomic operations.
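As a hedged illustration of the kind of operation that benefits from network atomics, the sketch below uses standard MPI one-sided communication: every rank performs an atomic fetch-and-add on a counter hosted by rank 0. Whether the update is actually executed in the adapter depends on the MPI library and fabric configuration.

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch: each rank atomically adds 1 to a counter that lives in a window
 * on rank 0. On fabrics with hardware network atomics, an MPI library can
 * offload this fetch-and-add to the HCA. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    long *counter;
    MPI_Win win;
    MPI_Win_allocate(sizeof(long), sizeof(long), MPI_INFO_NULL,
                     MPI_COMM_WORLD, &counter, &win);
    if (rank == 0)
        *counter = 0;

    MPI_Win_fence(0, win);              /* make the initialized counter visible */

    long one = 1, previous = -1;
    MPI_Fetch_and_op(&one, &previous, MPI_LONG, /*target rank*/ 0,
                     /*displacement*/ 0, MPI_SUM, win);

    MPI_Win_fence(0, win);              /* complete all atomic updates */

    if (rank == 0)
        printf("counter after %d atomic increments: %ld\n", size, *counter);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```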
Machine learning relies on high throughput and low latency to train deep neural networks and to improve recognition and classification accuracy. As the first adapter card to deliver 200Gb/s throughput with support for NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP), ConnectX-6, together with NVIDIA Quantum switches, provides machine learning applications with the performance and scalability they need.
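The collective that SHARP typically accelerates is the allreduce used for gradient aggregation in data-parallel training. The hedged sketch below shows that pattern with plain MPI; a SHARP-enabled MPI or NCCL stack can offload the reduction to the switch fabric without any change to the application code.

```c
#include <mpi.h>
#include <stdio.h>

#define NGRAD 1024  /* illustrative gradient length */

/* Sketch of the allreduce step in data-parallel training: every rank
 * contributes its local gradients and receives the global sum. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    float local[NGRAD], global[NGRAD];
    for (int i = 0; i < NGRAD; i++)
        local[i] = (float)rank;          /* stand-in for locally computed gradients */

    MPI_Allreduce(local, global, NGRAD, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("global[0] = %f\n", global[0]);

    MPI_Finalize();
    return 0;
}
```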
NVMe storage devices are gaining momentum, offering very fast access to storage media, and NVMe over Fabrics (NVMe-oF) extends that access across the network using remote direct memory access (RDMA). With its NVMe-oF target and initiator offloads, ConnectX-6 brings further optimization to NVMe-oF, improving CPU utilization and scalability.
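For reference, the rough sketch below shows how a Linux initiator can establish an NVMe-oF connection over RDMA by writing connection options to /dev/nvme-fabrics, which is what the nvme-cli connect command does internally. The address and NQN are placeholders, and the exact option handling is an assumption that may vary by kernel version.

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>

/* Rough sketch of the Linux NVMe fabrics connect interface: write a
 * comma-separated key=value option string to /dev/nvme-fabrics to ask
 * the kernel to connect to a remote NVMe-oF subsystem over RDMA.
 * The target address and subsystem NQN below are placeholders. */
int main(void)
{
    const char *opts =
        "transport=rdma,traddr=192.168.1.10,trsvcid=4420,"
        "nqn=nqn.2016-06.io.example:subsys1";

    int fd = open("/dev/nvme-fabrics", O_RDWR);
    if (fd < 0) {
        perror("open /dev/nvme-fabrics");
        return 1;
    }

    if (write(fd, opts, strlen(opts)) < 0) {
        perror("write connect options");
        close(fd);
        return 1;
    }

    /* On success the kernel creates a new /dev/nvmeX controller whose I/O
     * is carried over RDMA; adapter-side target/initiator offloads reduce
     * the CPU work needed to move that data. */
    close(fd);
    return 0;
}
```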
InfiniBand and Adapter Card Brochure
ConnectX-6 VPI Cards Product Brief
ConnectX-6 InfiniBand/VPI Cards User Manual
ConnectX-6 InfiniBand/VPI OCP 3.0 User Manual
Maximizing Server Performance with Mellanox Socket Direct Adapter