Socket Direct Adapters

Maximize Data Center Performance and Increase ROI

An innovative network adapter architecture—NVIDIA® Mellanox® Socket Direct®—enables direct PCIe access to multiple CPU sockets, eliminating the need for network traffic to traverse the inter-processor bus. This optimizes overall system performance and maximizes throughput for the most demanding applications and markets.

Eliminate Traffic Bottlenecks

Based on NVIDIA Mellanox Multi-Host® technology, NVIDIA Mellanox Socket Direct technology enables several CPUs within a multi-socket server to connect directly to the network, each through its own dedicated PCIe interface, using either a connection harness that splits the PCIe lanes between two cards or a bifurcated PCIe slot for a single card. As a result, network traffic no longer traverses the internal bus between the sockets, significantly reducing overhead and latency while lowering CPU utilization and increasing network throughput. Mellanox Socket Direct also improves artificial intelligence and machine learning application performance, as it enables native GPU-Direct® technologies.
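Each Socket Direct PCIe interface is local to one CPU socket, so an application sees the full benefit when its threads run on the NUMA node that owns the interface it sends and receives on. The minimal Linux sketch below illustrates that pinning step; the interface name ens1f0 is a placeholder, and the sysfs paths assume a standard Linux kernel.

```python
import os
import pathlib

IFACE = "ens1f0"  # placeholder: one PCIe function of a Socket Direct adapter

# The kernel reports the NUMA node of the PCIe function backing this interface
# (-1 means the platform did not report NUMA locality).
numa_node = int(pathlib.Path(f"/sys/class/net/{IFACE}/device/numa_node").read_text())

if numa_node >= 0:
    # CPUs that belong to that NUMA node, e.g. "0-15,32-47".
    cpulist = pathlib.Path(
        f"/sys/devices/system/node/node{numa_node}/cpulist"
    ).read_text().strip()

    cpus = set()
    for part in cpulist.split(","):
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))

    # Pin this process to the adapter-local CPUs so its traffic
    # never has to cross the inter-processor bus.
    os.sched_setaffinity(0, cpus)
    print(f"{IFACE} is local to NUMA node {numa_node}; pinned to CPUs {sorted(cpus)}")
```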

NVIDIA Mellanox Socket Direct Adapters

Highlights

Benefits

Speed

Flexible Form Factors Across Multiple Data Speeds

  • ConnectX-6 Dx Multi-Host OCP 3.0 cards can connect a 200GbE port to up to 4 PCIe Gen4 x4 slots
  • ConnectX-6 Socket Direct cards provide HDR 200Gb/s or 200GbE ports over two PCIe Gen3 x16 slots (the bandwidth estimate after this list shows why two slots are needed)
  • ConnectX-6 OCP 3.0 cards provide HDR 200Gb/s or 200GbE ports to up to 4 PCIe Gen4 x4 slots
  • ConnectX-5 Socket Direct cards provide an EDR 100Gb/s or 100GbE transmission rate over two PCIe Gen3 x8 slots
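As a rough sanity check—assuming nominal PCIe signaling rates with 128b/130b encoding and ignoring protocol overhead—the arithmetic below shows why a single PCIe Gen3 x16 slot cannot feed a 200Gb/s port, while two Gen3 x16 slots or one Gen4 x16 slot can:

```python
# Approximate usable PCIe bandwidth per lane (nominal rate x 128b/130b encoding),
# ignoring packet/protocol overhead; figures are rough estimates.
GEN3_PER_LANE_GBPS = 8.0 * 128 / 130    # ~7.88 Gb/s
GEN4_PER_LANE_GBPS = 16.0 * 128 / 130   # ~15.75 Gb/s

one_gen3_x16 = GEN3_PER_LANE_GBPS * 16  # ~126 Gb/s: not enough for a 200Gb/s port
two_gen3_x16 = one_gen3_x16 * 2         # ~252 Gb/s: two slots cover HDR 200Gb/s
one_gen4_x16 = GEN4_PER_LANE_GBPS * 16  # ~252 Gb/s: a single Gen4 x16 also suffices

print(f"One PCIe Gen3 x16 slot:  ~{one_gen3_x16:.0f} Gb/s")
print(f"Two PCIe Gen3 x16 slots: ~{two_gen3_x16:.0f} Gb/s")
print(f"One PCIe Gen4 x16 slot:  ~{one_gen4_x16:.0f} Gb/s")
```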
Performance

Enhanced Performance That is Easy to Manage

Socket Direct adapters can be connected to a BMC using MCTP over SMBus or MCTP over PCIe, similar to a standard NVIDIA PCIe stand-up adapter. The chosen management interface facilitates communication between the platform management subsystem and the Socket Direct adapter, which can then be configured transparently by the chosen server management solution.

Socket Direct

Socket Direct Removes the Load on the Inter-processor Bus

Socket Direct technology utilizes the same underlying technology that enables Multi-Host, but applies it to different CPUs within the same server. Measuring a server's external throughput while the inter-processor bus is under load, Socket Direct delivers a 16%-28% improvement over a standard adapter connected to a single CPU.
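One way to check that the two PCIe functions of a Socket Direct adapter really land on different sockets is to list the NVIDIA (Mellanox) PCIe devices—vendor ID 0x15b3—and the NUMA node each one reports. The sketch below assumes Linux sysfs; device addresses and node numbers will vary by system.

```python
import pathlib

MLNX_VENDOR_ID = "0x15b3"  # PCI vendor ID used by NVIDIA (Mellanox) network adapters

for dev in sorted(pathlib.Path("/sys/bus/pci/devices").iterdir()):
    try:
        vendor = (dev / "vendor").read_text().strip()
    except OSError:
        continue
    if vendor != MLNX_VENDOR_ID:
        continue
    numa_node = (dev / "numa_node").read_text().strip()
    # On a Socket Direct adapter, the two functions should report different nodes.
    print(f"{dev.name}: NUMA node {numa_node}")
```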

Multi-Host and Socket Direct Comparison

              | MULTI-HOST                                        | SOCKET DIRECT
Motivation    | Reduces CAPEX / OPEX                              | Improves performance
Server Config | Individual servers, each with its own OS instance | Single server running a single OS
PCIe Signals  | Individual PCIe Reset, Clock, etc.                | Common PCIe Reset, Clock, etc.
BMC           | Supports individual BMCs                          | Single BMC
Pre-boot      | Individual pre-boot instance                      | Single pre-boot instance

ConnectX Comparison

               | ORDERING PART NO. | MAX. SPEED                          | PORTS | CONNECTORS | ASIC & PCI DEV ID                                                 | PCI LANES
ConnectX-6 Dx  | MCX623435MN-CDAB  | 100GbE                              | 1     | QSFP56     | ConnectX-6 Dx OCP 3.0, Multi-Host or Socket Direct, PCIe 4.0 x16  | 1x16/2x8/4x4
ConnectX-6 Dx  | Contact NVIDIA    | 100GbE                              | 1     | DSFP       | ConnectX-6 Dx OCP 3.0, Multi-Host or Socket Direct, PCIe 4.0 x16  | 1x16/2x8/4x4
ConnectX-6 Dx  | Contact NVIDIA    | 100GbE                              | 1     | QSFP56     | ConnectX-6 Dx Socket Direct, PCIe 4.0 x16 split into two x8       | 2x8 in a row
ConnectX-6 Dx  | Contact NVIDIA    | 200GbE                              | 1     | QSFP56     | ConnectX-6 Dx OCP 3.0, Multi-Host or Socket Direct, PCIe 4.0 x16  | 1x16/2x8/4x4
ConnectX-6 VPI | MCX653105A-EFAT   | HDR100, EDR IB (100Gb/s) and 100GbE | 1     | QSFP56     | ConnectX-6 Socket Direct, PCIe 3.0/4.0 x16 split into two x8      | 2x8 in a row
ConnectX-6 VPI | MCX653106A-EFAT   | HDR100, EDR IB (100Gb/s) and 100GbE | 2     | QSFP56     | ConnectX-6 Socket Direct, PCIe 3.0/4.0 x16 split into two x8      | 2x8 in a row
ConnectX-6 VPI | MCX653106A-EFAT   | HDR IB (200Gb/s) and 200GbE         | 1     | QSFP56     | ConnectX-6 Socket Direct, PCIe 3.0 x16 + PCIe 3.0 x16 auxiliary card | 2x16
ConnectX-6 VPI | MCX654106A-HCAT   | HDR IB (200Gb/s) and 200GbE         | 2     | QSFP56     | ConnectX-6 Socket Direct, PCIe 3.0 x16 + PCIe 3.0 x16 auxiliary card | 2x16
ConnectX-5 VPI | MCX556M-ECAT-S25  | EDR IB (100Gb/s) and 100GbE         | 2     | QSFP28     | ConnectX-5 Socket Direct, PCIe 3.0 x8 + PCIe 3.0 x8 auxiliary card, 25cm harness | 2x8
ConnectX-5 VPI | MCX556M-ECAT-S35A | EDR IB (100Gb/s) and 100GbE         | 2     | QSFP28     | ConnectX-5 Socket Direct, PCIe 3.0 x8 + PCIe 3.0 x8 auxiliary card, 35cm harness | 2x8

Resources

We're here to help you build the most efficient, high-performance network.

SmartNIC Selector

Academy Online Courses

Ready to Purchase