NVIDIA Mellanox MetroX-2

Bring together remote resources with expanded InfiniBand interconnectivity

Using InfiniBand across multiple, geographically distributed data centers requires high-volume, remote direct memory access (RDMA)-based data connectivity. NVIDIA® Mellanox® MetroX®-2 systems, based on the NVIDIA Mellanox Quantum™ HDR 200Gb/s InfiniBand switch, extend InfiniBand connectivity to data centers up to 40 kilometers apart, enabling data center expansion, rapid disaster recovery, and improved utilization of remote storage and compute infrastructures.

Faster Data Center Recovery

Designed for today’s business continuity and simplified disaster recovery needs, MetroX-2 enables aggregated data and storage networking over a single, consolidated fabric. As a cost-effective, power-efficient, and scalable long-haul solution, NVIDIA MetroX-2 ensures high-performance, high-volume data sharing between remote InfiniBand sites, easily managed as a single, unified network fabric.


Highlights

World-Class InfiniBand Performance


In-Network Computing

MetroX-2 leverages low-latency RDMA connectivity to enable remote data centers to share storage for independent computing and disaster recovery.

Using its two EDR InfiniBand long-haul ports with LR4/ER4 transceivers, MetroX-2 easily interconnects remote InfiniBand data centers.
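As a rough rule of thumb (an estimate on our part, not a figure from the MetroX-2 datasheet), light propagates through optical fiber at roughly 200,000 km/s, so each kilometer of long-haul fiber adds about 5 microseconds of one-way latency:

\[
t_{\text{one-way}} \approx 5\,\mu\text{s/km} \times d
\quad\Rightarrow\quad
t(10\,\text{km}) \approx 50\,\mu\text{s},\qquad
t(40\,\text{km}) \approx 200\,\mu\text{s}
\]

Round-trip RDMA operations therefore see roughly double these figures, in addition to switch and transceiver latency.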


Self-Healing Networking

MetroX-2’s optimized design leverages InfiniBand-based self-healing capabilities to overcome link failures and achieve network recovery 5,000X faster than any software-based solution—enhancing system performance, scalability, and network utilization.


Data Center Management

NVIDIA MetroX-2 systems can be coupled with NVIDIA Mellanox Unified Fabric Management (UFM®) to manage scale-out computing environments, efficiently provisioning, monitoring and operating the modern data center fabric.

Key Features

  • 2x EDR InfiniBand long-distance ports and 8x HDR InfiniBand standard-distance ports in a 1U switch
  • Complete solution using LR4 MetroX-2 transceivers (sold separately)
  • Self-healing networking
  • Enhancement for intelligent data centers
  • N+1 redundant and hot-swappable fans
  • 80 PLUS Gold- and ENERGY STAR-certified power supplies
  • 1+1 redundant and hot-swappable power
  • x86 COMEX Broadwell CPU
  • NVIDIA In-Network Computing for offloading collective operations from the CPU to the switch network
  • Adaptive routing (AR)
  • Congestion control

Benefits

  • Industry-leading switch platform in performance, power, and density
  • Designed for energy and cost savings
  • RDMA over long distance
  • Collective communication acceleration
  • Maximizes performance by removing fabric congestion
  • Quick and easy setup and management

NVIDIA MetroX-2 Series Specifications

                                TQ8100-HS2F         TQ8200-HS2F
  Height                        1 RU                1 RU
  Long-Haul Ports               2                   2
  Downlink Ports                8                   8
  Long-Haul Port Speed          100Gb/s EDR         100Gb/s EDR
  Downlink Port Speed           200Gb/s HDR         200Gb/s HDR
  Total Throughput              3.6Tb/s             3.6Tb/s
  Distance                      10km                40km
  Transceiver                   MMA1L10-CR (LR4)    SPQ-CE-ER-CDFL-M (ER4)
  Power Supply Unit Redundancy  Yes                 Yes
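As a cross-check (our own arithmetic, assuming the total-throughput figure counts both directions of traffic), the quoted 3.6Tb/s is consistent with the sum of the port bandwidths:

\[
\left(2 \times 100\,\text{Gb/s} + 8 \times 200\,\text{Gb/s}\right) \times 2\ \text{(bidirectional)} = 3.6\,\text{Tb/s}
\]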

Resources

See how you can build the most efficient, high-performance network.

Configure Your Cluster

Take Networking Courses

Ready to Purchase?