Bring together remote resources with expanded InfiniBand interconnectivity
Using InfiniBand across multiple, geographically distributed data centers requires high-volume, remote direct-memory access (RDMA)-based data connectivity. NVIDIA® Mellanox® MetroX®-2 systems, based on the NVIDIA Mellanox Quantum™ HDR 200Gb/s InfiniBand switch, extend InfiniBand to data centers up to 40 kilometers apart, enabling data center expansion, rapid disaster recovery, and improved utilization of remote storage and compute infrastructure.
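For distance planning, note that light in standard single-mode fiber propagates at roughly 5 microseconds per kilometer, so a full 40-kilometer span adds on the order of 200 microseconds of one-way propagation delay on top of switch latency; this is a rule-of-thumb fiber estimate, not an NVIDIA specification.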
Designed for today’s business-continuity and disaster-recovery needs, MetroX-2 aggregates data and storage networking over a single, consolidated fabric. As a cost-effective, power-efficient, and scalable long-haul solution, NVIDIA MetroX-2 delivers high-performance, high-volume data sharing between remote InfiniBand sites, easily managed as a single unified network fabric.
LINK SPEED: 2x 100Gb/s
NUMBER OF PORTS: 2 EDR / 8 HDR
MAX. THROUGHPUT: 3.6Tb/s
CHASSIS SIZE: 1U
POWER CONSUMPTION (ATIS): 253W
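The 3.6Tb/s maximum throughput is consistent with counting every port in both directions: 8 HDR ports at 200Gb/s plus 2 EDR long-haul ports at 100Gb/s gives 1.8Tb/s per direction, or 3.6Tb/s aggregate. This breakdown is inferred from the port counts above rather than taken from an official calculation.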
MetroX-2 leverages low-latency RDMA connectivity to let remote data centers share storage for independent computing and disaster recovery.
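Because the long-haul link presents itself as standard InfiniBand, ordinary RDMA tooling works end to end across sites. As a minimal illustration (not MetroX-specific, and assuming a Linux host with the rdma-core libibverbs library and headers installed), the following C sketch enumerates the local InfiniBand devices and prints each port's link state and active speed code, the kind of sanity check an operator might run on hosts at either site before sharing storage across the link:

```c
/* List local InfiniBand devices and report per-port link state and speed.
 * Generic libibverbs example; assumes rdma-core is installed.
 * Build: gcc list_ib_ports.c -o list_ib_ports -libverbs
 */
#include <stdio.h>
#include <stdint.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);
    if (!devices || num_devices == 0) {
        fprintf(stderr, "No RDMA devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            /* InfiniBand port numbering starts at 1 */
            for (uint8_t port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr) == 0) {
                    printf("%s port %d: state=%s width=%d speed_code=%d\n",
                           ibv_get_device_name(devices[i]), port,
                           ibv_port_state_str(port_attr.state),
                           port_attr.active_width, port_attr.active_speed);
                }
            }
        }
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devices);
    return 0;
}
```

Full RDMA data paths (memory registration, queue pairs, RDMA reads and writes) build on this same verbs API.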
Its two EDR InfiniBand long-haul ports, equipped with LR4/ER4 transceivers, make it easy to interconnect remote InfiniBand data centers.
MetroX-2’s optimized design leverages InfiniBand-based self-healing capabilities to overcome link failures and achieve network recovery 5,000X faster than any software-based solution—enhancing system performance, scalability, and network utilization.
NVIDIA MetroX-2 systems can be coupled with NVIDIA Mellanox Unified Fabric Management (UFM®) to manage scale-out computing environments, efficiently provisioning, monitoring, and operating the modern data center fabric.
NVIDIA MetroX-2 Product Brief
NVIDIA MetroX-2 HDR 200Gb/s InfiniBand Switch
UFM—Software for Data Center Management
MLNX-OS—Integrated Switch Management Solution
See how you can build the most efficient, high-performance network.