SB7780/7880 InfiniBand Series

Switch-IB 2 EDR 100Gb/s InfiniBand Router Switch

The unprecedented growth of the global HPC market, coupled with the insatiable demands of Exascale ecosystems around the world, has caused a rapid increase in the number of connected servers. The NVIDIA Mellanox SB7780/SB7880 InfiniBand Routers allow fabrics to scale to an unlimited number of nodes while sustaining the data processing demands of machine learning, IoT, HPC, and cloud applications.

Resiliency and Ease of Scale

The InfiniBand router brings two major enhancements to the Mellanox switch portfolio. First, to increase resiliency, the router segregates the data center network into several subnets. Each subnet runs its own subnet manager, effectively isolating each subnet from instability in the others. Second, the router enables the fabric to scale up to an unlimited number of nodes.

NVIDIA Mellanox switch portfolio with SB7780/SB7880 InfiniBand routers

Highlights

World-Class InfiniBand Performance

NVIDIA Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) In-Network Computing technology

In-Network Computing

NVIDIA Mellanox InfiniBand Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ In-Network Computing offloads collective communication operations from the CPU to the switch network, improving application performance by an order of magnitude.

Self-Healing Networking

NVIDIA Mellanox InfiniBand with self-healing networking capabilities overcomes link failures and achieves network recovery 5,000X faster than any software-based solution, enhancing system performance, scalability, and network utilization.

NVIDIA Mellanox Unified Fabric Management (UFM)

UFM Management

NVIDIA Mellanox Unified Fabric Management (UFM®) platforms combine enhanced, real-time network telemetry with AI-powered cyber intelligence and analytics to deliver higher utilization of fabric resources and a competitive advantage, while reducing OPEX.

Key Features

Standard Switches

  • 36x EDR 100Gb/s ports in a 1U switch
  • Up to 7.2Tb/s aggregate switch throughput
  • Ultra-low latency between InfiniBand subnets
  • Compliant with InfiniBand Trade Association (IBTA) specifications 1.3 and 1.2.1
  • Quality-of-service enforcement
  • 1+1 power supply
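
The 7.2Tb/s aggregate throughput figure follows directly from the port count and link speed. A quick sanity check, assuming the usual convention that switching capacity counts both directions of each full-duplex port:

```python
# Aggregate switch throughput: 36 EDR ports at 100 Gb/s each,
# counted bidirectionally (x2 for full duplex), expressed in Tb/s.
ports = 36
port_speed_gbps = 100  # EDR InfiniBand link speed

aggregate_tbps = ports * port_speed_gbps * 2 / 1000
print(aggregate_tbps)  # 7.2
```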

Managed Switches

  • Integrated subnet manager agent (up to 2k nodes)
  • Quick and easy setup and management
  • Intuitive CLI
  • Can be enhanced with NVIDIA Mellanox Unified Fabric Management (UFM®)
  • Temperature sensors and voltage monitors
  • Fan speed controlled by management software

Benefits

  • Industry-leading switch platform in performance, power, and density
  • Designed for next level of scale and resiliency
  • Designed for energy and cost savings
  • Quick and easy setup and management
  • Flexible port allocation to support up to six different InfiniBand subnets

InfiniBand Routers Comparison Table

  Model    Link Speed  Switch ASIC  Ports  Height  Switching Capacity  Cooling     Interface  PSUs  Management           Subnet Manager
  SB7780   100Gb/s     Switch-IB    36     1U      7.2Tb/s             Air-cooled  QSFP28     2     In-band/Out-of-band  +
  SB7880   100Gb/s     Switch-IB 2  36     1U      7.2Tb/s             Air-cooled  QSFP28     2     In-band/Out-of-band  +

Resources

See how you can build the most efficient, high-performance network.

Configure Your Cluster

Take Networking Courses

Ready to Purchase?