NVIDIA Mellanox InfiniBand Switches

40/56/100/200 Gb/s Port Speeds

NVIDIA® Mellanox® smart InfiniBand switch systems deliver the highest performance and port density for high performance computing (HPC), AI, Web 2.0, big data, clouds, and enterprise data centers. Support for configurations from 36 to 800 ports at up to 200 Gb/s per port allows compute clusters and converged data centers to operate at any scale, reducing operational costs and infrastructure complexity.

World-Class InfiniBand Performance

NVIDIA's family of InfiniBand switches delivers complete chassis and fabric management, enabling managers to build highly cost-effective and scalable switch fabrics ranging from small clusters up to tens of thousands of nodes. Features like static routing, adaptive routing, and congestion management ensure maximum effective fabric performance under all types of traffic conditions.

Edge Switches

NVIDIA's family of edge switch systems provides the highest-performing fabric solutions in a 1RU form factor, delivering up to 16 Tb/s of non-blocking aggregate bandwidth with extremely low port-to-port latency. Each port supports up to 200 Gb/s (QSFP connector) of full bidirectional bandwidth. These edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. NVIDIA's edge switches are offered as managed or externally managed to suit a variety of deployment scenarios.

Director Switches

NVIDIA’s InfiniBand family of director switches provides the highest-density switching solution, scaling from 43 Tb/s up to 320 Tb/s of bandwidth in a single enclosure, with low latency and per-port speeds of up to 200 Gb/s. Their smart design delivers unprecedented levels of performance and simplifies the building of clusters that can scale out to thousands of nodes. Moreover, the leaf blades, spine blades, management modules, power supplies, and fan units are all hot-swappable, eliminating downtime.

Products

NVIDIA Mellanox QM8700 InfiniBand 40-port Non-blocking HDR Switch Series

NVIDIA Mellanox provides the world’s fastest and smartest switches, enabling in-network computing through Mellanox SHARP technology. The Quantum QM8700 series offers the highest fabric performance available on the market, with up to 16 Tb/s of non-blocking bandwidth and sub-130 ns port-to-port latency. Built with the Mellanox Quantum InfiniBand switch device, the QM8700 series provides up to forty 200 Gb/s ports, each with full bidirectional bandwidth. The QM8700 switch is available in both managed and externally managed models.

NVIDIA Mellanox SB7800 InfiniBand Switch Series

Based on the NVIDIA Mellanox Switch-IB® 2 device, the Mellanox SB7800 switch series provides up to thirty-six 100 Gb/s ports, each with full bidirectional bandwidth. The SB7800 switch systems are smart network switches that enable in-network computing through Mellanox SHARP technology. This architecture enables all active data center devices to accelerate the communications frameworks using embedded hardware, resulting in order-of-magnitude application performance improvements. The SB7800 switch is available in both managed and externally managed models.

NVIDIA Mellanox SB7780 InfiniBand Router

The NVIDIA Mellanox SB7780 InfiniBand router enables a new level of scalability and isolation that is critical for the next generation of data centers. Based on the Switch-IB switch ASIC, the SB7780 offers 36 fully flexible EDR 100 Gb/s ports, which can be split among up to six different subnets. The SB7780 can connect subnets built on different topologies, enabling each subnet’s topology to best fit its applications and maximize performance. For example, the storage subnets may use a Fat-Tree topology while the compute subnets use 3D-Torus, DragonFly+, Fat-Tree, or whichever topology best fits the local applications. The SB7780 can also help split the cluster in order to segregate applications that run best on localized resources from applications that need the full fabric.

NVIDIA Mellanox Quantum CS8500 HDR Director Switch Series

The NVIDIA Mellanox CS8500 switch series provides up to 800 ports of HDR 200 Gb/s InfiniBand, enabling the fastest interconnect speeds for high-performance computing and cloud infrastructures. As the world’s smartest network switch, the CS8500 delivers an optimally performing fabric solution in a 29U form factor, with 320 Tb/s of full bidirectional bandwidth and ultra-low port latency. The CS8500 combines the advantages of Mellanox SHARP-based in-network computing, adaptive routing, congestion control, and more to enable order-of-magnitude HPC and AI application performance improvements and extremely high scalability. The CS8500 also leverages Mellanox SHIELD™ (Self-Healing Interconnect Enhancement for Intelligent Datacenters) to overcome link failures, enabling network recovery 5,000x faster than any software-based solution.

NVIDIA Mellanox CS7500 InfiniBand Director Switch Series

The NVIDIA Mellanox CS7500 series of smart director switches provides up to 648 ports of EDR 100 Gb/s InfiniBand, enabling the highest-performing fabric solution for enterprise data centers and high-performance computing environments in up to a 28U form factor. Networks built on the CS7500 series deliver 130 Tb/s of full bidirectional bandwidth with 400 ns port latency, and carry converged traffic combining assured bandwidth and granular quality of service. CS7500 smart network switches also enable in-network computing using Mellanox SHARP software, an architecture that accelerates the communications frameworks using embedded hardware, resulting in order-of-magnitude application performance improvements. The CS7500 series is available in three port configurations: 648, 324, and 216 ports.

InfiniBand Switch Software

MLNX-OS is NVIDIA’s InfiniBand switch operating system for data centers running storage, enterprise, high-performance computing, machine learning, and cloud fabrics. Networks built with MLNX-OS scale to thousands of compute and storage nodes, with InfiniBand monitoring and provisioning capabilities. Tailored for data centers, MLNX-OS provides a robust bridging package and a complete solution for lossy and lossless networks.
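
For a sense of how MLNX-OS monitoring can be scripted, the sketch below polls a switch over SSH. The hostname, credentials, and the exact show command are illustrative assumptions rather than a documented recipe; verify the CLI syntax against your MLNX-OS release.

    # Minimal monitoring sketch against an MLNX-OS switch over SSH.
    # The hostname, credentials, and CLI command are illustrative
    # assumptions; confirm the exact syntax for your MLNX-OS release.
    import paramiko

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("switch.example.com", username="admin", password="password")

    # Run a representative "show" command and print its output.
    stdin, stdout, stderr = client.exec_command("show interfaces ib status")
    print(stdout.read().decode())

    client.close()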

ScalableHPC Software

The HPC-X ScalableHPC® Toolkit is a comprehensive MPI and SHMEM/PGAS software suite for high performance computing environments. HPC-X provides enhancements to significantly increase the scalability and performance of message communications in the network. HPC-X enables you to rapidly deploy and deliver maximum application performance without the complexity and costs of licensed third-party tools and libraries.
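
To make the MPI usage concrete, below is a minimal sketch of an allreduce, the collective pattern that SHARP-capable fabrics can offload to the switches. It assumes Python with mpi4py and NumPy installed and an MPI launcher such as the mpirun shipped with HPC-X; enabling SHARP offload itself is configuration-specific, so consult the HPC-X documentation.

    # Minimal MPI allreduce sketch (assumes mpi4py and NumPy are installed).
    # Run under an MPI launcher, e.g.: mpirun -np 8 python allreduce.py
    import numpy as np
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Each rank contributes a local buffer; on a SHARP-enabled fabric
    # the reduction itself can execute in-network on the switches.
    local = np.full(4, rank, dtype=np.float64)
    result = np.empty(4, dtype=np.float64)
    comm.Allreduce(local, result, op=MPI.SUM)

    if rank == 0:
        print("sum across ranks:", result)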

Unified Fabric Manager (UFM)

The UFM platform empowers research and industrial data center operators to efficiently provision, monitor, manage, and proactively troubleshoot modern data center fabrics. From workload optimizations and configuration checks to improving fabric performance through AI-based detection of network anomalies and predictive maintenance, UFM provides a comprehensive feature set that meets the broadest range of modern scale-out data center requirements.
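
UFM also exposes its monitoring data programmatically through a REST API. The sketch below polls a fabric inventory; the host, credentials, endpoint path, and response fields are illustrative assumptions, so check the UFM REST API documentation for your release.

    # Illustrative sketch: poll the fabric inventory from UFM's REST API.
    # The host, credentials, endpoint, and JSON field names below are
    # assumptions for the example, not a documented contract.
    import requests

    UFM_HOST = "https://ufm.example.com"      # hypothetical UFM address
    ENDPOINT = "/ufmRest/resources/systems"   # verify against your UFM docs

    resp = requests.get(
        UFM_HOST + ENDPOINT,
        auth=("admin", "password"),           # placeholder credentials
        verify=False,                         # lab setups often use self-signed certs
    )
    resp.raise_for_status()

    # Print each managed system (switch or host) and its reported state.
    for system in resp.json():
        print(system.get("system_name"), system.get("state"))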

Resources

We're here to help you build the most efficient, high-performance network.

Configuration Tools

Academy Online Courses
