Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly parallelized algorithms. As these computing requirements continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing platform—provides the dramatic leap in performance needed to achieve unmatched data center performance with less cost and complexity.
NVIDIA Quantum InfiniBand switches deliver a complete switch system and fabric management portfolio for connecting cloud-native supercomputing at any scale. NVIDIA Quantum InfiniBand also offers self-healing network capabilities, enhanced quality of service (QoS), congestion control, and adaptive routing to provide the highest overall application throughput.
The NVIDIA Quantum InfiniBand family of fixed-configuration switch systems provides the highest-performing fabric solutions in a 1U form factor with up to 64 ports of 400Gb/s non-blocking bandwidth and extremely low port-to-port latency. These switches are ideal for top-of-rack leaf connectivity or for building data center switch networks at any scale.
The NVIDIA Quantum InfiniBand family of modular switches provides low latency and the highest density, scaling up to 2,048 ports of 400Gb/s non-blocking bandwidth in a single enclosure. Its smart design, with hot-swappable components, delivers unprecedented levels of performance and simplifies the building of data centers that can scale out to over a million nodes.
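The scaling figures above follow standard non-blocking fat-tree (folded-Clos) arithmetic: a fabric built from radix-k switches supports 2 × (k/2)^L end ports across L switch levels. The sketch below applies that textbook formula to 64-port switches; the function name `fat_tree_hosts` is illustrative, not an NVIDIA API.

```python
def fat_tree_hosts(radix: int, levels: int) -> int:
    """End ports of a non-blocking fat-tree (folded-Clos) fabric
    built from switches with `radix` ports each, across `levels`
    switch tiers: 2 * (radix/2) ** levels."""
    return 2 * (radix // 2) ** levels

# With 64-port 400Gb/s switches:
print(fat_tree_hosts(64, 2))  # 2-tier leaf/spine fabric
print(fat_tree_hosts(64, 3))  # 3-tier fabric
print(fat_tree_hosts(64, 4))  # 4-tier fabric, > 1M end ports
```

Two tiers of 64-port switches already yield 2,048 non-blocking end ports (matching the modular enclosure's capacity), and four tiers exceed the million-node scale mentioned above.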
NVIDIA Quantum-2 InfiniBand switches deliver massive throughput, In-Network Computing, smart acceleration engines, flexibility, and a robust architecture to achieve unmatched performance in high-performance computing (HPC), AI, and hyperscale cloud infrastructures—with less cost and complexity.
NVIDIA Quantum InfiniBand switches provide high-bandwidth performance, low power, and scalability, reducing capital and operating expenses and providing the best return on investment. NVIDIA Quantum switches optimize data center connectivity with advanced routing and congestion avoidance capabilities.
The NVIDIA SB7800 switch series delivers 100Gb/s of full bidirectional bandwidth per port. The 100Gb/s InfiniBand switch systems provide cost-effective building blocks for deploying high-performance data centers.
The NVIDIA SB7880 InfiniBand router enables isolation and connectivity between up to six different InfiniBand subnets. The router is based on Switch-IB 2 and offers 36 fully flexible 100Gb/s ports. The connected InfiniBand subnets can have different network topologies to maximize application performance.
NVIDIA MLNX-OS® is an InfiniBand switch operating system for high-performance data centers. Building networks with MLNX-OS enables scaling to thousands of compute and storage nodes, and provides monitoring and provisioning capabilities.
NVIDIA Quantum InfiniBand switches include the Scalable Hierarchical Aggregation and Reduction Protocol (SHARP). SHARP offloads and accelerates data reduction algorithms, increasing the performance and scalability of HPC and AI applications.
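The idea behind hierarchical aggregation can be illustrated with a short sketch: instead of one endpoint collecting every value, each "switch" in a tree combines the partial results of its children and forwards a single value upward, so the reduction completes in the network rather than at the hosts. This is a conceptual illustration only, not SHARP code; `tree_reduce` and its parameters are assumed names.

```python
from functools import reduce

def tree_reduce(values, radix=2, op=lambda a, b: a + b):
    """Conceptual in-network reduction: at each tree level, every
    'switch' aggregates the partial results of up to `radix` children
    and forwards one combined value upward, until the root holds the
    final reduced result."""
    level = list(values)
    while len(level) > 1:
        level = [reduce(op, level[i:i + radix])
                 for i in range(0, len(level), radix)]
    return level[0]

# Sum 8 endpoint values through a binary switch tree.
print(tree_reduce(range(8)))          # same result as sum(range(8))
# The same tree can carry other reduction operators, e.g. max.
print(tree_reduce([3, 7, 1, 9], op=max))
```

Each switch forwards one value per subtree regardless of how many endpoints sit below it, which is why offloading the reduction keeps traffic and host CPU work flat as the job scales.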
NVIDIA HPC-X® is a comprehensive MPI and SHMEM/PGAS software suite. HPC-X leverages InfiniBand In-Network Computing and acceleration engines to optimize research and industry applications.
The NVIDIA UFM® platform empowers data center administrators to efficiently provision, monitor, manage, and proactively troubleshoot their InfiniBand network infrastructure.
- Socket Direct
- UFM Cyber-AI
- InsideHPC interview with Gilad for ISC21
- QM9700
- QM8700
- CS8500
- SB7800
- InfiniBand Product Guide
- NVIDIA GPUDirect RDMA
- NVIDIA Quantum-2 InfiniBand Platform
- HPC and Bioscience
See how you can build the most efficient, high-performance network.