High-performance computing (HPC) and AI applications require the most advanced high-speed networking. NVIDIA InfiniBand switches provide the highest performance and smart In-Network Computing acceleration engines, enabling world-leading supercomputing platforms.
NVIDIA's family of InfiniBand switches delivers complete chassis and fabric management, enabling administrators to build highly cost-effective and scalable switch fabrics that range from small clusters up to tens of thousands of nodes. Features such as adaptive routing, congestion control, and quality of service ensure maximum effective fabric performance under all traffic conditions.
NVIDIA's family of edge switch systems provides the highest-performing fabric solutions in a 1U form factor by delivering up to 51.2 terabits per second (Tb/s) of non-blocking bandwidth with extremely low port-to-port latency. Supporting up to 400Gb/s per port, these edge switches are an ideal choice for top-of-rack leaf connectivity or for building small to medium-sized clusters. NVIDIA's edge switches are offered as managed or externally managed models to meet a variety of deployment scenarios.
NVIDIA's family of InfiniBand director switches provides the highest-density switching solution, scaling from 43Tb/s up to 320Tb/s of bandwidth in a single enclosure, with low latency and per-port speeds of up to 200Gb/s. Their smart design delivers unprecedented levels of performance and simplifies the building of clusters that can scale out to thousands of nodes. Moreover, the leaf and spine blades, management modules, power supplies, and fan units are all hot-swappable, eliminating downtime.
NVIDIA InfiniBand provides AI developers and scientific researchers with the highest networking performance to take on the world’s most challenging problems. Next data rate (NDR) 400Gb/s InfiniBand with new NVIDIA In-Network Computing acceleration engines provides ultra-low latency while delivering the scalability and feature-rich capabilities required for supercomputers, artificial intelligence, and hyperscale cloud data centers.
NVIDIA provides the world's fastest and smartest switches, enabling in-network computing through NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ technology. The NVIDIA Quantum QM8700 series offers the highest fabric performance available in the market, with up to 16Tb/s of non-blocking bandwidth and sub-130-nanosecond (ns) port-to-port latency. Built with the NVIDIA Quantum InfiniBand switch device, the QM8700 series provides up to 40 ports of 200Gb/s full bi-directional bandwidth per port. Additionally, the QM8700 switch is available in both managed and externally managed models.
Based on the NVIDIA Switch-IB® 2 device, the NVIDIA SB7800 switch series provides up to 36 ports of 100Gb/s full bi-directional bandwidth per port. The SB7800 switch systems are smart network switches that enable in-network computing through NVIDIA SHARP technology. This architecture enables all active data center devices to accelerate the communications frameworks using embedded hardware, resulting in order-of-magnitude application performance improvements. The SB7800 switch is available in both managed and externally managed models.
The NVIDIA SB7780 InfiniBand router enables a new level of scalability and isolation that's critical for the next generation of data centers. The SB7780 is based on the Switch-IB application-specific integrated circuit (ASIC) and offers 36 fully flexible enhanced data rate (EDR) 100Gb/s ports, which can be split among six different subnets. The SB7780 can connect subnets built on different types of topologies, enabling each subnet's topology to best fit and maximize its application's performance. For example, storage subnets may use a Fat-Tree topology, while compute subnets may use 3D-torus, DragonFly+, Fat-Tree, or other topologies that best fit the local application. The SB7780 can also help partition the cluster, segregating applications that run best on localized resources from applications that need the full fabric.
The NVIDIA CS8500 switch series provides up to 800 ports of high data rate (HDR) 200Gb/s InfiniBand, enabling the fastest interconnect speed for HPC and cloud infrastructures. The world's smartest network switches, CS8500 systems deliver an optimally performing fabric solution in a 29U form factor, with 320Tb/s of full bi-directional bandwidth and ultra-low port latency. The CS8500 combines the advantages of NVIDIA's SHARP-based in-network computing, adaptive routing, congestion control, and more to enable order-of-magnitude HPC and AI application performance improvements and extremely high scalability. The CS8500 also leverages NVIDIA's SHIELD™ (Self-Healing Interconnect Enhancement for Intelligent Data Centers) to overcome link failures, enabling network recovery 5,000X faster than any software-based solution.
The NVIDIA CS7500 series of smart director switches provides up to 648 ports of EDR 100Gb/s InfiniBand, enabling a high-performing fabric solution for HPC environments in an up to 28U form factor. Networks built on the CS7500 series deliver 130Tb/s of full bi-directional bandwidth with 400ns port latency and carry converged traffic combining assured bandwidth and granular quality of service. CS7500 smart network switches also enable in-network computing using NVIDIA SHARP software, an architecture that accelerates the communications frameworks using embedded hardware, resulting in order-of-magnitude application performance improvements. The CS7500 series is available in three different port configurations: 648, 324, and 216 ports.
NVIDIA MLNX-OS® is NVIDIA's InfiniBand switch operating system for data centers spanning storage, enterprise, high-performance computing, machine learning, and cloud fabrics. Networks built with MLNX-OS can scale to thousands of InfiniBand compute and storage nodes, with full monitoring and provisioning capabilities. Tailored for data centers, MLNX-OS provides a robust bridging package and a complete solution for lossy and lossless networks.
NVIDIA SHARP improves the performance of message passing interface (MPI) operations by offloading them from the CPU to the switch network and eliminating the need to send data multiple times, decreasing the amount of data traversing the network and dramatically reducing MPI operation time.
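To make the offload concrete, the following is a minimal, generic MPI sketch in C of the kind of collective operation SHARP targets. It uses only the standard MPI API and nothing NVIDIA-specific; whether the reduction actually runs inside the switches depends on the MPI stack and the fabric being configured for SHARP.

/* Generic MPI_Allreduce: every rank contributes a partial value and
 * receives the global sum. Without in-network computing, the partial
 * results are exchanged and reduced on the host CPUs; with SHARP
 * enabled in the MPI stack, the reduction tree runs in the switches,
 * so the data crosses the network only once. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double local = (double)rank;   /* this rank's contribution */
    double global_sum = 0.0;

    MPI_Allreduce(&local, &global_sum, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum over %d ranks = %f\n", size, global_sum);

    MPI_Finalize();
    return 0;
}

Note that the application code does not change when SHARP is enabled; the offload is applied transparently by the MPI library and the fabric.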
NVIDIA HPC-X® is a comprehensive MPI and SHMEM/PGAS software suite for high-performance computing environments. HPC-X provides enhancements that significantly increase the scalability and performance of message communications in the network. HPC-X enables rapid deployment and maximum application performance without the complexity and costs of licensed third-party tools and libraries.
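To illustrate the SHMEM/PGAS side of the suite, here is a minimal sketch in C that uses only the standard OpenSHMEM API rather than any HPC-X-specific interface. It shows the one-sided, symmetric-heap communication model that a SHMEM library such as the one in HPC-X implements.

/* Each processing element (PE) allocates the same buffer on its
 * symmetric heap, then writes its own ID into the next PE's buffer
 * with a one-sided put; no receive call is needed on the target. */
#include <shmem.h>
#include <stdio.h>

int main(void)
{
    shmem_init();

    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    long *slot = (long *)shmem_malloc(sizeof(long)); /* symmetric allocation */
    *slot = -1;
    shmem_barrier_all();

    int target = (me + 1) % npes;
    shmem_long_p(slot, (long)me, target);  /* one-sided put to the next PE */

    shmem_barrier_all();
    printf("PE %d received %ld from PE %d\n", me, *slot, (me - 1 + npes) % npes);

    shmem_free(slot);
    shmem_finalize();
    return 0;
}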
The NVIDIA UFM® platform empowers research and industrial data center operators to efficiently provision, monitor, manage, and proactively troubleshoot modern data center fabrics. From workload optimizations and configuration checks to improving fabric performance through AI-based detection of network anomalies and predictive maintenance, UFM provides a comprehensive feature set to meet the broadest range of modern scale-out data center requirements.
HDR 200G InfiniBand Wins the International Supercomputing Conference 2019
HDR InfiniBand Speeds HPC and AI Applications with SHARP Technologies
InfiniBand In-Network Computing Technology and Roadmap
NVMe over Fabrics Solutions
Socket Direct
UFM Platforms Optimize Supercomputing OPEX
QM9700
QM8700 (Managed)
QM8790
CS8500
SB7800 (Managed)
SB7890
SB7780 Router
CS7500 (648-Port)
CS7510
CS7520
InfiniBand Product Guide
NVIDIA QUANTUM IC
SWITCH-IB 2 IC
Accelerate Your Business with Deep Learning
NVIDIA and Enmotech Create Industry-Leading, High-Performance, and Open, All-in-One Machine
Manufacturing
AI Composability and Virtualization: Network Attached GPUs
IBM and NVIDIA Enable Highly Available, Elastic Storage for Complex Modeling, Analytics, etc.
Move to 4K, the Right Way!
NVIDIA and Rafael Provide Advanced Machine Learning Platform
Oil and Gas Industry Modeling
HPC for Weather and Astronomy
HPC and Bioscience
Electronic Design Automation (EDA)
NVIDIA In-Network Computing and Next-Generation HDR 200G InfiniBand
Maximizing Server Performance with Socket Direct Adapter
Saving Power in the Modern Data Center
See how you can build the most efficient, high-performance network.