The NVIDIA Quantum InfiniBand Platform

Bring end-to-end high-performance networking to scientific computing, AI, and cloud data centers.

InfiniBand Networking Solutions

Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and highly parallelized algorithms. As these computing requirements continue to grow, NVIDIA Quantum InfiniBand—the world’s only fully offloadable, In-Network Computing platform—provides the dramatic leap in performance needed for high-performance computing (HPC), AI, and hyperscale cloud infrastructures, with less cost and complexity.

InfiniBand Adapters

InfiniBand host channel adapters (HCAs) provide ultra-low latency, extreme throughput, and innovative NVIDIA In-Network Computing engines to deliver the acceleration, scalability, and feature-rich technology needed for today's modern workloads.

NVIDIA Data Processing Units (DPUs)

The NVIDIA® BlueField® DPU combines powerful computing, high-speed networking, and extensive programmability to deliver software-defined, hardware-accelerated solutions for the most demanding workloads. From accelerated AI and scientific computing to cloud-native supercomputing, BlueField redefines what’s possible.

InfiniBand Switches

InfiniBand switch systems deliver the highest performance and port density available. Innovative capabilities such as NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ and advanced management features such as self-healing network capabilities, quality of service, enhanced virtual lane mapping, and NVIDIA In-Network Computing acceleration engines provide a performance boost for industrial, AI, and scientific applications.

Routers and Gateway Systems

InfiniBand routers provide the highest scalability and subnet isolation, while InfiniBand-to-Ethernet gateway systems enable a scalable and efficient way to connect InfiniBand data centers to Ethernet infrastructures.

Long-Haul Systems

NVIDIA MetroX® long-haul systems can seamlessly connect remote InfiniBand data centers, storage, and other InfiniBand platforms. They can extend the reach of InfiniBand up to 40 kilometers, enabling native InfiniBand connectivity between remote data centers or between data center and remote storage infrastructures for high availability and disaster recovery.

LinkX InfiniBand Cables and Transceivers

LinkX® cables and transceivers are designed to maximize the performance of HPC networks, providing the high-bandwidth, low-latency, highly reliable connections required between InfiniBand elements.

InfiniBand-Enhanced Capabilities

In-Network Computing

NVIDIA Scalable Hierarchical Aggregation and Reduction Protocol (SHARP) offloads collective communication operations to the switch network, decreasing the amount of data traversing the network, reducing the time of Message Passing Interface (MPI) operations, and increasing data center efficiency.
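The data-reduction benefit can be sketched with a toy traffic model (an illustration, not a published NVIDIA figure). Assuming a power-of-two node count and a non-segmented recursive-doubling allreduce as the host-based baseline, switch-side aggregation cuts the bytes each host injects from log2(N) copies of the buffer down to one:

```python
import math

def recursive_doubling_bytes(n_nodes: int, msg_bytes: int) -> int:
    """Total bytes injected network-wide by a host-based
    recursive-doubling allreduce: every node transmits the
    full message in each of log2(n_nodes) rounds."""
    rounds = int(math.log2(n_nodes))
    return n_nodes * msg_bytes * rounds

def switch_aggregated_bytes(n_nodes: int, msg_bytes: int) -> int:
    """With in-switch aggregation (SHARP-style), each node
    injects its contribution once and the switches combine
    the data in flight; count only the injected bytes."""
    return n_nodes * msg_bytes

# 1,024 nodes reducing a 1 MiB buffer:
n, s = 1024, 1 << 20
print(recursive_doubling_bytes(n, s) // switch_aggregated_bytes(n, s))  # → 10
```

In this simplified model the saving grows as log2(N), which is why collective offload matters most at large scale.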

Self-Healing Network

NVIDIA InfiniBand with self-healing network capabilities overcomes link failures, enabling network recovery 5,000X faster than any other software-based solution. These capabilities take advantage of the intelligence built into the latest generation of InfiniBand switches.

Quality of Service

InfiniBand is the only high-performance interconnect solution with proven quality-of-service capabilities, including advanced congestion control and adaptive routing, resulting in unmatched network efficiency.

Network Topologies

InfiniBand offers centralized management and supports any topology, including Fat Tree, Hypercube, multi-dimensional Torus, and Dragonfly+. Routing algorithms optimize performance for topologies designed around particular application communication patterns.
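As a rough sizing sketch (a common rule of thumb, not a product specification): a non-blocking fat tree built from radix-r switches supports up to 2·(r/2)^L endpoints at L levels, since every switch below the top level splits its ports evenly between downlinks and uplinks. The 40-port radix below is an assumption chosen for illustration:

```python
def fat_tree_max_hosts(radix: int, levels: int) -> int:
    """Maximum endpoints in a non-blocking fat tree of
    `levels` switch tiers built from `radix`-port switches:
    2 * (radix/2)**levels."""
    return 2 * (radix // 2) ** levels

# Assumed 40-port switch radix, for illustration only:
print(fat_tree_max_hosts(40, 2))  # → 800
print(fat_tree_max_hosts(40, 3))  # → 16000
```

Adding a tier multiplies the maximum cluster size by radix/2, which is why large deployments move from two-level to three-level trees.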

Software for Optimal Performance

MLNX_OFED

OFED from the OpenFabrics Alliance has been collaboratively developed and tested by high-performance input/output (IO) manufacturers. NVIDIA MLNX_OFED is an NVIDIA-tested version of OFED.

HPC-X

NVIDIA HPC-X® is a comprehensive MPI and SHMEM/PGAS software suite. HPC-X leverages InfiniBand In-Network Computing and acceleration engines to optimize research and industry applications.

UFM

The NVIDIA Unified Fabric Manager (UFM®) platform empowers data center administrators to efficiently provision, monitor, manage, and proactively troubleshoot their InfiniBand network infrastructure.

Magnum IO

NVIDIA Magnum IO utilizes network IO, In-Network Computing, storage, and IO management to simplify and speed up data movement, access, and management for multi-GPU, multi-node systems.

Resources

  • Solution Briefs
  • Videos

Configure Your Cluster

Take Networking Courses

Ready to Purchase?