NVIDIA Mellanox Releases Next-Generation 400G InfiniBand Products at SC20

Abstract: NVIDIA launched the next generation of NVIDIA® Mellanox® 400G InfiniBand products at SC20, giving AI developers and researchers the fastest network interconnect performance available to tackle the world’s most challenging problems.

With demand for computing growing exponentially in fields such as drug research and development, climate research, and genomics, NVIDIA Mellanox 400G InfiniBand delivers a major leap in performance through the world’s only fully hardware-offloaded, in-network computing platform, accelerating the progress of this research.

The seventh-generation Mellanox InfiniBand NDR 400Gb/s products provide ultra-low latency and double the data throughput of the previous generation, while adding new NVIDIA In-Network Computing engines for further acceleration. The newly released Mellanox 400G InfiniBand solution is being integrated into enterprise-grade products from the world’s leading infrastructure manufacturers, including Atos, Dell Technologies, Fujitsu, Inspur, Lenovo, and Supermicro. Leading storage infrastructure partners, including DDN, IBM Storage, and other storage vendors, also plan to support NDR.

“The most important work of our AI customers is processing increasingly complex applications, which requires faster, smarter, and more scalable networks,” said Gilad Shainer, Senior Vice President of Networking at NVIDIA. “NVIDIA Mellanox 400G InfiniBand’s massive data throughput and intelligent acceleration engines enable HPC, AI, and hyperscale cloud infrastructures to achieve unmatched performance at lower cost and complexity.”

NVIDIA Mellanox 400G InfiniBand

Today’s newly released Mellanox InfiniBand represents the industry’s most powerful network solution for AI supercomputing. Mellanox NDR 400G InfiniBand switches provide 3 times the port density and 32 times the AI acceleration capability of the previous generation. They also increase the aggregated bidirectional throughput of the modular switch systems 5-fold to 1.64 petabits per second, enabling users to run larger workloads with fewer switches.

Expand the Ecosystem for More Applications

Some of the world’s largest scientific research institutions were among the first to express interest in the next generation of Mellanox InfiniBand technology. “The partnership between Microsoft Azure and NVIDIA networking stems from our shared passion for helping scientists and researchers innovate through scalable HPC and AI systems. In HPC, Azure HBv2 VMs were the first to bring HDR InfiniBand to the cloud and have achieved supercomputing scale and performance in the cloud for customers’ MPI applications, demonstrated by scaling MPI HPC applications to more than 80,000 cores.

For its AI ambitions, Azure NDv4 VMs also take full advantage of HDR InfiniBand, with 200 Gb/s of bandwidth dedicated to each GPU. Each VM reaches a total interconnect bandwidth of 1.6 Tb/s and can scale to thousands of GPUs on an InfiniBand network with guaranteed low latency, bringing AI supercomputing to a wide range of fields. Microsoft appreciates the continuous innovation in NVIDIA’s InfiniBand product portfolio, and we look forward to continuing our close partnership,” remarked Nidhi Chappell, Microsoft Azure HPC and AI product manager.

Steve Poole, chief architect of next-generation platforms at Los Alamos National Laboratory in the United States, said, “High-performance interconnect technology is the foundation of exascale and even faster supercomputers. Los Alamos National Laboratory is committed to staying at the forefront of HPC networking technology. We will continue to work with NVIDIA to evaluate and analyze its latest 400Gb/s technology against the diverse application requirements of Los Alamos National Laboratory.”

“In the new era of exascale computing, researchers and scientists strive to make breakthroughs by applying mathematical modeling to fields such as quantum chemistry, molecular dynamics, and civil security. We are committed to building on our success with Europe’s next generation of leading supercomputers with the next generation of Mellanox InfiniBand,” stated Professor Thomas Lippert, head of the Jülich Supercomputing Center.

“InfiniBand continues to extend its lead in innovation and performance, making it essential for high-performance server and storage interconnects in HPC and AI systems. As application throughput requirements keep rising, demand for high-performance solutions such as NVIDIA Mellanox NDR 400Gb/s InfiniBand is also expected to keep expanding into new use cases and markets,” said Addison Snell, CEO of Intersect360 Research.

Product Specifications and Availability

Offload operations are crucial to AI applications. Third-generation NVIDIA Mellanox SHARP technology enables the InfiniBand network to offload and accelerate the collective operations used in deep learning training, increasing AI acceleration capability by 32 times. Combined with the NVIDIA Magnum IO software stack, it works out of the box to accelerate scientific computing.
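To make the offload idea more concrete, below is a minimal conceptual sketch in Python of in-network (switch-side) reduction, the principle behind SHARP. It only illustrates why moving the reduction into the fabric cuts per-host traffic; it is not the SHARP, UCC, or NCCL API, and the message-count model is a deliberate simplification.

    # Conceptual sketch of in-network (switch-side) reduction, the idea behind SHARP.
    # Illustration only; this is not the SHARP/NCCL API.
    from typing import List

    def reduce_vectors(vectors: List[List[float]]) -> List[float]:
        """Element-wise sum of equal-length vectors (the core of an allreduce)."""
        result = [0.0] * len(vectors[0])
        for vec in vectors:
            for i, value in enumerate(vec):
                result[i] += value
        return result

    def messages_per_host_ring_allreduce(num_hosts: int) -> int:
        # Host-based ring allreduce: each host sends and receives
        # 2 * (num_hosts - 1) chunk-sized messages.
        return 2 * (num_hosts - 1)

    def messages_per_host_in_network(num_hosts: int) -> int:
        # Switch aggregation: each host sends its data once and receives the
        # aggregated result once; the reduction itself runs inside the fabric.
        return 2

    if __name__ == "__main__":
        grads = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]  # toy per-GPU gradients
        print(reduce_vectors(grads))                  # approximately [0.9, 1.2]
        for n in (8, 64, 1024):
            print(n, "hosts:", messages_per_host_ring_allreduce(n),
                  "msgs per host (ring) vs",
                  messages_per_host_in_network(n), "msgs per host (in-network)")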

The total bidirectional throughput of edge switches based on the Mellanox InfiniBand architecture reaches 51.2 Tb/s, a milestone processing capacity of more than 66.5 billion packets per second. Modular switches based on Mellanox InfiniBand reach 1.64 petabits per second of aggregated bidirectional throughput, 5 times that of the previous generation.
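As a back-of-the-envelope check of the edge-switch figure, assuming a 64-port NDR switch (the port count is not stated above):

    # Sanity check of the edge-switch throughput figure quoted above.
    # Assumption (not stated in the article): the NDR edge switch has 64 ports.
    ports = 64
    ndr_gbps = 400                        # NDR line rate per port, Gb/s per direction
    unidirectional = ports * ndr_gbps     # 25,600 Gb/s = 25.6 Tb/s
    bidirectional = 2 * unidirectional    # 51,200 Gb/s = 51.2 Tb/s
    print(bidirectional / 1000, "Tb/s")   # 51.2, matching the figure above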

The Mellanox InfiniBand architecture is compatible across previous and next-generation products and complies with industry standards, protecting data center investments. Solutions based on this architecture are expected to sample in the second quarter of 2021.
