- Catherine
FiberMall
Answered on 7:27 am
There are a few different ways to do this (as discussed earlier in this document), summarized below.
i) OSFP-400G-DR4 (or QDD-400G-DR4) to 4 x QSFP-100G-DR over 500m SMF
Connect up to 4 x QSFP-100G-DRs to a single OSFP-400G-DR4 (or QDD-400G-DR4). The QSFP-100G-DR can plug into any Arista 100G QSFP port.

ii) OSFP-400G-XDR4 (or QDD-400G-XDR4) to 4 x QSFP-100G-FR over 2km SMF
Connect up to 4 x QSFP-100G-FRs to a single OSFP-400G-XDR4 (or QDD-400G-XDR4). The QSFP-100G-FR can plug into any Arista 100G QSFP port.

iii) OSFP-400G-PLR4 (or QDD-400G-PLR4) to 4 x QSFP-100G-LR over 10km SMF
Connect up to 4 x QSFP-100G-LRs to a single OSFP-400G-PLR4 (or QDD-400G-PLR4). The QSFP-100G-LR can plug into any Arista 100G QSFP port.

iv) H-O400-4Q100-xM (or H-D400-4Q100) to 4 x QSFP100 ports with active copper DACs, 1m-5m
Connect up to 4 x 100G QSFP ports to a single 400G OSFP or QSFP-DD port. The QSFP end of the active breakout DAC includes a gearbox chip that converts 2 x 50G PAM-4 electrical signals into a 4 x 25G NRZ interface, matching the modulation format used in legacy 100G QSFP ports.
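To make the gearbox conversion concrete, here is a minimal Python sketch of the lane arithmetic (illustrative only, with assumed function names; the lane counts and per-lane rates are the ones quoted above):

```python
# Minimal sketch of the gearbox lane math described above (illustrative only,
# not vendor firmware): 2 lanes x 50 Gb/s PAM-4 on the 400G host side map to
# 4 lanes x 25 Gb/s NRZ on the legacy 100G QSFP side -- same 100 Gb/s total.

def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Total electrical bandwidth carried by a group of lanes."""
    return lanes * gbps_per_lane

host_side = aggregate_gbps(lanes=2, gbps_per_lane=50.0)  # 2 x 50G PAM-4
qsfp_side = aggregate_gbps(lanes=4, gbps_per_lane=25.0)  # 4 x 25G NRZ

# The gearbox re-times and re-modulates; it does not add bandwidth.
assert host_side == qsfp_side == 100.0
print(f"Per 100G breakout leg: {host_side} Gb/s in, {qsfp_side} Gb/s out")
```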

v) OSFP-400G-2FR4 to 2 x QSFP-100G-CWDM4 over 2km SMF
If an OSFP port is configured for 2 x 100G (i.e., 200G total bandwidth), the OSFP-400G-2FR4 module can be used to connect to 2 x QSFP-100G-CWDM4 transceivers over duplex single-mode fibers.
Configuring an OSFP port for 200G total bandwidth means each of the 8 electrical lanes to/from the OSFP operates at 25 Gb/s NRZ, the same modulation format used in legacy 100G QSFP ports.
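A short illustrative Python sketch of this 2 x 100G port mode (assumed names, not switch CLI): all 8 electrical lanes drop to 25 Gb/s NRZ and split into two 4-lane groups, each presenting a legacy 100G QSFP interface.

```python
# Illustrative sketch of the 2 x 100G OSFP port mode described above
# (hypothetical names, not an EOS or vendor API).

LANES = 8
LANE_RATE_GBPS = 25.0   # NRZ, as in legacy 100G QSFP ports
GROUP_SIZE = 4          # electrical lanes per 100G breakout leg

groups = [list(range(i, i + GROUP_SIZE)) for i in range(0, LANES, GROUP_SIZE)]
for leg, lanes in enumerate(groups, start=1):
    print(f"Breakout leg {leg} -> QSFP-100G-CWDM4: lanes {lanes}, "
          f"{len(lanes) * LANE_RATE_GBPS:.0f} Gb/s")

# Total port bandwidth in this mode: 8 x 25 Gb/s = 200 Gb/s.
print(f"Total: {LANES * LANE_RATE_GBPS:.0f} Gb/s")
```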

vi) OSFP-400G-SRBD (or QDD-400G-SRBD) to 4 x QSFP-100G-SRBD or 4 x 100G-SR1.2 QSFPs over 100m MMF
Connect up to 4 x QSFP-100G-SRBD or 4 x 100G-SR1.2 QSFPs to a single 400G-BIDI module.

vii) OSFP-400G-SR8 (or QDD-400G-SR8) to 2 x QSFP-100G-SR4 QSFPs over 100m MMF
If an OSFP port is run at 200G total bandwidth, the OSFP-400G-SR8 module can be used to connect to 2 x QSFP-100G-SR4 transceivers using a multimode breakout cable.

viii) Passive DAC breakout cable using CAB-O-2Q-400G-xM / CAB-O-2Q-200G-xM or CAB-D-2Q-400G-xM / CAB-D-2Q-200G-xM
If the OSFP or QSFP-DD port is run at 200G total bandwidth, a passive DAC breakout cable can be used to connect an OSFP or QSFP-DD port to 2 x 100G QSFP ports.
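For quick reference, the eight options above can be collected into a small lookup table (a Python sketch; the reach and media values are copied from this answer, so verify against the product datasheets before ordering):

```python
# Quick-reference summary of the breakout options listed above (data copied
# from this answer, not from datasheets -- confirm reach and media before ordering).

BREAKOUT_OPTIONS = {
    "i":    ("OSFP/QDD-400G-DR4  -> 4 x QSFP-100G-DR",         "SMF, 500 m"),
    "ii":   ("OSFP/QDD-400G-XDR4 -> 4 x QSFP-100G-FR",         "SMF, 2 km"),
    "iii":  ("OSFP/QDD-400G-PLR4 -> 4 x QSFP-100G-LR",         "SMF, 10 km"),
    "iv":   ("H-O400-4Q100-xM / H-D400-4Q100 -> 4 x QSFP100",  "Active copper DAC, 1-5 m"),
    "v":    ("OSFP-400G-2FR4 -> 2 x QSFP-100G-CWDM4",          "SMF, 2 km (2 x 100G port mode)"),
    "vi":   ("OSFP/QDD-400G-SRBD -> 4 x 100G-SRBD / SR1.2",    "MMF, 100 m"),
    "vii":  ("OSFP/QDD-400G-SR8 -> 2 x QSFP-100G-SR4",         "MMF, 100 m (2 x 100G port mode)"),
    "viii": ("CAB-O-2Q-* / CAB-D-2Q-* passive DAC -> 2 x QSFP100", "Passive copper (2 x 100G port mode)"),
}

for option, (link, media) in BREAKOUT_OPTIONS.items():
    print(f"{option:>5}) {link:60s} {media}")
```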
