- Casey
FiberMall
Answered on 8:47 am
Selecting the right 400G transceiver for multimode fiber involves many factors. Here are some of the key considerations:
Distance: The operating reach of each transceiver type varies. Before choosing a transceiver, you should know the exact distance between the systems you plan to connect. Short-reach multimode transceivers typically cover up to 70 m over OM3 (about 100 m over OM4/OM5), while long-reach single-mode variants can span 2 km or more.
Power Consumption: Power usage can vary substantially from one transceiver type to another. Higher capacity transceivers often use more power. Ideally, you should aim for a transceiver that offers the required data rate at the lowest possible power consumption.
Cost: Pricing can vary significantly between different transceivers. The overall cost should be evaluated in the context of your specific networking needs and budgetary constraints.
Compatibility: Not all transceivers will be compatible with your switches, routers, or other network devices. Be sure to confirm that the transceiver you choose works with your existing hardware.
Interconnection: Consider how different transceivers suit your interconnection environments. Transceivers come in different form factors such as QSFP-DD, OSFP, CFP2, CFP8, or COBO, and each has its own specifications for things like power consumption, size, and interface.
Reliability and Durability: The lifespan and durability of the transceivers also come into play. High-quality transceivers are built to last, reducing the need for replacements and maintenance.
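Taken together, these criteria can be treated as a simple shortlisting exercise. The sketch below is a minimal, illustrative Python example of that process; the catalog entries, reach figures, power values, and function names are placeholders for illustration only, not vendor specifications.

```python
# Illustrative only: the catalog values below are placeholders, not vendor specs.
from dataclasses import dataclass

@dataclass
class Transceiver:
    name: str
    form_factor: str   # e.g. "QSFP-DD" or "OSFP"
    reach_m: int       # maximum supported link length in metres
    power_w: float     # typical power draw in watts
    connector: str     # fiber connector on the module

CATALOG = [
    Transceiver("QDD-400G-SR8",  "QSFP-DD", 100, 12.0, "MPO-16 APC"),
    Transceiver("QDD-400G-SRBD", "QSFP-DD", 100, 12.0, "MPO-12 UPC"),
    Transceiver("OSFP-400G-SR8", "OSFP",    100, 12.0, "MPO-16 APC"),
]

def shortlist(link_length_m, max_power_w, form_factor, connector=None):
    """Return catalog entries that satisfy the basic selection criteria."""
    return [
        t for t in CATALOG
        if t.reach_m >= link_length_m
        and t.power_w <= max_power_w
        and t.form_factor == form_factor
        and (connector is None or t.connector == connector)
    ]

# Example: a 70 m leaf-to-spine link over an existing MPO-12 UPC fiber plant.
for t in shortlist(70, 15.0, "QSFP-DD", connector="MPO-12 UPC"):
    print(t.name)
```

In practice the same filter would also account for compatibility with your switch or router platform and for total cost, as noted above.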
The key features and common applications of each of these transceivers are described below.
1. The OSFP-400G-SR8 / SR8-C and QDD-400G-SR8 / SR8-C
The 400G-SR8 was the first 400G MMF transceiver available and has been deployed for point-to-point 400GE applications, such as leaf-to-spine connectivity, illustrated below.

While the 400G-SR8 provides cost-effective 400GE connectivity over MMF, it requires 16 fibers per transceiver and uses an MPO-16 APC fiber connector. Most 40G and 100G parallel MMF optics (such as the 40G-SR4 and 100G-SR4) use MPO-12 UPC fiber connectors. MPO-16 to 2x MPO-12 patch cables are required to use a 400G-SR8/SR8-C transceiver over an MPO-12 UPC-based fiber plant.
Another key application for 400G-SR8 transceivers is optical breakout into 2x 200G-SR4 links, enabling TOR-to-host connectivity where 200G to the host is required, as illustrated below.

The 400G-SR8-C transceiver has the same features as the 400G-SR8, with the added ability to breakout into 8x 50G-SR or 8x 25G-SR optical links. It can therefore be used in applications that require high-density 50G or 25G breakouts – as illustrated below.
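To make the breakout options concrete, the short sketch below tabulates how the eight lanes of a 400G-SR8 / SR8-C module are divided in each mode described above. It is an illustrative summary of the lane arithmetic only (the table name and labels are mine), not a configuration procedure.

```python
# Illustrative lane arithmetic for 400G-SR8 / SR8-C breakout modes.
# Each lane uses one fiber pair (Tx + Rx), so 8 lanes -> 16 fibers on the MPO-16 connector.
BREAKOUT_MODES = {
    # mode            (ports, lanes per port, Gb/s per lane)
    "1x 400GE":        (1, 8, 50),   # point-to-point 400G-SR8
    "2x 200G-SR4":     (2, 4, 50),   # TOR-to-host at 200G per host
    "8x 50G-SR":       (8, 1, 50),   # SR8-C only
    "8x 25G-SR":       (8, 1, 25),   # SR8-C only, lanes run at 25G
}

for mode, (ports, lanes, rate) in BREAKOUT_MODES.items():
    fibers = ports * lanes * 2                      # one Tx and one Rx fiber per lane
    print(f"{mode:>12}: {ports} port(s) x {lanes * rate} Gb/s, {fibers} fibers used")
```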

2. The OSFP-400G-SRBD and QDD-400G-SRBD, or “400G-BIDI” transceivers
400G-BIDI transceivers use the widely deployed MPO-12 UPC connector for parallel multimode fiber. This allows existing 40G or 100G links that use 40G-SR4 or 100G-SR4 QSFP optics to be upgraded to 400GE with no change to the fiber plant, as illustrated below:

When configured for 400GE operation, the 400G-BIDI transceiver is compliant with the IEEE 400GBASE-SR4.2 specification for 400GE over 4 pairs of MMF.
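The practical consequence is that an MPO-12 UPC plant built for 40G-SR4 or 100G-SR4 can usually carry 400G-BIDI without re-cabling, subject to connector type and supported reach. The sketch below is a minimal illustration of that check; the function is hypothetical, and the reach table reflects the commonly cited 400GBASE-SR4.2 figures (70 m over OM3, 100 m over OM4, 150 m over OM5), which should be confirmed against the specific module's datasheet.

```python
# Minimal sketch: can an existing parallel-MMF plant be reused for 400G-BIDI?
# Reach figures are commonly cited 400GBASE-SR4.2 values; confirm against datasheets.
SR4_2_REACH_M = {"OM3": 70, "OM4": 100, "OM5": 150}

def plant_supports_400g_bidi(connector: str, fiber_count: int,
                             fiber_type: str, link_length_m: float) -> bool:
    """True if the described fiber plant can carry a 400G-BIDI (400GBASE-SR4.2) link."""
    if connector != "MPO-12 UPC":           # 400G-BIDI keeps the SR4-style connector
        return False
    if fiber_count < 8:                     # 4 fiber pairs are required
        return False
    return link_length_m <= SR4_2_REACH_M.get(fiber_type, 0)

# A 90 m OM4 trunk originally installed for 100G-SR4:
print(plant_supports_400g_bidi("MPO-12 UPC", 12, "OM4", 90))   # True
```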
Arista’s 400G-BIDI transceivers are also capable of breaking out into 4x 100GE links and can be configured (via EOS) to interoperate either with the widely deployed base of 100G-BIDI (100G-SRBD) transceivers, or newer 100G-SR1.2 transceivers, as indicated below.

In summary, Arista’s 400G-BIDI transceiver is software configurable to operate in any one of three operating modes:
i) 400G-SR4.2 for point-to-point 400GE links
ii) 4x 100G-BIDI for breakout and interop with 4x 100G-BIDI (100G-SRBD) transceivers
iii) 4x 100G-SR1.2 for breakout and interop with 4x 100G-SR1.2 transceivers
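As a rough illustration of how these three modes relate to the far-end optics, the lookup below pairs each mode with the remote transceiver it interoperates with. It is a sketch only; the actual mode is selected in the switch operating system (e.g. Arista EOS), whose configuration syntax is not shown here, and the table and helper names are mine.

```python
# Illustrative lookup only; the real mode selection is done in the switch OS (e.g. EOS).
BIDI_MODES = {
    "400G-SR4.2":    {"breakout": None,       "interoperates_with": "another 400G-BIDI (point-to-point 400GE)"},
    "4x 100G-BIDI":  {"breakout": "4x 100GE", "interoperates_with": "100G-BIDI (100G-SRBD) transceivers"},
    "4x 100G-SR1.2": {"breakout": "4x 100GE", "interoperates_with": "100G-SR1.2 transceivers"},
}

def pick_mode(remote_optic: str) -> str:
    """Return the 400G-BIDI operating mode matching a given far-end optic (illustrative)."""
    for mode, info in BIDI_MODES.items():
        if remote_optic in info["interoperates_with"]:
            return mode
    raise ValueError(f"no compatible mode for {remote_optic!r}")

print(pick_mode("100G-SRBD"))    # -> "4x 100G-BIDI"
```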