What are the 100G-DR, 100G-FR and 100G-LR QSFP Transceivers?


Harry Collins

Answered on 2:21 am

The 100G-DR, 100G-FR, and 100G-LR QSFP transceivers are optical modules that carry 100 Gigabit Ethernet over single-mode fiber. They are based on the IEEE 802.3 standards and the 100G Lambda MSA specifications, and they transmit and receive over a single 100 Gb/s PAM4 optical lane (one wavelength in each direction), which reduces the complexity and cost of the optics compared with four-lane 100G designs. They also have a small form factor and low power consumption, making them suitable for high-density, low-power applications.

The main differences between the three types are reach and breakout interoperability. The 100G-DR transceiver supports a reach of up to 500 meters over duplex single-mode fiber and can interoperate with 400G-DR4 modules in 4x100GbE breakout applications. The 100G-FR transceiver supports a reach of up to 2 kilometers over duplex single-mode fiber and can interoperate with 400G-XDR4 (DR4+) modules in the same way. The 100G-LR transceiver supports a reach of up to 10 kilometers over duplex single-mode fiber and can interoperate with 400G-PLR4 modules in 4x100GbE breakout applications.
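
To make the reach comparison concrete, here is a minimal sketch in plain Python (the function name `pick_module` is hypothetical, not from any vendor tool) that selects the shortest-reach module whose rated reach covers a given link length, using the figures quoted above.

```python
# Rated reach of each 100G single-lambda module over duplex SMF,
# per the figures quoted in the answer above.
REACH_M = {
    "100G-DR": 500,      # up to 500 m
    "100G-FR": 2_000,    # up to 2 km
    "100G-LR": 10_000,   # up to 10 km
}

def pick_module(link_length_m: float) -> str:
    """Return the shortest-reach module whose rated reach covers the link."""
    for name, reach in sorted(REACH_M.items(), key=lambda kv: kv[1]):
        if link_length_m <= reach:
            return name
    raise ValueError(f"No 100G single-lambda module covers {link_length_m} m")

if __name__ == "__main__":
    print(pick_module(300))    # -> 100G-DR
    print(pick_module(1500))   # -> 100G-FR
    print(pick_module(8000))   # -> 100G-LR
```

In practice you would also budget margin for connectors and splices, but the lookup captures the basic tiering of the three module types.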

The difference between a legacy 100G QSFP module and a 100G-DR / FR module is illustrated below.

[Figure: legacy four-lane 100G QSFP module vs. single-lambda 100G-DR module]

The 100G-DR/FR/LR modules have reaches of 500 m, 2 km, and 10 km respectively over SMF, and are designed to interoperate with 400G-DR4/XDR4/PLR4 transceivers using a breakout cable: each 400G-DR4/XDR4/PLR4 module can connect to four 100G-DR/FR/LR modules, as sketched below.
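
The breakout pairing can be written out as a simple mapping; the sketch below is illustrative only (the helper name `breakout_legs` is hypothetical) and just pairs each 400G parent module with the 100G module type used on its four breakout legs.

```python
# Pairing of 400G parent modules with the 100G single-lambda module
# used on each of the four breakout legs, per the pairings above.
BREAKOUT_PAIRING = {
    "400G-DR4":  "100G-DR",   # 4 x 100G-DR,  500 m reach
    "400G-XDR4": "100G-FR",   # 4 x 100G-FR,  2 km reach
    "400G-PLR4": "100G-LR",   # 4 x 100G-LR,  10 km reach
}

def breakout_legs(parent: str) -> list[str]:
    """Return the four 100G leg module types for a 400G parent module."""
    leg = BREAKOUT_PAIRING[parent]
    return [f"{leg} (lane {i})" for i in range(1, 5)]

if __name__ == "__main__":
    print(breakout_legs("400G-XDR4"))
    # -> ['100G-FR (lane 1)', '100G-FR (lane 2)', '100G-FR (lane 3)', '100G-FR (lane 4)']
```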
