What is the 100G-SRBD (or “BIDI”) Transceiver?

Harper Ross

Answered on 8:23 am

The 100G-SRBD (or “BIDI”) transceiver is an optical transceiver that transmits and receives 100Gb/s of data over a pair of multimode fibers using bidirectional technology. Like the QSFP-100G-SWDM4 transceiver, it provides 100Gb/s of bandwidth over standard duplex multimode fiber. Unlike the SWDM4, however (which transmits 4 x 25Gb/s wavelengths out of the Tx port and receives 4 x 25Gb/s wavelengths on the Rx port), each optical port on the SRBD contains both a transmitter and a receiver, running full duplex at 50Gb/s over a single fiber. The two ports of the QSFP-100G-SRBD together provide an aggregate 100Gb/s of bandwidth. The QSFP-100G-SRBD is supported on all Arista QSFP 100G ports and can be used for links of up to 70m over OM3 or up to 100m over OM4 multimode fiber.
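The lane arithmetic above can be sketched as follows. This is a minimal illustration of how both module types arrive at the same 100Gb/s aggregate; the helper function is hypothetical, not a vendor API.

```python
def aggregate_gbps(lanes: int, gbps_per_lane: float) -> float:
    """Aggregate bandwidth is simply lanes x per-lane rate."""
    return lanes * gbps_per_lane

# SWDM4: 4 wavelengths at 25Gb/s each; all Tx on one fiber, all Rx on the other.
swdm4 = aggregate_gbps(lanes=4, gbps_per_lane=25)

# SRBD/BIDI: 2 fibers, each carrying a full-duplex 50Gb/s lane
# (transmit and receive share the same fiber).
srbd = aggregate_gbps(lanes=2, gbps_per_lane=50)

print(swdm4)  # 100.0
print(srbd)   # 100.0
```

Both layouts use a standard duplex LC connector; the difference is purely in how the 100Gb/s is split across wavelengths and fibers.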

