- Casey
FiberMall
Answered on 6:59 am
Several Arista 100G transceivers and cables support breakout mode, in which the four 25G lanes of a single 100G port are split into four independent 25G channels. These include:
- QSFP-100G-SR4: 100GBASE-SR4 QSFP optical transceiver, up to 70m over parallel OM3 or 100m over OM4 multi-mode fiber
- QSFP-100G-XSR4: 100GBASE-XSR4 QSFP optical transceiver, up to 150m over parallel OM3 or 300m over OM4 multi-mode fiber
- QSFP-100G-PSM4: 100GBASE-PSM4 40G/100G dual speed QSFP optical transceiver, up to 500m over parallel SMF
- QSFP-100G-PLRL4: 100GBASE-PLRL4 40G/100G dual speed QSFP optical transceiver, up to 2km over parallel SMF
- CAB-Q-Q-100G-xM: 100GBASE-CR4 QSFP to QSFP Twinax Copper Cable, 0.5m to 5m
- CAB-Q-4S-100G-xM: 100GBASE-CR4 QSFP to 4 x 25GbE SFP Twinax Copper Cable, 1m to 5m
- AOC-Q-Q-100G-xM: 100GbE QSFP to QSFP Active Optical Cable, 1m to 30m
- AOC-Q-4S-100G-xM: 100GbE QSFP to 4 x 25GbE SFP Breakout Active Optical Cable, 3m to 30m
When used in breakout mode, these transceivers and cables remain interoperable with the relevant industry standards.
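To illustrate what breakout mode means in practice, here is a minimal EOS-style configuration sketch. The port name (Ethernet3) is hypothetical, and the exact commands and lane numbering vary by switch platform and EOS release, so treat this as a sketch rather than a definitive procedure:

```
! Hypothetical example: split a 100G QSFP port (Ethernet3) into 4 x 25G lanes.
! Exact syntax and lane numbering depend on the platform and EOS version.
switch(config)# interface Ethernet3/1
switch(config-if-Et3/1)# speed forced 25gfull
! On many fixed-configuration switches the port then re-appears as
! Ethernet3/1 through Ethernet3/4, each operating at 25G.
switch# show interfaces status
```

With one of the breakout cables listed above (for example CAB-Q-4S-100G-xM or AOC-Q-4S-100G-xM), each 25G lane then connects to a separate SFP-based 25G port on the far end.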
