Asked by Mia, answered by Harry Collins
The ConnectX-7 (CX7) network card can interconnect with other 400G Ethernet switches that support RDMA (RoCE v2), but you need to pay attention to the following points; a host-side configuration sketch follows the list:
- The switch and the network card need to use matching cables and transceivers, such as OSFP or QSFP112 modules.
- The switch and the network card need to run at the same link rate and frame size, for example 400 Gb/s with a 4 KB MTU.
- The switch and the network card need to be configured with the same RoCE v2 parameters, such as PFC priority, flow control, and congestion control.
- The switch and the network card need to keep the network stable and reliable, avoiding the packet loss and retransmissions that degrade RDMA performance.
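As a rough illustration, the sketch below applies these host-side settings on a Linux server with a ConnectX-7 port. It assumes the MLNX_OFED/DOCA userspace tools (mlnx_qos, cma_roce_mode) are installed; the interface name eth0, RDMA device mlx5_0, and the choice of priority 3 for RoCE traffic are placeholders, not values from the original answer, and the switch must be configured with matching settings.

```python
#!/usr/bin/env python3
"""Hedged sketch: host-side RoCE v2 tuning for a ConnectX-7 port.

Assumptions (adjust for your environment):
  * MLNX_OFED / DOCA tools `mlnx_qos` and `cma_roce_mode` are installed.
  * Interface `eth0` and RDMA device `mlx5_0` are placeholders.
  * RoCE traffic is mapped to lossless priority 3 (a common convention,
    not a requirement); the switch must use the same mapping.
"""
import subprocess


IFACE = "eth0"       # Ethernet netdev of the CX7 port (placeholder)
RDMA_DEV = "mlx5_0"  # RDMA device name (placeholder)
ROCE_PRIO = 3        # priority class carrying RoCE traffic


def run(cmd: list[str]) -> None:
    """Echo and run a command, stopping on the first failure."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)


def main() -> None:
    # 1. Frame size: raise the port MTU so a 4 KB RoCE payload plus
    #    RoCE/UDP/IP/Ethernet headers fits in one frame.
    run(["ip", "link", "set", "dev", IFACE, "mtu", "4200"])

    # 2. Flow control: trust DSCP markings and enable PFC only on the
    #    priority that carries RoCE, so both ends classify traffic alike.
    pfc_mask = ",".join("1" if p == ROCE_PRIO else "0" for p in range(8))
    run(["mlnx_qos", "-i", IFACE, "--trust", "dscp"])
    run(["mlnx_qos", "-i", IFACE, "--pfc", pfc_mask])

    # 3. RoCE mode: make RoCE v2 the default for RDMA-CM connections
    #    on port 1 of the device.
    run(["cma_roce_mode", "-d", RDMA_DEV, "-p", "1", "-m", "2"])


if __name__ == "__main__":
    main()
```

Congestion control (ECN-based schemes such as DCQCN) is typically left at the mlx5 driver defaults or tuned separately, and the switch side needs the matching PFC, ECN, and DSCP-to-priority configuration for the end-to-end path to stay lossless.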
Related Articles:
- 800G SR8 and 400G SR4 Optical Transceiver Modules Compatibility and Interconnection Test Report
- Dual-Plane and Multi-Plane Networking in AI Computing Centers
- OCP 2025: FiberMall Showcases Advances in 1.6T and Higher DSP, LPO/LRO, and CPO Technologies
- What is a Silicon Photonics Optical Module?
- Key Design Principles for AI Clusters: Scale, Efficiency, and Flexibility
- Google TPU vs NVIDIA GPU: The Ultimate Showdown in AI Hardware
- InfiniBand vs. Ethernet: The Battle Between Broadcom and NVIDIA for AI Scale-Out Dominance
Related posts:
- If the Server’s Module is OSFP and the Switch’s is QSFP112, can it be Linked by Cables to Connect Data?
- What FEC is Required When the 400G-BIDI is Configured for Each of the Three Operating Modes?
- What Type of Optical Connectors do the 400G-FR4/LR4, 400G-DR4/XDR4/PLR4, 400G-BIDI (400G SRBD), 400G-SR8 and 400G-2FR4 Transceivers Use?
- What is the 100G-SRBD (or “BIDI”) Transceiver?
