How to Select DAC Cables for a Large Data Center?

Direct Attach Cable (DAC), also known by its physical nature as high-speed copper cable, is widely used in the ICT field. It is commonly used for short-distance interconnection of high-speed interfaces on all kinds of IT equipment, such as computing, storage, and network devices.

As the “first highway” in the physical network of data centers, DAC Cable has been in use in large data centers for more than 10 years, and today large data centers hardly use AOC Cable (Active Optical Cable) at all. The cumulative deployment of DAC Cable in major data centers around the world is well over 50 million; Amazon’s data centers alone reached a deployment scale of one million in 2020.

The application of DAC Cable in large data centers is just one of many aspects of the overall progress and innovation in cloud computing infrastructure, but it reflects the collaborative, pragmatic, and stability-oriented thinking behind that infrastructure.


Figure 1. DAC Cable (left) and AOC Cable(right)

Data Center Physical Network Interconnection

First, let’s take a look at the physical network high-speed interconnection link in the data center.


Figure 2. Data Center Network Link Overview

In the typical three-tier Clos architecture of large data center networks, the links interconnecting Spine and Leaf, and Leaf and TOR, within a cluster are generally shorter than 2km, and the Leaf-to-TOR links are usually shorter than 100m. Links requiring short- and long-reach optical modules account for about 1/3 of the physical links in the whole cluster. The links connecting server network adapters to the TOR are usually within 10m, but they account for about 2/3 of all physical links; these are usually connected by DAC Cable or AOC Cable.

Similarities and Differences Between DAC Cable and AOC Cable


Figure 3. Electrical and optical channels of DAC Cable and AOC Cable

AOC Cable and DAC Cable use the same form factors and electrical interfaces for the module packages at both ends, such as SFP, QSFP, and other standards, to ensure standard mating with the system side (switch, NIC, etc.).

The modules at each end of an AOC Cable contain electro-optical conversion components, such as CDRs, Retimers/Gearboxes, lasers, and photodetectors (PDs); the electrical signal on the system side is modulated onto an optical signal for transmission.

DAC Cable, by contrast, is simply a passive copper medium: a cable assembly in which high-speed differential twinax cable, with its shielding and outer jacket, is soldered directly into the modules at both ends, so the electrical signal is transmitted directly between the ends.

Advantages of AOC Cable


Figure 4. AOC Cable interconnects with NIC

  1. Standard interface, plug-and-play. For AOC Cable and optical module applications, the conformance test points (TP1a and TP4) are at the pluggable module and the system port. Therefore, in the era of branded switches, AOC Cable is plug-and-play as long as all switch ports and optical modules meet the electrical signal specifications at TP1a and TP4. The transmission of the intermediate optical signal is a closed loop between the modules and is of no concern to system users.
  2. Optical fiber supports a longer connection distance. Optical fiber has very low loss per unit length and can support far greater transmission distances than copper cable.

Advantages of DAC Cable

For users and O&M of IT equipment, DAC Cable has two very intuitive advantages over AOC Cable: cost and power consumption.

Take the 2019 25G DAC Cable and AOC Cable for example:

  1. Low cost: The cost of DAC Cable is about 1/5 that of AOC Cable.
  2. Low power consumption: DAC Cable is passive, with zero power consumption; the power consumption of a 25G AOC Cable is about 1-2 watts per cable.

DAC Cable also offers higher reliability and lower latency than AOC Cable, advantages that become significant in large-scale deployments and latency-sensitive services. DAC Cable is also better suited than AOC Cable to immersion cooling, since there is no need to seal liquid-sensitive optoelectronic devices as in optical modules.

 

Why DAC Cable Was Not Used at Scale

Before 2018, DAC cables were not used at scale in data centers for two reasons:

    1. Operational problems of DAC Cable with branded network devices: in the era of commercial branded switches, the entire path from TOR to server NIC in the physical network was a black box, so “end-to-end” tuning that included the DAC Cable could not be achieved. It was easier to use AOC Cable, whose interface signals are relatively standardized and plug-and-play.
    2. The length of DAC Cable could not meet the deployment demands of various IDC environments: because DAC Cable transmits high-speed electrical signals directly, it is limited by the electrical-signal loss budget. Usually the length of DAC Cable does not exceed 7m at 10Gbps, 5m at 25Gbps, 3m at 56Gbps, and 2m at 112Gbps, while AOC Cable can usually reach 30m to 100m.
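As a minimal sketch, the reach figures above can be captured in a small lookup; the numbers are the typical values quoted here, and real limits depend on wire gauge, materials, and the host channel's loss budget:

```python
# Typical passive DAC reach limits from the article (illustrative only;
# actual limits vary with wire gauge, materials, and host loss budget).
TYPICAL_MAX_DAC_LENGTH_M = {
    10: 7.0,    # 10 Gbps        -> ~7 m
    25: 5.0,    # 25 Gbps        -> ~5 m
    56: 3.0,    # 56 Gbps PAM4   -> ~3 m
    112: 2.0,   # 112 Gbps PAM4  -> ~2 m
}

def dac_reaches(rate_gbps: int, run_length_m: float) -> bool:
    """True if a passive DAC at this per-lane rate typically covers the run."""
    limit = TYPICAL_MAX_DAC_LENGTH_M.get(rate_gbps)
    if limit is None:
        raise ValueError(f"no typical figure for {rate_gbps} Gbps in this sketch")
    return run_length_m <= limit

print(dac_reaches(25, 3.0))   # a 3 m server-to-TOR run works at 25G
print(dac_reaches(112, 3.0))  # but not at 112G-PAM4
```

This also previews why mid-rack switch placement (discussed later) matters: keeping runs short keeps DAC viable as per-lane rates rise.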

In the early days of the rapid data center growth of large Internet companies, server rooms were mainly rented, with varying constraints such as cabinet power limits, room heat-dissipation capacity, and cabinet cabling channels. The TOR usually had to span multiple cabinets to reach its servers and had to serve servers with different cabling exit directions. As a result, the server-to-TOR access distance exceeded the maximum reach of DAC Cable in most scenarios, and longer AOC Cable had to be chosen.

All this changed in 2018, when cloud computing infrastructure teams began to self-develop the data center network and IDC cabinets and to operate super-large-scale self-built server rooms. In this feast, large data centers finally put DAC Cable, the “dessert,” on the stage.

 

Data Center Network White-Boxing

Develop DAC Cable applications

In 2018, large data centers started to develop white-box switches, establishing the principle of “beginning with the end in mind”: the overall design is oriented toward network stability and operational efficiency in the eventual large-scale deployment. Although using DAC Cable is also technically feasible with branded switches, the “white box” provides the fundamental guarantee for large-scale operation.


Figure 5. DAC Cable interconnects with NIC on the “white box” switch

In the DAC Cable interconnection scenario, there is actually a complete electrical channel between the two chips (MAC to MAC or PHY to PHY).

The total loss of each TOR switch + DAC Cable + NIC connection combination is different, and the TOR ASIC needs to set appropriate Tx EQ equalization parameters for each loss value to ensure that the BER at the receiving end meets the requirement of error-free transmission. There are many such combinations.
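A toy enumeration shows why the combinations multiply; all loss values below are hypothetical placeholders, not measured figures:

```python
# Hypothetical channel losses (dB at Nyquist) for each element of a
# TOR + DAC + NIC link; the specific numbers are illustrative only.
tor_port_losses = [1.5, 2.5, 3.5]                   # varies by port/trace length
cable_losses    = {1.0: 4.0, 2.0: 8.0, 3.0: 12.0}   # length (m) -> cable loss
nic_losses      = [2.0, 3.0]                        # varies by NIC model

# Every (TOR port, cable, NIC) pairing yields its own end-to-end loss.
combos = [
    (tor + cab + nic, length)
    for tor in tor_port_losses
    for length, cab in cable_losses.items()
    for nic in nic_losses
]

# Each distinct total loss would, in principle, want its own Tx EQ setting.
distinct_totals = sorted({round(total, 1) for total, _ in combos})
print(len(combos), "combinations,", len(distinct_totals), "distinct loss totals")
```

Even this small example yields 18 combinations; at data-center scale, with many port, cable, and NIC variants, per-combination EQ tuning quickly becomes unmanageable.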

The problem arises if DAC cable is used on branded switches:

First, new service requirements (new NICs, new cables) depend on the equipment vendor to provide port parameter updates and online upgrades, which greatly challenges the stability of network operations and the efficiency of scaled deployment.

Second, Link Training mode can make more than 99% of links work. At a scale of millions of links, however, even a sub-1% residual failure rate still means thousands of problem links, which places a heavy burden on O&M.

 

How does the Self-developed White-box Switch of Large Data Centers Solve the Above DAC Cable Application Problem?

  1. Converge end-to-end loss combinations under the white box. In large data centers, the planned use of switch ports and internal links is considered in the switch hardware design: the port channel loss for connecting servers is designed to be small, with a narrow loss distribution range. At the same time, in the definition of the self-developed DAC Cable, based on the relevant IEEE 802.3 specifications, the overall loss range is narrowed by using appropriate wire diameters for DAC Cables of different lengths. Finally, based on the design characteristics of the NIC channel, a reasonable and sufficient NIC channel loss budget is reserved in the calculation and simulation of the total channel loss. These designs come at no extra cost.
  2. Select a fixed-parameter equalizer in the white box. With the above design, the overall interconnection channel loss is narrowed and controlled, which makes it possible to select one fixed set of equalization parameters in practice so that all interconnection combinations obtain BER performance with sufficient margin. This set of parameters is not optimal for every combination, but the BER is sufficient and the link is stable and reliable. This avoids Link Training mode and enables large-scale network operation at low marginal cost. So far, this is only the most basic part of the DAC Cable scale-up design. Implementing it quickly and making DAC Cable work well in the IDC is the more critical step, solving the problems of cabinet integration, delivery, and operation and maintenance.
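The budget logic can be sketched as a simple check. The numbers below are hypothetical, not actual IEEE 802.3 figures: the point is that narrowed TOR-port and cable loss ranges, plus a reserved NIC budget, keep every combination in a band tight enough for one fixed EQ setting:

```python
# Illustrative loss-budget check (assumed values, not spec numbers).
END_TO_END_BUDGET_DB = 30.0   # assumed total channel loss budget
NIC_RESERVE_DB = 5.0          # loss budget reserved for the NIC channel

# Narrowed ranges by design: short, tightly controlled TOR port traces,
# and wire gauges chosen per length so all cables land in a similar band.
tor_port_loss_range = (1.0, 2.0)                          # dB, min..max
cable_loss_by_length = {0.75: 9.0, 1.5: 10.0, 2.5: 11.0}  # dB after gauge tuning

worst_case = tor_port_loss_range[1] + max(cable_loss_by_length.values()) + NIC_RESERVE_DB
best_case  = tor_port_loss_range[0] + min(cable_loss_by_length.values()) + NIC_RESERVE_DB

# A narrow spread (here only 3 dB) is what lets one fixed Tx EQ setting
# serve every combination with margin to spare.
print(f"loss spread: {best_case:.1f}..{worst_case:.1f} dB "
      f"(budget {END_TO_END_BUDGET_DB} dB)")
assert worst_case <= END_TO_END_BUDGET_DB
```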

 

Collaborative Infrastructure Innovation

Large-scale deployment of DAC Cable for large data centers

Following the successful large-scale deployment of DAC Cable in large data centers in 2019, a new term, MOR (Middle of Rack), quickly emerged in the industry to jokingly describe the practice of large data centers placing the TOR (Top of Rack) switch in the middle of the rack. This jocular term graphically illustrates some of the ingenious, site-specific innovations that large data centers are making at the IDC cabinet level.

Cloud computing infrastructure overcomes the inherent shortcomings of DAC Cable through holistic design, allowing the scalable application of DAC Cable to translate into overall benefits in terms of stability, energy consumption, cost, delivery, and operational efficiency.

First, break the mold by placing the TOR in the middle. TOR doesn’t have to be placed at the top.

Large data centers place the access-layer switch in the middle U of the server cabinet, reducing the cabling distance from the switch to the furthest server in a single cabinet to half the cabinet height, so the longest run in a single cabinet needs no more than 2m. The first generation of self-developed switches in large data centers even provided both back-to-front and front-to-back cooling airflow directions to support servers with different cabling exit directions, achieving same-side cabling within the cabinet and keeping DAC Cable length requirements minimal. In the subsequent evolution, large data centers unified the server cabling exit direction to the front side, and the switch models were consolidated accordingly.


Figure 6. Traditional cabinets (left) with top-mounted TOR vs. large data center self-developed cabinets (right) with middle-mounted TOR

Secondly, cabinet innovation adapted to local conditions. Because server configurations differ across business types, in addition to single-cabinet access there is still demand for a switch to access servers across two cabinets. Therefore, with the switch mounted in the middle, the new cabinet design adds a cross-cabinet cable channel in the middle of the cabinet (in traditional cabinets, cross-cabinet cables had to be threaded through the weak-current hole at the top of the cabinet). In this way, the longest cable run from the switch to a server across two cabinets can be kept within 2.5m. A further consideration: the 2.5m length requirement can easily be met at 56G-PAM4, and there is even a chance that 112G-PAM4 can be achieved in the future.


Figure 7. Cross-cabinet server cabling in traditional cabinets (left) vs. large data center self-developed cabinets (right)

Third, cabling standardization for efficient integration and delivery. DAC Cable lengths are defined in 0.25m steps from 0.75m to 2.5m, and the cabling rules from the mid-mounted switch to the servers at each U position of the cabinet are standardized so that the cable length is just enough, without extra bending and coiling. This greatly improves the efficiency of whole-cabinet integration and delivery, and also avoids the signal integrity degradation caused by excessive bending of the DAC Cable.
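The length-selection rule might look like the following sketch, under assumed geometry (1U = 44.45mm, 0.3m of routing slack, switch at U24). It illustrates only the stepping logic, not any vendor's actual rule:

```python
import math

# Assumed geometry for the sketch (not a vendor specification).
U_HEIGHT_M = 0.04445          # 1U = 44.45 mm
SLACK_M = 0.3                 # assumed routing slack inside the cabinet
STEP_M, MIN_M, MAX_M = 0.25, 0.75, 2.5   # the 0.25 m steps from the text

def cable_length_for(server_u: int, switch_u: int = 24) -> float:
    """Pick the shortest standard DAC length that reaches a server U
    position from the mid-mounted switch (illustrative rule)."""
    run = abs(server_u - switch_u) * U_HEIGHT_M + SLACK_M
    # Round the physical run up to the next defined 0.25 m step.
    length = max(MIN_M, math.ceil(run / STEP_M) * STEP_M)
    if length > MAX_M:
        raise ValueError("run exceeds the longest defined DAC length")
    return length

print(cable_length_for(1))    # bottom of the cabinet
print(cable_length_for(22))   # server right next to the switch
```

Because every U position maps deterministically to one standard length, integrators can pre-kit cables per cabinet position with no coiled excess.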

 

Fourth, self-developed innovation to address O&M concerns. Considering that IDC O&M staff had for many years been accustomed to the thin, soft cabling and handling experience of AOC Cable, FiberMall used a nylon-braided outer jacket instead of the traditional PVC material in its DAC Cable design, giving the first-generation 25G DAC Cable a significantly better bending radius and softness than standard commercial DAC Cable. This successfully helped IDC field operation and maintenance transition from the long-used AOC Cable to DAC Cable.

FiberMall's 25G DAC Cable (left) and 200G 1-in-2 DAC Cable (right)

Figure 8. FiberMall’s 25G DAC Cable (left) and 200G Breakout DAC Cable (right)

With the advent of self-developed switches and DAC Cable, rapid large-scale deployment in large data centers in the form of full-cabinet integration took place. Starting in 2019, large data center DAC Cable reached a deployment scale of one million in just over a year.

 

With an Open and Win-Win Mind

Bearing the Fruit of DAC Cable Application

In 2020, after DAC Cable deployment in large data centers had accumulated to a scale of one million and operated for a year, the “White Paper on Next Generation Data Center High-Speed Copper Cables” was released at ODCC, led by Alibaba in cooperation with Tencent, Baidu, and other large domestic data center users, together with FiberMall, Molex, and other DAC Cable vendor representatives.

The results of large-scale data center deployments and operations have given users the best practices and confidence in DAC Cable applications. In the following years, more and more large data centers have adopted DAC Cable as the first highway for their physical networks, and more and more partners have entered into the R&D, manufacturing, supply and integration of DAC Cable.

From 2019 to now, the cumulative number of DAC Cables deployed in data centers has exceeded 50 million, bringing hundreds of millions of dollars in cost reduction and tens of millions of kilowatt-hours in energy savings each year.

 

From Zero to Millions

DAC Cable Deployment at Scale Brings Change to Data Centers

The large-scale deployment of DAC Cable in data centers brings not only the most intuitive cost and energy benefits, but also the impact on data center network architecture design, evolution, and service performance. The latter is even more significant when viewed from the perspective of the entire cloud infrastructure.

 

Loosely coupled with the service, allowing the network to evolve easily

  1. The network pursues the per-gigabit cost dividend and evolves faster. The bandwidth of commercial data center network chips doubles roughly every 3 years. The data center network evolves in parallel as soon as possible to pursue the declining per-gigabit bandwidth cost and to apply new features.
  2. Services are long-tailed and varied, and their iterations are not synchronized. The same data center network must serve services at different rates at the same time. For example, on a 200G network, 50G servers are the primary access, but 25G and 100G server access is still required.
  3. The use of multiple forms of cables makes the evolution of the data center network loosely coupled with service iteration. DAC Cable makes it very easy to implement special cable forms, because different forms of DAC Cable share the same high-speed bare wire and manufacturing process in the main body. The low cost, versatility, flexibility, and fast delivery of DAC Cable enable efficient support of service access at different rates, allowing the data center network to evolve easily and quickly to gain bandwidth dividends and new features.

100G network architecture of large data centers

In the 100G network architecture of large data centers, two types of direct-attach DAC cables, 25G and 100G (NRZ), are used, and the TOR comes in two models, 25G and 100G.

 

Improved Stability and Reduced Latency

  1. In a normal air-cooled environment, the failure rate of DAC Cable is more than an order of magnitude lower than that of AOC Cable. Since DAC Cable consumes no power and contains no active devices such as electrical and optical chips, there are no failure factors from laser aging or semiconductor electrical stress. DAC Cable exhibits very high stability in service operation, lightweight network operation, and an excellent user network experience.
  2. DAC Cable is simple and reliable in immersion liquid-cooled environments: it contains no liquid-sensitive components such as lasers and optical waveguides, eliminating the sealing processes required for optical modules, which greatly reduces cost and increases reliability. Through material selection and signal integrity design, large data centers use a single DAC Cable model to support both air-cooled and immersion liquid-cooled environments.
  3. DAC Cable provides extremely low latency. DAC Cable and optical fiber both have about 5ns/m of propagation delay, but an optical module introduces additional signal delay due to clock and data recovery (CDR) and may even require DSP-based signal equalization. In scenarios such as AI computing and resource pooling, latency must be tightly controlled; PCIe Gen6 at the 64G-PAM4 rate leaves a physical-layer FEC delay budget of only about 10ns, while a DSP-based re-timer chip in an optical module adds tens of nanoseconds of delay (transmit plus receive).
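A back-of-envelope comparison using the figures above (with an assumed 40ns combined Tx+Rx delay for a DSP-based re-timer, one value within the "tens of nanoseconds" range quoted here):

```python
# Back-of-envelope link latency comparison (illustrative assumptions).
PROP_NS_PER_M = 5.0      # ~5 ns/m propagation, copper or fiber alike
DSP_RETIMER_NS = 40.0    # assumed combined Tx+Rx DSP re-timer latency

def link_latency_ns(length_m: float, dsp_retimed: bool) -> float:
    """Total one-link latency: propagation plus optional re-timer delay."""
    return length_m * PROP_NS_PER_M + (DSP_RETIMER_NS if dsp_retimed else 0.0)

dac = link_latency_ns(2.0, dsp_retimed=False)   # passive DAC
aoc = link_latency_ns(2.0, dsp_retimed=True)    # DSP-retimed optical link
print(f"2 m DAC: {dac:.0f} ns, 2 m DSP-retimed optical: {aoc:.0f} ns")
```

At short reach the re-timer, not the medium, dominates: under these assumptions the retimed link is several times slower than the passive one, which is why the ~10ns PCIe Gen6 FEC budget cited above is so hard to reconcile with DSP-based modules.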

Energy Conservation and Emission Reduction, Cost Reduction, and Efficiency Increase

The most intuitive benefit of switching from AOC Cable to DAC Cable is the reduction in energy consumption and procurement cost. The figures are particularly impressive when deployments in data centers number in the millions.

In addition, DAC Cable eliminates the monitoring systems needed for lasers, module temperatures, and the like in optical module operations, and removes field concerns such as fiber end-face cleanliness. While the scale of server deployment in data centers is exploding, the application of DAC Cable assists IDC field operation and maintenance and significantly improves data center network operation efficiency.

 

Looking Ahead

The nature of DAC Cable’s copper transmission medium dictates that whether the physical network upgrades link bandwidth by increasing the single-channel rate or by increasing the number of parallel channels, the application of DAC Cable will be severely challenged. What is “more unfortunate” is that these two approaches often happen simultaneously or in alternation.

Challenge: Getting Shorter and Thicker

Copper transmission line loss is on the order of 3~6dB/m. As the single-channel rate continues to grow, the supportable DAC Cable length becomes shorter and shorter. If the longest supportable length falls below about 1.5m~2m, DAC Cable loses most of its application value.

The diameter of each copper transmission line in a DAC Cable is at the millimeter level (fiber is at the micron level), and scaling out from 4 channels → 8 channels → 16 channels makes the overall DAC Cable diameter grow rapidly.

Therefore, as network speed and bandwidth increase, DAC Cable faces great challenges in in-cabinet cabling and supportable length.
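The shrinking reach follows directly from a fixed cable loss budget divided by a per-meter loss that grows with rate. The budget and per-meter values below are illustrative assumptions that happen to reproduce the rough limits quoted earlier:

```python
# Sketch of why DAC reach shrinks with rate: with a fixed loss budget
# available to the cable, max length = budget / per-meter loss at the
# signal's Nyquist frequency. All numbers are illustrative assumptions.
CABLE_BUDGET_DB = 12.0   # assumed loss budget left for the cable itself

loss_db_per_m = {        # assumed per-meter loss at Nyquist, one wire gauge
    "25G-NRZ": 2.5,
    "56G-PAM4": 4.0,
    "112G-PAM4": 6.0,
}

for rate, loss in loss_db_per_m.items():
    max_len = CABLE_BUDGET_DB / loss
    print(f"{rate}: ~{max_len:.1f} m")
```

Doubling the per-meter loss halves the reach; this is the arithmetic behind "shorter and thicker," since the only way to claw back reach is thicker conductors with lower loss per meter.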

The Future of DAC Cable

As physical-network single-channel rates move toward 224 Gbps, the copper electrical channel faces significant challenges. But with advanced SerDes technology and continuous advances in materials and processes, using DAC Cable for data center ultra-short-reach interconnection remains achievable and worthwhile.

  1. Material and process evolution: The performance of high-speed cable materials (both insulation media and conductors) improves at a rate of about 20-30% every three years, while the signal rate doubles every three years. New DAC Cable materials and technologies are emerging, such as materials and processes that reduce the thickness of the insulation layer without reducing the copper conductor diameter, making the overall wire diameter smaller; there are also changes in cable construction, for example making the cable softer and easier to bend at the same wire diameter.
  2. New application requirements: With the rapid growth of AI computing, the interconnection expansion within AI training clusters requires high bandwidth and low latency. DAC Cable’s high stability and low latency characteristics can meet the needs of these areas.
  3. Potential solutions beyond the length limit of DAC Cable: from a communications standpoint, the logic is simple: either “electrical relay” or “electrical-to-optical conversion” can exceed the length limit of passive DAC Cable.

Active copper cable. By adding a re-driver or re-timer to the modules of a DAC Cable, the electrical signal is “relayed” to extend the transmission distance of the high-speed electrical signal (longer), or to reduce the wire diameter (thinner) at the same distance. Its cost and power consumption fall between passive DAC Cable and active AOC Cable, making it a good choice within a certain rate range. However, as the rate rises to 112G-PAM4, a CDR-capable re-timer (or even a DSP-based one) is required, and the latter brings transmission delay and power cost comparable to an optical module.

Direct-drive optical modules. The module uses no CDR (and no DSP-based equalizer), greatly reducing transmission delay, at the cost of relying heavily on the chips at both ends of the link to compensate and equalize the signal; the channel loss budget within the equipment at both ends is therefore reduced. Some vendors are currently developing direct-drive optical modules at the 112G-PAM4 rate, and the ecosystem is at an early stage.

DAC Cable technology is only a small part of data center physical-network interconnection technology, but its contributions to cost, energy efficiency, stability, and network performance are very obvious; at the same time, its limitations and challenges are just as obvious.
