
Three Predictions about Data Centers in 2019: Silicon photonics Will Be The Core of Optical Module Development

Summary: Dr. Radha Nagarajan of Inphi Corp. is pleased with the technology industry's achievements in 2018 and excited about the possibilities that 2019 brings, including in the high-speed data center interconnect (DCI) market. Geographical decomposition of data centers will become more common, data centers will continue to grow, and silicon photonics and CMOS will be at the core of optical module development.

ICCSZ news. The technology industry achieved a great deal in 2018, and 2019 opens up many new possibilities. Dr. Radha Nagarajan, Chief Technology Officer of Inphi, believes that high-speed data center interconnect (DCI), as one sector of the technology industry, will also change in 2019. Here are three things he expects to happen in the data center this year.

1. Geographical Decomposition of Data Centers Will Become More Common.

Data centers require substantial physical space, along with supporting infrastructure such as power and cooling. Geographical decomposition of data centers will become more common as it becomes increasingly difficult to build single, large, contiguous data centers. Decomposition is key in metropolitan areas where land prices are high, and high-bandwidth interconnects are critical to connecting these distributed data centers.

DCI-Campus: These data centers are often connected together, as on a campus. Distances are usually limited to between 2 km and 5 km. Depending on the availability of fiber, these distances are served by both CWDM and DWDM links.

DCI-Edge: This type of connection ranges from 2 km to 120 km. These links mainly connect the distributed data centers within a region and are usually subject to latency limits. DCI optical technology options include direct detection and coherent detection, both implemented using the DWDM transmission format in the fiber C-band (the 192 THz to 196 THz window). The direct-detection modulation format is amplitude-modulated, uses a simpler detection scheme, consumes less power, and costs less, but in most cases it requires external dispersion compensation. At 100 Gbps, 4-level pulse amplitude modulation (PAM4) with direct detection is a cost-effective choice for DCI-Edge applications; the PAM4 format carries twice the capacity of the traditional non-return-to-zero (NRZ) format at the same symbol rate. For the next generation of 400-Gbps (per wavelength) DCI systems, 60-Gbaud 16-QAM coherent formats are the leading contenders.
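The PAM4-versus-NRZ capacity claim can be sketched numerically. The 50-GBd symbol rate below is an illustrative assumption, not a figure from the article:

```python
import math

def bits_per_symbol(levels: int) -> int:
    """Bits carried by one symbol of an M-level amplitude format."""
    return int(math.log2(levels))

BAUD_GBD = 50  # assumed symbol rate in gigabaud, for illustration only

nrz_gbps = BAUD_GBD * bits_per_symbol(2)   # NRZ: 2 levels -> 1 bit/symbol
pam4_gbps = BAUD_GBD * bits_per_symbol(4)  # PAM4: 4 levels -> 2 bits/symbol

# At the same symbol rate, PAM4 carries twice the data rate of NRZ.
print(nrz_gbps, pam4_gbps)  # 50 100
```

The doubling is independent of the chosen baud rate, since it comes purely from the extra bit per symbol.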

DCI-Metro / Long Haul: This category covers fiber links beyond DCI-Edge, up to roughly 3,000 km for terrestrial links and longer for submarine links. A coherent modulation format is used for this category, and the modulation type may differ with distance. Coherent formats modulate both amplitude and phase and require a local-oscillator laser for detection. They need complex digital signal processing, consume more power, reach longer distances, and cost more than direct-detection or NRZ approaches.
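For the 60-GBd 16-QAM coherent format mentioned above, the raw line rate is easy to check. Dual-polarization multiplexing is assumed here (the article names only the baud rate and constellation):

```python
import math

def raw_line_rate_gbps(baud_gbd: float, qam_order: int, polarizations: int = 2) -> float:
    """Raw (pre-FEC) line rate of a coherent carrier:
    symbol rate x bits-per-symbol x polarizations."""
    return baud_gbd * math.log2(qam_order) * polarizations

# 60 GBd, 16-QAM (4 bits/symbol), dual polarization -- an assumption,
# since the article does not state the polarization scheme.
raw = raw_line_rate_gbps(60, 16)
print(raw)  # 480.0
```

The 480-Gbps raw rate leaves margin above the 400-Gbps payload for FEC and framing overhead.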

2. The Data Center Will Continue to Grow.

High-bandwidth interconnects are critical to connecting these data centers. Thus, DCI-Campus, DCI-Edge, and DCI-Metro / Long Haul data centers will continue to grow.

In the past few years, the DCI domain has become an increasing focus of traditional DWDM system vendors. The growing bandwidth demand of cloud service providers (CSPs) offering software as a service (SaaS), platform as a service (PaaS), and infrastructure as a service (IaaS) is driving demand for optical systems that connect switches and routers on different layers of the CSP data center network. Today, these links run at 100 Gbps; inside a data center, they can be cabled with direct attach copper (DAC) cables, active optical cables (AOC), or 100G "gray" optics. For links between data center facilities (campus or edge/metro applications), until recently the only available option was a full-featured, coherent transponder-based approach, which is suboptimal for this use.

With the transition to the 100G ecosystem, the data center network architecture has shifted away from the more traditional model in which all data center facilities are located in a single large "mega data center" campus. Most CSPs have moved to distributed regional architectures to achieve the required scale and make cloud services highly available. Data center regions are often located near metropolitan areas with high population densities in order to provide the best service (in terms of latency and availability) to the end customers closest to them. The regional architecture differs slightly between CSPs, but it consists of redundant regional "gateways" or "hubs" connected to the CSP's wide area network (WAN) backbone (and possibly used for peering, local content delivery, or subsea transmission). Each regional gateway is connected to every data center in the region, where the compute/storage servers and supporting infrastructure reside. When a region needs to expand, purchasing additional facilities and connecting them to the regional gateway is easy. Compared with the relatively high cost and long construction time of building a new mega data center, this allows a region to expand and grow rapidly, with the added benefit of introducing distinct availability zones (AZs) within a given region.

The transition from mega data center architectures to regions introduces additional constraints that must be considered when choosing gateway and data center facility locations. For example, to ensure a consistent customer experience (from a latency perspective), the maximum distance between any two data centers (through a common gateway) must be bounded. Another consideration is that gray optics are too inefficient to interconnect physically separate data center buildings within the same region. With these factors in mind, today's coherent platforms are also not well suited to DCI applications.

The PAM4 modulation format offers a low-power, small-footprint, direct-detection option. Using silicon photonics, a dual-carrier transceiver was developed around a PAM4 application-specific integrated circuit (ASIC) that integrates the digital signal processor (DSP) and forward error correction (FEC), packaged in a QSFP28 form factor. The resulting switch-pluggable module can perform DWDM transmission over a typical DCI link at 4 Tbps per fiber pair, with a power consumption of 4.5 W per 100G.
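The module's headline numbers can be cross-checked with simple arithmetic. The 40-channel DWDM grid below is an assumption, chosen to be consistent with the stated 4 Tbps per fiber pair:

```python
CHANNELS = 40            # assumed DWDM channel count per fiber pair
PER_CHANNEL_GBPS = 100   # per-wavelength rate from the article
WATTS_PER_100G = 4.5     # per-100G power figure from the article

fiber_pair_tbps = CHANNELS * PER_CHANNEL_GBPS / 1000
total_power_w = CHANNELS * WATTS_PER_100G

print(fiber_pair_tbps, total_power_w)  # 4.0 180.0
```

Under these assumptions, a fully populated fiber pair carries 4 Tbps for about 180 W of transceiver power per direction.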

3. Silicon Photonics and CMOS Will Be the Core of Optical Module Development.

The combination of silicon photonics for highly integrated optical components and high-speed silicon complementary metal oxide semiconductor (CMOS) circuits for signal processing will play a central role in the evolution of low-cost, low-power, switch-pluggable optical modules.

A highly integrated silicon photonic chip is the core of the pluggable module. Compared with indium phosphide, the silicon CMOS platform gives access to wafer-level optics on larger 200 mm and 300 mm wafers. Photodetectors for the 1300 nm and 1500 nm wavelength windows are built by adding germanium epitaxy to the standard silicon CMOS platform. In addition, components based on silicon dioxide and silicon nitride can be integrated to produce low-refractive-index-contrast, temperature-insensitive optical components.

In Figure 2, the output path of the silicon photonic chip contains a pair of traveling-wave Mach-Zehnder modulators (MZMs), one for each wavelength. The two wavelength outputs are combined on chip using an integrated 2:1 interleaver, which serves as the DWDM multiplexer. The same silicon MZM can be used for both NRZ and PAM4 modulation formats with different drive signals.
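Why the same MZM serves both formats comes down to the drive signal. A minimal sketch of an ideal MZM transfer curve, with an assumed half-wave voltage, shows why PAM4 drive levels must be pre-distorted:

```python
import math

V_PI = 4.0  # assumed half-wave voltage of the MZM in volts (illustrative)

def mzm_intensity(v: float) -> float:
    """Normalized intensity of an ideal MZM: cos^2(pi*v / (2*V_pi))."""
    return math.cos(math.pi * v / (2 * V_PI)) ** 2

def drive_voltage(target_intensity: float) -> float:
    """Invert the transfer curve: voltage needed to hit a target intensity."""
    return (2 * V_PI / math.pi) * math.acos(math.sqrt(target_intensity))

# PAM4 needs four equally spaced optical levels: 0, 1/3, 2/3, 1.
voltages = [drive_voltage(t) for t in (0.0, 1/3, 2/3, 1.0)]
# The required voltages are NOT equally spaced, so the driver/DSP must
# pre-distort the electrical PAM4 levels to linearize the cosine response.
# NRZ only uses the two end levels, where no such linearization is needed.
```

This is a textbook idealization, not Inphi's actual driver design; real modules also handle bias drift and bandwidth limitations.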

As the bandwidth requirements of data center networks continue to grow, Moore's Law-style advances in switching chips will enable switch and router platforms to hold the switch-chip port count (radix) constant while increasing the capacity of each port. The next generation of switching chips targets 400G per port. A project called 400ZR was launched in the Optical Internetworking Forum (OIF) to standardize the next generation of optical DCI modules and create a diverse, multi-vendor optical ecosystem. The concept is similar to WDM PAM4 but extended to support 400-Gbps requirements.
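The point that per-port capacity grows while the switch chip's port count stays fixed can be put in numbers. The 32-port count below is an assumed example, not a figure from the article:

```python
RADIX = 32  # assumed switch-chip port count, held constant across generations

capacities = {}
for port_gbps in (100, 400):
    capacities[port_gbps] = RADIX * port_gbps / 1000  # total capacity in Tbps
    print(f"{RADIX} x {port_gbps}G ports = {capacities[port_gbps]} Tbps")

# Moving from 100G to 400G per port quadruples chip capacity at the same radix,
# which is why per-port optics (like 400ZR modules) must keep pace.
```

The same arithmetic explains the pressure on pluggable optics: each generation of switch silicon multiplies per-port bandwidth without changing the faceplate port count.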