
Data Center Ethernet Technology and Evolution to 224 Gbps

Application Note

Introduction

As bandwidth demands from digital services (5G, the Internet of Things, etc.) continue to increase, data centers must upgrade their infrastructure to keep pace. Research into 800G and 1.6T Ethernet is underway, with single-lane interface speeds of up to 224 Gbps. This application note focuses on the latest evolution of high-speed Ethernet links in the modern data center and the Keysight Technologies high-speed test solutions available for interfaces up to 224 Gbps.

Table of Contents

  • Introduction
  • Data Center Interconnects
  • The Development of Data Center Interconnection Technology
  • Moving to 800G Ethernet
  • Technical Challenges of 800G Ethernet
  • Test Solutions
  • 112 Gbps Test Solutions
  • 224 Gbps Test Solutions
  • Signal Generation
  • Component Test
  • Waveform Test
  • Bit Error Ratio Test
  • Test Solutions Summary

Data Center Interconnects

Large-scale internet data centers are the fastest-growing market for optical interconnection technology and innovation, with 70% of all internet traffic occurring inside the data center, driven by increasing machine-to-machine communication. A typical data center network structure based on the Clos architecture (also known as leaf-spine) is shown in Figure 1.
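
To make the leaf-spine scaling concrete, the short Python sketch below estimates link counts for a hypothetical two-tier fabric; the switch and server counts are illustrative assumptions, not figures from this note.

    # Illustrative only: rough link budget for a two-tier leaf-spine fabric.
    # Every leaf switch connects to every spine switch, so fabric links grow
    # as leaves x spines, while server links grow with the server count.
    def leaf_spine_links(leaves, spines, servers_per_leaf):
        fabric_links = leaves * spines
        server_links = leaves * servers_per_leaf
        # Downlink-to-uplink link ratio: a rough oversubscription proxy
        # when all links run at the same rate.
        ratio = server_links / fabric_links
        return fabric_links, server_links, ratio

    # Hypothetical fabric: 32 leaves, 8 spines, 24 servers per leaf
    print(leaf_spine_links(32, 8, 24))   # -> (256, 768, 3.0)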

A data center internal network usually has 3 to 4 levels from bottom to top.

Moving through the levels from the Server to the Core, the reach of each interconnection increases from a few meters to several kilometers, necessitating changes in technology and interface standards.

Server Cabinets / Top of Rack Switch (TOR): At the lowest level, individual server racks are connected to TOR switches at the top of the cabinet. Current data centers generally deploy 25G networks, with some artificial intelligence (AI) applications utilizing 50G speeds. Over the next few years, 100G, 200G, and 400G interconnection technology will be employed. Connection distances are short, either within the cabinet or to adjacent cabinets, and generally less than 5 meters. Typical interface technologies used today are direct attach copper cable (DAC) and active optical cable (AOC). As speeds evolve to 400G and 800G, the reach of DACs will become too short, and active electrical cable (AEC) will be used instead.

TOR to Leaf Switch: The second level is the connection from TOR switches to Leaf switches. This distance ranges up to about 50 meters, using 100G interconnection technology now, moving to 200G and 400G speeds, and in a few years to 800G. Today, optical modules such as 100GBASE-SR4 or 200GBASE-SR4 are typically used with multi-mode optical fiber; 100G links use NRZ (non-return-to-zero) signaling. For this level and the higher-level interconnections, the move to 200G and 400G also changes the signaling to PAM4 (four-level pulse amplitude modulation).
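
Because PAM4 carries two bits per symbol versus one for NRZ, the lane bit rate doubles at the same symbol (baud) rate. A minimal sketch of that arithmetic, using commonly quoted lane rates (which include FEC overhead):

    import math

    # Lane bit rate = baud rate x log2(number of amplitude levels)
    def lane_bit_rate_gbps(baud_gbd, levels):
        return baud_gbd * math.log2(levels)

    print(lane_bit_rate_gbps(26.5625, 2))   # NRZ lane:  26.5625 Gb/s (25G class)
    print(lane_bit_rate_gbps(26.5625, 4))   # PAM4 lane: 53.125 Gb/s  (50G class)
    print(lane_bit_rate_gbps(53.125, 4))    # PAM4 lane: 106.25 Gb/s  (100G class)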

Leaf to Spine: The leaf to spine connection may be within the campus, or to an adjacent campus, with a connection distance of up to 500 meters. It uses interface rates similar to TOR to Leaf: 100G is moving to 200G/400G now and to 800G around 2023. With the longer reach, the technology moves to single-mode fiber, often with several parallel fibers, utilizing modules such as 100G-PSM4 and 100G-CWDM4 and moving to 200GBASE-DR4 and 400GBASE-DR4.

Spine to Core: As the reach increases further, up to 2 kilometers, the cost of fiber starts to be a consideration, so wavelength division multiplexing (WDM) technology is often used to send data via several different optical wavelengths on one fiber, today using modules such as 100GBASE-LR4, 100G-CWDM4, and 400GBASE-ER4/-LR4/-FR4.
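
Whether the lanes are parallel fibers (e.g. 100G-PSM4, 400GBASE-DR4) or wavelengths multiplexed onto one fiber (e.g. 100G-CWDM4, 400GBASE-FR4), the aggregate rate is simply the number of lanes times the per-lane rate. The sketch below uses nominal per-lane rates for illustration; the actual signaling rates carry FEC and encoding overhead.

    # Aggregate rate = lanes (fibers or wavelengths) x nominal per-lane rate.
    modules = {
        "100G-PSM4 (4 parallel fibers x 25G NRZ)":      (4, 25),
        "100G-CWDM4 (4 wavelengths x 25G NRZ)":         (4, 25),
        "400GBASE-DR4 (4 parallel fibers x 100G PAM4)": (4, 100),
        "400GBASE-FR4 (4 wavelengths x 100G PAM4)":     (4, 100),
    }
    for name, (lanes, per_lane) in modules.items():
        print(f"{name}: {lanes * per_lane}G aggregate")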

Data Center Interconnect (DCI): This is generally a connection between several adjacent data centers for load balancing or disaster recovery backup. The distance may range from tens of kilometers to around a hundred kilometers. Over this longer distance, dense wavelength division multiplexing is employed and, more recently, coherent communication is being used in preference to direct-detect technologies. Telecom operators have deployed 100G coherent technology for many years in long-distance (hundreds of kilometers) applications, and speed increases to 200G, 400G, and 800G are also ongoing. For DCIs, since the transmission distance is not as far as in telecom applications and the links are mainly point-to-point, coherent transmission is feasible using pluggable module technology with smaller size and lower power consumption, such as 400G-ZR.

The Development of Data Center Interconnection Technology

As shown in Figure 1 there are several different electrical and optical interconnection technologies in use in the data center, and they are continuously evolving.

The speed of each interface can be achieved by more than one method or interface standard, each having different trade-offs between performance, reach, power consumption, and cost.

There are three technical directions to increase the speed of the interconnection interface (Figure 2):

The first method is to directly increase the data or baud rate of the channel, for example the development from 155 Mb/s to 622 Mb/s in the SDH/SONET era, or from 100 Mb/s Ethernet ports all the way to 10 Gb/s Ethernet ports. Often the required baud-rate improvement is ahead of the technology available at the time, so other methods have also been used. A short illustration of this first approach follows below.
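
As a simple sketch of this first lever, a single NRZ lane's bit rate scales directly with its symbol (baud) rate. The line rates printed below include the 8b/10b or 64b/66b encoding overhead and are shown for illustration only.

    # First lever: a single NRZ lane, raising only the symbol (baud) rate.
    bits_per_symbol = 1                          # NRZ carries 1 bit per symbol
    for label, baud_gbd in [("1000BASE-X", 1.25),
                            ("10GBASE-R", 10.3125),
                            ("25GBASE-R", 25.78125)]:
        line_rate = baud_gbd * bits_per_symbol
        print(f"{label}: {baud_gbd} GBd -> {line_rate} Gb/s line rate")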
