The Race to 400G Ethernet


By: Dave Fredricks | Data Center Infrastructure Architect


In early 2016, discussion of 200G and 400G Ethernet speeds started heating up among data center professionals. Published standards for 40GE and 100GE (IEEE 802.3ba) were first released in 2010, and by 2016 these optics had become readily available in the marketplace.

40G connections can now be achieved with an LC or MPO/MTP® fiber optic connector on OM3 or OM4 glass, or over copper using 40GBase-T on Category 8 cabling. 100GE speeds started to gain traction by mid-2016 as users deployed leaf-and-spine data center topologies. These 100GE connections are typically made with SR4 parallel optics using an MPO/MTP® connector on OM4 glass.

The move toward 200GE began in 2015, driven by large cloud providers such as AWS. They wanted faster connections without having to pay for expensive 2 km or 10 km single-mode optics.

This led to a practical compromise: innovative cloud providers asked OEM manufacturers to build custom optics with a 500-meter reach on single-mode glass, instead of the specified 2 km or 10 km distance. The OEMs could then use lower-cost components in the transceivers, reducing the overall price.

Today we see large cloud providers running speeds of 200GE in their new platforms.

Two Sides of Success

The next step is 400GE. Two standards organizations have been working behind the scenes toward this benchmark for the past couple of years.

One standard is IEEE 802.3bs, which covers transceivers. Set for release in 2017, this standard also includes 200GE-speed optics.

The other standard is TIA-492AAAE. This standard addresses the fiber optic glass itself: specifically, an enhanced multimode fiber to be called OM5, or WideBand Multimode Fiber (WBMMF).

OM5 glass is designed to provide an effective operating window for four wavelength-multiplexed 100G light signals between 840 nm and 950 nm. This glass is not new to the industry; in fact, when laser-optimized glass (OM3 and OM4) is made, a small percentage will even meet the OM5 specification.
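The window described above can be sketched as a quick check. Note the 850/880/910/940 nm values below are the commonly cited SWDM4 nominal wavelengths, not figures taken from this article, so treat them as an illustrative assumption:

```python
# Sketch: the OM5 (WBMMF) operating window from the text, and four
# shortwave-WDM carrier wavelengths assumed here for illustration.
OM5_WINDOW_NM = (840, 950)            # effective window per the article
SWDM4_WAVELENGTHS_NM = [850, 880, 910, 940]  # assumed SWDM4 nominals

def in_window(wavelength_nm: int, window=OM5_WINDOW_NM) -> bool:
    """True if a carrier wavelength falls inside the OM5 window."""
    low, high = window
    return low <= wavelength_nm <= high

# All four assumed carriers fit inside the OM5 window.
print(all(in_window(w) for w in SWDM4_WAVELENGTHS_NM))
```

Four such carriers on one fiber pair, each carrying a 100G signal, is what gives OM5 its path to 400G over duplex multimode.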

The speed/distance specification of this fiber optic glass is determined by the Differential Mode Delay (DMD) test. While these types of glass and their testing processes are not new to the industry, the combination is proving to be a recipe for success: the formula for making the glass keeps improving, yielding higher-bandwidth product that supports the OM5 specification.

Another aspect of 100GE, 200GE and 400GE can be seen in the transceiver form factors themselves. There are several types of optics on the market, including CFP, CFP2, CXP, SFP+, QSFP and CPAK.

A newer form factor, however, is the OSFP, championed by Arista. The advantage of this transceiver is that its housing is shorter and wider than a QSFP, enabling 32 optics to be housed in a 1U footprint.

As data center operators, we mostly concern ourselves with the front side of the optics (the LC or MPO/MTP® connector). But it is still important to understand all aspects of the different optics available for a connection; that knowledge helps an operator select the best transceiver at the lowest cost for the desired link.

Watch What Happens in 2017

The soon-to-be-released TIA-942-B standard will include MPO/MTP® ferrules with 16 and 32 fibers, which better accommodate higher speeds into the optics. For example, 16 fibers x 25G = 400G, or 8 fibers x 50G = 400G.
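The lane math above generalizes to any parallel-fiber link: aggregate speed is simply the number of transmit lanes times the per-lane rate. A minimal sketch (connector pairings shown are illustrative, not taken from the standard text):

```python
# Sketch: aggregate throughput of a parallel-fiber link is
# lanes x per-lane rate, mirroring the article's 400G examples.
def aggregate_gbps(lanes: int, lane_rate_gbps: int) -> int:
    """Aggregate speed of a parallel link in Gb/s."""
    return lanes * lane_rate_gbps

# Two routes to 400G from the text, plus today's 100G SR4 for context.
options = [
    (16, 25),  # 16 fibers at 25G each (16-fiber MPO ferrule)
    (8, 50),   # 8 fibers at 50G each
    (4, 25),   # 100GBASE-SR4 style: 4 fibers at 25G each
]
for lanes, rate in options:
    print(f"{lanes} fibers x {rate}G = {aggregate_gbps(lanes, rate)}G")
```

Running this prints the same arithmetic as the article: both 16 x 25G and 8 x 50G reach 400G, while 4 x 25G yields today's 100G.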

These 16- and 32-fiber connectors do not mate with current 12- and 24-fiber connectors, which understandably causes confusion for data center operators. Many are taking a "wait and see" approach to learn how this plays out in the market.

Taking all of these variables into consideration, an efficient and scalable cabling solution can be deployed. At CABLExpress, we design structured cabling plants to comply with the current published standards, and we have a road map in place for 2017 and future generations of speeds. In the meantime, we will continue to post updates on these topics throughout 2017 as new data is released.


Learn more about spine-and-leaf cabling in our white paper, Cabling Designs for Hyperconvergence.