Data Center Cabling Solutions

Taking Your Network to 40/100G Ethernet

By Josh Taylor, Director - CABLExpress Product Management

Published February 2012, updated April 2019


Introduction

With the ratification of new industry standards and increased demands on data center throughput, 40/100G Ethernet will be an integral component of the next-generation data center. In fact, it is already an emerging influence on how organizations plan, build and operate their existing data center architectures.

The market proves this: manufacturers are already responding to the increased demand for Ethernet hardware, including cabling products, switches and transceivers.

The implementation of 40/100G Ethernet is dependent upon a variety of organizational factors, including existing infrastructure, budget, throughput demand and leadership priority. However, it is clear that the stage is set for the most dramatic change related to data center fiber optic infrastructures since their inception.

In this paper, we will discuss the deep impact that this network speed transition has on data center cabling infrastructure, and the decisions that organizations will need to make to accommodate these changes.

Why are data centers migrating to 40/100G Ethernet?

Data growth

The world revolves around digital data. We now rely on data to conduct business, engage in social activities and manage our lives. There is no sign of slowed growth in the production of, and demand for, more data – as well as faster access to it. According to the 2014 IDC Digital Universe Study sponsored by EMC [1]: “Like the physical universe, the digital universe is large – by 2020 containing nearly as many digital bits as there are stars in the universe. It is doubling in size every two years, and by 2020 the digital universe – the data we create and copy annually – will reach 44 zettabytes, or 44 trillion gigabytes.”

Cloud

Among several other factors, the increase in cloud storage will drive the need for data throughput. “In 2013, less than 20% of the data in the digital universe [was] ‘touched’ by the cloud, either stored, perhaps temporarily, or processed in some way. By 2020, that percentage will double to 40% [1].”

The Internet of Things

Another factor contributing to the exponential growth of information is the advent of the Internet of Things (IoT). “Fed by sensors soon to number in the trillions, working with intelligent systems in the billions, and involving millions of applications, the Internet of Things will drive new consumer and business behavior that will demand increasingly intelligent industry solutions... [1].”

This exponential growth in information means processing speeds have to increase as well, so as not to slow access to data. High-performance cabling that can transfer data over 40/100G Ethernet will be a necessary addition to data centers looking to keep up with this digital data growth.

Virtualization: the double-edged sword

Virtualization can help data centers save on capital expenses, improve operational efficiency, and create more agile infrastructures.

There are many types of virtualization, from desktop to storage to server virtualization. Server virtualization, in particular, calls for fewer, more efficient servers, which translates to fewer server connections. Because there are fewer connections, it is important that each of these connections works properly.

Unfortunately, most data centers do not contain cabling infrastructure designed to meet the high-performance capabilities that virtualization demands. This is particularly true for data centers built in the 1980s, before high-performance cabling even existed.

Decreasing tolerance for downtime

When data transactions are interrupted due to network downtime, it translates to a very real loss of revenue. When Amazon.com went down in August 2013, the company lost $66,240 per minute [2]. Considering how quickly lost revenue can add up, it makes sense that there is an extremely low tolerance for network downtime.

The effect of downtime on revenue is even greater when considering end-user experience. According to one source, network downtime measured for user experience and business needs costs an average of $5,600 per minute [3]!
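The arithmetic behind these figures is simple but sobering. A minimal sketch, using the per-minute costs cited above and an assumed (illustrative) outage length:

```python
# Quick arithmetic on the downtime figures cited above.
# $66,240/min is the Amazon outage figure [2]; $5,600/min is the
# industry-average figure [3]. The 30-minute outage is an
# illustrative assumption, not from either source.

AMAZON_COST_PER_MIN = 66_240   # USD per minute, from [2]
AVG_COST_PER_MIN = 5_600       # USD per minute, from [3]

def downtime_cost(minutes, cost_per_minute):
    """Revenue lost over an outage of the given length."""
    return minutes * cost_per_minute

# A hypothetical 30-minute outage:
print(downtime_cost(30, AMAZON_COST_PER_MIN))  # -> 1987200
print(downtime_cost(30, AVG_COST_PER_MIN))     # -> 168000
```

Even at the industry-average rate, a half-hour outage costs well over $100,000.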

Network administrators should have a contingency plan in place in the event of network failure. However, one of the most effective ways to mitigate this issue is to make sure the existing network is able to meet the demands of increasing data throughput, which includes upgrading networks to handle 40/100G speeds.

Managing capital expenses

While migrating to 40/100G Ethernet creates an up-front capital expense, it saves data centers in the long run by future-proofing infrastructure. Not only will data centers be prepared for the increasing demands on data throughput, but the high-performance cabling infrastructure required of 40/100G Ethernet can grow with future hardware upgrades. This will reduce the need to tear out and replace cabling with each upgrade.

Factors in preparing for 40/100G Ethernet migration

Link distances and loss amounts

As data center speeds increase, optical loss budgets decrease. Optical loss occurs over cabling distance and at mating points where connections are made. Since most data center cabling runs are shorter distances (compared to long-haul campus runs), the inherent losses from distance in a data center are somewhat negligible compared to the losses incurred from mating points.

As connections in the data center increase to improve manageability, performance suffers. This is because added connections contribute to increased dB loss. Therefore, a balance must be maintained between manageability and performance.

Choosing the right cabling product can combat the issue of balancing manageability versus performance. Cabling products with low optical loss rates will ensure that a structured cabling environment is running at its peak. When comparing dB loss rates of cabling products, look for “maximum” instead of “typical” loss rates. While typical loss rates can suggest a product’s performance capabilities, they are not reliable when calculating loss budgets.
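A loss-budget check like the one described above can be sketched in a few lines. All numbers here are assumptions for illustration: 1.9 dB is the 40GBASE-SR4 channel budget over OM3 per IEEE 802.3ba, 0.35 dB per mated pair is a sample maximum connector loss, and 3.0 dB/km is a typical OM3 attenuation at 850 nm; always use your vendor’s maximum-loss specifications:

```python
# Sketch of an optical loss-budget check. All values are
# illustrative assumptions:
#   1.9 dB  - 40GBASE-SR4 channel budget over OM3 (IEEE 802.3ba)
#   0.35 dB - assumed *maximum* loss per mated connector pair
#   3.0 dB/km - typical OM3 attenuation at 850 nm

BUDGET_DB = 1.9
CONNECTOR_LOSS_DB = 0.35
FIBER_LOSS_DB_PER_M = 3.0 / 1000

def channel_loss(length_m, mated_pairs):
    """Total insertion loss: distance loss plus connector loss."""
    return length_m * FIBER_LOSS_DB_PER_M + mated_pairs * CONNECTOR_LOSS_DB

# A 50 m run through a structured system with 4 mated pairs:
loss = channel_loss(50, 4)
print(f"{loss:.2f} dB of {BUDGET_DB} dB budget")  # 1.55 dB of 1.9 dB budget
print("within budget" if loss <= BUDGET_DB else "over budget")
```

Note how the numbers bear out the earlier point: the 50 m of fiber contributes only 0.15 dB, while the four mating points contribute 1.4 dB.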

Cabling infrastructure design

Due to the exponential port growth experienced by data centers during the last two decades, cabling infrastructure is often reduced to a cluttered tangle commonly referred to as “spaghetti cabling.” Spaghetti cabling leads to decreased efficiency, increased dB loss and more cable management challenges.

The Telecommunications Infrastructure Standard for Data Centers, or TIA-942, was developed to address various data center infrastructure design topics, including the problem of spaghetti cabling. Among other aspects of data center planning and design, TIA-942 focuses on the physical layout of cabling infrastructure.

TIA-942 offers a roadmap for data center cabling infrastructure based on the concept of a structured cabling environment. By creating logical segments of connectivity, a structured cabling system can grow and move as data center needs change and throughput demands increase. Therefore, implementing a structured cabling system in accordance with TIA-942 standards is the ideal way to prepare for migration to 40/100G speeds.

The heart of a structured cabling system is the main distribution area, or MDA. All equipment links back to the MDA. Other terms used to define this area include: main cross connect, main distribution frame (MDF), and central patching location (CPL).

The principle of a structured cabling system is to avoid running cables from active port to active port (often referred to as “point-to-point”). Instead, all active ports are connected to one area (the MDA), where the patching is done. This is also where moves, adds and changes (MACs) take place.

TIA-942 calls for the use of interconnect points, which are typically in the form of patch panels (also referred to as fiber enclosures). Patch panels allow for patch cables (or jumpers) to be used in the front of the racks or cabinets where the equipment is housed. The patch cable would then connect to a fiber optic trunk, and then to another patch panel in the MDA.

There are several advantages to implementing a structured cabling system. First, using fiber optic trunks significantly reduces the amount of cabling bulk both underfloor and in overhead conveyance. Implementing a structured cabling system also reduces airflow congestion, which reduces power usage.

Another distinct advantage to a structured cabling system is that it allows for modularity, meaning connector changes can be made without having to remove horizontal or distribution cabling. For example, a chassis-based switch with 100BASE-FX ports is connected to a patch panel using SC fiber optic jumpers.

To upgrade the chassis and install new blades with LC ports, you no longer have to replace the entire channel as you would with a point-to-point system. Instead, the module within the patch panel is replaced. Underfloor and overhead cabling would remain undisturbed.

However, it should be noted that this method does add insertion loss to the channel because it adds more mating points. To offset insertion loss created by additional mating points, high-performance fiber optic cables should be used for implementation.

Connectivity options

When migrating to 40/100G speeds, there are several connectivity options to consider when planning your cabling infrastructure.

The first uses long-haul (LX) transceivers with single-mode (SM) cabling. Data is transmitted via serial transmission. In a serial transmission, one fiber is dedicated to carry transmitting data and another fiber to carry receiving data. These two fibers make what is referred to as a “channel.” A channel is defined as the fiber, or group of fibers, used to complete a data circuit. Until recently, serial transmission has been used for Ethernet speeds up to 10G.

This setup is typically not used in data centers because it is built for long distances. It is also very expensive, despite the abundance (and therefore low cost) of single-mode cabling. In order to work effectively over long distances, the lasers used in LX transceivers are extremely precise, and correspondingly expensive. This drastically increases the overall cost of an LX/SM connectivity solution.

The next option uses short-haul (SX) transceivers with multi-mode (MM) cabling. Data is transmitted via parallel optic transmission. Parallel optic transmission aggregates multiple fibers for transmission and reception. For 40G SR4 transmission, four fibers transmit at 10G each, while four fibers receive at 10G each. This means a total of eight strands of fiber will be utilized for a 40G Ethernet channel.

With the Institute of Electrical and Electronics Engineers (IEEE) 802.3bm standard update, a new connectivity option is offered in 100G SR4. This option allows for 100G Ethernet speeds utilizing a 12-fiber MPO interface. It is the same principle as 40G SR4, but each fiber would be transmitting or receiving 25G.
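The lane and fiber math for these parallel-optic options follows directly from the SR4 descriptions above, and can be laid out as:

```python
# Lane math for the parallel-optic options described above.
# Fiber counts and per-lane rates come straight from the SR4
# descriptions in the text.

PARALLEL_OPTIONS = {
    # name: (lanes per direction, Gbps per lane)
    "40G SR4":  (4, 10),
    "100G SR4": (4, 25),
}

for name, (lanes, rate) in PARALLEL_OPTIONS.items():
    total = lanes * rate
    fibers = lanes * 2  # separate transmit and receive fibers
    # Note: on a 12-fiber MPO interface, the 4 center fibers go unused.
    print(f"{name}: {lanes} x {rate}G = {total}G over {fibers} fibers")
```

Both options use the same eight fibers of a 12-fiber MPO; the jump from 40G to 100G comes entirely from raising the per-lane rate from 10G to 25G.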

These short-haul connectivity setups are more ideal for migrating to 40/100G Ethernet because they work well under the short distances found within a data center. Also, because SX transceivers use a vertical-cavity surface-emitting laser, or VCSEL, they are much less expensive than their LX counterparts.

The next options feature newer technology. Recent advancements offer alternatives to the standard QSFP MM transceivers with an MPO connection. These new transceivers utilize a duplex LC footprint, a technology that offers a significant advantage to end-users with LC connector footprints in their existing infrastructures.

The QSFP-40G Universal Transceiver not only utilizes the LC duplex footprint but is also universal for both multi-mode and single-mode fiber. This transceiver is standards-based as well, compliant with IEEE 802.3bm, so it can interoperate with QSFP-40G-LR4 and QSFP-40G-LR4L.

A bi-directional (Bi-Di) transceiver also uses the LC duplex footprint. The key factor for this transceiver is its use of multiple wavelengths: it runs two 20-Gbps channels, with each fiber of the duplex pair transmitting and receiving simultaneously on different wavelengths.

Fiber types

If multi-mode cables are being used to migrate to 40/100G Ethernet, it is recommended they be OM3 or OM4 fiber, replacing any OM1 or OM2 fiber cables.

OM4, the newest fiber type on the market, transmits the most bandwidth and is more effective over longer distances. OM4 is highly recommended for any new installs as it provides the longest lifespan in a cabling infrastructure.
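The OM3/OM4 recommendation above can be made concrete with the supported SR4 link distances. The reach figures here are the commonly cited IEEE 802.3 supported distances for 40GBASE-SR4 (100 m over OM3, 150 m over OM4); verify them against your transceiver specifications:

```python
# Comparing multi-mode fiber types for SR4 transmission. Reach
# figures are the commonly cited IEEE 802.3ba supported distances
# for 40GBASE-SR4; the 120 m run length is an illustrative assumption.

SR4_REACH_M = {
    "OM3": 100,  # meters
    "OM4": 150,
}

run_length = 120  # illustrative link length in meters

for fiber, reach in SR4_REACH_M.items():
    ok = "supports" if reach >= run_length else "cannot support"
    print(f"{fiber} ({reach} m reach) {ok} a {run_length} m run")
```

For this hypothetical 120 m run, OM3 falls short while OM4 has headroom, which is why OM4 is the safer choice for new installs.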

Back to Basics

Connectors

The LC fiber cable connector is the most accepted connector used in the data center, especially for high-density network applications. The LC connector has one fiber and is typically mated in pairs for a duplex connection.

Possibly the most drastic change data centers will undergo in migrating to 40/100G Ethernet is a change from the LC connector to the MPO-style connector.

Developed by Nippon Telegraph and Telephone Corporation (NTT), “MPO” stands for “multi-fiber push-on.” (A popular brand of the MPO-style connector, US Conec’s MTP®, is often used to refer to all MPO connectors – similar to using “Band-Aid®” for “adhesive bandage.”)

What about copper?

There have been significant technology improvements over the past few decades that create the potential for 40G copper links. Choosing copper over fiber usually comes down to cost. Active copper cables, with a transceiver on each end of a coaxial cable assembly, are surging in the market, driven by top-of-rack (TOR) architectures that place switches at the top of each rack instead of a patch panel. This approach can be costly, especially when considering hardware refresh rates and support windows.

For the long-term, it is clear that fiber optics will play the dominant role in data center structured cabling. Fiber has better transmission properties and is not susceptible to external interference the way the copper medium is. However, copper cabling will continue to have a role towards the “edge” of a data center structured cabling system, as well as the edges of a campus network.

Conclusion and next steps for data centers

Data centers are experiencing the most significant change in cabling infrastructure since the introduction of fiber optic cabling. No longer is it a question of if data centers will migrate to 40/100G Ethernet, but when. Installing a high-performance, fiber optic structured cabling infrastructure is essential to a successful migration.

We covered why migration to 40/100G Ethernet is imminent, as well as the decisions data center managers will need to make to prepare for implementation. There are several next steps you can take to prepare for this drastic change:

  1. Determine your current and future data center needs, including throughput demand, data production rates and business-driven objectives. In what ways does your current data center infrastructure support and/or fail those needs?
  2. Use this information to determine when your data center should migrate to 40/100G Ethernet.
  3. Map out your current data center infrastructure.
  4. Use this map to create a plan for the hardware and cabling infrastructure upgrades necessary for migration.
  5. Create a plan for migration, including internal communication strategy, budget, timeline, and roles and responsibilities of those involved.

The timeline for migration is different for every data center, depending on technology needs, budget, size and organizational priority. However, educating yourself on 40/100G Ethernet, evaluating your current cabling infrastructure and beginning plans for implementation will go a long way toward ensuring a smooth migration.

References

[1] http://www.emc.com/leadership/digital-universe/2014iview/index.htm

[2] http://www.forbes.com/sites/kellyclay/2013/08/19/amazon-com-goes-down-loses-66240-per-minute/

[3] http://www.datacenterknowledge.com/archives/2011/08/10/true-costs-of-data-center-downtime/


Have questions?  Need help with a project?

Contact us today and let us know.

Speak To The Author