Data Center Cabling Solutions

Cabling Designs for Hyperconvergence


When considering the impact converged infrastructure and hyperconvergence are having on cabling, it’s important to first understand how data center topologies are evolving. Over the past several years, the emergence of software-defined networking (SDN) has pushed data center designs to move from three-layer topologies (Figure 1) to leaf-spine topologies (Figure 2).

In a three-layer design, the bottom (or access) layer connects hosts to the network. The middle layer is the distribution or aggregation layer. The core layer provides routing services to other parts of the data center as well as to services outside the data center, such as internet access and connectivity to other data center locations. An example of this topology would use Cisco Nexus 7000 Series switches at the core, Cisco Nexus 5000 Series switches at the aggregation layer and Cisco Nexus 2000 Series switches at the access layer.

While this design is simple, it has scalability limitations. It is subject to bottlenecks when uplinks between layers are oversubscribed, to latency as traffic flows up and down through each layer, and to lost capacity when redundant links are blocked by protocols such as spanning tree.
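To put numbers on the oversubscription concern, the short Python sketch below estimates the oversubscription ratio of a single access switch. The port counts (48 x 10G host-facing ports, 4 x 40G uplinks) are illustrative assumptions, not figures taken from Figure 1.

# Minimal sketch of an oversubscription calculation for a three-layer access
# switch. Port counts below are assumptions for illustration only.

def oversubscription_ratio(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Return the ratio of host-facing bandwidth to uplink bandwidth."""
    downlink_capacity = downlink_ports * downlink_gbps
    uplink_capacity = uplink_ports * uplink_gbps
    return downlink_capacity / uplink_capacity

ratio = oversubscription_ratio(downlink_ports=48, downlink_gbps=10,
                               uplink_ports=4, uplink_gbps=40)
print(f"Oversubscription: {ratio:.1f}:1")  # 3.0:1, so uplinks become a bottleneck under load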

Leaf-spine is an alternative design in which leaf switches form the access layer. These leaf switches are fully mesh-connected to all of the spine switches. The mesh ensures that traffic between any two leaf switches crosses only a single spine switch, so every leaf is an equal, predictable distance from every other leaf.

This topology is easily scalable. The links between the leaf and spine layers can be either routed or switched. All links are forwarding, meaning none of the links in a path are blocked. In a switched fabric, for example, technologies such as Transparent Interconnection of Lots of Links (TRILL) or Shortest Path Bridging (SPB) keep every link active.
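As a rough illustration of how the full mesh scales, the sketch below counts the fabric links in a small leaf-spine design and shows that leaf-to-leaf traffic always crosses exactly one spine. The leaf and spine counts are assumptions for illustration only.

# Minimal sketch of leaf-spine fabric sizing. The leaf/spine counts are
# illustrative assumptions, not a recommendation for Figure 2.

def leaf_spine_links(leaf_count, spine_count, uplinks_per_spine=1):
    """Every leaf connects to every spine, so the mesh needs leaf x spine links."""
    return leaf_count * spine_count * uplinks_per_spine

def leaf_to_leaf_hops():
    """Traffic between any two leaves crosses exactly one spine: leaf -> spine -> leaf."""
    return 2

print(leaf_spine_links(leaf_count=8, spine_count=4))  # 32 fabric links, all forwarding
print(leaf_to_leaf_hops())                            # 2 hops, regardless of which leaves talk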

Converged Infrastructure and Apache Hadoop

Converged infrastructure works by grouping multiple technology components into a single computing package. Some of the components of a converged infrastructure may include servers, data storage devices, network hardware and software for IT management.

Many customers are using converged infrastructure to build Apache Hadoop clusters. Apache Hadoop is an open-source software framework written in Java for distributed storage and distributed processing of very large data sets. Figure 3 shows a typical Hadoop cluster using Arista equipment.

In the far-left core cabinet is an Arista 7500R chassis with 12-port MXP line cards. Connecting out of the MXP line cards are 24-fiber MTP® to MTP® trunks going up into MTP® coupler panels. Each MXP line card has 12 ports and each MTP® coupler panel has 12 couplers, so each panel mirrors one line card one-to-one.

On the back side of the MTP® coupler panels, 48-fiber MTP® to MTP® trunks terminated with (2) 24-fiber MTP® connectors serve as backbone cabling, running into the back of 24-port MTP®-LC cassette modules. Each 24-port cassette module replicates one core switch to connect a row of compute cabinets.

The center cabinet, an end-of-row (EoR) or middle-of-row (MoR) cabinet, replicates four network core switches: two for the main data connections and two for management.

The cabinet on the right is the compute cabinet and has two top-of-rack (ToR) switches. This configuration uses one ToR switch for data at 10G or 25G and one for management at 1G or 10G. The ToR switches are connected back to the EoR cabinet using 8-fiber/4-port LC to LC cable assemblies. Below the ToR switches in the compute cabinet are servers and disk arrays.
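As a sanity check on the fiber counts in this walkthrough, the following back-of-the-envelope sketch tallies the fibers and LC ports implied by the trunk and cassette sizes described above. Treat it as a sizing sketch, not a bill of materials.

# Fiber math for the Figure 3 walkthrough, using the counts from the text above.

FIBERS_PER_MTP_TRUNK = 24       # MXP port to coupler panel
PORTS_PER_MXP_CARD = 12
FIBERS_PER_BACKBONE_TRUNK = 48  # terminated as (2) 24-fiber MTP connectors
FIBERS_PER_LC_DUPLEX_PORT = 2   # one transmit fiber, one receive fiber

# Fibers leaving one MXP line card toward the coupler panels
card_fibers = PORTS_PER_MXP_CARD * FIBERS_PER_MTP_TRUNK
print(card_fibers)  # 288 fibers per line card

# LC ports delivered by one cassette fed by a 48-fiber backbone trunk
cassette_lc_ports = FIBERS_PER_BACKBONE_TRUNK // FIBERS_PER_LC_DUPLEX_PORT
print(cassette_lc_ports)  # 24 duplex LC ports, matching the 24-port cassette

# Each 8-fiber/4-port LC assembly back to a ToR switch
print(8 // FIBERS_PER_LC_DUPLEX_PORT)  # 4 duplex LC connections per assembly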

All the Components of a Data Center

Converged infrastructure can have all the components of a data center (servers, data storage devices, networking equipment and software) contained in a group of cabinets or one cabinet as shown in Figure 4. As data center owners need more computing power, they can add infrastructure either one cabinet at a time or multiple cabinets at a time.

When end-users install these systems, in-cabinet connectivity can become a challenge due to the different media types available. These media types include Twinax Direct Attach Copper (DAC) cables, Active Optical Cables (AOCs), and traditional transceivers and optics with patch cords and jumpers. Note that DAC cables come as either passive or active.

In-Cabinet Connectivity Using Twinax DAC Cables

In-cabinet connectivity for Figure 4 uses Twinax DAC cabling. The DAC cables connect the servers and software appliances to the network switch or uplink switch. Generally, DAC cables (as shown in Figure 5) shorter than five meters are passive and do not consume power. They do nothing to the signal, acting only as a pass-through transmission medium.

Before the signal enters a passive DAC cable, the switch performs the signal conversion, conditioning, amplification and equalization, and controls skew. Properly utilizing passive DAC cables therefore requires switches with signal-processing chipsets that maintain acceptable skew.

Typically, DAC cables longer than five meters are active and draw power at each end, though this varies from vendor to vendor. On average, active cables cost about three times more than passive cables. SFP+ DAC is a popular choice for 10G Ethernet: it reaches up to 10 meters, has low latency and costs less than transceiver-based links.

One of the challenges of using DAC cables is cable management. They come in standard lengths and breakouts, which can leave long service loops and cause cable congestion. Generally, DAC cables can support 1G to 1G, 10G to 10G, 40G to 40G and 100G to 100G connections. They can also break out from 40G to (4) 10G and from 100G to (4) 25G.
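The breakout options above follow directly from the lane structure of the ports involved. The sketch below shows the lane math for the two common cases; it assumes standard 4-lane QSFP+/QSFP28 ports, and actual vendor support and available lengths vary.

# Lane math behind the DAC breakout options listed above.

BREAKOUTS = {
    "40G QSFP+":   {"lanes": 4, "gbps_per_lane": 10},  # breaks out to (4) 10G
    "100G QSFP28": {"lanes": 4, "gbps_per_lane": 25},  # breaks out to (4) 25G
}

for port, spec in BREAKOUTS.items():
    total = spec["lanes"] * spec["gbps_per_lane"]
    print(f"{port}: {spec['lanes']} x {spec['gbps_per_lane']}G lanes = {total}G aggregate")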

In-Cabinet Connectivity Using Active Optical Cables

AOCs are fiber optic cables with optics permanently attached to each end (Figure 6). They are more expensive than DAC cables but can run longer distances and support higher speeds, up to 100G.

AOCs can support 10G to 10G, 40G to 40G and 100G to 100G connections. They can also break out from 40G to (4) 10G and 100G to (4) 25G.

A limitation of AOCs is that they support only one transmission speed and are typically tied to one vendor's equipment. When the next equipment upgrade occurs, the AOC cables will most likely need to be replaced. Although less expensive than individual optics and jumpers, they do not scale to higher speeds.

In-Cabinet Connectivity Using Jumpers, Harnesses and Patch Cords

When using transceivers mounted in devices, copper patch cords can be used effectively to run speeds from 1G to 10G. Category 6 copper cabling (CAT6) is the current industry leader for these connections.

Recently released mini CAT6 cabling (Figure 7) has half the diameter of standard CAT6. Mini CAT6 also has a more flexible copper core, making the cable easier to route in-cabinet. It can be bundled with staggered breakouts on each end: one end staggers into a ToR switch and the other into whichever server or appliance ports are needed.

If the application uses patch panels or a ToR switch that requires optical ports, fiber optic jumpers and harnesses (Figure 8) can be used effectively. One advantage of pre-terminated fiber is that lengths can be made precise for the required connection. Each connector can be pre-labeled with its port destination for ease of installation. This greatly reduces in-cabinet service loops and helps with managing all other connections, such as power cables and monitoring equipment.

Another advantage is a longer potential lifespan. Unlike DAC cables, which are limited in reach and speed, fiber jumpers and harnesses remain forward-compatible as speeds increase.

Custom fiber optic harnesses (Figure 9) can also be made to specific lengths and breakouts and pre-labeled for ease of installation and repeatability.

Hyperconvergence

Hyperconvergence uses a software architecture to integrate compute, storage, networking, virtualization and other technologies in a single hardware box. Where converged infrastructure brought compute systems into several cabinets (or just one), hyperconvergence has reduced that footprint to 1U or 2U of rack space.

This new technology has relied on the advances of SDN. The software can communicate with all of the required components in the compute cycle, not just in one location but in many locations.

Along with SDN, Network Functions Virtualization (NFV) has become a predominant enterprise data center model. The implications of these models are far-reaching.

As an example, an organization may have its main data center located at its headquarters. It could have a disaster recovery site in another state, housed in a colocation facility. It could also have its email and other business applications in the cloud, using a large cloud vendor like Amazon. Each of these three locations can run the same SDN on its equipment, and NFV allows them to appear and act as a single compute system to the user.

Hyperconvergence In-Cabinet Connectivity

When deploying hyperconvergence, the devices used occupy 1U or 2U of rack space, and many can be installed in a single cabinet (Figure 10). Each of these devices typically requires two or four connections to an uplink switch. The uplink switch can be mounted at the top of the cabinet or in the center of the cabinet to reduce the length of the in-cabinet connectivity.

Generally, hyperconvergence has more in-cabinet connections than either three-layer topologies or converged infrastructure because of the number of converged machines per cabinet. Understanding the required connections by speed and choosing the right media type for each will reduce costs and best support the components in the cabinet.
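A quick count of the in-cabinet links makes the planning point concrete. In the sketch below, the node count and uplinks per node are assumptions for illustration (Figure 10 does not fix them), but the arithmetic shows how quickly ToR port counts are consumed.

# Rough count of in-cabinet connections for a hyperconverged cabinet.
# Node count and uplink counts are illustrative assumptions.

def cabinet_connections(nodes, data_uplinks_per_node, mgmt_ports_per_node=1):
    data = nodes * data_uplinks_per_node
    mgmt = nodes * mgmt_ports_per_node
    return data, mgmt

data_links, mgmt_links = cabinet_connections(nodes=16, data_uplinks_per_node=2)
print(f"{data_links} data links + {mgmt_links} management links "
      f"= {data_links + mgmt_links} in-cabinet connections")
# With 16 nodes at 2 uplinks each, a 48-port data ToR still has headroom;
# at 4 uplinks per node the data links alone would need 64 ports.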

Cable Kitting

As data centers deploy hyperconvergence, proper planning can help prepare for in-cabinet connections. For instance, determining the best lengths for DAC cables, breakouts, harnesses and power cords is highly recommended. Once power cord lengths are determined, the cords can be color-coded for each of the two power strips.

In addition, selecting the necessary optics per cabinet ahead of time will help with quick and efficient deployment. Optics can be programmed to work with several different vendors' equipment to reduce the number of part numbers to stock and manage. A single box from the cabling supplier can include all the connectivity and optics needed for one cabinet.
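A simple way to assemble such a kit is to enumerate the planned connections for the cabinet and roll them up into a parts list. The sketch below does this with placeholder part names, lengths and quantities; they are illustrative, not recommended products.

# Minimal sketch of kitting the connectivity for one cabinet.
from collections import Counter

planned_connections = (
    ["DAC 25G, 1 m"] * 16 +                # node data uplinks to the ToR switch
    ["mini CAT6 patch cord, 1 m"] * 16 +   # node management ports
    ["LC-LC OM4 jumper, 2 m"] * 4 +        # ToR uplinks to the EoR/MoR cabinet
    ["C13/C14 power cord, 1.5 m"] * 32     # two power supplies per node
)

kit = Counter(planned_connections)
for part, qty in kit.items():
    print(f"{qty:3d} x {part}")
# The resulting list can be ordered, labeled and shipped as one kit per cabinet.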

Conclusion

Data center topologies continue to migrate from three-layer to leaf-spine. SDN is helping the evolution to converged infrastructure and hyperconvergence.

As the individual components for computing are coming in smaller packages, such as 1U and 2U rack unit machines, the cabling is being compacted. The vast majority of the connections are now being made in-cabinet and accomplished with Twinax DAC cabling and/or AOCs.

Planning ahead and standardizing on cabinet equipment rack positions helps define the best cable for each connection. Acquiring cables with the proper length, breakout, color and labeling eases installation, reduces cable congestion, improves cooling and ensures repeatability. This, in turn, reduces downtime related to connectivity.

