When designing a structured cabling plant to run speeds of 40/100G for Ethernet and 32GFC and 128GFCp for Fibre Channel, one must review the latest published standards that apply. Also, understanding what the next standards will entail helps to ensure you implement a cabling system that will last for multiple generations of hardware upgrades.
The Telecommunications Industry Association (TIA), an affiliate of the Electronic Industries Alliance (EIA), has published several standards that impact structured cabling. Included among these is the TIA-942-A Telecommunications Infrastructure Standard for Data Centers which addresses the basic topology of structured cabling design.
The Institute of Electrical and Electronics Engineers (IEEE) has published IEEE 802.3bm, which defines the physical layer and the data link layer's media access control (MAC) sublayer of wired Ethernet. In addition, the Fibre Channel Industry Association (FCIA) and the InterNational Committee for Information Technology Standards (INCITS) have released standards covering Fibre Channel interfaces and transceivers.
These standards provide a framework for designing effective structured cabling infrastructures that meet 40/100G Ethernet, as well as 32GFC and 128GFCp light budgets. Taking a deeper look at each allows us to discern proper strategies for success.
The TIA-942-A standard includes a diagram of a recommended structured cabling design as shown in Figure 1. Note the four patch areas listed as Main Distribution Area (MDA), Horizontal Distribution Area (HDA), Zone Distribution Area (ZDA) and Equipment Distribution Area (EDA).
These design recommendations have several advantages and disadvantages. Using patch panels in a data center topology is beneficial because it keeps cable lengths short and reduces the need for long (home run) cable runs. We’re able to serve just one cabinet, a pod of cabinets or a row of cabinets from an end-of-row (EoR) or middle-of-row (MoR) patch cabinet. The next advantage comes in the event of changing connector interfaces due to a hardware upgrade. In this instance, only a short patch cord would need to be replaced versus a longer “point-to-point” cable.
Another advantage is the accessibility of the MDA. There are numerous benefits to an MDA design:
Maximum uptime - Adding new hardware takes far less time. Since the cables are neat and installed in an orderly manner, there is less confusion and it is less likely that the wrong cable will be disconnected during moves, adds and changes.
Reduced footprint - A structured cabling system allows technicians to use smaller diameter trunk cabling. This reduces overall cable bulk and decreases the chances for blocked airflow or crushed cables. A structured cabling system is usually much more organized.
Increased scalability - Today’s data centers are complex and diverse environments. A structured cabling system is useful to manage and support connections to hardware and to enable growth of the network. Due to its modular design, a structured system allows for increased scalability. Since the footprint is already in place, it is easy to add additional patch panels to create new connections.
Return on investment - A structured cabling system will last over multiple generations of hardware, increasing ROI and maximizing value.
Having the ports from a core switch and the ports from the data center floor cross-connected in an MDA location is very useful (as shown in Figure 2). This is the preferred method for port mapping.
When all of these factors are considered together, structured cabling is far more cost-efficient than a point-to-point cabling system. However, the disadvantage of this type of structured cabling solution is an increased number of mated pairs in the channel.
Due to this, it is imperative to use a cabling product set that can meet current and foreseeable loss budgets. This makes it possible to attain an equilibrium between performance, cable management and return on investment.
The TIA has released several other standards that assist in designing structured cabling plants. TIA-568-C.0, C.1, C.2 and C.3 (Generic Telecommunications Cabling, Balanced Twisted-Pair (Copper) Cabling and Optical Fiber Cabling), TIA-569-B (Telecommunications Pathways and Spaces) and TIA-606-B (Administration, covering labeling and identification) can be used as additional resources when developing your data center infrastructure cabling specifications.
The latest Ethernet connectivity standard from the Institute of Electrical and Electronics Engineers is IEEE 802.3bm. Released in 2015, 802.3bm defines the physical layer and the data link layer's media access control (MAC) sublayer of wired Ethernet. This is generally a local area network technology with some wide area network applications.
IEEE 802.3bm addresses 40G and 100G Ethernet from an SR-4 (four fibers, or lanes, in each direction) transceiver perspective. The previous standard had 40G running on a 12-fiber SR-4 MTP® connector and 100G on a 24-fiber SR-10 MTP® connector. The latest standard has both 40G and 100G running on a 12-fiber SR-4 MTP® connector. Recently, several equipment manufacturers have released 40G Ethernet connections that operate over duplex LC connectors.
The IEEE 802.3bm standard specifies a link loss budget of 1.8 dB at 70 meters on OM3 glass and 1.9 dB at 100 meters on OM4 glass. The next Ethernet standard, IEEE 802.3bs, released in 2017, introduced 400G Ethernet.
The third group of standards to consider comprises those published by the FCIA and the T11 Committee. T11 is a technical committee of INCITS responsible for Fibre Channel interfaces and transceivers.
The current standard for Fibre Channel is the 5th generation, or 16GFC. Generation 6, set to be released soon, will include 32GFC and 128GFCp. With both 16GFC and 32GFC, the familiar duplex LC connector provides the connections. Operating 128GFCp requires a QSFP28 transceiver connected with a 12-fiber SR-4 MTP® connector.
Link loss budgets for 32GFC and 128GFCp specify a loss of 1.9 dB at 100 meters on OM4 glass. Just as in the IEEE 802.3bm standard, 1.9 dB of loss at 100 meters on OM4 glass is a good benchmark to consider when planning a structured cabling system. The next speed increase for Fibre Channel will be the 7th Generation, specifying 64GFC and 256GFCp on QSFP56 transceivers.
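To see how these budgets constrain a design, the channel loss can be tallied as fiber attenuation plus connector insertion loss. The sketch below assumes an OM4 attenuation of 3.0 dB/km and 0.35 dB per mated pair; these are illustrative figures, not values from the standards, so substitute your vendor's specified losses.

```python
# Rough channel-loss check against the 1.9 dB budget at 100 m on OM4.
# Per-km and per-pair values below are assumptions, not standard values.

FIBER_LOSS_DB_PER_KM = 3.0   # assumed OM4 attenuation at 850 nm
BUDGET_DB = 1.9              # IEEE 802.3bm / Gen 6 FC budget at 100 m on OM4

def channel_loss(length_m, mated_pairs, loss_per_pair_db=0.35):
    """Total link loss: fiber attenuation plus connector insertion loss."""
    return (length_m / 1000.0) * FIBER_LOSS_DB_PER_KM + mated_pairs * loss_per_pair_db

# A cross-connect design with four mated pairs over a 100 m link:
loss = channel_loss(100, 4)
print(f"{loss:.2f} dB ({'within' if loss <= BUDGET_DB else 'exceeds'} the {BUDGET_DB} dB budget)")
```

A quick check like this makes it easy to see how adding one more patch location (one more mated pair) eats into the remaining budget.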
It is imperative to understand the published standards for data center infrastructure cabling design, Ethernet connectivity and Fibre Channel connectivity. These standards provide the benchmarks for designing successful structured cabling topologies in the data center.
A first step in the process is to design the cabling from the core switch out to the data center floor. Having a patch panel above the core switch provides the option to use LC-MTP® harnesses into the blade or line card ports (as shown in Figure 3).
These pre-engineered harnesses can be constructed from 8-fiber/4-port to 96-fiber/48-port. They come pre-staggered and labeled for easy installation and identification. They are easily customizable for any application and made to work with any blade or line card on the market.
The patch panel above the core switch comes in different sizes and configurations to best match the switch it is replicating and connecting. The MTP® coupler panels can exactly replicate the blades or line cards in the core switch. For instance, a Brocade 8510-8 can support eight 48-port blades. One 48-port blade would require one 96-fiber/48-port LC-MTP® harness. That harness has eight MTP® connectors using one 8-coupler MTP® panel (as shown in the 4U enclosure in Figure 3). Eight MTP® 8-coupler panels will support all eight 48-port blades or 384 ports in total.
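The port arithmetic behind this replication can be sketched as follows, mirroring the Brocade 8510-8 example above (12 fibers per MTP® connector, duplex LC ports):

```python
# Port arithmetic for replicating a director-class chassis with
# LC-MTP harnesses, per the Brocade 8510-8 example above.

PORTS_PER_BLADE = 48
BLADES_PER_CHASSIS = 8
FIBERS_PER_PORT = 2      # duplex LC
FIBERS_PER_MTP = 12      # fibers carried by each MTP connector

fibers_per_blade = PORTS_PER_BLADE * FIBERS_PER_PORT           # 96-fiber harness
mtp_connectors_per_blade = fibers_per_blade // FIBERS_PER_MTP  # one 8-coupler panel
total_ports = PORTS_PER_BLADE * BLADES_PER_CHASSIS

print(mtp_connectors_per_blade, total_ports)  # 8 384
```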
To install one of the newer 128GFCp blades with QSFP28 optics in the 8510-8 chassis, simply add MTP® to MTP® harnesses from the blade to the MTP® coupler panel. There is also the option to add an MTP®-LC cassette module into the enclosure over the core switch and use duplex jumpers into the core blade ports.
The back of the enclosures mounted in the core cabinet would feature MTP® to MTP® trunks. These trunks are also known as backbone cabling. In a structured cabling design, this fiber would remain in place for multiple generations of equipment upgrades. This is another great advantage of implementing a structured cabling system.
The backbone cabling can either go into a core replication enclosure as an MDA cross-connect area (as shown in Figure 2) or be divided up into the data center floor. If cabling to the data center floor, the standards recommend using an EoR or MoR patch cabinet.
Director-class core switch replication enclosures come in different sizes and configurations to best mirror the core switch. A typical user scenario might involve a Brocade 8510-8 chassis. When first purchased, the chassis could be loaded with two 32-port blades. Over time, more SAN ports may be needed and a new 48-port blade might be installed. As the organization grows, even more ports may be required and an FC16-64 blade might be installed to maximize the port count at 64 ports per blade.
The 12U enclosure pictured in Figure 4 shows how to replicate a 32-, 48- or 64-port blade using MTP®-LC cassettes. These cassettes are custom-labeled and numbered to match the face of the blade. For example, the front of the FC16-64 cassette would be numbered 0-63.
When deploying new core switches, as in the scenario of the Brocade 8510-8 above, consideration should be given to future growth, upgrades and/or switch replacements in the data center. The standards recommend a three- or four-cabinet core switch install as shown in Figure 5.
Using wide and deep cabinets for the core switches and main patch cabinet will facilitate easier cable management and maintenance. Installing the core switch at the bottom of the cabinet allows room for an additional switch, if needed. In addition, having the bulk of the cabling in a separate cabinet (MDA) allows for easy migration from the old switch to the new switch, and greatly simplifies the process of adding another switch.
Fiber optic links can come in three basic configurations, each with its own advantages and disadvantages. Examples of each are shown in Figures 6, 7 and 8.
The example in Figure 6 shows the most common link method used in the data center. In this configuration, MTP®-LC cassettes are installed over the core switch and the other MTP®-LC cassettes are installed either in an MDA location or out on the data center floor. This option requires the least amount of planning and uses duplex LC to LC jumpers routed out of the front of the cassettes into the needed ports. However, this is the most costly link with the highest dB loss.
The example in Figure 7 shows LC-MTP® harnesses routed out of the core switch into MTP® coupler panels and MTP®-LC cassettes on the other end of the link. This design reduces the cost of the MTP®-LC cassette over the core switch and allows the use of custom LC-MTP® harnesses. It also reduces the dB link loss by removing a mated pair of LC connectors.
The example in Figure 8 shows LC-MTP® harnesses routed out of the core switch with LC-MTP® harnesses on the data center floor. The challenge with this design lies in the customized breakout lengths of the harnesses on the data center floor, but if the application is large-scale with an individual cabinet or pod design, this configuration can be used quite effectively. It is also the least costly option and has the lowest dB link loss.
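The trade-off between the three link methods can be made concrete by counting mated pairs. The tally below is illustrative: the mated-pair counts and the 0.35 dB MTP® / 0.25 dB LC per-pair figures are assumptions for low-loss components, not values from the figures or standards, so substitute your vendor's specified losses.

```python
# Illustrative comparison of the three link methods in Figures 6-8
# by connector (mated-pair) loss alone. Pair counts and per-pair dB
# values are assumptions -- check your component specifications.

MTP_PAIR_DB = 0.35
LC_PAIR_DB = 0.25

configs = {
    "Figure 6 (cassette to cassette)": {"mtp": 2, "lc": 2},
    "Figure 7 (harness to cassette)":  {"mtp": 2, "lc": 1},
    "Figure 8 (harness to harness)":   {"mtp": 2, "lc": 0},
}

losses = {
    name: pairs["mtp"] * MTP_PAIR_DB + pairs["lc"] * LC_PAIR_DB
    for name, pairs in configs.items()
}

for name, db in losses.items():
    print(f"{name}: {db:.2f} dB of connector loss")
```

Under these assumptions, each LC mated pair removed from the channel frees up budget that can be spent on extra length or an extra patch location elsewhere.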
When designing structured cabling solutions, it is also imperative to consider the data center floor ports and how they are connected back to the core switch(es). Planning for an EoR or MoR patch cabinet has many advantages. For years, many designs had a top-of-rack (ToR) switch in every cabinet that would connect back to a core switch. A challenge with this design is that some ports in the ToR switch aren’t used or needed.
The unused ports in the ToR switch may or may not contain transceivers and there is most likely a maintenance cost associated with the switch. Recently, more and more data center managers have decided to better plan and make the needed ports in a cabinet or row more scalable. This is where the EoR or MoR cabinets come into play. Figure 9 shows a typical EoR/MoR design.
Many data center managers plan for expansion with a pod design. A pod design is a group of cabinets or racks which can act as a self-contained unit of scalable hardware and power. As an example, a data center could have a deployment strategy where a row has 13 cabinets. In the row with 13 cabinets, there is one EoR cabinet as shown in Figure 9. The data center manager could decide that a pod will contain three cabinets and a row will have four pods (or 12 compute cabinets).
When planning the number of network and storage ports in a pod most operators have two scenarios. The first is what a typical application would require for most situations. The second would be for a maximum number of ports for applications that require the most connections. Basically, it is recommended to plan for normal conditions but have a strategy for high-density scenarios. A typical pod design is shown in Figure 10.
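The row and pod arithmetic above can be sketched as follows; the per-cabinet port counts are hypothetical placeholders for the "typical" and "maximum" scenarios, not figures from the standards.

```python
# Row/pod arithmetic from the example above: a 13-cabinet row with one
# EoR patch cabinet and the remainder grouped into 3-cabinet pods.

ROW_CABINETS = 13
EOR_CABINETS = 1
CABINETS_PER_POD = 3

compute_cabinets = ROW_CABINETS - EOR_CABINETS       # 12 compute cabinets
pods_per_row = compute_cabinets // CABINETS_PER_POD  # 4 pods

# Hypothetical per-cabinet port counts (assumptions for illustration):
TYPICAL_PORTS_PER_CABINET = 24
MAX_PORTS_PER_CABINET = 48

typical_per_pod = CABINETS_PER_POD * TYPICAL_PORTS_PER_CABINET
max_per_pod = CABINETS_PER_POD * MAX_PORTS_PER_CABINET

print(pods_per_row, typical_per_pod, max_per_pod)
```

Running both numbers lets the EoR/MoR patch cabinet be sized for the typical case while confirming there is physical room to scale to the high-density case.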
In conclusion, to design an efficient structured cabling plant it is imperative to understand the published standards for data center infrastructure cabling design, Ethernet connectivity and Fibre Channel connectivity. The TIA-942-A standard recommends using patch panels at strategic locations, but not so many as to exceed the link loss light budgets.
We must consider that link loss needs to be less than 1.9 dB at 100 meters for the latest Ethernet and Fibre Channel connection speeds. It is also wise to consider a link loss of under 1.5 dB to plan for future speed increases.
A well-designed structured cabling solution should balance performance, cable management and return on investment while lasting over multiple generations of hardware.
Have questions? Need help with a project?
Contact us and let us know!