A wise person once said, “To fully understand where you are going, you must first understand where you have been.” Storage, for example, has had a unique history over the past 20 years. For most of that time, storage requirements have been increasing at a pace ahead of Moore’s Law; the last five years have been light years ahead. Big Data (Hadoop/Spark), 5G, the Internet of Things (IoT), Artificial Intelligence (AI), Augmented Reality (AR), and Machine Learning (ML) are all drivers of the recent increase in storage requirements.
Traditional storage protocols, such as Fibre Channel, have undergone changes that will also affect how we design storage networks. The migration path started with spinning hard disk drives (HDDs), moved to solid-state drives (SSDs), and then on to non-volatile memory express (NVMe). As NVMe continues to present huge opportunities for the future, it's helpful to know how it will affect storage design moving forward.
Traditional spinning drives were huge, bulky, and slow, but they got the job done. Like an old workhorse, HDDs were reliable as long as the requirements didn’t demand much speed or bandwidth; this still holds true for HDDs today. As storage requirements grew, spinning drives were replaced with solid-state drives.
Because they have no moving parts, SSDs provide more bandwidth, but at a higher cost. The arrival of high-bandwidth applications was the catalyst for today's expectation of instant access and ever-greater speed.
Spinning hard drives connect to the motherboard with a serial ATA (SATA) cable, and the standard governing interface communication is the Advanced Host Controller Interface (AHCI). Data access was limited both by the spinning drive itself and by the cable connection.
Solid-state drives connect to the motherboard using the same SATA cables, which are limited to 600 MB/s of bandwidth, and they also use the AHCI standard for interface communication. Increased gaming and high-end video applications inspired this new storage design.
NVMe, the latest interface standard, enables a new connection to the motherboard through the board's PCIe slots. By removing the SATA cable and using a PCIe slot, the bottleneck caused by the SATA cable's 600 MB/s bandwidth limit is eliminated; the maximum interface bandwidth exceeds 4 GB/s.
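A rough back-of-the-envelope comparison shows where that ">4 GB/s" figure comes from. The sketch below assumes a typical NVMe drive on a PCIe 3.0 x4 link and uses the commonly published approximate per-lane throughput of ~985 MB/s (after 128b/130b encoding); these are ballpark spec figures, not measured numbers.

```python
# Rough bandwidth comparison: SATA III vs. a PCIe-attached NVMe drive.
# Per-lane figure (~985 MB/s for PCIe 3.0 after 128b/130b encoding)
# is an approximate published value, not measured data.

SATA_III_MBPS = 600       # SATA III ceiling, shared by HDDs and SSDs
PCIE3_LANE_MBPS = 985     # approx. usable throughput per PCIe 3.0 lane
LANES = 4                 # a typical NVMe drive uses an x4 link

nvme_mbps = PCIE3_LANE_MBPS * LANES
print(f"SATA III ceiling:       {SATA_III_MBPS} MB/s")
print(f"NVMe over PCIe 3.0 x4: ~{nvme_mbps} MB/s "
      f"(~{nvme_mbps / SATA_III_MBPS:.1f}x the SATA ceiling)")
```

Even this conservative estimate lands at roughly 3.9 GB/s, and newer PCIe generations double the per-lane rate again.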
The interface standard determines the number of data requests a computer can send to a storage device. The NVMe standard is designed for performance and speed, so it can handle thousands of requests in parallel, whereas the AHCI standard was written for much slower HDDs.
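The gap between the two standards is easy to quantify from their published specifications: AHCI defines a single command queue 32 commands deep, while NVMe allows up to 65,535 I/O queues, each up to 65,536 commands deep. A minimal sketch of the maximum outstanding commands each standard permits:

```python
# Maximum outstanding commands under the AHCI and NVMe specifications.
# AHCI: 1 command queue, 32 commands deep.
# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep.

ahci_queues, ahci_depth = 1, 32
nvme_queues, nvme_depth = 65_535, 65_536

ahci_outstanding = ahci_queues * ahci_depth
nvme_outstanding = nvme_queues * nvme_depth

print(f"AHCI max outstanding commands: {ahci_outstanding}")
print(f"NVMe max outstanding commands: {nvme_outstanding:,}")
```

The practical queue depth a given drive exposes is far smaller, but the spec headroom is why NVMe scales so well across many CPU cores issuing I/O simultaneously.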
The new NVMe over Fabrics protocol runs over Ethernet (RDMA), InfiniBand, and Fibre Channel. Storage managers will see a huge boost in application performance, efficiency, and scalability when deploying NVMe over Fabrics, which can drive latency down to 20 microseconds or less. This increase in performance and decrease in latency will be a paradigm shift for storage managers.
Brocade's Gen 5/6/7 products can use the new NVMe protocol. Luckily, this means storage managers running over 90% of their existing storage arrays on the Fibre Channel protocol do not have to rip and replace the FC infrastructure, because NVMe over Fabrics lets the current FC protocol carry the new NVMe protocol.
“The new Brocade G630 switch and Brocade FC32-64 port blade for the Brocade X6 director are the latest Gen 6 Fibre Channel switching solutions built for flash storage-based systems to continue accelerating business operations, respond to dynamic demands, and eliminate complexity for the world’s most critical enterprises.” (Brocade paper)
The NVMe over Fibre Channel (FC-NVMe) specification is defined by the T11 committee of the International Committee for Information Technology Standards (INCITS).
The need for high-performance storage is increasing every day, as evidenced by the growth of SSD storage across networks over the past few years. At the forefront of that growth is the NVMe interface, which SSD deployments will increasingly rely on in the future.
The NVMe protocol has expanded to include fabrics such as Ethernet, InfiniBand, and Fibre Channel. This is important because Fibre Channel makes up such a huge part of the installed base; NVMe over Fabrics allows customers to maintain their Fibre Channel architecture while taking advantage of the huge NVMe benefits of an SSD infrastructure.
As storage moves closer and closer to the edge of the network, the need for high-performance SSD flash will continue to increase. CABLExpress has been at the forefront of developing layer 1 solutions that support storage networks into the next millennium. Contact us to learn more about how our pre-terminated solutions offer a wide array of capabilities to support managers as they scale from 16G to 128G applications.