High-density 100GbE switches are rapidly becoming the norm for enterprise network backbones. According to the latest data from Crehan Research, shipments of 100GbE switches more than doubled year-over-year in 2018, surpassing shipments of 40GbE switches by a significant margin.
Data center managers are generally reluctant to make changes to the production environment, particularly when equipment has not been fully depreciated. Yet the move to 100GbE has taken place in just three years, dating from the first production shipments of 100GbE switches in 2015.
What’s more, Crehan predicts that 400GbE switches will see even faster adoption than 100GbE, accounting for the majority of data center Ethernet switch bandwidth by 2022. The 400GbE standard was ratified Dec. 6, 2017, and the first 400GbE switches began shipping in 2018. Crehan expects the 400GbE ramp-up to begin in late 2019 or early 2020.
Clearly, 100GbE technology still has significant value, so it’s unlikely that organizations will do a wholesale “rip and replace” of their 100GbE network backbones. Instead, most organizations will phase in 400GbE to relieve congestion in their aggregation networks and satisfy demand for even greater bandwidth.
A number of factors are driving the need for 400GbE in the enterprise data center. Server densities and processor capabilities continue to increase, and organizations are adopting more hyper-converged systems as well. With each server node carrying two 10Gb or 25Gb network interface cards, aggregate traffic per rack adds up quickly, and organizations will need 400GbE uplinks if they’re going to pack even more capacity into the same space. The quick calculation below illustrates the math.
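As a rough, back-of-the-envelope sketch (the node count and uplink counts here are illustrative assumptions, not figures from Crehan or the article), consider how fast two 25Gb NICs per node saturate a rack's uplinks:

```python
# Illustrative rack bandwidth calculation -- all counts are assumptions.

NODES_PER_RACK = 40    # assumed number of server nodes in one rack
NICS_PER_NODE = 2      # two NICs per node, per the article
NIC_SPEED_GBPS = 25    # 25GbE NICs (10GbE being the other common option)

# Aggregate server-facing bandwidth the top-of-rack switch must absorb.
server_bandwidth = NODES_PER_RACK * NICS_PER_NODE * NIC_SPEED_GBPS  # 2,000 Gbps

for uplink_speed, uplink_count in [(100, 4), (400, 4)]:
    uplink_bandwidth = uplink_speed * uplink_count
    oversubscription = server_bandwidth / uplink_bandwidth
    print(f"{uplink_count} x {uplink_speed}GbE uplinks -> "
          f"{oversubscription:.2f}:1 oversubscription")

# 4 x 100GbE uplinks: 2000/400  = 5.00:1 oversubscription
# 4 x 400GbE uplinks: 2000/1600 = 1.25:1 oversubscription
```

Under these assumed numbers, the same four uplink ports go from heavily oversubscribed at 100GbE to nearly line-rate at 400GbE.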
Cloud platforms and applications are also putting greater demand on the network as processing and data access continue to move away from endpoint devices. At the same time, Internet of Things (IoT) devices are collecting massive amounts of data at the network edge, much of which will need to be pushed to the data center or cloud for analysis. Organizations also need more capacity to support artificial intelligence and other emerging technologies.
In addition to providing greater bandwidth, 400GbE requires one-fourth the number of connections to the spine as 100GbE, reducing port counts, simplifying cabling and leaving more room for expansion. Multiplexers can be used to subdivide the bandwidth into smaller increments for downstream connectivity. Organizations will have to replace their traditional quad small form-factor pluggable (QSFP) transceivers with the new “double-density” QSFP-DD transceivers that support 400GbE. However, QSFP-DD uses the same module form factor as QSFP for backward compatibility.
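A minimal sketch of that 4:1 consolidation, again using assumed leaf/spine figures (neither the switch count nor the per-leaf uplink bandwidth comes from the article):

```python
# How 400GbE consolidates spine connections -- all counts are assumptions.

LEAF_SWITCHES = 32             # assumed leaf switches in the pod
UPLINK_BANDWIDTH_GBPS = 1600   # assumed bandwidth each leaf needs to the spine

for link_speed in (100, 400):
    links_per_leaf = UPLINK_BANDWIDTH_GBPS // link_speed
    total_cables = LEAF_SWITCHES * links_per_leaf
    print(f"{link_speed}GbE spine links: {links_per_leaf} per leaf, "
          f"{total_cables} cables total")

# 100GbE: 16 links per leaf, 512 cables total
# 400GbE:  4 links per leaf, 128 cables total -- one-fourth the connections
```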
Crehan believes that economics will also drive 400GbE adoption. Although 400GbE switches are more expensive on a per-port basis than older technologies, they offer significant value on a per-gigabit basis. As prices continue to drop, that value will only increase. Also, new serializer-deserializer (SerDes) technology is being developed that supports 100Gbps-per-lane connectivity. That means only four “lanes” (fibers) will be required to deliver 400GbE, further reducing costs and power requirements.
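The per-port versus per-gigabit distinction is easy to see with placeholder numbers (the prices below are hypothetical, chosen only to show the arithmetic, not actual market figures):

```python
# Per-port vs. per-gigabit economics -- prices are hypothetical placeholders.

ports = {
    "100GbE": {"price_per_port": 1_000, "gbps": 100},
    "400GbE": {"price_per_port": 2_500, "gbps": 400},
}

for name, port in ports.items():
    per_gigabit = port["price_per_port"] / port["gbps"]
    print(f"{name}: ${port['price_per_port']}/port -> ${per_gigabit:.2f}/Gbps")

# Even at 2.5x the per-port price, 400GbE works out to $6.25/Gbps
# versus $10.00/Gbps for 100GbE -- cheaper per unit of bandwidth.
```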
The transition to 100GbE is well under way. With seemingly no upper limit to bandwidth demands, the move to 400GbE is following close behind. If your network capacity requirements are accelerating rapidly, you’ll want to keep 400GbE on your radar. Technologent’s engineers have extensive experience in the design and implementation of high-performance networks and can help you determine the best path forward.
Tags: Data Center Design
August 12, 2019