100 Gigabit Ethernet (100GbE)

The steady march to 100 Gigabit Ethernet in data center networks appears to have progressed into a jog this year.

It’s August, and 100GbE port shipments have already doubled from 2017, while the rate of adoption continues to accelerate, as data centers move away from 40GbE, according to Dell’Oro Group. Shipments are projected to “almost triple by the end of the year,” Sameh Boujelbene, senior director at the market research firm, said.

The analyst was commenting on Dell EMC’s recent launch of its latest 100GbE Open Networking switch. By 2022, Boujelbene predicted, 19.4 million 100GbE ports would ship annually, up from 4.6 million in 2017.

Dell’s chief rival Hewlett Packard Enterprise sees a similar trend. “100GbE is going to come really quickly,” HPE chief technologist Chris Dando told Data Center Knowledge in an interview.

That is, really quickly compared with the earlier transitions from 1GbE to 10GbE and then to 40GbE. “There’s no next step on the 40GbE route, so 100GbE is the natural successor,” Dando said.


Why Now?

Hundred Gigabit Ethernet isn’t exactly new technology, so why so much activity now?

One part of the answer is the maturity and falling cost of 100GbE networking equipment (and of the corresponding 25GbE NICs that provide uplinks from increasingly dense blade and rack servers).

Juniper Networks and Cisco Systems have been making their own ASICs for 100/25GbE switches for some time. Broadcom, Cavium, and Mellanox Technologies now offer standard 100GbE-capable processors that are powering switches by the likes of Dell, HPE, Huawei Technologies, Lenovo Group, Quanta Computer, Super Micro Computer, and others.

Volume and competition are driving down prices, and the premiums for 100/25GbE equipment over 40/10GbE are relatively low: 20 to 30 percent more for NICs and 50 percent more per port for switches, according to SNIA estimates.
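For a rough sense of what those premiums mean per unit of bandwidth, here is a minimal sketch using the figures above (roughly a 50 percent per-port premium for 100GbE switches over 40GbE and a 30 percent premium for 25GbE NICs over 10GbE); the baseline prices are placeholders, not market data.

```python
# Rough cost-per-gigabit comparison using the premiums cited above.
# Baseline prices are arbitrary placeholders; only the ratios matter.

BASE_40G_PORT = 100.0   # hypothetical price of a 40GbE switch port
BASE_10G_NIC = 100.0    # hypothetical price of a 10GbE NIC

port_100g = BASE_40G_PORT * 1.5   # ~50% per-port premium (SNIA estimate)
nic_25g = BASE_10G_NIC * 1.3      # ~20-30% NIC premium (SNIA estimate)

def cost_per_gbit(price, gbits):
    """Price divided by nominal line rate, in cost per Gbit/s."""
    return price / gbits

print("40GbE port:  %.2f per Gbit" % cost_per_gbit(BASE_40G_PORT, 40))
print("100GbE port: %.2f per Gbit" % cost_per_gbit(port_100g, 100))
print("10GbE NIC:   %.2f per Gbit" % cost_per_gbit(BASE_10G_NIC, 10))
print("25GbE NIC:   %.2f per Gbit" % cost_per_gbit(nic_25g, 25))
# 100GbE works out to roughly 40% less per gigabit than 40GbE,
# and 25GbE to roughly 48% less per gigabit than 10GbE.
```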

Servers may already have 25GbE. “We see a lot of servers in racks that are being upgraded with the latest Intel technology; that includes dual 25GbE IO capability,” Jeff Baher, director of networking and service provider solutions at Dell, told us.

The 100GbE products are backward-compatible, which simplifies deployment. Existing cabling can be reused by installing new transceivers or modules on the cables. Nearly all 25GbE adaptors and switch ports are backward-compatible with 10GbE, and 100GbE adaptors and ports are likewise compatible with 40GbE. Most switches can support a mix of speeds.
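As a toy illustration of that mixing, the lookup below encodes the compatibility pairs described above (25GbE ports falling back to 10GbE, 100GbE ports to 40GbE); in a real deployment the supported speeds also depend on the specific switch, optics, and cabling.

```python
# Toy compatibility check for mixed-speed links, based on the pairings
# described above. Treat this as an illustration, not a product matrix.

DOWN_NEGOTIATION = {
    100: {100, 40},   # 100GbE ports generally interoperate with 40GbE
    25: {25, 10},     # 25GbE ports generally interoperate with 10GbE
}

def can_link(switch_port_gbit: int, nic_gbit: int) -> bool:
    """True if the NIC speed is one the switch port can fall back to."""
    return nic_gbit in DOWN_NEGOTIATION.get(switch_port_gbit, {switch_port_gbit})

print(can_link(100, 40))   # True  - existing 40GbE gear keeps working
print(can_link(25, 10))    # True  - existing 10GbE NICs keep working
print(can_link(100, 10))   # False - not a supported fallback in this model
```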


There’s More to It Than Just Speed

The consolidation of cabling that’s possible with 100GbE makes it attractive both from a direct-savings perspective and because it reduces power and space needs. “You can use fewer transceivers, and that can drive some of the costs down,” Dando said.

“With the previous generation of technology, if you have a full rack of compute going into a top-of-rack switch, you have 16 compute nodes, each with two 10GbE connections, so you have 320Gb going up. With the new technology, you’d be able to run that on ten servers with dual 25GbE connections, so you have 500Gb going to the top-of-rack switch but with fewer actual connections: from 32 down to 20.”
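Working through the arithmetic in that example makes the consolidation concrete; the sketch below simply reproduces the node counts and link speeds from the quote.

```python
# Aggregate uplink bandwidth and cable count for the two rack designs
# described above (figures taken from the quote; this is just arithmetic).

def rack_uplink(nodes, links_per_node, gbit_per_link):
    """Return (total Gbit/s toward the ToR switch, number of connections)."""
    return nodes * links_per_node * gbit_per_link, nodes * links_per_node

old_bw, old_links = rack_uplink(nodes=16, links_per_node=2, gbit_per_link=10)
new_bw, new_links = rack_uplink(nodes=10, links_per_node=2, gbit_per_link=25)

print(f"10GbE design: {old_bw} Gb/s over {old_links} connections")  # 320 Gb/s, 32
print(f"25GbE design: {new_bw} Gb/s over {new_links} connections")  # 500 Gb/s, 20
```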

In internal tests, Dando said, HPE has seen 56 percent better performance with 27 percent lower total cost of ownership over 10GbE, along with 31 percent savings in power and 38 percent lower cabling complexity.

Putting 100GbE in spine networks also eases concerns about contention ratios and overloading core networks.
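A brief sketch of why: a leaf or top-of-rack switch’s contention (oversubscription) ratio is its aggregate server-facing bandwidth divided by its aggregate uplink bandwidth to the spine, so raising the uplinks from 40GbE to 100GbE lowers the ratio without touching the server links. The port counts below are hypothetical.

```python
# Contention (oversubscription) ratio of a leaf/ToR switch:
#   total server-facing bandwidth / total spine-facing bandwidth.
# Port counts are hypothetical, chosen only to show the effect of
# moving uplinks from 40GbE to 100GbE.

def oversubscription(down_ports, down_gbit, up_ports, up_gbit):
    return (down_ports * down_gbit) / (up_ports * up_gbit)

# 48 x 25GbE server ports, 6 uplinks to the spine
print(oversubscription(48, 25, 6, 40))    # 5.0 : 1 with 40GbE uplinks
print(oversubscription(48, 25, 6, 100))   # 2.0 : 1 with 100GbE uplinks
```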


The Workloads

Large service providers and some hyperscale cloud platforms have already adopted 100GbE, for running high-performance computing or cloud infrastructure services (although Microsoft will only be taking Azure to 100GbE this year).

Data centers supporting HPC and technical computing for financial services, government platforms, oil and gas, and utilities also have started moving to 100GbE as part of adopting flatter data center fabric architectures and microservices, Brett Ley, data center sales director at Juniper, told us. Operators switching from proprietary InfiniBand storage connectivity to a more ubiquitous and cost-effective Ethernet model is another driver, he added.

Bandwidth and low-latency needs of the workloads in mainstream data centers are also reaching the point where 100GbE can help, Baher said. The workloads may be traditional, but as companies virtualize more and pool VMs or containers, “you need to be able to move them around left and right in the rack, which drives the need for higher speeds.”

Intel recently estimated that traffic inside data centers is growing 25 percent per year, most of it East-West traffic, creating network bottlenecks.
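Set against the long network refresh cycles discussed later in this piece, compounding 25 percent per year means traffic roughly triples over five years and nearly quintuples over seven; the sketch below just runs the numbers.

```python
# Cumulative traffic growth at 25% per year (the estimate cited above),
# evaluated over typical refresh-cycle horizons.

ANNUAL_GROWTH = 0.25

for years in (3, 5, 7, 10):
    factor = (1 + ANNUAL_GROWTH) ** years
    print(f"after {years:2d} years: {factor:.1f}x today's traffic")
# after  3 years: 2.0x
# after  5 years: 3.1x
# after  7 years: 4.8x
# after 10 years: 9.3x
```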


The Storage Factor

Also driving interest are software-defined storage technologies, such as Storage Spaces Direct, scale-out storage, and hyperconverged infrastructure, as they put more storage traffic on IP-based networks, Baher noted. “There [is] a slew of workloads that are very storage-intensive, where you’re moving big files or streams of data.”

Ethernet storage has three times the bandwidth of Fibre Channel, at about one-third of the cost. Flash drives with NVMe interfaces have higher throughput and lower latency than SAS or SATA drives.

Enterprise storage vendors, however, are moving slowly. SNIA estimated that some vendors would not support 25GbE (and 100GbE for all-flash arrays) until mid-2019.


Preparing for What’s Ahead

Some of the 100GbE adoption for data center networks is about future-proofing.

“The networking fabric in data centers doesn’t tend to change as rapidly as other things,” Dando said. Compute and storage, for example, are typically on three- to five-year refresh cycles, while a common network refresh cycle is seven years, and even 10 in some cases, because the network touches so much and is so complex, he explained. That’s why network engineers have to look much further ahead when upgrading.

“100/25GbE has become mainstream, and if you’re building a new data center or refreshing an environment, you’d have to have a pretty compelling reason not to deploy that now,” he said. “If you were doing 10/40GbE in a new data center right now, I’d question whether that was the right economic decision to make.”

100GbE also prepares the network for emerging technologies like NVMe over Fabrics. “RDMA and RDMA over Converged Ethernet are the sorts of capabilities customers say they want 100GbE for, so they’re ready for NVMe over Fabrics,” Dando said.

Future-proofing also means making the network more flexible about protocols, because 100GbE silicon from suppliers like Broadcom supports programmable switches.

“When VXLAN started becoming the norm for taking traffic across data center environments, networking processors had to be re-engineered to support it, so we had to bring out a totally different product,” Dando said. With programmable switches, enabled by the likes of Broadcom’s Trident 3 processor, what once required changing the physical silicon can be done in software.
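To make the VXLAN example concrete, the sketch below builds the fixed eight-byte VXLAN header defined in RFC 7348 (a single “I” flag plus a 24-bit network identifier), which is prepended to the original Ethernet frame inside a UDP datagram on port 4789; this is the kind of encapsulation a programmable pipeline can take on in software rather than requiring new silicon. The frame and VNI values here are placeholders.

```python
import struct

VXLAN_UDP_PORT = 4789   # IANA-assigned destination port for VXLAN

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header from RFC 7348.

    Layout: 8 bits of flags (only the 'I' flag set, marking a valid VNI),
    24 reserved bits, the 24-bit VXLAN Network Identifier, and 8 more
    reserved bits.
    """
    if not 0 <= vni < 2 ** 24:
        raise ValueError("VNI must fit in 24 bits")
    flags_and_reserved = 0x08 << 24   # I flag set, reserved bits zero
    vni_and_reserved = vni << 8       # VNI in the upper 24 bits of the word
    return struct.pack("!II", flags_and_reserved, vni_and_reserved)

# Encapsulation is then just concatenation; the outer Ethernet, IP, and UDP
# headers are added by the switch or the host's overlay stack.
inner_frame = b"\x00" * 64            # placeholder Ethernet frame
packet_payload = vxlan_header(vni=5001) + inner_frame
print(packet_payload[:8].hex())       # 0800000000138900
```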