100GbE: luxury or necessity?

IEEE P802.3ba, a standard for data transmission over 100 Gigabit Ethernet links (100GbE), was developed between 2007 and 2010 [3], but it only became widespread in 2018 [5]. Why in 2018 and not earlier? And why so massively all at once? There are at least five reasons for this...


IEEE P802.3ba was developed primarily to meet the needs of data centers and Internet traffic exchange points (between independent operators), to ensure the smooth operation of resource-intensive web services such as portals with large amounts of video content (YouTube, for example), and to serve high-performance computing [3]. Ordinary Internet users also contribute to the changing bandwidth requirements: many people own digital cameras and want to stream their content over the Internet. Thus, the volume of content circulating on the Internet keeps growing, at both the professional and the consumer level. In all these cases, when transferring data from one domain to another, the aggregate throughput of key network nodes has long exceeded the capabilities of 10GbE ports [1]. This is the reason for the emergence of the new standard: 100GbE.

Large data centers and cloud service providers are already actively adopting 100GbE and plan to move gradually to 200GbE and 400GbE within a couple of years; at the same time they are already eyeing speeds beyond a terabit per second [6]. That said, some large vendors switched to 100GbE only last year (Microsoft Azure, for example). Data centers that provide high-performance computing for financial services, government platforms, oil and gas platforms, and utilities have also begun moving to 100GbE [5].

In enterprise data centers the need for bandwidth is somewhat lower: here 10GbE has only recently become a necessity rather than a luxury. However, since traffic consumption is growing ever faster, it is doubtful that 10GbE will survive in corporate data centers for even 5, let alone 10, years. Instead, we will see a quick move to 25GbE and an even faster move to 100GbE [6], because, according to Intel analysts, traffic inside the data center grows by 25% annually [5].
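To get a feel for what 25% annual growth means in practice, here is a minimal back-of-the-envelope sketch (Python); the 25% rate comes from the Intel figure cited above, while the starting utilization of 50% is purely an illustrative assumption.

```python
# Back-of-the-envelope: how long before a link saturates if traffic
# grows by 25% per year (the growth rate cited by Intel analysts [5]).
# The starting utilization of 50% is purely an illustrative assumption.

GROWTH_RATE = 0.25          # 25% per year
utilization = 0.50          # assumed: the link is half-loaded today

years = 0
while utilization < 1.0:
    utilization *= 1 + GROWTH_RATE
    years += 1

print(f"A half-loaded link saturates in about {years} years.")   # ~4 years
# Ten years of 25% growth is roughly a 9x increase (1.25 ** 10 ~ 9.3),
# i.e. today's "10GbE worth" of traffic approaches a 100GbE port's worth.
```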

Analysts at Dell and Hewlett Packard state [4] that 2018 is the year of 100GbE for the data center. Back in August 2018, shipments of 100GbE equipment had already doubled the shipments for the whole of 2017, and the pace continues to pick up as data centers move away from 40GbE en masse. It is expected that by 2022, 19.4 million 100GbE ports will be shipped annually (for comparison, in 2017 this figure was 4.6 million) [4]. In terms of spending, $7 billion was spent on 100GbE ports in 2017, and this is projected to reach around $20 billion in 2020 (see Figure 1) [1].

Figure 1. Statistics and forecasts of demand for network equipment
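To put those shipment projections in perspective, a quick check (a simple sketch, not taken from the cited reports) shows the annual growth rate they imply:

```python
# Implied compound annual growth rate (CAGR) of 100GbE port shipments,
# using the figures cited above: 4.6M ports in 2017 -> 19.4M ports in 2022.
ports_2017 = 4.6e6
ports_2022 = 19.4e6
years = 2022 - 2017

cagr = (ports_2022 / ports_2017) ** (1 / years) - 1
print(f"Implied shipment growth: {cagr:.0%} per year")   # roughly 33%
```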

Why now? 100GbE is not exactly a new technology, so why is there so much hype around it right now?

1) Because this technology has matured and become cheaper. It was in 2018 that we crossed the line where using platforms with 100-gigabit ports in the data center became more cost-effective than "stacking" several 10-gigabit platforms. Example: the Ciena 5170 (see Figure 2) is a compact platform that provides 800 Gbit/s of aggregate throughput (4x100GbE, 40x10GbE). If multiple 10-gigabit ports are needed to provide the required bandwidth, then the costs of additional hardware, extra rack space, excess power consumption, ongoing maintenance, spare parts, and additional cooling add up to a fairly tidy sum [1]; a back-of-the-envelope comparison follows after this list. For example, Hewlett Packard experts, analyzing the potential benefits of moving from 10GbE to 100GbE, arrived at the following figures: higher performance (by 56%), lower total cost (by 27%), lower power consumption (by 31%), and simpler cabling (by 38%) [5].

Figure 2. Ciena 5170: Platform example with 100 Gigabit ports

2) Because Juniper and Cisco have finally created their own ASICs for 100GbE switches [5], which is eloquent confirmation that 100GbE technology is indeed mature. The point is that ASICs are only profitable to produce when, firstly, the logic implemented in them will not need changes in the foreseeable future and, secondly, a large number of identical chips can be manufactured. Juniper and Cisco would not have built these ASICs without being confident in the maturity of 100GbE.

3) Because Broadcom, Cavium, and Mellanox Technologies have started churning out processors with 100GbE support, and these processors are already used in switches from manufacturers such as Dell, Hewlett Packard, Huawei Technologies, Lenovo Group, and others [5].

4) Because server racks are increasingly equipped with the latest Intel NICs (see Figure 3) with two 25-gigabit ports, and sometimes even converged NICs with two 40-gigabit ports (XXV710 and XL710).

Figure 3. Latest Intel NICs: XXV710 and XL710

5) Because 100GbE equipment is backwards compatible, which simplifies deployment: you can reuse already routed cables (just connect a new transceiver to them).
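To make the "stacking 10GbE" comparison from point 1 concrete, here is a minimal sketch (Python) of how much 10-gigabit hardware it takes to match the 800 Gbit/s aggregate of a single platform such as the Ciena 5170; the 48-port switch size is an illustrative assumption, not a reference to any specific product.

```python
# How much 10GbE hardware does it take to match the aggregate throughput
# of a single platform like the Ciena 5170 (4 x 100GbE + 40 x 10GbE,
# i.e. 800 Gbit/s, per the figures above)? The 48-port switch size is
# an illustrative assumption, not a specific product.
import math

target_gbps = 4 * 100 + 40 * 10           # 800 Gbit/s aggregate
ports_10g_needed = target_gbps // 10      # 80 x 10GbE ports
ports_per_switch = 48                     # assumed fixed-config 10GbE switch
switches_needed = math.ceil(ports_10g_needed / ports_per_switch)

print(f"Aggregate target:        {target_gbps} Gbit/s")
print(f"10GbE ports required:    {ports_10g_needed}")
print(f"48-port switches needed: {switches_needed}")
# Every extra chassis brings the hidden costs from point 1: rack space,
# power, cooling, cabling and maintenance.
```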

In addition, the availability of 100GbE prepares us for new technologies such as NVMe over Fabrics (e.g. the Samsung Evo Pro 256 GB NVMe PCIe SSD; see Figure 4) [8, 10], Storage Area Networks (SAN) / Software-Defined Storage (see Figure 5) [7], and RDMA [11], which could not realize their full potential without 100GbE.

Figure 4. Samsung Evo Pro 256 GB NVMe PCIe SSD

Figure 5. "Storage Area Network" (SAN) / "Software Defined Storage"
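A quick bandwidth estimate shows why these storage technologies outgrow 10GbE. The sketch below assumes roughly 3 GB/s of sequential throughput for a PCIe 3.0 x4 NVMe drive; the exact figure varies by model and is used here only for illustration.

```python
# Why NVMe-over-Fabrics, SDS and RDMA want more than 10GbE: a single
# modern PCIe 3.0 x4 NVMe SSD reads at roughly 3 GB/s (assumed figure;
# the exact number varies by model), which alone exceeds a 10GbE link.
nvme_gb_per_s = 3.0                    # assumed per-drive throughput, GB/s
nvme_gbit_per_s = nvme_gb_per_s * 8    # ~24 Gbit/s on the wire

for link_gbit in (10, 40, 100):
    drives = link_gbit / nvme_gbit_per_s
    print(f"{link_gbit:>3}GbE link is saturated by ~{drives:.1f} such drives")
# 10GbE : less than one drive -- the network, not the disk, is the bottleneck
# 40GbE : ~1.7 drives
# 100GbE: ~4 drives -- enough headroom for a small storage node
```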

Finally, as an exotic example of practical demand for 100GbE and related high-speed technologies, we can cite the scientific cloud of the University of Cambridge (see Figure 6), which is built on 100GbE (Spectrum SN2700 Ethernet switches) in order, among other things, to ensure the efficient operation of the NexentaEdge SDS distributed storage, which can easily overload a 10/40GbE network [2]. Such high-performance scientific clouds are deployed to solve a wide variety of applied scientific problems [9, 12]. For example, medical scientists use such clouds to decipher the human genome, and 100GbE channels are used to transfer information between university research groups.

Figure 6. A fragment of the scientific cloud of the University of Cambridge

Bibliography

  1. John Hawkins. 100GbE: Closer to the Edge, Closer to Reality.
  2. Amit Katz. 100GbE Switches – Have You Done The Math?
  3. Margaret Rouse. 100 Gigabit Ethernet (100GbE).
  4. David Graves. Dell EMC Doubles Down on 100 Gigabit Ethernet for the Open, Modern Data Center.
  5. Mary Branscombe. The Year of 100GbE in Data Center Networks.
  6. Jarred Baker. Moving Faster in the Enterprise Data Center.
  7. Tom Clark. Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs. 2003. 572 p.
  8. James O'Reilly. Network Storage: Tools and Technologies for Storing Your Company's Data. 2017. 280 p.
  9. James Sullivan. Student cluster competition 2017, Team University of Texas at Austin/Texas State University: Reproducing vectorization of the Tersoff multi-body potential on the Intel Skylake and NVIDIA V100 architectures // Parallel Computing. v. 79, 2018. pp. 30-35.
  10. Manolis Katevenis. The next Generation of Exascale-class Systems: the ExaNeSt Project // Microprocessors and Microsystems. v. 61, 2018. pp. 58-71.
  11. Hari Subramoni. RDMA over Ethernet: A Preliminary Study // Proceedings of the Workshop on High Performance Interconnects for Distributed Computing. 2009.
  12. Chris Broekema. Energy-Efficient Data Transfers in Radio Astronomy with Software UDP RDMA // Future Generation Computer Systems. v. 79, 2018. pp. 215-224.

PS. The article was originally published in "System Administrator".


Why did large data centers begin to massively switch to 100GbE?

  • In fact, no one has started moving anywhere yet ...

  • Because this technology has matured and become cheaper

  • Because Juniper and Cisco created ASICs for 100GbE switches

  • Because Broadcom, Cavium, and Mellanox Technologie added support for 100GbE

  • Because servers have 25- and 40-gigabit ports

  • Your version (write in the comments)


Source: habr.com
