How 1500 bytes became the maximum unit of information transfer on the Internet

Ethernet is everywhere, and tens of thousands of manufacturers produce equipment that supports it. Yet almost all of these devices have one thing in common: the MTU.

$ ip l
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP 
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff

MTU (Maximum Transmission Unit) defines the maximum size of a single data packet. When you exchange traffic with devices on your LAN, the MTU is typically around 1500 bytes, and almost the entire Internet runs at 1500 bytes as well. That does not mean, however, that the underlying link technologies cannot carry larger packets.

For example, 802.11 (commonly known as Wi-Fi) has an MTU of 2304 bytes, and if your network uses FDDI, the MTU is 4352 bytes. Ethernet itself has the concept of “jumbo frames”, where the MTU can be raised to as much as 9000 bytes, provided the NICs, switches and routers along the path all support that mode.
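
On Linux, for example, raising an interface's MTU to jumbo-frame size is a single iproute2 command; enp5s0 below is simply the interface from the listing at the top, and the change only pays off if every NIC, switch and router along the path is configured the same way:

$ sudo ip link set dev enp5s0 mtu 9000
$ ip l show enp5s0
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 state UP 
    link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff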

On the Internet, however, this hardly matters. Since the Internet's main backbones are built largely from Ethernet links, the de facto maximum packet size has settled at 1500 bytes, so that packets do not get fragmented somewhere along the way.

The number 1500 itself is a strange one: constants in the computing world usually turn out to be powers of two, for example. So where did 1500 bytes come from, and why do we still use it?

Magic number

Ethernet's first big breakthrough into the wider world came in the form of the 10BASE-2 (“thin”) and 10BASE-5 (“thick”) standards, whose numbers indicate roughly how many hundreds of meters a single network segment can cover.

Since there were many competing protocols at the time, and hardware had its limitations, the creator of the format admits that the memory required for packet buffers played a role in the emergence of the magic number 1500:

In hindsight, it's clear that a larger maximum might have been a better solution, but if we had increased the cost of NICs early on, it would have prevented Ethernet from becoming as widespread as it did.

However, this is not the whole story. The 1980 paper “Ethernet: Distributed Packet Switching for Local Computer Networks” provides one of the earliest analyses of the efficiency of large packets in networks. At the time this was especially relevant for Ethernet, since such networks either connected every system to a single shared coaxial cable or were built from hubs that could only relay one packet at a time to all nodes on a segment.

It was necessary to choose a number that would not cause excessive delays when a frame was being transmitted on a (sometimes quite busy) segment, yet at the same time would not inflate the number of packets, and with it the per-packet overhead, too much.

Apparently, the engineers of the day settled on 1500 bytes (12,000 bits) as the “safe” middle ground: on the original 10 Mbit/s Ethernet, a frame of that size occupies the shared medium for 1.2 ms.
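
As a quick back-of-the-envelope check of that figure (using the 10 Mbit/s rate of the original 10BASE standards mentioned above, and the same arithmetic for a hypothetical 9000-byte frame), the serialization times show the latency trade-off on a shared cable, where everyone else has to wait while one station is transmitting:

$ python3 -c "print(1500 * 8 * 1000 / 10e6, 'ms')"
1.2 ms
$ python3 -c "print(9000 * 8 * 1000 / 10e6, 'ms')"
7.2 ms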

Since then, various other link technologies have come and gone, but among them Ethernet, with its 1500 bytes, had the lowest MTU. Sending packets larger than the smallest MTU along a path means either fragmentation or PMTUD (Path MTU Discovery, i.e. finding the maximum packet size the chosen path can carry), and both options come with their own problems. On top of that, major OS vendors have sometimes set the default MTU even lower.
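
On Linux you can watch PMTUD in action with ping, which can set the Don't Fragment bit (example.net below is just a placeholder host). A 1472-byte payload plus 8 bytes of ICMP header and 20 bytes of IP header fills a 1500-byte packet exactly, while one byte more is either rejected locally or answered with a “fragmentation needed” message by whichever hop has the smaller MTU:

$ ping -M do -s 1472 -c 1 example.net   # 1472 + 8 (ICMP) + 20 (IP) = 1500 bytes: fits
$ ping -M do -s 1473 -c 1 example.net   # 1501 bytes: too big for a 1500-byte MTU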

Efficiency factor

We now know that the Internet's MTU is stuck at 1500 bytes largely because of legacy latency considerations and old hardware limitations. How much does this affect the efficiency of the Internet?

[Graph: packet size distribution at AMS-IX]

If we look at data from AMS-IX, one of the largest Internet traffic exchange points, we see that at least 20% of all transmitted packets are of the maximum size. We can also look at total LAN traffic:

[Graph: packet size distribution in LAN traffic]

If you combine both graphs, you get something like the following (traffic estimates for each packet size range):

[Graph: estimated traffic volume for each packet size range]

Or, if we look only at the traffic consumed by headers and other protocol overhead, we get the same graph on a different scale:

[Graph: header overhead for each packet size range]

Quite a large share of the bandwidth is spent on headers for packets in the largest size class. Taking the worst case of 246 GB/s of overhead at peak traffic, we can estimate that if everyone had switched to “jumbo frames” back when the opportunity still existed, this overhead would be only about 41 GB/s.
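
The factor of six follows directly from the frame sizes: carrying the same payload in 9000-byte frames instead of 1500-byte ones takes a sixth as many packets, so the fixed per-packet header cost shrinks roughly in proportion (a rough estimate that ignores small differences in per-frame framing):

$ python3 -c "print(246 * 1500 / 9000)"
41.0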

But for the largest part of the Internet, I think, that ship has sailed. Although some providers do run with an MTU of 9000, most do not support it, and attempts to change anything globally on the Internet have proven, time and again, to be extremely difficult.

Source: habr.com
