One of Chromium's features puts a huge load on root DNS servers

The Chromium browser, the burgeoning open-source parent of Google Chrome and the new Microsoft Edge, has received significant backlash for a well-intentioned feature: it checks to see if the user's ISP is "stealing" non-existent domain query results.

Intranet Redirect Detector, which creates fake queries for random "domains" that are statistically unlikely to exist, is responsible for about half of the total traffic received by root DNS servers worldwide. Verisign engineer Matt Thomas wrote a lengthy post on the APNIC blog with a description of the problem and an estimate of its scope.

How DNS resolution is normally performed

These servers are the final authority you consult when resolving .com, .net, and so on, and the ones that will tell you that frglxrtmpuf is not a top-level domain (TLD).

The DNS, or Domain Name System, is a system by which computers can translate memorable domain names like arstechnica.com into much less convenient IP addresses like 3.128.236.93. Without DNS, the Internet could not exist in a usable form, which means that unnecessary load on the top-level infrastructure is a real problem.
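That translation is exposed directly by the standard library of most languages. A minimal Python sketch (the addresses returned for any real site will vary; "localhost" is used here because it resolves locally without touching the DNS hierarchy at all):

```python
import socket

def resolve(hostname: str) -> list[str]:
    """Ask the system's resolver chain to translate a hostname into IPv4 addresses."""
    infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET)
    # Each entry is (family, type, proto, canonname, sockaddr);
    # for IPv4 the address is the first element of sockaddr.
    return sorted({info[4][0] for info in infos})

print(resolve("localhost"))  # typically ['127.0.0.1']
```

Calling `resolve("arstechnica.com")` instead would exercise the caching hierarchy described below.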

Loading a single modern web page can require a startling number of DNS lookups. For example, when we analyzed the ESPN homepage, we counted 93 separate domain names, from a.espncdn.com to z.motads.com. All of them are needed to fully load the page!

In order to handle this load for a lookup service the entire world depends on, the DNS is designed as a multi-level hierarchy. Near the top of this pyramid are the TLD servers: each top-level domain, such as .com, has its own family of servers that are the ultimate authority for every domain beneath it. One step above them are the root servers themselves, from a.root-servers.net to m.root-servers.net.

How often does this happen?

Due to the tiered caching hierarchy of the DNS infrastructure, a very small percentage of the world's DNS queries reach root servers. Most people get their DNS resolver information directly from their ISP. When a user's device needs to know how to get to a particular site, the query is first sent to a DNS server managed by that local ISP. If the local DNS server does not know the answer, it forwards the request to its own "forwarders" (if any are specified).

If neither the local ISP's DNS server nor its configured forwarders have a cached response, the query goes to the authoritative servers one level above the domain you are trying to resolve. For domain.com, that means the query is sent to the authoritative servers for the com TLD itself, which live at gtld-servers.net.

The gtld-servers system that receives the query responds with a list of authoritative name servers for domain.com, along with at least one glue record containing the IP address of one of those name servers. The answers then travel back down the chain: each forwarder passes them down to the server that asked, until the response finally reaches the local ISP's server and the user's computer. Every hop along the way caches the response so that the higher-level systems are not bothered unnecessarily.

In most cases, the name server records for domain.com will already be cached on one of those forwarders, so no one bothers the root servers at all. But so far we have been talking about an ordinary URL, one that resolves to a real website. Chromium's probes go one step above that, to the root-servers.net clusters themselves.
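The caching chain described above can be sketched with a toy model. This is purely illustrative: real resolvers speak the DNS wire protocol, honor TTLs, and distinguish referrals from answers, and the IP address below is an arbitrary placeholder.

```python
# Toy model of the DNS caching hierarchy: each resolver answers from its
# own records or cache if it can, and otherwise asks the next hop up and
# caches the result, so the servers above it are not asked twice.
class Resolver:
    def __init__(self, name, upstream=None, authoritative=None):
        self.name = name
        self.upstream = upstream            # next hop (forwarder/root), or None
        self.records = authoritative or {}  # records this server owns
        self.cache = {}

    def resolve(self, domain):
        if domain in self.records:          # authoritative answer
            return self.records[domain]
        if domain in self.cache:            # cached: upstream is left alone
            return self.cache[domain]
        if self.upstream is None:
            return "NXDOMAIN"               # the top's final word for bogus names
        answer = self.upstream.resolve(domain)
        self.cache[domain] = answer         # cache so the next query stops here
        return answer

# A three-level chain: user's ISP -> its forwarder -> the top of the hierarchy.
top = Resolver("top", authoritative={"domain.com": "93.184.216.34"})
forwarder = Resolver("forwarder", upstream=top)
isp = Resolver("isp", upstream=forwarder)

print(isp.resolve("domain.com"))  # first lookup climbs all the way up
print("domain.com" in isp.cache)  # True: the next query never leaves the ISP
```

A name nobody owns, such as `isp.resolve("qwajuixk")`, climbs the whole chain and comes back as "NXDOMAIN", which is exactly the path Chromium's probes take.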

Chromium and NXDomain theft check

Chromium's "is this DNS server fooling me?" checks account for nearly half of all traffic reaching Verisign's root DNS server clusters.

The Chromium browser, the parent project of Google Chrome, the new Microsoft Edge, and countless lesser-known browsers, wants to give users the convenience of searching from a single box, sometimes referred to as the "Omnibox". In other words, the user enters both real URLs and search engine queries in the same text field at the top of the browser window. Taking simplification one step further, it also doesn't force the user to type the http:// or https:// part of the URL.

As convenient as it is, this approach requires the browser to understand what should be treated as a URL and what should be treated as a search query. In most cases this is fairly obvious: a string with spaces, for example, cannot be a URL. But things get trickier when you consider intranets, private networks that can also use private top-level domains that resolve to real websites.

If a user on a company intranet types "marketing" and an internal website by that name exists on that intranet, Chromium displays an info bar asking whether the user wants to search for "marketing" or go to https://marketing. So far so good, but many ISPs and public Wi-Fi providers "steal" every mistyped URL, redirecting the user to a page stuffed with banner ads.

Random generation

Chromium's developers didn't want users on ordinary networks to see that info bar every time they searched for a single word, so they implemented a test: on browser startup and whenever the network changes, Chromium performs DNS lookups for three randomly generated top-level "domains", each seven to fifteen characters long. If any two of those requests come back with the same IP address, Chromium assumes the local network is hijacking the NXDOMAIN errors it should be receiving, and from then on the browser treats all single-word entries as search queries until further notice.
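A rough sketch of what such probe names might look like (the exact generation logic lives in Chromium's source; this approximation assumes lowercase letters only):

```python
import random
import string

def random_probe_name() -> str:
    """One Chromium-style probe: 7 to 15 random lowercase letters."""
    length = random.randint(7, 15)
    return "".join(random.choices(string.ascii_lowercase, k=length))

# Three probes, generated at startup or on a network change.
probes = [random_probe_name() for _ in range(3)]
print(probes)
```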

Unfortunately, on networks that do not hijack DNS query results, those three lookups usually travel all the way up to the root name servers themselves: the local server doesn't know how to resolve qwajuixk, so it forwards the query to its forwarder, which does the same, until finally a.root-servers.net or one of its siblings is forced to answer "sorry, that's not a domain".
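The decision rule itself, "if two probes resolve to the same address, the network is lying", can be sketched as follows. The helper names and addresses here are hypothetical; Chromium's actual implementation differs.

```python
def network_hijacks_nxdomain(resolve, probes) -> bool:
    """Return True if at least two random probes resolve to the same IP,
    which on an honest network should essentially never happen."""
    answers = []
    for name in probes:
        ip = resolve(name)      # None models an honest NXDOMAIN response
        if ip is not None:
            if ip in answers:
                return True     # two probes, same IP: results are hijacked
            answers.append(ip)
    return False

honest = lambda name: None              # every probe gets NXDOMAIN
hijacker = lambda name: "203.0.113.7"   # every probe gets the same ad page

probes = ["qwajuixk", "fkzlrwmnop", "xbtvqhjc"]
print(network_hijacks_nxdomain(honest, probes))    # False
print(network_hijacks_nxdomain(hijacker, probes))  # True
```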

Since there are roughly 1.7 × 10^21 possible fake names of seven to fifteen lowercase letters, nearly every one of these probes made on an "honest" network ends up at a root server. That amounts to half of the total load on the root DNS, according to statistics from the portion of the root-servers.net clusters that Verisign operates.
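That count checks out with quick arithmetic, assuming 26 lowercase letters per position:

```python
# Number of distinct all-letter names of length 7 through 15:
# 26^7 + 26^8 + ... + 26^15, a geometric series dominated by the 26^15 term.
total = sum(26 ** n for n in range(7, 16))
print(f"{total:.2e}")  # 1.74e+21
```

So the odds of any randomly generated probe colliding with a name a resolver has already cached are vanishingly small.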

History repeats itself

This is not the first time a well-intentioned project has flooded, or nearly flooded, a public resource with unnecessary traffic. It immediately reminded us of the long, sad story of D-Link and Poul-Henning Kamp's NTP server in the mid-2000s.

In 2005, FreeBSD developer Poul-Henning Kamp, who also ran Denmark's only Stratum 1 Network Time Protocol server, received an unexpectedly large bill for his traffic. In short, D-Link's developers had hard-coded the addresses of Stratum 1 NTP servers, including Kamp's, into the firmware of the company's line of switches, routers, and access points. This instantly multiplied the traffic to Kamp's server ninefold, prompting the Danish Internet Exchange (Denmark's Internet exchange point) to change his rate from "free" to "$9,000 per year".

The problem was not that there were too many D-Link routers, but that they "violated the chain of command." Much like DNS, NTP is meant to operate hierarchically: Stratum 0 devices feed time to Stratum 1 servers, which feed Stratum 2 servers, and so on down the hierarchy. A typical home router, switch, or access point, like the ones D-Link had programmed with NTP server addresses, should query a Stratum 2 or Stratum 3 server.

The Chromium project, doubtless with the best of intentions, recreated the NTP problem in the DNS world by flooding the Internet's root servers with queries they were never meant to handle.

There is hope for a solution soon

The Chromium project has an open bug asking for the Intranet Redirect Detector to be disabled by default, which would resolve this issue. To the Chromium project's credit, the bug was filed before Verisign's Matt Thomas drew broad attention to it with his APNIC blog post. The bug was opened in June but languished until Thomas's post; since then it has received close attention.

Hopefully the problem will soon be resolved, and the root DNS servers will no longer have to answer roughly 60 billion bogus queries every day.


Source: habr.com
