/etc/resolv.conf for Kubernetes pods, option ndots:5, how this can negatively affect application performance
We recently launched Kubernetes 1.9 on AWS with the help of Kops. Yesterday, while gradually rolling new traffic over to our largest Kubernetes cluster, I started noticing unusual DNS name resolution errors logged by our application.
There is quite a bit said about this on GitHub, so I decided to look into it as well. It turned out that in our case the errors were caused by increased load on kube-dns and dnsmasq. The most interesting and new thing for me was the root cause of the significant growth in DNS query traffic. This post is about that cause and what to do about it.
DNS resolution inside a container, as in any Linux system, is driven by the configuration file /etc/resolv.conf. The default Kubernetes dnsPolicy is ClusterFirst, which means that any DNS request will be forwarded to dnsmasq, running in a kube-dns pod inside the cluster. dnsmasq in turn forwards the request to the kube-dns application if the name ends with the cluster suffix, or to an upstream DNS server otherwise.
The file /etc/resolv.conf inside each container will, by default, look like this:
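The original code block appears to have been lost in this copy; a representative /etc/resolv.conf as generated by Kubernetes looks like the following (the nameserver IP and search domains shown here are illustrative examples and depend on your cluster and namespace):

```
nameserver 100.64.0.10
search default.svc.cluster.local svc.cluster.local cluster.local eu-west-1.compute.internal
options ndots:5
```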
The interesting part of this configuration is how the local search domains and the ndots:5 setting interact. To understand that, you need to know how DNS resolution works for names that are not fully qualified.
What is a fully qualified name?
A fully qualified name is a name that will not be looked up through the local search list and will be treated as absolute during resolution. By convention, DNS software considers a name fully qualified if it ends with a dot (.), and not fully qualified otherwise. That is, google.com. is fully qualified and google.com is not.
How is an unqualified name handled?
When an application connects to a remote host specified by name, DNS resolution is usually performed via a system call such as getaddrinfo(). But if the name is not fully qualified (does not end with a .), will the system call first try to resolve it as an absolute name, or will it go through the local search domains first? That depends on the ndots option.
This means that if ndots is set to 5 and the name contains fewer than 5 dots, the system call will resolve it sequentially: it first goes through all the local search domains and, only if none of them succeed, finally resolves the name as an absolute one.
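The resolution order described above can be sketched in a few lines of Python. This is a simplified model of the glibc stub resolver's search behavior, not the actual implementation; the search domains used in the example are illustrative:

```python
def dns_query_order(name, search_domains, ndots=5):
    """Return the sequence of absolute names a stub resolver would try,
    mimicking glibc's ndots/search-list behavior (simplified sketch)."""
    # A trailing dot marks the name as fully qualified: the search list is skipped.
    if name.endswith("."):
        return [name]
    queries = []
    if name.count(".") < ndots:
        # Fewer dots than ndots: try every search domain first,
        # and only then the name as an absolute one.
        queries += [f"{name}.{domain}." for domain in search_domains]
        queries.append(name + ".")
    else:
        # Enough dots: try the absolute name first, then the search list.
        queries.append(name + ".")
        queries += [f"{name}.{domain}." for domain in search_domains]
    return queries

# Illustrative search list, similar to what Kubernetes puts in resolv.conf.
search = [
    "default.svc.cluster.local",
    "svc.cluster.local",
    "cluster.local",
    "eu-west-1.compute.internal",
]

# An external name with only 2 dots: 4 doomed lookups before the real one.
for q in dns_query_order("api.example.com", search):
    print(q)

# The same name fully qualified: resolved directly with a single lookup.
print(dns_query_order("api.example.com.", search))
```

Running this shows why an unqualified external name costs five queries with this configuration, while the same name with a trailing dot costs exactly one.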
Why can ndots:5 negatively affect application performance?
As you can imagine, if your application generates a lot of external traffic, then for every established TCP connection (or, more precisely, for every resolved name) it will issue 5 DNS queries before the name is properly resolved: it first goes through the 4 local search domains and only at the end issues a query for the absolute name.
The following chart shows the total traffic on our 3 kube-dns pods before and after we switched several hostnames configured in our application to fully qualified ones.
The following chart shows the latency of the application before and after we switched several hostnames configured in our application to fully qualified ones (the vertical blue line marks the deployment):
Solution #1 - Use fully qualified names
If you have a small number of static external names (i.e. names defined in the application configuration) to which you open a large number of connections, the simplest solution is probably to switch them to fully qualified names by simply appending a dot (.) at the end.
This is not a final solution, but it helps to improve the situation quickly, if not cleanly. We applied this fix to our problem; the results are shown in the charts above.
Solution #2 - Customize ndots in dnsConfig
Kubernetes 1.9 introduced, in alpha mode (beta in v1.10), a feature that gives you finer control over a pod's DNS settings via the dnsConfig pod property. Among other things, it lets you customize the ndots value for a specific pod.
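A minimal sketch of such a pod spec (the pod name, container name, and image here are hypothetical placeholders; setting ndots to 1 means any name containing at least one dot is first tried as an absolute name):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
spec:
  containers:
    - name: app            # hypothetical container
      image: nginx
  dnsConfig:
    options:
      - name: ndots
        value: "1"         # value must be quoted as a string
```

With dnsPolicy left at the default ClusterFirst, the options listed in dnsConfig are merged into the generated /etc/resolv.conf, overriding the default ndots:5 for this pod.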