Kubernetes 1.17: Highlights of what's new

Yesterday, December 9, the next Kubernetes release, 1.17, saw the light of day. Following the tradition established on our blog, we cover the most significant changes in the new version.


The information used to prepare this material is taken from the official announcement, the Kubernetes enhancements tracking table, CHANGELOG-1.17, and the related issues, pull requests, and Kubernetes Enhancement Proposals (KEPs). So, what's new?

Topology Aware Routing

The Kubernetes community has been waiting for this feature, topology-aware service routing, for a long time. The KEP for it dates back to October 2018, the official enhancement is about two years old, and ordinary issues (like this one) are older still by a few more years…

The general idea is to make it possible to implement "local" routing for services running in Kubernetes. "Local" here means "the same topology level", which may be:

  • the same node,
  • the same server rack,
  • the same region,
  • the same cloud provider,
  • ...

Examples of using this feature:

  • saving on traffic in cloud installations with multiple availability zones (multi-AZ); see, for instance, a recent illustration based on traffic within one region but across different AZs in AWS;
  • lower latency / better throughput;
  • a sharded service that keeps node-local information in each shard;
  • running fluentd (or its equivalents) on the same node as the applications whose logs it collects;
  • ...

Such routing, "aware" of the topology, is also called the similarity of network affinity - by analogy with node affinity, pod affinity/anti-affinity or emerging not so long ago Topology-Aware Volume Scheduling (And Volume Provisioning). Current level of implementation ServiceTopology in Kubernetes - alpha version.

For details on how the feature works and how you can already use it, read this article from one of the authors.

Dual stack IPv4/IPv6 support

Significant progress has been made on another networking feature: simultaneous support for two IP stacks, which was first introduced in K8s 1.16. In particular, the new release brings the following changes:

  • kube-proxy can now operate in both modes (IPv4 and IPv6) simultaneously;
  • downward API support appeared for Pod.Status.PodIPs (along with this, /etc/hosts now also needs to contain the host's IPv6 address);
  • dual-stack support in KIND (Kubernetes IN Docker) and kubeadm;
  • updated e2e tests.

Illustration of using dual-stack IPv4/IPv6 in KIND
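
For a rough idea of how this is switched on, here is a minimal sketch of a kubeadm configuration with the dual-stack feature gate enabled. The API version corresponds to kubeadm of that era, while the CIDR values are made up for illustration, so check the dual-stack documentation for your exact setup.

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  IPv6DualStack: true                            # the alpha dual-stack feature gate
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/56      # IPv4 and IPv6 pod CIDRs
  serviceSubnet: 10.96.0.0/12,fd00:10:96::/112   # IPv4 and IPv6 service CIDRs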

Progress on CSI

Topology support for CSI-based storage, first introduced in K8s 1.12, has been declared stable.

The initiative to migrate volume plugins to CSI (CSI Migration) has reached beta. This feature is critical for moving existing in-tree storage plugins to the modern out-of-tree interface (CSI) transparently for Kubernetes end users. Cluster administrators will only need to enable CSI Migration, after which existing stateful resources and workloads will still "just work"… but using up-to-date CSI drivers instead of the outdated ones included in the Kubernetes core.

Migration is currently in beta for the AWS EBS (kubernetes.io/aws-ebs) and GCE PD (kubernetes.io/gce-pd) drivers, with forecasts in place for the other storage plugins as well.


We described how "traditional" storage support in K8s arrived at CSI in this article, and a separate publication on the project blog is dedicated to the transition of CSI Migration to beta.
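
Since the paragraph above says that administrators "only need to activate CSI Migration", here is a minimal sketch of what that activation could look like on the kubelet side, assuming migration to the AWS EBS CSI driver; the same gates also have to be enabled on kube-controller-manager, and the corresponding CSI driver must already be installed in the cluster.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true       # enable the migration framework itself
  CSIMigrationAWS: true    # route kubernetes.io/aws-ebs volumes to the EBS CSI driver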

In addition, another significant piece of CSI-related functionality, whose alpha implementation originated in K8s 1.12, has reached beta status (i.e. is enabled by default) in Kubernetes 1.17: creating volume snapshots and restoring from them. Among the changes made to Kubernetes Volume Snapshot on the way to beta:

  • the CSI external-snapshotter sidecar was split into two controllers,
  • a deletion secret can now be attached as an annotation to the volume snapshot content,
  • a new finalizer prevents the snapshot API object from being deleted while references to it remain.

At the time of the 1.17 release, the feature is supported by three CSI drivers: GCE Persistent Disk CSI Driver, Portworx CSI Driver, and NetApp Trident CSI Driver. You can read more about its implementation and usage in this publication on the project blog.
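
For reference, a minimal sketch of what working with the beta snapshot API looks like: a VolumeSnapshot taken from an existing PVC, and a new PVC restored from it. The class, PVC, and snapshot names are hypothetical, and one of the CSI drivers listed above must be installed together with the snapshot CRDs.

apiVersion: snapshot.storage.k8s.io/v1beta1
kind: VolumeSnapshot
metadata:
  name: data-snapshot                      # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-snapclass   # hypothetical VolumeSnapshotClass
  source:
    persistentVolumeClaimName: data-pvc    # the PVC to snapshot
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-restored                      # PVC restored from the snapshot
spec:
  storageClassName: csi-sc                 # hypothetical StorageClass
  dataSource:
    name: data-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi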

Cloud Provider Labels

The labels that are automatically assigned to created nodes and volumes, depending on the cloud provider in use, have been available in Kubernetes as beta for a very long time, ever since K8s 1.2 (April 2016!). Given how widely they have been used for so long, the developers decided it was time to declare the feature stable (GA).

To that end, all of them were renamed (in line with topology terminology):

  • beta.kubernetes.io/instance-type → node.kubernetes.io/instance-type
  • failure-domain.beta.kubernetes.io/zone → topology.kubernetes.io/zone
  • failure-domain.beta.kubernetes.io/region → topology.kubernetes.io/region

… but they remain available under the old names (for backward compatibility). Nevertheless, all administrators are advised to switch to the current labels; the relevant K8s documentation has been updated.
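
For example, scheduling a pod into a specific zone with the stable label looks like this (a minimal sketch; the pod name, image, and zone value are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: zonal-app                                 # hypothetical pod
spec:
  nodeSelector:
    topology.kubernetes.io/zone: europe-west3-a   # the GA label instead of failure-domain.beta.kubernetes.io/zone
  containers:
    - name: app
      image: nginx:1.17                           # hypothetical image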

Structured output of kubeadm

Structured output for the kubeadm utility has been introduced for the first time, in alpha. Supported formats: JSON, YAML, Go template.

The motivation for implementing this feature (according to the KEP):

Although Kubernetes can be manually deployed, the de facto (if not de jure) standard for this operation is to use kubeadm. Popular systems management tools like Terraform rely on kubeadm to deploy Kubernetes. Planned improvements to the Cluster API include a buildable package for bootstrapping Kubernetes with kubeadm and cloud-init.

Without structured output, even seemingly innocuous changes can break Terraform, the Cluster API, and other software that uses the output of kubeadm.

Future plans include support (as structured output) for the following kubeadm commands:

  • alpha certs
  • config images list
  • init
  • token create
  • token list
  • upgrade plan
  • version

An illustration of the JSON output of the kubeadm init -o json command:

{
  "node0": "192.168.20.51:443",
  "caCrt": "sha256:1f40ff4bd1b854fb4a5cf5d2f38267a5ce5f89e34d34b0f62bf335d74eef91a3",
  "token": {
    "id":          "5ndzuu.ngie1sxkgielfpb1",
    "ttl":         "23h",
    "expires":     "2019-05-08T18:58:07Z",
    "usages":      [
      "authentication",
      "signing"
    ],
    "description": "The default bootstrap token generated by 'kubeadm init'.",
    "extraGroups": [
      "system:bootstrappers:kubeadm:default-node-token"
    ]
  },
  "raw": "Rm9yIHRoZSBhY3R1YWwgb3V0cHV0IG9mIHRoZSAia3ViZWFkbSBpbml0IiBjb21tYW5kLCBwbGVhc2Ugc2VlIGh0dHBzOi8vZ2lzdC5naXRodWIuY29tL2FrdXR6LzdhNjg2ZGU1N2JmNDMzZjkyZjcxYjZmYjc3ZDRkOWJhI2ZpbGUta3ViZWFkbS1pbml0LW91dHB1dC1sb2c="
}

Stabilization of other innovations

Overall, the Kubernetes 1.17 release took place under the motto of "stability". This was helped by the fact that many of its features (14 in total) were promoted to GA status, among them ones already mentioned above, such as topology support for CSI-based storage and the cloud provider labels.

Other changes

The full list of innovations in Kubernetes 1.17 is, of course, not limited to those described above. Here are some others (for a more complete list, see the CHANGELOG):

  • the RunAsUserName feature for Windows, presented in the previous release, has "grown up" to beta;
  • a similar change happened to the EndpointSlice API (also from K8s 1.16); however, this solution for improving the performance/scalability of the Endpoints API is not yet enabled by default;
  • pods critical to cluster operation can now be created not only in the kube-system namespace (see the documentation on Limit Priority Class consumption for details);
  • a new kubelet option, --reserved-cpus, allows you to explicitly define the list of CPUs reserved for the system (see the sketch after this list);
  • kubectl logs got a new --prefix flag, which adds the source pod and container name to each log line;
  • RequiresExactMatch was added to label.Selector;
  • all containers in kube-dns now run with fewer privileges;
  • hyperkube has been moved into a separate GitHub repository and will no longer be included in Kubernetes releases;
  • kube-proxy performance for non-UDP ports has been significantly improved.
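
As promised above, a minimal sketch of the --reserved-cpus equivalent in the kubelet configuration file, assuming the reservedSystemCPUs field that backs this flag; the CPU numbers here are arbitrary.

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
reservedSystemCPUs: "0,1"    # explicitly reserve CPUs 0 and 1 for system daemons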

Dependency changes:

  • the CoreDNS version shipped with kubeadm is 1.6.5;
  • the crictl version has been updated to v1.16.1;
  • CSI 1.2.0;
  • etcd 3.4.3;
  • the latest validated Docker version has been bumped to 19.03;
  • the minimum Go version required to build Kubernetes 1.17 is 1.13.4.

PS

Read also on our blog:

Source: habr.com
