Kubernetes 1.14: Highlights of what's new


Tonight the next release of Kubernetes arrives: 1.14. Following the tradition established on our blog, we cover the key changes in the new version of this wonderful Open Source product.

The information used to prepare this material is taken from the Kubernetes enhancements tracking tables, CHANGELOG-1.14, related issues and pull requests, and Kubernetes Enhancement Proposals (KEPs).

Let's start with an important introduction from SIG Cluster Lifecycle: dynamic failover Kubernetes clusters (more precisely, self-hosted HA deployments) can now be created using the familiar (in the context of single-node clusters) kubeadm commands (init and join). In short:

  • the certificates used by the cluster are stored in secrets;
  • to make it possible to run the etcd cluster inside the K8s cluster (i.e. to get rid of this external dependency), etcd-operator is used;
  • the recommended settings for an external load balancer that provides a failover configuration are documented (in the future it is planned to eliminate this dependency as well, but not at this stage).

Kubernetes HA cluster architecture created with kubeadm
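In practice the flow looks roughly like this (a hedged sketch based on the kubeadm flags available in 1.14; the load-balancer address, token, hash, and certificate key are placeholders you get from your own environment and from the output of kubeadm init):

```shell
# On the first control-plane node; LOAD_BALANCER_DNS:6443 is the external LB
kubeadm init \
  --control-plane-endpoint "LOAD_BALANCER_DNS:6443" \
  --experimental-upload-certs

# On each additional control-plane node, using the values printed by init
kubeadm join LOAD_BALANCER_DNS:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --experimental-control-plane \
  --certificate-key <key>
```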

Implementation details can be found in the design proposal. This feature was really long-awaited: an alpha version was expected back in K8s 1.9, but it has only appeared now.

API

The apply command, and declarative object management in general, has been moved from kubectl to the apiserver. The developers briefly explain the decision by the fact that kubectl apply is a fundamental part of working with configurations in Kubernetes, yet it is "full of bugs and difficult to fix," so this functionality needs to be brought into shape and transferred to the control plane. Simple, illustrative examples of the problems that exist today:


Implementation details are in the KEP. The current status is alpha (promotion to beta is planned for the next Kubernetes release).

An alpha version of the ability to use an OpenAPI v3 schema for creating and publishing OpenAPI documentation for CustomResources (CR) has become available; the schema is used for server-side validation of user-defined K8s resources (CustomResourceDefinitions, CRDs). Publishing OpenAPI for CRDs allows clients (for example, kubectl) to perform validation on their side (within kubectl create and kubectl apply) and to generate documentation from the schema (kubectl explain). Details are in the KEP.
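A minimal sketch of what such a CRD with an embedded validation schema looks like (the resource name and fields here are hypothetical, following the apiextensions.k8s.io/v1beta1 API of that era):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.example.com     # hypothetical resource
spec:
  group: example.com
  versions:
    - name: v1
      served: true
      storage: true
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
  validation:
    openAPIV3Schema:             # used for server-side validation
      type: object
      properties:
        spec:
          type: object
          properties:
            cronSpec:
              type: string
            replicas:
              type: integer
              minimum: 1
```

With the schema published, clients can reject an invalid object before it reaches the server, and kubectl explain can describe the fields of the custom resource.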

Pre-existing log files are now opened with the O_APPEND flag (rather than O_TRUNC) in order to avoid losing logs in certain situations and for the convenience of truncating logs with external rotation utilities.

Also in the context of the Kubernetes API: the runtime_handler field has been added to PodSandbox and PodSandboxStatus to record RuntimeClass information for a pod (read more about it in the post on the Kubernetes 1.12 release, where this class appeared as an alpha feature), and Admission Webhooks can now declare which AdmissionReview versions they support. Finally, admission webhook rules can now be scoped to namespaced or cluster-wide resources.
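A hedged sketch of how these two webhook changes look in a configuration (names and the target service are hypothetical):

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-webhook          # hypothetical
webhooks:
  - name: validate.example.com
    admissionReviewVersions: ["v1beta1"]  # declare supported AdmissionReview versions
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
        scope: "Namespaced"      # new in 1.14: Namespaced, Cluster, or *
    clientConfig:
      service:
        name: webhook-svc
        namespace: default
        path: /validate
```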

Storage

PersistentLocalVolumes, which have been in beta since the K8s 1.10 release, are now declared stable (GA): the feature gate can no longer be disabled and will be removed in Kubernetes 1.17.

The ability to use Downward API environment variables (e.g. the pod name) for the names of directories mounted as subPath has been developed in the form of a new field, subPathExpr, which now determines the desired directory name. The feature first appeared in Kubernetes 1.11 and remains in alpha status for 1.14.
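A minimal sketch of subPathExpr usage (pod and paths are hypothetical): the pod name is exposed via the Downward API and substituted into the mount's subdirectory name.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo             # hypothetical
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "echo hello > /logs/hello.txt; sleep 3600"]
      env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
      volumeMounts:
        - name: workdir
          mountPath: /logs
          subPathExpr: $(POD_NAME)   # directory named after the pod
  volumes:
    - name: workdir
      hostPath:
        path: /var/log/pods-demo
```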

As in the previous Kubernetes release, many significant changes arrive for the actively developing CSI (Container Storage Interface).

CSI

Support for resizing CSI volumes became available (as part of an alpha version). To use it, you need to enable the ExpandCSIVolumes feature gate, and the particular CSI driver must support this operation.
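A hedged sketch of the workflow (the driver and StorageClass names are hypothetical): the StorageClass must allow expansion, after which resizing is just raising the requested size on a bound PVC.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-expandable           # hypothetical
provisioner: csi.example.com     # a CSI driver that supports expansion
allowVolumeExpansion: true
---
# After the PVC is bound, resizing is editing the requested size:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  storageClassName: csi-expandable
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 20Gi              # was 10Gi; bumping it triggers expansion
```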

Another alpha CSI feature is the ability to refer directly (i.e. without PV/PVC) to CSI volumes within the pod specification. This removes the restriction of using CSI exclusively for remote data storage, opening the door for it into the world of local ephemeral volumes. To use it (there is an example in the documentation), the CSIInlineVolume feature gate must be enabled.
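Roughly, an inline CSI volume in a pod spec looks like this (a sketch with a hypothetical driver name and attributes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: inline-csi-demo          # hypothetical
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: scratch
          mountPath: /scratch
  volumes:
    - name: scratch
      csi:                       # volume declared inline, no PV/PVC objects
        driver: ephemeral.csi.example.com
        volumeAttributes:
          size: "1Gi"
```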

There has also been progress in the CSI-related Kubernetes "guts" that are not so visible to end users (system administrators). At the moment, developers are forced to maintain two versions of each storage plugin: one the "old-fashioned" way, inside the K8s codebase (in-tree), and the second within the new CSI (read more about it, for example, here). This causes understandable inconvenience that needs to be addressed as CSI itself stabilizes. It is not possible to simply declare the API of internal (in-tree) plugins deprecated, due to the relevant Kubernetes policy.

All this led to the migration process of internal (in-tree) plugin code to CSI plugins reaching alpha. It will reduce developers' maintenance burden to a single version of their plugins, while keeping compatibility with older APIs and allowing them to be deprecated in the usual way. It is expected that by the next Kubernetes release (1.15) all cloud provider plugins will be migrated, the implementation will receive beta status and will be activated in K8s installations by default. See details in the design proposal. This migration also entailed abandoning the volume limits defined by specific cloud providers (AWS, Azure, GCE, Cinder).

In addition, support for block devices with CSI (CSIBlockVolume) has been promoted to beta.
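A sketch of consuming a raw block device through CSI (the StorageClass name is hypothetical): the PVC requests volumeMode: Block, and the pod attaches it via volumeDevices rather than volumeMounts.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc            # hypothetical
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block              # raw block device instead of a filesystem
  storageClassName: csi-block    # a CSI driver with block support
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeDevices:             # note: volumeDevices, not volumeMounts
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
```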

Nodes / Kubelets

An alpha version of a new kubelet endpoint designed to return metrics on the main resources has been released. Generally speaking, whereas the kubelet previously received container usage statistics from cAdvisor, this data now comes from the container runtime via CRI (Container Runtime Interface), though compatibility with older Docker versions is preserved. Previously, the statistics collected in the kubelet were exposed through the REST API; now an endpoint located at /metrics/resource/v1alpha1 is used for this. The developers' long-term strategy is to minimize the set of metrics provided by the kubelet. By the way, these metrics themselves are now called "resource metrics" rather than "core metrics," and are described as "first-class resources, such as cpu, and memory."
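Assuming appropriate RBAC permissions, the new endpoint can be inspected through the API server's node proxy (a hedged sketch; the node name is a placeholder):

```shell
# Query the new resource-metrics endpoint through the API server proxy
kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/resource/v1alpha1
```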

A very interesting nuance: despite the clear performance advantage of a gRPC endpoint over various uses of the Prometheus format (see the result of one of the benchmarks below), the authors preferred the Prometheus text format due to that monitoring system's clear leadership in the community:

"gRPC is not compatible with major monitoring pipelines. The endpoint will only be useful for supplying metrics to the Metrics Server or to monitoring components that integrate directly with it. The performance of the Prometheus text format, when using caching in the Metrics Server, is good enough for us to prefer Prometheus over gRPC, given the widespread use of Prometheus in the community. As the OpenMetrics format becomes more stable, we will be able to approach gRPC performance with a proto-based format."

One of the comparative performance tests of the gRPC and Prometheus formats in the new kubelet metrics endpoint. More charts and other details can be found in the KEP.

Among other changes:

  • The kubelet now tries (once) to stop containers in an unknown state before restart and delete operations.
  • When using PodPresets, the same information is now added to init containers as to regular containers.
  • The kubelet started using usageNanoCores from the CRI stats provider, and network statistics have been added for hosts and containers on Windows.
  • Information about the operating system and architecture is now recorded in the kubernetes.io/os and kubernetes.io/arch labels of Node objects (promoted from beta to GA).
  • The ability to specify a specific system group for containers in a pod (RunAsGroup, which appeared in K8s 1.11) has advanced to beta (enabled by default).
  • The du and find utilities used in cAdvisor have been replaced by a Go implementation.
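A minimal RunAsGroup sketch (pod name and IDs are hypothetical): all processes in the container run with the given primary GID.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: rungroup-demo            # hypothetical
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000             # primary GID for processes in the pod's containers
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "id; sleep 3600"]
```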

CLI

A -k flag has been added to cli-runtime and kubectl for integration with kustomize (by the way, its development is now carried out in a separate repository), i.e. for processing additional YAML files from special kustomization directories (for details on using them, see the KEP):

A simple example of using a kustomization file (more complex use of kustomize with overlays is also possible)
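For illustration, a minimal kustomization.yaml might look like this (file names and labels here are hypothetical):

```yaml
# kustomization.yaml — a minimal sketch
resources:
  - deployment.yaml
  - service.yaml
commonLabels:
  app: my-app                    # label added to every resource
namePrefix: staging-             # prefix prepended to resource names
```

It is then applied with `kubectl apply -k ./` from the directory containing the file.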

In addition:

  • A new command, kubectl create cronjob, has been added; its name speaks for itself.
  • In kubectl logs you can now combine the flags -f (--follow, for streaming logs) and -l (--selector, for a label query).
  • kubectl learned to copy files selected by a wildcard.
  • The --all flag has been added to kubectl wait to select all resources in the namespace of the specified resource type.
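The new CLI capabilities above can be sketched roughly as follows (names, labels, and paths are placeholders):

```shell
# create a CronJob straight from the CLI
kubectl create cronjob backup --image=busybox --schedule="0 3 * * *" -- sh -c "echo backup"

# stream logs from all pods matching a label selector
kubectl logs -f -l app=nginx

# copy files matched by a wildcard out of a pod
kubectl cp default/my-pod:/var/log/*.log ./logs/

# wait for all deployments in the namespace to become available
kubectl wait --all --for=condition=available deployment
```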

Others

Stable (GA) status has been given to the following features:

Other changes introduced in Kubernetes 1.14:

  • The default RBAC policy no longer grants access to the discovery and access-review APIs to unauthenticated users.
  • Official CoreDNS support is provided for Linux only, so when kubeadm deploys it in a cluster, nodes must run on Linux (nodeSelectors are used to enforce this limitation).
  • The default CoreDNS configuration now uses the forward plugin instead of proxy. In addition, a readinessProbe has been added to CoreDNS to prevent load balancing onto pods that are not ready to serve.
  • In kubeadm, at the init or upload-certs phase, it became possible to upload the certificates required to join a new control-plane node to the kubeadm-certs secret (using the --experimental-upload-certs flag).
  • An alpha version of gMSA (Group Managed Service Account) support has appeared for Windows installations: special Active Directory accounts that can also be used by containers.
  • For GCE, mTLS encryption between etcd and kube-apiserver has been activated.
  • Updates in used/dependent software: Go 1.12.1, CSI 1.1, CoreDNS 1.3.1, Docker 18.09 support in kubeadm; the minimum supported Docker API version is now 1.26.

PS

Read also on our blog:

Source: habr.com
