How to use kubectl more efficiently: a detailed guide

If you're working with Kubernetes, then kubectl is probably one of your most used utilities. And whenever you spend a lot of time working with a certain tool, it's worth learning it well and learning how to use it effectively.

The Kubernetes aaS team from Mail.ru has translated an article by Daniel Weibel that provides tips and tricks for working effectively with kubectl. It will also help you better understand how Kubernetes works.

According to the author, the purpose of the article is to make your daily work with Kubernetes not only more efficient, but also more enjoyable!

Introduction: What is kubectl

Before learning how to use kubectl more effectively, you need to get a basic understanding of what it is and how it works.

From the user's point of view, kubectl is a control panel that allows you to perform Kubernetes operations.

From a technical standpoint, kubectl is a Kubernetes API client.

Kubernetes API is an HTTP REST API. This API is the true Kubernetes user interface through which it is fully controlled. This means that every Kubernetes operation is exposed as an API endpoint and can be made with an HTTP request to that endpoint.

Therefore, the main task of kubectl is to make HTTP requests to the Kubernetes API.

Kubernetes is a completely resource-oriented system. This means that it maintains the internal state of the resources and all Kubernetes operations are CRUD operations.

You are in complete control of Kubernetes by managing these resources, and Kubernetes figures out what to do based on the current state of the resources. For this reason, the Kubernetes API reference is organized as a list of resource types with their associated operations.

Let's look at an example.

Let's say you want to create a ReplicaSet resource. To do this, you declare a ReplicaSet in a file named replicaset.yaml, then run the command:

$ kubectl create -f replicaset.yaml
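
For illustration, a minimal replicaset.yaml could look like the following sketch (the nginx image and the label names are arbitrary placeholders, not something prescribed by the article):

$ cat replicaset.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-replicaset
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25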

This will create a ReplicaSet resource. But what happens behind the scenes?

Kubernetes has a ReplicaSet creation operation. Like any other operation, it is provided as an API endpoint. The specific API endpoint for this operation looks like this:

POST /apis/apps/v1/namespaces/{namespace}/replicasets

The API endpoints of all Kubernetes operations (including the one above) can be found in the API reference. To make an actual request, you prepend the URL of the API server to the endpoint paths listed in the API reference.

Hence, when you execute the above command, kubectl sends an HTTP POST request to the above API endpoint. The ReplicaSet definition that you provided in the replicaset.yaml file is passed in the body of the request.

This is how kubectl works for all commands that interact with the Kubernetes cluster. In all of these cases, kubectl simply sends HTTP requests to the appropriate Kubernetes API endpoints.

Please note that you can fully manage Kubernetes with a utility such as curl by manually sending HTTP requests to the Kubernetes API. Kubectl just makes it easier to use the Kubernetes API.
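
For example, with kubectl proxy handling authentication locally, a raw interaction with the API could look roughly like this (the namespace and file name are just placeholders):

# Start a local proxy to the API server (it handles authentication for us)
$ kubectl proxy --port=8001 &
# List all pods in the default namespace
$ curl http://127.0.0.1:8001/api/v1/namespaces/default/pods
# Create a ReplicaSet by POSTing its definition to the corresponding endpoint
$ curl -X POST -H 'Content-Type: application/yaml' \
    --data-binary @replicaset.yaml \
    http://127.0.0.1:8001/apis/apps/v1/namespaces/default/replicasets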

These are the basics of what kubectl is and how it works. But there is one more thing about the Kubernetes API that every kubectl user should know. Let's briefly dive into the inner world of Kubernetes.

The Inner World of Kubernetes

Kubernetes consists of a set of independent components that run as separate processes on the cluster nodes. Some components run on the master nodes, others on the worker nodes, and each component performs its own specific task.

Here are the most important components on the master nodes:

  1. Storage - stores the resource definitions (usually it's etcd).
  2. API server - provides the API and manages the storage.
  3. Controller manager - ensures that the actual state of resources matches their specifications.
  4. Scheduler - schedules pods onto worker nodes.

And here is one of the most important components on worker nodes:

  1. kubelet - manages the launch of containers on the worker node.

To understand how these components work together, consider an example.

Let's say you just ran kubectl create -f replicaset.yaml, after which kubectl made an HTTP POST request to the ReplicaSet API endpoint (passing your ReplicaSet resource definition in the request body).

What happens in a cluster?

  1. After you run kubectl create -f replicaset.yaml, the API server stores your ReplicaSet resource definition in the storage.

  2. This activates the ReplicaSet controller in the controller manager, which handles the creation, modification, and deletion of ReplicaSet resources.

  3. The ReplicaSet controller creates a pod definition for each replica of the ReplicaSet (according to the pod template in the ReplicaSet definition) and stores them in the storage.

  4. This activates the scheduler, which keeps track of pods that have not yet been assigned to a worker node.

  5. The scheduler selects a suitable worker node for each pod and adds this information to the pod definition in the storage.

  6. This activates the kubelet on the worker node that each pod has been assigned to; the kubelet watches the pods assigned to its node.

  7. The kubelet reads the pod definition from the storage and instructs a container runtime, such as Docker, to run the containers on the node.

The following describes the same sequence in more detail.

The API request to the ReplicaSet creation endpoint is handled by the API server. The API server authenticates the request and stores the ReplicaSet resource definition in storage.

This event activates the ReplicaSet controller, which is a sub-process of the controller manager. The ReplicaSet controller watches for the creation, update, and deletion of ReplicaSet resources in the storage and receives an event notification whenever that happens.

The task of the ReplicaSet controller is to ensure that the required number of replica pods of a ReplicaSet exist. In our example, no pods exist yet, so the ReplicaSet controller creates the pod definitions (according to the pod template in the ReplicaSet definition) and saves them to the storage.

The creation of the new pods activates the scheduler, which watches for pod definitions that are not yet scheduled to a worker node. The scheduler selects a suitable worker node for each pod and updates the pod definitions in the storage.

Note that up to this point, no workload code has been running anywhere in the cluster. Everything that has happened so far amounts to creating and updating resources in the storage on the master node.

The last event activates the kubelets, which watch for the pods scheduled to their worker nodes. The kubelet of the worker node that your ReplicaSet pods have been scheduled to instructs the container runtime, such as Docker, to download the required container images and run the containers.

At this point, your ReplicaSet application is finally up and running!

The Role of the Kubernetes API

As you saw in the previous example, the Kubernetes components (with the exception of the API server and storage) watch for changes to the resources in the storage and change information about the resources in the storage.

Of course, these components do not interact directly with the storage, but only through the Kubernetes API.

Consider the following examples:

  1. The ReplicaSet controller uses the list ReplicaSets API endpoint with the watch parameter to monitor changes to ReplicaSet resources.
  2. The ReplicaSet controller uses the create Pod API endpoint to create pods.
  3. The scheduler uses the patch Pod API endpoint to update pods with the information about the selected worker node.

As you can see, this is the same API that kubectl accesses. Using the same API for internal components and external users is a fundamental design concept of Kubernetes.
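
You can even observe this list-and-watch pattern from the outside. A rough sketch, again relying on kubectl proxy (the controllers themselves use client libraries rather than curl):

$ kubectl proxy --port=8001 &
# Stream change events for ReplicaSets in all namespaces
$ curl "http://127.0.0.1:8001/apis/apps/v1/replicasets?watch=true"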

Now we can summarize how Kubernetes works:

  1. The store maintains state, i.e. Kubernetes resources.
  2. The API server provides an interface to the storage in the form of the Kubernetes API.
  3. All other Kubernetes components and users read, observe, and manipulate Kubernetes state (resources) through the API.

Knowing these concepts will help you better understand kubectl and get the most out of it.

Now let's take a look at some specific tips and tricks to help you get more productive with kubectl.

1. Speed up input with command completion

One of the most useful but often overlooked kubectl performance tricks is command completion.

Command completion allows you to autocomplete certain parts of kubectl commands with the Tab key. This works for subcommands, options, and arguments, including complex ones like resource names.

Command completion works for the Bash and Zsh shells.

The official guide contains detailed instructions for setting up autocompletion, but a short summary is given below.

How command completion works

Command completion is a shell feature that works by means of a completion script. A completion script is a shell script that defines the completion behavior for a specific command.

Kubectl can automatically generate and print completion scripts for Bash and Zsh with the following commands:

$ kubectl completion bash

Or:

$ kubectl completion zsh

In theory, sourcing the output of these commands in the respective shell is enough to enable kubectl command completion.

In practice, the way you do this differs between Bash (with further differences between Linux and macOS) and Zsh. All of these cases are covered below.

Bash on Linux

The Bash completion script depends on the bash-completion package, so you need to install it first:

$ sudo apt-get install bash-completion

Or:

$ yum install bash-completion

You can test that the package was installed successfully with the following command:

$ type _init_completion

If this prints the code of a shell function, bash-completion is installed correctly. If the command reports "not found", you need to add the following line to your ~/.bashrc file:

source /usr/share/bash-completion/bash_completion

Whether you need to add this line to your ~/.bashrc depends on the package manager you used to install bash-completion: APT requires it, YUM does not.

Once bash-completion is installed, we need to set everything up so that the kubectl completion script is enabled in all shell sessions.

One way to do this is to add the following line to your ~/.bashrc file:

source <(kubectl completion bash)

Another way is to add the kubectl completion script to the /etc/bash_completion.d directory (create it if it doesn't exist):

$ kubectl completion bash >/etc/bash_completion.d/kubectl

All completion scripts in the /etc/bash_completion.d directory are automatically sourced by bash-completion.

Both approaches are equivalent.

After reloading the shell, kubectl command completion will work.

Bash on macOS

On macOS, the setup is somewhat more complicated, because macOS ships with Bash 3.2 by default, while the kubectl completion script requires at least Bash 4.1 and does not work in Bash 3.2.

The use of an outdated version of Bash on MacOS is related to licensing issues. Bash version 4 is distributed under the GPLv3 license, which is not supported by Apple.

To set up kubectl autocompletion on macOS, you need to install a more recent version of Bash. You can also set the updated Bash as your default shell, which will save you a lot of trouble in the future. It's easy; details are given in the article "Upgrading Bash on macOS".

Before proceeding, make sure you are using the newer version of Bash (check the output of bash --version).

The kubectl completion script for Bash depends on the bash-completion project, so you need to install it first.

You can install bash-completion with Homebrew:

$ brew install bash-completion@2

Here @2 stands for bash-completion version 2. kubectl completion requires bash-completion v2, and bash-completion v2 requires at least Bash version 4.1.

The output of the brew install command contains a "Caveats" section that tells you what to add to your ~/.bash_profile file:

export BASH_COMPLETION_COMPAT_DIR=/usr/local/etc/bash_completion.d
[[ -r "/usr/local/etc/profile.d/bash_completion.sh" ]] && . "/usr/local/etc/profile.d/bash_completion.sh"

However, I recommend adding these lines to your ~/.bashrc rather than ~/.bash_profile. That way, completion will be available not only in top-level shells but also in sub-shells.

After restarting the shell, you can verify that the installation was correct with the following command:

$ type _init_completion

If you see a shell function in the output, then everything is set up correctly.

Now we need to make sure that the kubectl completion script gets loaded in all shell sessions.

One way is to add the following line to your ~/.bashrc:

source <(kubectl completion bash)

The second way is to add the auto-completion script to the folder /usr/local/etc/bash_completion.d:

$ kubectl completion bash >/usr/local/etc/bash_completion.d/kubectl

This method will only work if you installed bash-completion with Homebrew. In this case, bash-completion will load all scripts from that directory.

If you installed kubectl with Homebrew, then you do not need to perform the previous step, since the auto-completion script will be automatically placed in the folder /usr/local/etc/bash_completion.d during installation. In this case kubectl completion will start working as soon as you install bash-completion.

In the end, all these options are equivalent.

Zsh

The completion script for Zsh does not have any dependencies. All you need to do is enable it when the shell loads.

You can do this by adding a line to your ~/.zshrc file:

source <(kubectl completion zsh)

If you get a not found: compdef error after restarting your shell, you need to enable the compdef builtin. You can do this by adding the following to the top of your ~/.zshrc:

autoload -Uz compinit
compinit

2. Quick view of resource specifications

When you create YAML resource definitions, you need to know the fields and their meaning for those resources. One place to look for this information is in the API reference, which contains full specifications for all resources.

However, switching to a web browser every time you need to search for something is inconvenient. Therefore, kubectl provides the command kubectl explain, which shows the specifications of all resources right in your terminal.

The command format is the following:

$ kubectl explain resource[.field]...

The command will output the specification of the requested resource or field. The information displayed is identical to that contained in the API manual.
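
For example, for a single field the output looks roughly like this (abridged; the exact wording depends on your cluster version):

$ kubectl explain deployment.spec.replicas
KIND:     Deployment
VERSION:  apps/v1

FIELD:    replicas <integer>

DESCRIPTION:
     Number of desired pods. This is a pointer to distinguish between explicit
     zero and not specified. Defaults to 1.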

By default kubectl explain shows only the first level of field nesting.


You can display the entire tree if you add the option --recursive:

$ kubectl explain deployment.spec --recursive

If you don't know exactly which resources you need, you can display them all with the following command:

$ kubectl api-resources

This command displays resource names in their plural form, for example, deployments instead of deployment. It also displays the short name, for example deploy, for resources that have one. Don't worry about these differences: all of these naming variants are equivalent for kubectl. That is, you can use any of them for kubectl explain.

All of the following commands are equivalent:

$ kubectl explain deployments.spec
# or
$ kubectl explain deployment.spec
# or
$ kubectl explain deploy.spec

3. Use Custom Column Output Format

Here is the default output format of the kubectl get command:

$ kubectl get pods
NAME                     READY    STATUS    RESTARTS  AGE
engine-544b6b6467-22qr6   1/1     Running     0       78d
engine-544b6b6467-lw5t8   1/1     Running     0       78d
engine-544b6b6467-tvgmg   1/1     Running     0       78d
web-ui-6db964458-8pdw4    1/1     Running     0       78d

This format is convenient, but it contains a limited amount of information. Compared to the full resource definition format, only a few fields are displayed here.

In this case, you can use a custom column output format. It allows you to determine what data to output. You can display any resource field as a separate column.

The custom columns output format is specified with the following option:

-o custom-columns=<header>:<jsonpath>[,<header>:<jsonpath>]...

You define each output column as a <header>:<jsonpath> pair, where <header> is the column name and <jsonpath> is an expression that selects a resource field.

Let's look at a simple example:

$ kubectl get pods -o custom-columns='NAME:metadata.name'

NAME
engine-544b6b6467-22qr6
engine-544b6b6467-lw5t8
engine-544b6b6467-tvgmg
web-ui-6db964458-8pdw4

The output contains one column with pod names.

The expression in the option selects the pod names from the metadata.name field. This is because the pod name is defined in the name field, which is a child of the metadata field in the Pod resource. More details can be found in the API reference or with the command kubectl explain pod.metadata.name.

Now let's say you want to add an extra column to the output, for example showing the node each Pod is running on. To do this, you can simply add the appropriate column specification to the custom columns option:

$ kubectl get pods \
  -o custom-columns='NAME:metadata.name,NODE:spec.nodeName'

NAME                       NODE
engine-544b6b6467-22qr6    ip-10-0-80-67.ec2.internal
engine-544b6b6467-lw5t8    ip-10-0-36-80.ec2.internal
engine-544b6b6467-tvgmg    ip-10-0-118-34.ec2.internal
web-ui-6db964458-8pdw4     ip-10-0-118-34.ec2.internal

The expression selects the node name from spec.nodeName: when a pod is assigned to a node, the node's name is written to the spec.nodeName field of the Pod resource. More details can be found in the output of kubectl explain pod.spec.nodeName.

Note that Kubernetes resource fields are case sensitive.

You can view any resource field as a column. Just look at the resource spec and try it out with whatever fields you like.
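
For example, the following sketch adds the pod phase and the pod IP, both of which live in the status section of the Pod resource (status.phase and status.podIP are standard fields, used here purely as an illustration):

$ kubectl get pods \
  -o custom-columns='NAME:metadata.name,STATUS:status.phase,IP:status.podIP'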

Before you start experimenting, though, let's take a closer look at the field selection expressions.

JSONPath expressions

The expressions for selecting resource fields are based on JSONPath.

JSONPath is a language for extracting data from JSON documents. Selecting a single field is the simplest use case; JSONPath has many more capabilities, including selectors, filters, and so on.

Kubectl supports only a limited subset of JSONPath features. The supported capabilities and examples of their use are listed below:

# Select all elements of a list
$ kubectl get pods -o custom-columns='DATA:spec.containers[*].image'
# Select a specific element of a list
$ kubectl get pods -o custom-columns='DATA:spec.containers[0].image'
# Select list elements that match a filter
$ kubectl get pods -o custom-columns='DATA:spec.containers[?(@.image!="nginx")].image'
# Select all fields under the specified path, regardless of their names
$ kubectl get pods -o custom-columns='DATA:metadata.*'
# Select all fields with the specified name, regardless of their location
$ kubectl get pods -o custom-columns='DATA:..image'

The [] operator is of particular importance. Many Kubernetes resource fields are lists, and this operator allows you to select items from those lists. It is often used with a wildcard like [*] to select all elements of a list.

Application examples

The possibilities of the custom columns output format are endless, since you can display any field or combination of fields of a resource. Here are a few example applications, but feel free to explore on your own and find uses that help you.

  1. Display container images for pods:
    $ kubectl get pods \
      -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'
    
    NAME                        IMAGES
    engine-544b6b6467-22qr6     rabbitmq:3.7.8-management,nginx
    engine-544b6b6467-lw5t8     rabbitmq:3.7.8-management,nginx
    engine-544b6b6467-tvgmg     rabbitmq:3.7.8-management,nginx
    web-ui-6db964458-8pdw4      wordpress

    This command displays the container image names for each pod.

    Remember that a pod can contain multiple containers, in which case the image names will be displayed on a single line separated by commas.

  2. Displaying node availability zones:
    $ kubectl get nodes \
      -o custom-columns='NAME:metadata.name,ZONE:metadata.labels.failure-domain\.beta\.kubernetes\.io/zone'
    
    NAME                          ZONE
    ip-10-0-118-34.ec2.internal   us-east-1b
    ip-10-0-36-80.ec2.internal    us-east-1a
    ip-10-0-80-67.ec2.internal    us-east-1b

    This command is useful if your cluster is hosted in a public cloud. It displays the availability zone for each node.

    An availability zone is a cloud concept denoting a point of replication within a geographic region.

    The availability zone of each node is obtained through the special label failure-domain.beta.kubernetes.io/zone. If the cluster runs in a public cloud, this label is created automatically and populated with the name of the availability zone of each node.

    Labels are not part of the Kubernetes resource specification, so you won't find information about them in the API reference. However, they can be seen (like any other labels) if you request the node information in YAML or JSON format:

    $ kubectl get nodes -o yaml
    # or
    $ kubectl get nodes -o json

    This is a great way to learn more about resources, in addition to learning about resource specifications.

4. Easy switching between clusters and namespaces

Before kubectl makes a request to the Kubernetes API, it reads the kubeconfig file to get all the parameters it needs for the connection.

The default kubeconfig file is ~/.kube/config. Usually this file is created or updated automatically by the tool you use to set up access to a cluster.

When you work with multiple clusters, your kubeconfig file contains the connection settings for all of those clusters, so you need a way to tell kubectl which of these clusters you want it to work with.

Within a cluster, you can create multiple namespaces (a kind of virtual cluster within a physical cluster). Kubectl also determines which namespace to use from the kubeconfig file, so you also need a way to tell kubectl which namespace to work with.

In this chapter, we will explain how this works and how to handle it efficiently.

Note that you can have multiple kubeconfig files listed in the KUBECONFIG environment variable. In this case, all these files are merged into a single effective configuration at runtime. You can also override the default kubeconfig file by running kubectl with the --kubeconfig option. See the official documentation.
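
For example (the file names are placeholders):

# Merge two kubeconfig files for the current shell session
$ export KUBECONFIG=~/.kube/config:~/.kube/staging-config
# Or point a single invocation at a specific kubeconfig file
$ kubectl --kubeconfig ~/.kube/staging-config get pods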

kubeconfig files

Let's see what exactly a kubeconfig file contains: it consists of a set of contexts, where each context contains three elements:

  • Cluster - the URL of the API server of a cluster.
  • User - authentication credentials for a user of the cluster.
  • Namespace - the namespace to use when connecting to the cluster.
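
To make this concrete, here is a minimal, illustrative sketch of such a file (cluster, user, and namespace names are placeholders; real files also contain certificates and credentials):

$ cat ~/.kube/config
apiVersion: v1
kind: Config
clusters:
- name: fox
  cluster:
    server: https://1.2.3.4:6443
users:
- name: fox-admin
  user: {}
contexts:
- name: fox
  context:
    cluster: fox
    user: fox-admin
    namespace: test
current-context: fox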

In practice, people often use one context per cluster in their kubeconfig. You could also have multiple contexts per cluster, differing by user or namespace, but this is not common, so there is usually a one-to-one mapping between clusters and contexts.

At any given time, one of the contexts is set as the current context. When kubectl reads the kubeconfig file, it always takes its connection parameters from the current context. For example, if the current context points to the Hare cluster, kubectl will connect to the Hare cluster.

Accordingly, to switch to another cluster, you change the current context in the kubeconfig file. For example, after switching the current context to the Fox context, kubectl will connect to the Fox cluster.

To switch to a different namespace in the same cluster, you change the value of the namespace element of the current context. For example, after changing the namespace of the Fox context from Test to Prod, kubectl will use the Prod namespace of the Fox cluster.

Note that kubectl also provides the options --cluster, --user, --namespace, and --context, which let you override individual elements of the context, or the current context itself, regardless of what is set in the kubeconfig file. See kubectl options.
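
For example, to run a single command against a different context and namespace without touching the kubeconfig file (names are placeholders):

$ kubectl get pods --context fox --namespace prod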

In theory, you can change these settings in the kubeconfig file manually, but it is inconvenient. Various utilities exist that automate these operations for you.

Use kubectx

kubectx is a very popular utility for switching between clusters and namespaces.

The utility provides the kubectx and kubens commands to change the current context and the current namespace, respectively.

As mentioned, changing the current context means changing the cluster if you only have one context per cluster.

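For illustration, typical invocations look roughly like this (context and namespace names are placeholders):

# List all contexts (the current one is highlighted)
$ kubectx
# Switch the current context
$ kubectx fox
# Switch back to the previous context
$ kubectx -
# List all namespaces and switch the namespace of the current context
$ kubens
$ kubens prod
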
In essence, these commands simply edit the kubeconfig file, as described above.

To install kubectx, follow the instructions on GitHub.

Both commands support auto-completion of context and namespace names, so you don't have to type them in full. Instructions for setting up this completion can be found there as well.

Another useful feature of kubectx is interactive mode. It works in conjunction with the fzf tool, which must be installed separately; installing fzf automatically enables interactive mode in kubectx. In interactive mode, you choose the context or namespace through the interactive fuzzy search interface provided by fzf.

Using shell aliases

You don't need separate tools to change the current context and namespace, because kubectl provides commands for this as well. Namely, the kubectl config command provides subcommands for editing kubeconfig files.

Here are some of them:

  • kubectl config get-contexts: display all contexts;
  • kubectl config current-context: get the current context;
  • kubectl config use-context: change the current context;
  • kubectl config set-context: change an element of a context.
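
For example (context and namespace names are placeholders):

# Switch the current context
$ kubectl config use-context fox
# Change the namespace of the current context
$ kubectl config set-context --current --namespace=prod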

However, using these commands directly is not very convenient because they are long. You can make shell aliases for them, which are easy to execute.

I have created a set of aliases based on these commands that provide functionality similar to kubectx.
Note that the aliases use fzf to provide an interactive fuzzy search interface (similar to kubectx's interactive mode). This means you need to install fzf to use these aliases.

Here are the alias definitions:

# Get the current context
alias krc='kubectl config current-context'
# List all contexts
alias klc='kubectl config get-contexts -o name | sed "s/^/  /;\|^  $(krc)$|s/ /*/"'
# Change the current context
alias kcc='kubectl config use-context "$(klc | fzf -e | sed "s/^..//")"'

# Get the current namespace
alias krn='kubectl config get-contexts --no-headers "$(krc)" | awk "{print \$5}" | sed "s/^$/default/"'
# List all namespaces
alias kln='kubectl get -o name ns | sed "s|^.*/|  |;\|^  $(krn)$|s/ /*/"'
# Change the current namespace
alias kcn='kubectl config set-context --current --namespace "$(kln | fzf -e | sed "s/^..//")"'

To set these aliases, you need to add the above definitions to your file ~/.bashrc or ~/.zshrc and reload your shell.

Using plugins

Kubectl allows you to load plugins that execute in the same way as basic commands. You can, for example, install the kubectl-foo plugin and run it with the command kubectl foo.

It would be convenient to change the context and namespace in this way, for example, to run kubectl ctx to change the context and kubectl ns to change the namespace.

I wrote two plugins that do this: kubectl-ctx and kubectl-ns. Both plugins are based on the aliases from the previous section.

Note that the plugins use fzf to provide an interactive fuzzy search interface (similar to kubectx's interactive mode). This means you need to install fzf to use these plugins.

To install the plugins, download the shell scripts named kubectl-ctx and kubectl-ns to any directory in your PATH and make them executable, for example with chmod +x. Right after that you can use kubectl ctx and kubectl ns.

5. Reduce typing with auto-generated aliases

Shell aliases are a good way to speed up typing. The kubectl-aliases project contains about 800 aliases for common kubectl commands.

You may wonder how you could possibly remember 800 aliases. In fact, you don't need to remember them all, because they are generated according to a simple scheme.

For example:

  1. kgpooyaml - kubectl get pods -o yaml
  2. ksysgsvcw - kubectl -n kube-system get svc --watch
  3. ksysrmcm - kubectl -n kube-system delete configmap
  4. kgdepallsl - kubectl get deployment --all-namespaces --show-labels

As you can see, aliases are made up of components, each of which represents a specific element of a kubectl command. Each alias can have one component for the base command, one for the operation, and one for the resource, plus multiple components for options. You simply "fill in" these components from left to right according to this scheme.
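
To give a feel for the generated file, here are a few illustrative entries (the exact flag spellings in the real .kubectl_aliases file may differ slightly):

alias k='kubectl'
alias kg='kubectl get'
alias kgpo='kubectl get pods'
alias kgpooyaml='kubectl get pods -o=yaml'
alias ksysgpo='kubectl --namespace=kube-system get pods'
alias kgpoall='kubectl get pods --all-namespaces'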

The current detailed scheme is on GitHub. There you can also find the complete list of aliases.

For example, the alias kgpooyamlall is equivalent to the command kubectl get pods -o yaml --all-namespaces.

The relative order of the options does not matter: the alias kgpooyamlall is equivalent to kgpoalloyaml.

You don't have to use all the components in an alias: k, kg, klo, ksys, or kgpo can be used on their own. Moreover, you can combine aliases with regular commands or options on the command line.

For example:

  1. Instead of kubectl proxy you can write k proxy.
  2. Instead of kubectl get roles you can write kg roles (there is currently no alias component for the roles resource).
  3. To get the data of a specific pod, you can write kgpo my-pod, which expands to kubectl get pods my-pod.

Note that some aliases require an argument on the command line. For example, the alias kgpol stands for kubectl get pods -l. The -l option requires an argument (a label specification), so using the alias looks like kgpol app=ui.

Because they require an argument, the a, f, and l components must always be used last in an alias.

In general, once you get the hang of this scheme, you can intuitively deduce aliases from the commands you want to execute and save a lot of typing time.

Installation

To install kubectl-aliases, download the .kubectl_aliases file from GitHub and source it in your ~/.bashrc or ~/.zshrc:

source ~/.kubectl_aliases

Autocomplete

As we said, you often add extra words to an alias on the command line. For example:

$ kgpooyaml test-pod-d4b77b989

If you use kubectl command completion, you probably rely on autocompletion for things like resource names. But does it still work when aliases are used?

This is a very important question, because if autocompletion doesn't work, you lose some of the benefits of aliases.

The answer depends on which command shell you are using:

  1. For Zsh, alias completion works out of the box.
  2. Bash unfortunately needs some work to get the completion to work.

Enabling autocomplete for aliases in Bash

The problem with Bash is that, whenever you press Tab, it tries to complete the alias itself rather than the command the alias refers to (unlike Zsh). Since you don't have completion scripts for all 800 aliases, completion doesn't work.

The complete-alias project provides a general solution to this problem. It hooks into the completion mechanism for an alias, internally expands the alias into the underlying command, and returns the completion suggestions for the expanded command. This means that completion for an alias behaves exactly the same as completion for the full command.

Next, I'll first explain how to install complete-alias and then how to configure it to enable completion for all kubectl aliases.

Installing complete-alias

First of all, complete-alias depends on bash-completion. So, before installing complete-alias, make sure that bash-completion is installed. Installation instructions for Linux and macOS were given earlier.

Important note for macOS users: Like the kubectl completion script, complete-alias does not work with Bash 3.2, which is the default on MacOS. In particular, complete-alias depends on bash-completion v2 (brew install bash-completion@2), which requires at least Bash 4.1. This means that you need to install a newer version of Bash to use complete-alias on MacOS.

To install complete-alias, you need to download the bash_completion.sh script from its GitHub repository and source it in your ~/.bashrc:

source ~/bash_completion.sh

After a shell restart, complete-alias will be fully installed.

Enabling autocompletion for kubectl aliases

Technically, complete-alias provides a wrapper function called _complete_alias. This function expands the alias and returns the completion suggestions for the aliased command.

To wire this function up to a specific alias, you use the Bash builtin complete to set _complete_alias as the completion function of that alias.

As an example, let's take the alias k, which stands for the kubectl command. To set _complete_alias as the completion function for this alias, run the following command:

$ complete -F _complete_alias k

The effect is that whenever you autocomplete the alias k, the _complete_alias function is invoked, which expands the alias and returns the completion suggestions for the kubectl command.

As a second example, let's take the alias kg, which stands for kubectl get:

$ complete -F _complete_alias kg

Just like in the previous example, when you autocomplete kg, you get the same completion hints that you would get for kubectl get.

Note that you can use complete-alias this way for any alias on your system.

Therefore, to enable completion for all kubectl aliases, you need to run the above command for each of them. The following snippet does exactly that, provided you installed kubectl-aliases to ~/.kubectl_aliases:

for _a in $(sed '/^alias /!d;s/^alias //;s/=.*$//' ~/.kubectl_aliases); 
do
  complete -F _complete_alias "$_a"
done

Place this snippet in your ~/.bashrc, reload your shell, and completion will work for all 800 kubectl aliases.

6. Extend kubectl with plugins

Since version 1.12, kubectl has supported a plugin mechanism that allows you to extend it with additional commands.

If you are familiar with the Git plugin mechanism, kubectl plugins work in the same way.

In this chapter, we'll cover how to install plugins, where to find them, and how to create your own plugins.

Installing plugins

Kubectl plugins are distributed as simple executable files named kubectl-x. The kubectl- prefix is required, and what follows it is the new kubectl subcommand that invokes the plugin.

For example, the hello plugin will be distributed as a file named kubectl-hello.

To install the plugin, you need to copy the file kubectl-x to any directory in your PATH and make it executable, for example with chmod +x. Right after that you can call the plugin with kubectl x.
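
For example, installing the hypothetical hello plugin mentioned above could look like this:

# Copy the executable to a directory in your PATH and make it executable
$ sudo cp kubectl-hello /usr/local/bin/
$ sudo chmod +x /usr/local/bin/kubectl-hello
# The plugin is now available as a kubectl subcommand
$ kubectl hello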

You can use the following command to list all plugins currently installed on your system:

$ kubectl plugin list

This command also displays warnings if you have multiple plugins with the same name, or if there is a plugin file that is not executable.

Finding and installing plugins with Krew

Kubectl plugins can be shared or reused like software packages. But where can you find plugins that others have shared?

The Krew project aims to provide a unified solution for sharing, finding, installing, and managing kubectl plugins. The project calls itself a "package manager for kubectl plugins" (the name Krew hints at brew).

Krew maintains an index of kubectl plugins that you can choose from and install. At the same time, Krew is itself a kubectl plugin.

This means that installing Krew works essentially like installing any other kubectl plugin. You can find detailed instructions on the Krew GitHub page.

The most important Krew commands are:

# Search the plugin index
$ kubectl krew search [<query>]
# Show information about a plugin
$ kubectl krew info <plugin>
# Install a plugin
$ kubectl krew install <plugin>
# Upgrade all plugins to the latest version
$ kubectl krew upgrade
# List all plugins installed with Krew
$ kubectl krew list
# Uninstall a plugin
$ kubectl krew remove <plugin>

Please note that installing plugins with Krew does not interfere with installing plugins in the standard way described above.

Please note that the command kubectl krew list displays only those plugins that were installed with Krew, while the command kubectl plugin list lists all plugins, i.e. those installed with Krew and those installed by other means.

Finding Plugins Elsewhere

Krew is a young project; its index currently contains only about 30 plugins. If you can't find what you need there, you can look for plugins elsewhere, for example on GitHub.

I recommend browsing the kubectl-plugins topic on GitHub. There you will find dozens of plugins that are worth checking out.

Writing your own plugins

You can also create plugins yourself, and it is not hard. You need to create an executable that does what you want, name it kubectl-x, and install it as described above.

The executable can be a Bash script, a Python script, or a compiled Go program; it doesn't matter. The only requirement is that it can be executed directly by the operating system.

Let's create an example plugin right now. Earlier in the article, you used a kubectl command to list the container images of each pod. You can easily turn this command into a plugin that you invoke with, say, kubectl img.

Create a file named kubectl-img with the following content:

#!/bin/bash
kubectl get pods -o custom-columns='NAME:metadata.name,IMAGES:spec.containers[*].image'

Now make the file executable with chmod +x kubectl-img and move it to any directory in your PATH. Right after that you can use the plugin kubectl img.
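
A quick check that the plugin works (the output format matches the custom-columns example from section 3):

$ kubectl img
NAME                        IMAGES
engine-544b6b6467-22qr6     rabbitmq:3.7.8-management,nginx
web-ui-6db964458-8pdw4      wordpress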

As mentioned, kubectl plugins can be written in any programming or scripting language. If you use shell scripts, the advantage is that you can easily call kubectl from the plugin. However, you can write more complex plugins in real programming languages using a Kubernetes client library. If you use Go, you can also use the cli-runtime library, which exists specifically for writing kubectl plugins.

How to share your plugins

If you think your plugin might be useful to others, feel free to share it on GitHub. Be sure to add it to the kubectl-plugins topic.

You can also request that your plugin be added to the Krew index. Instructions on how to do this are in the Krew GitHub repository.

Command completion

Kubectl plugins currently do not support command completion. That is, you have to type the full plugin name and the full argument names.

There is an open feature request for this in the kubectl GitHub repository, so it is possible that this feature will be implemented sometime in the future.

Good luck!

What else to read on the topic:

  1. Three levels of autoscaling in Kubernetes and how to use them effectively.
  2. Kubernetes in the spirit of piracy with a template for implementation.
  3. Our channel Around Kubernetes in Telegram.

Source: habr.com
