Google's 7 Container Best Practices

Note. transl.: The original article is by Théo Chamley, Google Cloud Architect. In this post for the Google Cloud blog, he summarizes his company's more detailed guide, "Best Practices for Operating Containers", in which Google experts collected best practices for operating containers in the context of Google Kubernetes Engine and beyond, covering a wide range of topics from security to monitoring and logging. So what are the most important container practices according to Google?

Kubernetes Engine (note transl.: a Kubernetes-based service for running containerized applications on Google Cloud) is one of the best ways to run workloads that need to scale. Kubernetes will keep most applications running smoothly as long as they are containerized. But if you want your application to be easy to manage and to take full advantage of Kubernetes, you need to follow best practices. They simplify operating, monitoring and debugging the application, and they improve security.

In this article, we'll go through a list of what you need to know and do to make containers work effectively in Kubernetes. Those who want to dig into the details should read Best Practices for Operating Containers and also see our earlier post about building containers.

1. Use native container mechanisms for logging

If the application is running on a Kubernetes cluster, not much is needed for logging. A centralized logging system is probably already built into the cluster you are using; with Kubernetes Engine, Stackdriver Logging takes care of this. (Note. transl.: And if you run your own Kubernetes installation, we recommend taking a closer look at our Open Source solution, loghouse.) Keep your life simple and use the native container logging mechanisms: write logs to stdout and stderr, and they will be picked up, stored and indexed automatically.

If desired, you can also write logs in JSON format. This makes it easy to attach metadata to log entries, and Stackdriver Logging can then search the logs by that metadata.
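As an illustration, a single structured log entry written to stdout could look like the line below. The field names are just an example, not a required schema; keys like severity are typically recognized by the logging agent, and the rest become searchable metadata.

```json
{"severity": "ERROR", "message": "payment failed", "orderId": "12345", "retries": 3}
```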

2. Make sure containers are stateless and immutable

For containers to work well in a Kubernetes cluster, they must be stateless and immutable. When these conditions are met, Kubernetes can do its job, creating and destroying application instances whenever and wherever necessary.

Stateless means that any state (persistent data of any kind) is stored outside the container. Depending on your needs, different types of external storage can be used for this: Cloud Storage, Persistent Disks, Redis, Cloud SQL or other managed databases. (Note. transl.: Read more about this in our article "Operators for Kubernetes: how to run stateful applications".)

Immutable means that the container will not be modified during its lifetime: no updates, patches or configuration changes. If you need to update the application code or apply a patch, build a new image and deploy it. It is recommended to move the container configuration (listening port, runtime options, etc.) out of the image and into Secrets and ConfigMaps: they can be updated without having to build a new container image. To easily create image build pipelines, you can use Cloud Build. (Note. transl.: We use the Open Source tool dapp for this purpose.)

An example of updating a Deployment configuration in Kubernetes using a ConfigMap mounted into pods as a config file
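A minimal sketch of this pattern (all names and the image reference here are hypothetical): the configuration lives in a ConfigMap that is mounted into the pods, so changing it does not require rebuilding the image.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                          # hypothetical name
data:
  app.properties: |
    listen.port=8080
    log.level=info
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                              # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:1.2 # hypothetical image
        volumeMounts:
        - name: config
          mountPath: /etc/my-app            # the app reads its config from here
      volumes:
      - name: config
        configMap:
          name: app-config                  # updating the ConfigMap does not require a new image
```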

3. Avoid Privileged Containers

You don't run applications as root on your servers, do you? If an attacker breaks into the application, they will gain root access. The same reasoning applies to privileged containers: avoid running them. If you need to change settings on the host, you can grant the container specific capabilities using the securityContext option in Kubernetes. If you need to change sysctls, Kubernetes has a separate annotation for this. In general, try to use init and sidecar containers to perform such privileged operations: they do not need to be exposed to either internal or external traffic.
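A minimal sketch of this idea, assuming the application only needs the NET_ADMIN capability (the pod name, image and capability are illustrative): instead of a privileged container, grant just what is required.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: capability-demo                     # hypothetical name
spec:
  containers:
  - name: app
    image: gcr.io/my-project/my-app:1.2     # hypothetical image
    securityContext:
      privileged: false                     # never request full host privileges
      capabilities:
        drop: ["ALL"]                       # start from nothing...
        add: ["NET_ADMIN"]                  # ...and add only the capability that is actually needed
```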

If you administer a cluster, you can use a Pod Security Policy to restrict the use of privileged containers.

4. Avoid running as root

Privileged containers were covered above, but it is even better if, in addition, you do not run applications as root inside the container. If an attacker finds a remote code execution vulnerability in an application running with root rights and then escapes the container through a yet-unknown vulnerability, they will have root on the host.

The best way to avoid this is to not run anything as root in the first place. To do this, you can use the USER directive in the Dockerfile or runAsUser in Kubernetes. The cluster administrator can also enforce this behavior with a Pod Security Policy.
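A minimal sketch of the Kubernetes side, assuming the image contains a user with UID 1000 (the UID, names and image are illustrative); the Dockerfile side is simply a matching USER directive.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nonroot-demo                        # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                         # a non-zero UID that exists in the image
    runAsGroup: 1000
    runAsNonRoot: true                      # the kubelet refuses to start the container as root
  containers:
  - name: app
    image: gcr.io/my-project/my-app:1.2     # hypothetical image
```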

5. Make the app easy to monitor

Like logging, monitoring is an integral part of application management. A popular monitoring solution in the Kubernetes community is Prometheus, a system that automatically discovers the pods and services that need to be monitored. (Note. transl.: See also our detailed talk on monitoring with Prometheus and Kubernetes.) Stackdriver can monitor Kubernetes clusters and includes its own version of Prometheus for application monitoring.

Kubernetes Dashboard on Stackdriver

Prometheus expects the application to expose metrics on an HTTP endpoint. Prometheus client libraries are available for this. The same format is used by other tools such as OpenCensus and Istio.
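As a hedged illustration: many community Prometheus setups discover scrape targets via pod annotations like the ones below. These annotations are only a convention honored by a matching scrape configuration, not a built-in Kubernetes or Prometheus feature, and all names and ports here are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: metrics-demo                        # hypothetical name
  annotations:
    prometheus.io/scrape: "true"            # honored only if the Prometheus scrape config looks for these annotations
    prometheus.io/port: "9090"              # port where the application serves its metrics
    prometheus.io/path: "/metrics"          # endpoint exposed by a Prometheus client library
spec:
  containers:
  - name: app
    image: gcr.io/my-project/my-app:1.2     # hypothetical image instrumented with a Prometheus client library
    ports:
    - containerPort: 9090
```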

6. Share the health status of the app

Managing an application in production is much easier if it can report its state to the rest of the system. Is the application running? Is it healthy? Is it ready to receive traffic? How is it behaving? The most common way to solve this problem is to implement health checks. Kubernetes has two types: liveness and readiness probes.

For the liveness probe, the application must have an HTTP endpoint that returns a "200 OK" response if it is running and its basic dependencies are satisfied. For the readiness probe, the application must have another HTTP endpoint that returns a "200 OK" response if the application is in a healthy state, the initialization steps have completed, and any valid request does not result in an error. Kubernetes will only route traffic to the container if the application is ready according to these checks. The two endpoints can be merged if there is no difference between the liveness and readiness states.
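A minimal sketch of how these probes are declared on a container (the /healthz and /ready endpoints, ports, timings and names are illustrative assumptions, not prescribed values):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: probes-demo                         # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: probes-demo
  template:
    metadata:
      labels:
        app: probes-demo
    spec:
      containers:
      - name: app
        image: gcr.io/my-project/my-app:1.2 # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz                  # hypothetical endpoint returning 200 OK while the app is alive
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /ready                    # hypothetical endpoint returning 200 OK when ready for traffic
            port: 8080
          periodSeconds: 5
```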

You can read more about this in the related article by Sandeep Dinesh, Developer Advocate at Google: "Kubernetes best practices: Setting up health checks with readiness and liveness probes".

7. Choose Your Image Version Carefully

Most public and private images use a tagging system similar to the one described in Best Practices for Building Containers. If the image uses something close to semantic versioning, you need to take the specifics of tagging into account. For example, the latest tag can be moved frequently from one image to another, so it cannot be relied upon if you need predictable and reproducible builds and deployments.

You can use an X.Y.Z tag (these are almost always immutable), but then you have to keep track of all patches and updates to the image yourself. If the image being used has an X.Y tag, that is a good middle ground: by choosing it, you automatically receive patches while still relying on a stable version of the application.
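For illustration, the trade-off as it appears in a container spec (the image name is hypothetical):

```yaml
containers:
- name: app
  # image: gcr.io/my-project/my-app:latest   # moves unpredictably; avoid for reproducible deployments
  # image: gcr.io/my-project/my-app:1.2.3    # fully pinned; you must track patches yourself
  image: gcr.io/my-project/my-app:1.2        # middle ground: a stable version that still picks up patches
```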
