Best Practices for Running Containers and Kubernetes in Production Environments

The containerization ecosystem is evolving rapidly, so established operational practices in this area are still scarce. Nevertheless, Kubernetes and containers are increasingly used both to modernize legacy applications and to build modern cloud applications.

The Kubernetes-as-a-Service team at Mail.ru has collected forecasts, tips, and best practices from market leaders such as Gartner, 451 Research, StackRox, and others. They will help you deploy containers to production environments faster and more safely.

How to Know if Your Company is Ready to Deploy Containers in a Production Environment

According to Gartner, by 2022 more than 75% of organizations will use containerized applications in production, a significant jump from today, when fewer than 30% of companies do.

According to 451 Research, the market for application container technology will reach $4.3 billion in 2022, more than double the 2019 figure, an annual growth rate of about 30%.

In a Portworx and Aqua Security survey, 87% of respondents said they currently use container technology, up from 55% in 2017.

Despite growing interest and adoption, running containers in production requires preparation: the technology is still immature and know-how is scarce. Organizations need to be realistic about which business processes actually require application containerization, and IT leaders should assess whether they have the skill set to move forward and the ability to learn quickly.

Gartner experts believe the questions in the figure below will help you determine whether you are ready to deploy containers in production:

[Figure: questions to assess readiness to run containers in production]

The most common mistakes when using containers in production

Organizations often underestimate the effort required to operate containers in production. Gartner has identified several common mistakes in client scenarios when using containers in production environments:

[Figure: common mistakes when running containers in production]

How to secure containers

Security cannot be dealt with "later". It should be built into the DevOps process, which is why the term DevSecOps emerged. Organizations need to plan container environment protection across the entire development life cycle: the build process, deployment, and running of the application.

Recommendations from Gartner

  1. Integrate application image scanning for vulnerabilities into your continuous integration/continuous delivery (CI/CD) pipeline, scanning both at build time and at run time. Emphasize scanning and identification of open source components, libraries, and frameworks: developers using old, vulnerable versions is one of the main causes of container vulnerabilities.
  2. Harden your configuration using the Center for Internet Security (CIS) benchmarks, which are available for both Docker and Kubernetes.
  3. Enforce access control and segregation of duties, and implement a secrets management policy. Sensitive information such as Secure Sockets Layer (SSL) keys or database credentials should be encrypted by the orchestrator or a third-party management service and provided only at run time.
  4. Avoid privileged containers by enforcing security policies, to reduce the potential impact of a compromise.
  5. Use security tools that provide whitelisting, behavioral monitoring, and anomaly detection to prevent malicious activity.
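As a minimal illustration of point 4, a pod's security context can forbid privileged execution. This is a generic sketch with hypothetical names, not a configuration from the Gartner report:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app                            # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      securityContext:
        runAsNonRoot: true                  # refuse to start the container as root
        allowPrivilegeEscalation: false     # block setuid-style escalation
        privileged: false                   # never request privileged mode
        readOnlyRootFilesystem: true        # make the root filesystem immutable
        capabilities:
          drop: ["ALL"]                     # drop every Linux capability not explicitly needed
```

A pod admitted with these settings cannot escalate its privileges even if the application inside is compromised.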

Recommendations from StackRox:

  1. Leverage built-in Kubernetes capabilities. Set up role-based access control (RBAC) for users. Make sure you don't grant unnecessary permissions to individual subjects, even if working out the minimum required set takes some time. It may be tempting to grant broad cluster-admin privileges because it saves time at the outset, but any compromise of, or mistake in, such an account can have devastating consequences down the road.
  2. Avoid duplicating access permissions. Overlapping roles can occasionally be useful, but they lead to operational issues and create blind spots when permissions are removed. It is also important to remove unused and inactive roles.
  3. Set network policies: isolate pods to restrict access to them; explicitly allow Internet access, using labels, only to the pods that need it; explicitly allow communication between the pods that need to talk to each other.
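The third point can be sketched with standard Kubernetes NetworkPolicy resources. The namespace and labels below are illustrative assumptions:

```yaml
# Deny all ingress traffic to pods in the namespace by default
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: demo            # hypothetical namespace
spec:
  podSelector: {}            # empty selector matches every pod in the namespace
  policyTypes: ["Ingress"]
---
# Explicitly allow the frontend to reach the backend on port 8080
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend           # hypothetical label on the backend pods
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend  # hypothetical label on the frontend pods
      ports:
        - protocol: TCP
          port: 8080
```

Note that network policies only take effect when the cluster's CNI plugin enforces them.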

How to organize monitoring of containers and services in them

Security and monitoring are the main challenges companies face when deploying Kubernetes clusters. Developers tend to focus more on the features of the applications they build than on monitoring those applications.

Recommendations from Gartner:

  1. Monitor the state of containers, and the services inside them, in conjunction with monitoring of the host systems.
  2. Favor vendors and tools with deep container orchestration integration, especially Kubernetes.
  3. Choose tools that provide detailed logging, automatic service discovery, and real-time recommendations powered by analytics and/or machine learning.

The SolarWinds blog advises:

  1. Use tools that automatically discover and track container metrics and correlate performance indicators such as CPU, memory, and uptime.
  2. Ensure optimal capacity planning by predicting when capacity will run out based on container monitoring metrics.
  3. Monitor containerized applications for availability and performance, which is useful for both capacity planning and performance troubleshooting.
  4. Automate workflows by providing management and scaling support for containers and their hosting environments.
  5. Automate access control to keep track of the user base, disable legacy and guest accounts, remove unnecessary privileges.
  6. Verify that your toolset can monitor containers and applications across multiple environments (cloud, on-premises, or hybrid) to visualize and benchmark performance across infrastructure, network, systems, and applications.
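Availability monitoring of containerized applications (point 3 above) usually starts with probes that Kubernetes itself evaluates. The endpoints and ports in this sketch are assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: monitored-app                       # hypothetical pod name
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0   # hypothetical image
      livenessProbe:
        httpGet:
          path: /healthz                    # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15                   # kubelet restarts the container if this keeps failing
      readinessProbe:
        httpGet:
          path: /ready                      # hypothetical readiness endpoint
          port: 8080
        periodSeconds: 5                    # pod is removed from Service endpoints while failing
```

External monitoring tools then build on these signals, correlating probe failures with resource metrics from the hosts.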

How to store data and keep it safe

As the number of stateful containers grows, clients need to consider keeping data outside the host, as well as protecting that data.

According to Portworx and Aqua Security surveys, data security is at the top of the list of security issues noted by the majority of respondents (61%). 

Data encryption is the main security strategy (64%), but respondents also use runtime monitoring (49%), vulnerability scanning in registries (49%), vulnerability scanning in CI/CD pipelines (49%), and anomaly blocking through runtime protection (48%).

Recommendations from Gartner:

  1. Choose storage solutions built on microservice architecture principles. Favor those that meet the storage requirements of container services: hardware independent, API driven, with a distributed architecture, and supporting both on-premises and public cloud deployment.
  2. Avoid proprietary plugins and interfaces. Choose vendors that integrate with Kubernetes and support standard interfaces such as CSI (Container Storage Interface).
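In practice, the second recommendation means provisioning storage through a CSI driver rather than a proprietary plugin. A sketch with a hypothetical driver name:

```yaml
# StorageClass backed by a CSI driver (provisioner name is hypothetical)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: csi.example.com             # hypothetical CSI driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer  # delay binding until a pod is scheduled
---
# Applications request storage through a claim, not a vendor-specific API
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 20Gi
```

Because the claim only references a StorageClass, swapping the underlying vendor later does not require changing application manifests.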

How to work with networks

The traditional enterprise network model, where IT professionals create networked development, test, quality assurance, and production environments for each project, does not always fit well with a continuous development workflow. In addition, container networks span several layers.

The Magalix blog collected high-level rules that any cluster networking implementation must comply with:

  1. Pods scheduled on the same node must be able to communicate with other pods without using NAT (network address translation).
  2. All system daemons (background processes like kubelet) running on a particular node can interact with pods running on the same node.
  3. Pods using the host network should be able to communicate with all other pods on all other hosts without using NAT. Note that host networking is only supported on Linux hosts.

Networking solutions should be tightly integrated with Kubernetes primitives and policies. CIOs should aim for a high degree of network automation and give developers the right tools and enough flexibility.

Recommendations from Gartner:

  1. Find out whether your CaaS (container as a service) or SDN (software-defined network) solution supports Kubernetes networking. If not, or if the support is insufficient, use a CNI (Container Network Interface) plugin that provides the necessary functionality and policies.
  2. Make sure your CaaS or PaaS (platform as a service) supports ingress controllers and/or load balancers that distribute incoming traffic among cluster nodes. If this is not possible, consider third-party proxies or service meshes.
  3. Train your network engineers on Linux networking and network automation tools to narrow the skill gap and increase flexibility.
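An ingress controller routes incoming traffic to Services via an Ingress resource. In this sketch, the host and Service names are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com      # hypothetical external hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # hypothetical Service inside the cluster
                port:
                  number: 80
```

The Ingress resource itself does nothing until an ingress controller (or a cloud load balancer integration) is running in the cluster to act on it.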

How to manage the application lifecycle

For automated and seamless application delivery, you need to complement container orchestration with other automation tools, such as infrastructure-as-code (IaC) products. These include Chef, Puppet, Ansible and Terraform. 

Tools for automating application builds and rollouts are also required (see "Magic Quadrant for Application Release Orchestration"). Containers can also sprawl, much as virtual machines (VMs) did before them, so IT leaders need container lifecycle management tools.

Recommendations from Gartner:

  1. Set standards for container base images, covering size and licensing, while leaving developers the flexibility to add features.
  2. Use configuration management systems to manage the lifecycle of containers that layer configuration based on base images in public or private repositories.
  3. Integrate the CaaS platform with automation tools to automate the entire application workflow.
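A base image standard (point 1) is often expressed as a shared Dockerfile that teams extend. Everything here, from the chosen base image to the labels, is a hypothetical example of such a standard:

```dockerfile
# Hypothetical organizational base image: minimal and pinned to an exact version
FROM alpine:3.12

# Run as an unprivileged user by default; teams must opt out explicitly
RUN adduser -D appuser
USER appuser

# Illustrative metadata a licensing or size policy might require
LABEL org.example.base-image="true" \
      org.example.license="MIT"
```

Application images then start with `FROM` this image, so size, licensing, and user policies are inherited rather than re-audited per project.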

How to manage containers with orchestrators

The core functionality for deploying containers is provided at the orchestration and scheduling layers. The scheduler places containers on the optimal hosts in the cluster, as dictated by the orchestration layer's requirements.

Kubernetes has become the de facto container orchestration standard with a vibrant community and is supported by most of the leading commercial vendors. 

Recommendations from Gartner:

  1. Define basic requirements for security controls, monitoring, policy management, data persistence, network management, and container lifecycle management.
  2. Based on these requirements, choose the tool that best suits your requirements and use cases.
  3. Leverage Gartner research (see "How to Choose a Kubernetes Deployment Model") to understand the pros and cons of various Kubernetes deployment models and choose the best one for your application.
  4. Choose a provider that offers hybrid orchestration of containers across multiple environments, with tight backend integrations, common management planes, and consistent pricing models.

How to take advantage of cloud providers

Gartner considers that interest in deploying containers in the public IaaS cloud is growing due to the availability of pre-built CaaS offerings and their tight integration with other products from cloud providers.

IaaS clouds offer on-demand resource consumption, rapid scalability and service management, which will help avoid the need for in-depth knowledge about the infrastructure and its maintenance. Most cloud providers offer a container management service, and some offer multiple orchestration options. 

The key cloud managed service providers are listed in the table: 

| Cloud provider | Service type | Product/Service |
| --- | --- | --- |
| Alibaba | Native Cloud Service | Alibaba Cloud Container Service, Alibaba Cloud Container Service for Kubernetes |
| Amazon Web Services (AWS) | Native Cloud Service | Amazon Elastic Container Service (ECS), Amazon ECS for Kubernetes (EKS), AWS Fargate |
| Giant Swarm | MSP | Giant Swarm Managed Kubernetes Infrastructure |
| Google | Native Cloud Service | Google Container Engine (GKE) |
| IBM | Native Cloud Service | IBM Cloud Kubernetes Service |
| Microsoft | Native Cloud Service | Azure Kubernetes Service, Azure Service Fabric |
| Oracle | Native Cloud Service | OCI Container Engine for Kubernetes |
| Platform9 | MSP | Managed Kubernetes |
| Red Hat | Hosted Service | OpenShift Dedicated & Online |
| VMware | Hosted Service | Cloud PKS (Beta) |
| Mail.ru Cloud Solutions* | Native Cloud Service | Mail.ru Cloud Containers |

* Let's not hide, we added ourselves here when translating πŸ™‚

Public cloud providers are also adding new features and releasing local products. In the near future, cloud providers will develop support for hybrid clouds and multi-cloud environments. 

Recommendations from Gartner:

  1. Take an objective look at your organization's ability to deploy and manage the appropriate tools, and consider alternative cloud container management services.
  2. Choose software carefully, use open source wherever possible.
  3. Choose providers with common operating models across hybrid environments that offer single-pane management of federated clusters, and providers that make it easy to run on your own IaaS.

A few tips for choosing a Kubernetes aaS provider from the Replex blog:

  1. Look for distributions that support high availability out of the box, including multi-master support, highly available etcd components, and backup and restore.
  2. To ensure the mobility of Kubernetes environments, it is best to choose cloud providers that support a wide range of deployment models, from on-premise to hybrid to multi-cloud. 
  3. Provider offerings are also worth evaluating for ease of setup, installation, and clustering, as well as upgrades, monitoring, and troubleshooting. The base requirement is to support fully automated cluster upgrades with zero downtime. The solution you choose should also allow you to run updates manually. 
  4. Identity and access management is important from both a security and management standpoint. Make sure that the Kubernetes distribution you choose supports integration with the authentication and authorization tools that are used internally. RBAC and granular access control are also important feature sets.
  5. The distribution you choose must either have its own software-defined networking solution that covers a wide range of requirements for different applications or infrastructure, or support one of the popular CNI-based networking implementations, including Flannel, Calico, kube-router, or OVN.
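Identity and access management (point 4) ultimately boils down to granting the minimum required verbs through RBAC. The role below is a hypothetical read-only example; the namespace and user name are assumptions:

```yaml
# A role granting read-only access to pods in one namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: demo                     # hypothetical namespace
rules:
  - apiGroups: [""]                   # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only: no create, update, or delete
---
# Bind the role to a user supplied by the identity provider
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
  - kind: User
    name: jane                        # hypothetical user from the authentication system
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

A distribution that integrates with your internal authentication tools lets such bindings reference real corporate identities instead of locally managed accounts.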

Moving containers into production is becoming mainstream, as evidenced by the results of a survey conducted at Gartner's Infrastructure, Operations and Cloud Strategies (IOCS) sessions in December 2018:

[Figure: IOCS survey results on container adoption plans]
As you can see, 27% of respondents are already using containers in their work, and 63% are going to do so.

In the Portworx and Aqua Security survey, 24% of respondents reported spending more than half a million dollars a year on container technologies, and 17% more than a million dollars a year.

The article was prepared by the Mail.ru Cloud Solutions cloud platform team.

What else to read on the topic:

  1. DevOps Best Practices: DORA Report.
  2. Kubernetes in the spirit of piracy with a template for implementation.
  3. 25 Useful Tools for Deploying and Implementing Kubernetes.

Source: habr.com
