Delete an obsolete feature branch in a Kubernetes cluster

Hi! A feature branch (also known as a deploy preview or review app) means that not only the master branch is deployed, but every pull request as well, each to its own unique URL. You can check whether the code works in a production-like environment, and you can show the feature to other developers or to product managers. While you are working on a pull request, every new commit removes the deploy of the old code and rolls out a deploy of the new code. The question arises once the pull request has been merged into the master branch: you no longer need the feature branch, but its Kubernetes resources are still in the cluster.

More about feature branches

One approach to making feature branches in Kubernetes is to use namespaces. In short, the production configuration looks like this:

kind: Namespace
apiVersion: v1
metadata:
  name: habr-back-end
...

kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: habr-back-end
spec:
  replicas: 3
...

For a feature branch, a namespace is created with its identifier (for example, the pull request number) and some prefix / postfix (for example, -pr-):

kind: Namespace
apiVersion: v1
metadata:
  name: habr-back-end-pr-17
...

kind: Deployment
apiVersion: apps/v1
metadata:
  namespace: habr-back-end-pr-17
spec:
  replicas: 1
...

To automate this, I wrote a Kubernetes Operator (an application that has access to cluster resources); the project is on GitHub. It removes namespaces that belong to old feature branches. In Kubernetes, when you delete a namespace, all other resources in that namespace are deleted automatically as well.

$ kubectl get pods --all-namespaces | grep -e "-pr-"
NAMESPACE            ... AGE
habr-back-end-pr-264 ... 4d8h
habr-back-end-pr-265 ... 5d7h
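
To delete such a namespace by hand, together with everything inside it, a single command is enough (the namespace name is taken from the listing above):

$ kubectl delete namespace habr-back-end-pr-264
namespace "habr-back-end-pr-264" deleted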

You can read about how to implement feature branches in a cluster here and here.

Motivation

Let's take a look at a typical pull request lifecycle under continuous integration (CI):

  1. We push a new commit to the branch.
  2. On the build, linters and/or tests are run.
  3. Kubernetes configurations for the pull request are generated on the fly (for example, the pull request number is substituted into a ready-made template; a minimal sketch of this follows the list).
  4. Using kubectl apply, the configurations get into the cluster (deploy).
  5. The pull request is merged into the master branch.
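
As a rough sketch of steps 3 and 4, assume a template file feature-branch.template.yml with a PR_NUMBER placeholder; both the file name and the placeholder are hypothetical, not something the project provides:

$ export PR_NUMBER=17
$ sed "s/PR_NUMBER/$PR_NUMBER/g" feature-branch.template.yml > feature-branch-pr-$PR_NUMBER.yml
$ kubectl apply -f feature-branch-pr-$PR_NUMBER.yml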

While you are working on a pull request, every new commit removes the deploy of the old code and rolls out a deploy of the new code. But once the pull request has been merged into the master branch, only the master branch gets built. As a result, we have long since forgotten about the pull request, while its Kubernetes resources are still in the cluster.

How to use

Install the project with the command below:

$ kubectl apply -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/configs/production.yml

Create a file with the following content and install via kubectl apply -f:

apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 3

The namespaceSubstring parameter is needed to filter pull request namespaces from the other namespaces. For example, if the cluster contains the namespaces habr-back-end, habr-front-end, habr-back-end-pr-17, and habr-back-end-pr-33, then the candidates for deletion will be habr-back-end-pr-17 and habr-back-end-pr-33.
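
Roughly speaking, the filter works like grepping the namespace list for the substring, so you can preview the candidates yourself (the output below is illustrative):

$ kubectl get namespaces | grep -e "-pr-"
habr-back-end-pr-17 ... Active ... 21d
habr-back-end-pr-33 ... Active ... 4d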

The afterDaysWithoutDeploy parameter is needed to remove old namespaces. For example, if a namespace was created 3 days and 1 hour ago and the parameter is set to 3 days, that namespace will be removed. It also works the other way around: if a namespace was created 2 days and 23 hours ago and the parameter is set to 3 days, that namespace will not be removed.
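
Judging by the operator's logs shown later in the article, the age is determined from the namespace's metadata.creationTimestamp; you can query that field yourself (the namespace name is only an example):

$ kubectl get namespace habr-back-end-pr-17 -o jsonpath='{.metadata.creationTimestamp}'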

There is one more parameter, checkEveryMinutes, which controls how often to scan all namespaces and check the days without deploy. By default, it is 30 minutes.
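
For example, the same resource as above with the optional scan interval spelled out explicitly (the field is also used in the fixture later in the article):

apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 3
  checkEveryMinutes: 30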

How it works

In practice, you will need:

  1. Docker to work in an isolated environment.
  2. minikube to run a Kubernetes cluster locally.
  3. kubectl - the command-line interface for cluster management.

We start a Kubernetes cluster locally:

$ minikube start --vm-driver=docker
minikube v1.11.0 on Darwin 10.15.5
Using the docker driver based on existing profile.
Starting control plane node minikube in cluster minikube.

Tell kubectl to use the local cluster by default:

$ kubectl config use-context minikube
Switched to context "minikube".

Download the configuration for the production environment:

$ curl https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/configs/production.yml > stale-feature-branch-production-configs.yml

Since the production configuration checks only for old namespaces, and our freshly started cluster has none, we will set the environment variable IS_DEBUG to true. With this value, the afterDaysWithoutDeploy parameter is ignored: namespaces are not checked for days without deploy, only for the occurrence of the substring (-pr-).

If you are on Linux:

$ sed -i 's|false|true|g' stale-feature-branch-production-configs.yml

If you are on macOS:

$ sed -i "" 's|false|true|g' stale-feature-branch-production-configs.yml

Installing the project:

$ kubectl apply -f stale-feature-branch-production-configs.yml

Check that the StaleFeatureBranch resource has appeared in the cluster:

$ kubectl api-resources | grep stalefeaturebranches
NAME                 ... APIGROUP                             ... KIND
stalefeaturebranches ... feature-branch.dmytrostriletskyi.com ... StaleFeatureBranch

Check that the operator is running in the cluster:

$ kubectl get pods --namespace stale-feature-branch-operator
NAME                                           ... STATUS  ... AGE
stale-feature-branch-operator-6bfbfd4df8-m7sch ... Running ... 38s

If you look at its logs, it is ready to process StaleFeatureBranch resources:

$ kubectl logs stale-feature-branch-operator-6bfbfd4df8-m7sch -n stale-feature-branch-operator
... "msg":"Operator Version: 0.0.1"}
...
... "msg":"Starting EventSource", ... , "source":"kind source: /, Kind="}
... "msg":"Starting Controller", ...}
... "msg":"Starting workers", ..., "worker count":1}

Install the ready-made fixtures (prepared configurations for modeling cluster resources) for the StaleFeatureBranch resource:

$ kubectl apply -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/stale-feature-branch.yml

The configuration tells the operator to look for namespaces containing the substring -pr- once every minute:

apiVersion: feature-branch.dmytrostriletskyi.com/v1
kind: StaleFeatureBranch
metadata:
  name: stale-feature-branch
spec:
  namespaceSubstring: -pr-
  afterDaysWithoutDeploy: 1 
  checkEveryMinutes: 1

The operator has reacted and is ready to check namespaces:

$ kubectl logs stale-feature-branch-operator-6bfbfd4df8-m7sch -n stale-feature-branch-operator
... "msg":"Stale feature branch is being processing.","namespaceSubstring":"-pr-","afterDaysWithoutDeploy":1,"checkEveryMinutes":1,"isDebug":"true"}

Install the fixtures containing two namespaces (project-pr-1, project-pr-2) and their deployments, services, ingresses, and so on:

$ kubectl apply -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/first-feature-branch.yml -f https://raw.githubusercontent.com/dmytrostriletskyi/stale-feature-branch-operator/master/fixtures/second-feature-branch.yml
...
namespace/project-pr-1 created
deployment.apps/project-pr-1 created
service/project-pr-1 created
horizontalpodautoscaler.autoscaling/project-pr-1 created
secret/project-pr-1 created
configmap/project-pr-1 created
ingress.extensions/project-pr-1 created
namespace/project-pr-2 created
deployment.apps/project-pr-2 created
service/project-pr-2 created
horizontalpodautoscaler.autoscaling/project-pr-2 created
secret/project-pr-2 created
configmap/project-pr-2 created
ingress.extensions/project-pr-2 created

Check that all of the resources above were created successfully:

$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2
...
NAME                              ... READY ... STATUS  ... AGE
pod/project-pr-1-848d5fdff6-rpmzw ... 1/1   ... Running ... 67s

NAME                         ... READY ... AVAILABLE ... AGE
deployment.apps/project-pr-1 ... 1/1   ... 1         ... 67s
...

Since debug mode is enabled, the namespaces project-pr-1 and project-pr-2, and therefore all of their resources, will be deleted immediately, without taking the afterDaysWithoutDeploy parameter into account. This can be seen in the operator's logs:

$ kubectl logs stale-feature-branch-operator-6bfbfd4df8-m7sch -n stale-feature-branch-operator
... "msg":"Namespace should be deleted due to debug mode is enabled.","namespaceName":"project-pr-1"}
... "msg":"Namespace is being processing.","namespaceName":"project-pr-1","namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
... "msg":"Namespace has been deleted.","namespaceName":"project-pr-1"}
... "msg":"Namespace should be deleted due to debug mode is enabled.","namespaceName":"project-pr-2"}
... "msg":"Namespace is being processing.","namespaceName":"project-pr-2","namespaceCreationTimestamp":"2020-06-16 18:43:58 +0300 EEST"}
... "msg":"Namespace has been deleted.","namespaceName":"project-pr-2"}

If you check the resources again, they will either be in the Terminating status (deletion in progress) or already deleted (the command output is empty).

$ kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-1 && kubectl get namespace,pods,deployment,service,horizontalpodautoscaler,configmap,ingress -n project-pr-2
...

You can repeat creating the fixtures several times and make sure they are removed within a minute.

Alternatives

What can be done instead of an operator running in the cluster? There are several approaches, none of them ideal (and their shortcomings are subjective), so everyone decides what is best for a particular project:

  1. Delete the feature branch during the master branch's continuous integration build (a sketch follows this list).

    • To do this, you need to know which pull request the commit being built belongs to. Since the feature branch namespace contains a pull request identifier (its number or the branch name), that identifier will always have to be present in the commit.
    • Master branch builds can fail. For example, suppose you have the following stages: download the project, run the tests, build the project, make a release, send notifications, and delete the feature branch of the last pull request. If the build fails while sending a notification, you will have to delete all of the resources in the cluster by hand.
    • Without proper context, deleting a feature branch in a master build is not obvious.

  2. Using webhooks (example).

    • This may not fit your setup. For example, in Jenkins, only one type of pipeline supports saving its configuration in the source code. When using webhooks, you need to write your own script to process them, and that script has to live in the Jenkins interface, which is hard to maintain.

  3. Write a CronJob and add it to the Kubernetes cluster.

    • It takes time to write and maintain.
    • The operator already works in a similar fashion, and it is documented and supported.
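
A rough sketch of the first option, assuming the pull request number can be recovered from a GitHub-style merge commit message; this is an assumption about your workflow, not something the operator requires:

$ # Extract the pull request number from the merge commit subject, e.g. "Merge pull request #17 from ..."
$ PR_NUMBER=$(git log -1 --pretty=%s | grep -o '#[0-9]*' | tr -d '#')
$ # Delete the feature branch namespace; everything inside it is deleted along with it
$ kubectl delete namespace "habr-back-end-pr-${PR_NUMBER}"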

Thank you for your attention to the article. The project is on GitHub.

Source: habr.com
