Canary Deployment in Kubernetes #1: Gitlab CI

We will use Gitlab CI and manual GitOps to implement and run a Canary deployment in Kubernetes



We will perform a Canary deployment manually through GitOps, creating and modifying the main Kubernetes resources. This article is intended primarily as an introduction to how Canary deployment works in Kubernetes; there are more efficient ways to automate it, which we will look at in future articles.


Image: https://www.norberteder.com/canary-deployment/

Canary Deployment

With the Canary strategy, updates are first applied to only a subset of users. Through monitoring, logging, manual testing, or other feedback channels, the release is tested before it is released to all users.

Kubernetes Deployment (rolling update)

The default update strategy for a Kubernetes Deployment is RollingUpdate: a certain number of pods with the new image version are started first, and only once they come up successfully are the pods with the old image version terminated, so old and new pods briefly run in parallel.
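The pace of such a rollout can be tuned on the Deployment itself. A minimal sketch (the maxSurge/maxUnavailable values here are illustrative, not taken from the article's manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app
spec:
  strategy:
    type: RollingUpdate      # the default strategy
    rollingUpdate:
      maxSurge: 2            # extra new pods allowed above the desired count
      maxUnavailable: 1      # old pods that may be unavailable during the update
  # ... replicas, selector, template as usual
```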

GitOps

We are using GitOps in this example because we:

  • use Git as the single source of truth
  • use Git operations for build and deploy (no commands other than git tag/merge are needed)
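As a sketch of what "Git operations only" means in practice (hypothetical file and branch names, done in a throwaway repo so it is safe to run anywhere):

```shell
# Hypothetical sketch of the GitOps flow: the only "deploy commands" are plain
# Git operations; GitlabCI reacts to the push and runs kubectl apply.
repo=$(mktemp -d) && cd "$repo" && git init -q
git config user.email "ci@example.com" && git config user.name "ci"
mkdir -p i/k8s && echo "replicas: 1" > i/k8s/deploy-canary.yaml
git add i/k8s/deploy-canary.yaml
git commit -qm "canary: release v2 to ~10% of users"
# a real 'git push origin master' would now trigger the deploy job
```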

Example

Let's follow a good practice: one repository for application code and one for infrastructure.

Application repository

This is a very simple Python+Flask API that returns a JSON response. We will build the package via GitlabCI and push the result to the Gitlab Registry. In the registry, we have two different release versions:

  • wuestkamp/k8s-deployment-example-app:v1
  • wuestkamp/k8s-deployment-example-app:v2

The only difference between them is the returned JSON response. We use this app to make it as easy as possible to see which version we are talking to.

Infrastructure repository

In this repo, we will deploy to Kubernetes via GitlabCI; its .gitlab-ci.yml looks as follows:

image: traherom/kustomize-docker

before_script:
  - printenv
  - kubectl version

stages:
  - deploy

deploy test:
  stage: deploy
  before_script:
    - echo $KUBECONFIG
  script:
    - kubectl get all
    - kubectl apply -f i/k8s
  only:
    - master

To run it yourself, you need a cluster; you can create one on Gcloud:

gcloud container clusters create canary --num-nodes 3 --zone europe-west3-b

gcloud compute firewall-rules create incoming-80 --allow tcp:80

You need to fork https://gitlab.com/wuestkamp/k8s-deployment-example-canary-infrastructure and create a KUBECONFIG variable in GitlabCI containing the config that gives kubectl access to your cluster.
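One common approach (an assumption on my part, not from the article) is to store the kubeconfig base64-encoded in a masked CI variable and decode it in the job before calling kubectl. A helper sketch:

```shell
# Hypothetical helper: base64-encode a kubeconfig so it can be pasted into a
# masked GitlabCI variable (decode it back inside the job before using kubectl).
encode_kubeconfig() {
  base64 < "$1" | tr -d '\n'   # strip newlines so the value is a single line
}
# usage: encode_kubeconfig ~/.kube/config
```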

You can read about how to get credentials for a cluster (Gcloud) here.

Infrastructure YAML

In the infrastructure repository, we have a service:

apiVersion: v1
kind: Service
metadata:
 labels:
   id: app
 name: app
spec:
 ports:
 - port: 80
   protocol: TCP
   targetPort: 5000
 selector:
   id: app
 type: LoadBalancer

And deployment in deploy.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: app
spec:
 replicas: 10
 selector:
   matchLabels:
     id: app
     type: main
 template:
   metadata:
     labels:
       id: app
       type: main
   spec:
     containers:
     - image: registry.gitlab.com/wuestkamp/k8s-deployment-example-app:v1
       name: app
       resources:
         limits:
           cpu: 100m
           memory: 100Mi

And another deployment in deploy-canary.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: app-canary
spec:
 replicas: 0
 selector:
   matchLabels:
     id: app
     type: canary
 template:
   metadata:
     labels:
       id: app
       type: canary
   spec:
     containers:
     - image: registry.gitlab.com/wuestkamp/k8s-deployment-example-app:v2
       name: app
       resources:
         limits:
           cpu: 100m
           memory: 100Mi

Note that app-canary starts with replicas: 0, so no canary pods are running yet.

Performing an Initial Deployment

To perform the initial deployment, you can run the GitlabCI pipeline manually on the master branch. After that, kubectl should output the following:

(screenshot: kubectl get all output showing the app and app-canary deployments)

We see the app deployment with 10 replicas and app-canary with 0. There is also a LoadBalancer, which we can reach with curl via its external IP:

while true; do curl -s 35.198.149.232 | grep label; sleep 0.1; done

(screenshot: curl loop output, all responses return v1)

We can see that our test application only returns "v1".

Performing a Canary Deployment

Step 1: release a new version for a part of users

We set the number of replicas to 1 in the deploy-canary.yaml file, along with the new version's image:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: app-canary
spec:
 replicas: 1
 selector:
   matchLabels:
     id: app
     type: canary
 template:
   metadata:
     labels:
       id: app
       type: canary
   spec:
     containers:
     - image: registry.gitlab.com/wuestkamp/k8s-deployment-example-app:v2
       name: app
       resources:
         limits:
           cpu: 100m
           memory: 100Mi

In deploy.yaml, we changed the number of replicas to 9:

apiVersion: apps/v1
kind: Deployment
metadata:
 name: app
spec:
 replicas: 9
 selector:
   matchLabels:
     id: app
...

We push these changes to the infrastructure repository, which kicks off the deployment (via GitlabCI), and as a result we see:

(screenshot: kubectl output after the canary deploy, 9 app pods and 1 app-canary pod)

Our Service will point to both deployments, since both carry the id: app selector label. Due to Kubernetes' default random request distribution, we should see different responses for ~10% of requests:

(screenshot: curl loop output, roughly 10% of responses return v2)

The current state of our application, as recorded in Git (our single source of truth per GitOps), is two deployments with active replicas, one for each version.

~10% of users are exposed to the new version and unintentionally test it. Now it's time to check the logs and monitoring data for errors and other problems.
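Part of that check can be quantifying the actual traffic split: with 1 canary replica out of 10, uniform distribution should give the canary about 10% of requests. A small counter (a hypothetical helper, not from the article) makes this measurable:

```shell
# Hypothetical helper: count how many sampled responses came from each version
count_versions() {
  grep -o 'v[0-9]*' | sort | uniq -c | sort -rn
}
# usage: for i in $(seq 100); do curl -s 35.198.149.232; done | count_versions
```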

Step 2: Release the new version to all users

We decided that everything went well, and now it's time to release the new version to all users. To do this, we simply update deploy.yaml, setting the new image version and replicas to 10; in deploy-canary.yaml we set the number of replicas back to 0. After the deployment, the result will be as follows:

(screenshot: kubectl output after the full rollout, app at v2 with 10 replicas, app-canary back to 0)
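For reference, the relevant part of deploy.yaml after this step (abridged):

```yaml
# deploy.yaml after step 2 (abridged): all traffic is on v2
spec:
  replicas: 10
  template:
    spec:
      containers:
      - image: registry.gitlab.com/wuestkamp/k8s-deployment-example-app:v2
        name: app
```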

Summing up

For me, running a deployment manually this way helps to understand how easily it can be set up with k8s. Since Kubernetes lets you update everything through an API, these steps can be automated with scripts.
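A sketch of such a script (hypothetical; setting KUBECTL=echo lets you dry-run the plan without a cluster):

```shell
# Hypothetical automation of the same two steps via kubectl instead of Git edits.
# Set KUBECTL=echo to dry-run without a cluster; leave unset to use kubectl.
canary_start() {
  ${KUBECTL:-kubectl} scale deployment app-canary --replicas=1   # step 1: ~10% of users
}
canary_finish() {
  ${KUBECTL:-kubectl} set image deployment/app app=registry.gitlab.com/wuestkamp/k8s-deployment-example-app:v2
  ${KUBECTL:-kubectl} scale deployment app --replicas=10
  ${KUBECTL:-kubectl} scale deployment app-canary --replicas=0   # step 2: retire the canary
}
```

Note that doing this imperatively drifts from GitOps, since Git stops being the single source of truth; that is exactly why the article edits the YAML in Git instead.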

Another thing worth implementing is a dedicated tester entry point (a LoadBalancer or Ingress) through which only the new version can be accessed. It can be used for manual testing.

In future articles, we'll check out other automated solutions that implement most of what we've done.


Source: habr.com
