Automatic canary deployments with Flagger and Istio

Continuous delivery is recognized as an enterprise software practice and is the natural evolution of well-established continuous integration principles. However, continuous deployment is still quite rare, perhaps due to the complexity of managing it and the fear that failed deployments will affect system availability.

Flagger is an open source Kubernetes operator that aims to take the complexity out of this process. It automates the promotion of canary deployments, using Istio traffic shifting and Prometheus metrics to analyze an application's behavior during a controlled rollout.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).

Setting up a Kubernetes cluster

Start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up here to get free credits).

Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and set up your project with gcloud init.

Set the default project, compute region, and zone (replace PROJECT_ID with your project ID):

gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Enable the GKE service and create a cluster with HPA and Istio add-ons:

gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE

The above command creates a default node pool with two n1-standard-2 VMs (2 vCPUs, 7.5 GB RAM, 30 GB disk). Ideally you would isolate the Istio components from your own workloads, but there is no easy way to run the Istio pods on a dedicated node pool: the Istio manifests are considered read-only and GKE will undo any modifications, such as node affinity or pod anti-affinity.

Set up credentials for kubectl:

gcloud container clusters get-credentials istio

Create a cluster administrator role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Install the Helm command-line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.

Create a service account and cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider securing the connection between Helm and Tiller with SSL. For more information about protecting your Helm installation, see docs.helm.sh.
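
For reference, here is a hedged sketch of how Tiller could be deployed with TLS enabled; it assumes you have already generated ca.crt, tiller.crt, and tiller.key with your own certificate authority:

# illustrative only: requires pre-generated certificates
helm init --service-account tiller \
--tiller-tls \
--tiller-tls-verify \
--tiller-tls-cert ./tiller.crt \
--tiller-tls-key ./tiller.key \
--tls-ca-cert ./ca.crt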

Verify that Istio is installed and running:

kubectl -n istio-system get svc

After a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.

Configuring the Istio Ingress Gateway

Create a static IP address named istio-gateway using the IP address of the Istio gateway:

export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1

Now you need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):

istio.example.com   A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}
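
If your zone happens to be hosted in Google Cloud DNS, the same records can also be created from the command line. A hedged sketch, assuming a managed zone named istio-zone:

# illustrative only: istio-zone is a placeholder for your managed zone
gcloud dns record-sets transaction start --zone=istio-zone
gcloud dns record-sets transaction add ${GATEWAY_IP} --name="istio.example.com." --ttl=300 --type=A --zone=istio-zone
gcloud dns record-sets transaction add ${GATEWAY_IP} --name="*.istio.example.com." --ttl=300 --type=A --zone=istio-zone
gcloud dns record-sets transaction execute --zone=istio-zone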

Verify that the DNS wildcard is working:

watch host test.istio.example.com

Create a generic Istio gateway to expose services outside the service mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Save the above resource as public-gateway.yaml and then apply it:

kubectl apply -f ./public-gateway.yaml

No production system should expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, Cloud DNS, and Let's Encrypt, please read the Flagger GKE documentation.
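
For reference, a minimal sketch of an additional HTTPS entry that could be appended to spec.servers of the public-gateway above, assuming a valid certificate and key have been stored in the istio-ingressgateway-certs secret that the ingress gateway mounts by default:

# illustrative only: requires the istio-ingressgateway-certs secret to exist
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
        privateKey: /etc/istio/ingressgateway-certs/tls.key
      hosts:
        - "*"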

Flagger Installation

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to perform the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart.

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml

Add the Flagger Helm repository:

helm repo add flagger https://flagger.app

Deploy Flagger in the istio-system namespace and enable Slack notifications:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger

You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
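
For example, a minimal installation into a hypothetical flagger namespace of its own (Slack settings omitted) would look like this:

helm upgrade -i flagger flagger/flagger \
--namespace=flagger \
--set metricsServer=http://prometheus.istio-system:9090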

Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me

Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana

Save the above resource as grafana-virtual-service.yaml and then apply it:

kubectl apply -f ./grafana-virtual-service.yaml

Navigating to http://grafana.istio.example.com in your browser should take you to the Grafana login page.

Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application inside the service mesh and drive the canary analysis and promotion.

Create a test namespace with Istio sidecar injection enabled:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml

Create a deployment and a horizontal pod autoscaler:

kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml

Deploy a load testing service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test

Create a canary custom resource (replace example.com with your domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: istio_requests_total
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      threshold: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

Save the above resource as podinfo-canary.yaml and then apply it:

kubectl apply -f ./podinfo-canary.yaml

If it succeeds, the analysis above will run for five minutes, checking the HTTP metrics every 30 seconds. You can determine the minimum time required to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented here.
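
Plugging in the values from the manifest above:

interval * (maxWeight / stepWeight) = 30s * (50 / 5) = 300s = 5 minutes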

After a couple of seconds, Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo

Open a browser and go to app.istio.example.com; you should see the version number of the demo app.
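
You can also check it from the command line; a quick sketch, assuming the DNS records created earlier already resolve (the exact JSON fields returned by podinfo may vary between versions):

curl -s http://app.istio.example.com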

Automatic canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, average request duration, and pod health. Based on the KPI analysis, the canary is either promoted or aborted, and the results of the analysis are published to Slack.

A canary deployment is triggered when one of the following objects changes:

  • The deployment PodSpec (container image, command, ports, env, etc.)
  • ConfigMaps mounted as volumes or mapped to environment variables (see the sketch after this list)
  • Secrets mounted as volumes or mapped to environment variables
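
As a hypothetical illustration, if the deployment mounted a ConfigMap named podinfo-config (the demo manifests do not necessarily include one), updating it would also start a new canary:

# illustrative only: podinfo-config is a hypothetical ConfigMap referenced by the deployment
kubectl -n test create configmap podinfo-config \
--from-literal=color=blue \
--dry-run -o yaml | kubectl apply -f -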

Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1

Flagger detects that the deployment revision has changed and starts analyzing it:

kubectl -n test describe canary/podinfo

Events:

New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test

During the analysis, the canary's progress can be tracked in Grafana.

Note that if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.
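
For instance, pushing yet another image tag while the canary is progressing (1.4.2 here is just a hypothetical next version) makes Flagger start the analysis over:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.2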

List all the canaries in your cluster:

watch kubectl get canaries --all-namespaces
NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo   Progressing   15       2019-01-16T14:05:07Z
prod        frontend  Succeeded     0        2019-01-15T16:15:07Z
prod        backend   Failed        0        2019-01-14T17:05:07Z

If you have enabled Slack notifications, you will receive progress messages there.

Automatic rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger stops the rollout.

Create a test pod and exec into it:

kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh

Generate HTTP 500 errors:

watch curl http://podinfo-canary:9898/status/500

Generate latency:

watch curl http://podinfo-canary:9898/delay/1

When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled to zero, and the rollout is marked as failed.

Canary errors and latency spikes are recorded as Kubernetes events and logged by Flagger in JSON format:

kubectl -n istio-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test

If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or when the maximum number of failed checks is reached.

In conclusion

Running a service mesh like Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but deploying workloads still depends on external tools. Flagger aims to change this by adding progressive delivery capabilities to Istio.

Flagger is compatible with any Kubernetes CI/CD solution, and the canary analysis can easily be extended with webhooks to run system integration/acceptance tests, load tests, or any other custom checks. Since Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux or JenkinsX. If you are using JenkinsX, you can install Flagger with jx addons.
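
As a hedged sketch, an extra webhook pointing at your own test runner (integration-tester below is an invented service name) could be appended to the canary analysis to run acceptance checks during traffic shifting:

  canaryAnalysis:
    webhooks:
      - name: integration-tests
        # hypothetical in-house test service; Flagger POSTs the metadata to this URL
        url: http://integration-tester.test/run
        timeout: 60s
        metadata:
          test_suite: "smoke"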

Flagger is sponsored by Weaveworks and powers the canary deployments in Weave Cloud. The project is being tested on GKE, EKS, and on bare metal with kubeadm.

If you have suggestions to improve Flagger, please submit an issue or PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!

Thank you Ray Tsang.

Source: habr.com
