Automated canary deployments with Flagger and Istio

Continuous delivery (CD) is recognized as an enterprise software practice, and it is a natural evolution of well-established continuous integration (CI) principles. Yet CD remains quite rare, perhaps because of the complexity of managing it and the fear that failed deployments will affect system availability.

Flagger is an open source Kubernetes operator that aims to untangle this complexity. It automates the promotion of canary deployments, using Istio traffic shifting and Prometheus metrics to analyze an application's behavior during a controlled rollout.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).

Setting up a Kubernetes cluster

Start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up here for free credits).

Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and configure your project with gcloud init.

Set the default project, compute region, and zone (replace PROJECT_ID with your own project):

gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Enable the GKE service and create a cluster with the HPA and Istio add-ons:

gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE

The above command creates a default node pool consisting of two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally, the Istio components would be isolated from the workloads, but there is no easy way to run the Istio pods on a dedicated node pool. The Istio manifests are considered read-only, and GKE will revert any modification, such as node affinity or pod anti-affinity rules.

Set up credentials for kubectl:

gcloud container clusters get-credentials istio

Create a cluster admin role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Install the Helm command-line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.

Create a service account and a cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider using SSL between Helm and Tiller. For more information on securing your Helm installation, see docs.helm.sh.
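
As a minimal sketch, assuming you have already generated a CA plus server certificates (for example with openssl), Tiller can be deployed with TLS enabled like this:

helm init --service-account tiller \
--tiller-tls \
--tiller-tls-cert ./tiller.cert.pem \
--tiller-tls-key ./tiller.key.pem \
--tiller-tls-verify \
--tls-ca-cert ./ca.cert.pem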

Verify the setup:

kubectl -n istio-system get svc

In a few minutes, GCP should assign an external IP address to the istio-ingressgateway service.
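
You can watch the service until the external IP shows up:

watch kubectl -n istio-system get svc/istio-ingressgateway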

Setting up an Istio ingress gateway

Create a static IP address named istio-gateway using the IP address of the Istio gateway:

export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1

Now you need a domain name and access to your DNS registrar. Add two A records (replace example.com with your domain):

istio.example.com   A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}

Verify that the wildcard DNS record works:

watch host test.istio.example.com

Create a generic Istio gateway to expose services outside the service mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Save the above resource as public-gateway.yaml and then apply it:

kubectl apply -f ./public-gateway.yaml

No production system should expose services on the Internet without SSL. To secure your Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, please read the Flagger GKE documentation.

Installing Flagger

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml

Add the Flagger Helm repository:

helm repo add flagger https://flagger.app

Deploy Flagger in the istio-system namespace with Slack notifications enabled:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger

You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
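
As a quick sanity check, assuming the Prometheus service name used in the metricsServer setting above, you can port-forward to it and hit its health endpoint:

kubectl -n istio-system port-forward svc/prometheus 9090:9090 &
curl -s http://localhost:9090/-/healthy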

Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me

Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana

Save the above resource as grafana-virtual-service.yaml and then apply it:

kubectl apply -f ./grafana-virtual-service.yaml

When navigating to http://grafana.istio.example.com, your browser should take you to the Grafana login page.
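
You can also check the routing from the command line (with your own domain substituted for example.com):

curl -sI http://grafana.istio.example.com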

Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application to the service mesh and drive the canary analysis and promotion.

Create a test namespace with Istio sidecar injection enabled:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml

Create a deployment and a horizontal pod autoscaler for the podinfo app:

kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml

Deploy a load testing service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test

Create a canary custom resource (replace example.com with your domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: istio_requests_total
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      threshold: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

Save the above resource as podinfo-canary.yaml and then apply it:

kubectl apply -f ./podinfo-canary.yaml

The above analysis, if it succeeds, will run for five minutes, checking the HTTP metrics every half-minute. You can determine the minimum time needed to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented here.
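
For the canary definition above, that works out to:

# interval * (maxWeight / stepWeight)
# 30s * (50 / 5) = 30s * 10 = 300s = 5 minutes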

After a few seconds, Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo

Open your browser and navigate to app.istio.example.com; you should see the version number of the demo application.
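
Alternatively, you can query the app from the command line; assuming podinfo serves its runtime info as JSON at the root path, something like:

curl -s http://app.istio.example.com | jq .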

Automated canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, the average request duration, and pod health. Based on the KPI analysis, the canary is promoted or aborted, and the results of the analysis are published to Slack.

A canary rollout is triggered when one of the following objects changes:

  • Deployment PodSpec (container image, command, ports, env, etc.)
  • ConfigMaps mounted as volumes or mapped to environment variables (see the example after this list)
  • Secrets mounted as volumes or mapped to environment variables
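
For instance, editing a ConfigMap that the deployment consumes would also kick off a new analysis; a sketch, assuming a hypothetical podinfo-config ConfigMap mounted by the deployment:

kubectl -n test create configmap podinfo-config \
--from-literal=color=blue --dry-run -o yaml | kubectl apply -f -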

Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1

Flagger detects that the deployment revision has changed and starts analyzing it:

kubectl -n test describe canary/podinfo

Events:

New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test

During the analysis, the canary's progress can be monitored with Grafana:

[Screenshot: canary analysis dashboard in Grafana]

Please note: if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.
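
For example, pushing yet another image tag mid-analysis (the 1.4.2 tag here is only illustrative) resets the analysis from scratch:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.2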

List all of the canaries in your cluster:

watch kubectl get canaries --all-namespaces
NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo   Progressing   15       2019-01-16T14:05:07Z
prod        frontend  Succeeded     0        2019-01-15T16:15:07Z
prod        backend   Failed        0        2019-01-14T17:05:07Z

If you have enabled Slack notifications, you will receive messages like these:

[Screenshot: Slack notifications for the canary rollout]

Automated rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger will halt the rollout.

Create a test pod and run the following commands inside it:

kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh

Generate HTTP 500 errors:

watch curl http://podinfo-canary:9898/status/500

Generate latency:

watch curl http://podinfo-canary:9898/delay/1

When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled to zero, and the rollout is marked as failed.
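
You can follow the status transition from Progressing to Failed as the rollback happens:

kubectl -n test get canary/podinfo -w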

Canary failures and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:

kubectl -n istio-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test

If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or when the maximum number of failed checks is reached during the analysis:

[Screenshot: Slack notification for the failed rollout]

Conclusion

Running a service mesh like Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but deploying workloads still depends on external tooling. Flagger aims to change that by adding progressive delivery capabilities to Istio.

Flagger is compatible with any Kubernetes CI/CD solution, and the canary analysis can easily be extended with webhooks to run system integration/acceptance tests, load tests, or any other custom checks. Because Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux or JenkinsX. If you are using JenkinsX, you can install Flagger with the jx add-ons.

Flagger is supported by Weaveworks and provides the canary deployments in Weave Cloud. The project has been tested on GKE, EKS, and bare metal with kubeadm.

If you have ideas for improving Flagger, please open an issue or a PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!

Thanks to Ray Tsang.

Source: www.habr.com
