Automated canary deployments with Flagger and Istio


CD is recognized as an enterprise software practice and is the natural evolution of well-established CI principles. However, CD is still quite rare, perhaps because of the complexity of operating it and the fear that a failed deployment will affect system availability.

Flagger is an open source Kubernetes operator that aims to untangle this complexity. It automates the promotion of canary deployments, using Istio traffic shifting and Prometheus metrics to analyze application behavior during a controlled rollout.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).

Setting up a Kubernetes cluster

You start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up here for free credits).

Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command line utility and set up your project with gcloud init.
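
If the SDK is already installed, a quick sketch of getting it ready (both are standard gcloud subcommands):

gcloud components update   # bring the SDK up to date
gcloud init                # select your account and the project you just created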

Set the default project, compute region, and zone (replace PROJECT_ID with your project):

gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Enable the GKE service and create a cluster with the HPA and Istio add-ons:

gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE

The command above will create a default node pool consisting of two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally you should isolate the Istio components from your workloads, but there is no easy way to run the Istio pods on a dedicated node pool. The Istio manifests are considered read-only, and GKE will undo any changes such as node affinity or pod anti-affinity.

Set up credentials for kubectl:

gcloud container clusters get-credentials istio
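
To confirm that kubectl now points at the new cluster, a quick check with standard kubectl commands:

kubectl config current-context   # should reference the istio cluster in your project
kubectl get nodes                # should list the two n1-standard-2 nodes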

Create a cluster admin role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Install the Helm command line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.
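
If you would rather not use Homebrew on Linux, a hedged alternative is the Helm 2 install script (assuming the script location used by Helm 2.x at the time of writing):

curl -fsSL https://raw.githubusercontent.com/helm/helm/master/scripts/get > get_helm.sh
chmod +x get_helm.sh
./get_helm.sh   # installs the Helm 2 client, by default into /usr/local/bin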

Create a service account and a cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider using SSL between Helm and Tiller. For more information on securing your Helm installation, see docs.helm.sh.
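
As a rough sketch, Helm 2 supports TLS between the client and Tiller through flags on helm init; you would re-run the init from above with certificates you have generated yourself (the file names here are placeholders):

helm init --service-account tiller \
  --tiller-tls --tiller-tls-verify \
  --tiller-tls-cert tiller.cert.pem \
  --tiller-tls-key tiller.key.pem \
  --tls-ca-cert ca.cert.pem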

Verify the setup:

kubectl -n istio-system get svc

After a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.

Setting up the Istio Ingress Gateway

Create a static IP address named istio-gateway using the IP address of the Istio gateway:

export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1

Now you need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):

istio.example.com   A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}

Verify that the DNS wildcard works:

watch host test.istio.example.com
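
Once the records have propagated, each lookup printed by the watch should resolve to your gateway address, roughly like this (the address shown is only a placeholder for ${GATEWAY_IP}):

# test.istio.example.com has address 203.0.113.10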

Create a generic Istio gateway to expose services outside the service mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Save the above resource as public-gateway.yaml and then apply it:

kubectl apply -f ./public-gateway.yaml

No production system should expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, please read the Flagger GKE documentation.

Installing Flagger

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart.

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml

Add the Flagger Helm repository:

helm repo add flagger https://flagger.app

Deploy Flagger in the istio-system namespace with Slack notifications enabled:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger

You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
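
A hedged way to confirm that connectivity before installing into another namespace is to port-forward the Istio Prometheus service and hit its standard health endpoint:

kubectl -n istio-system port-forward svc/prometheus 9090:9090 &
curl -s http://localhost:9090/-/healthy   # Prometheus should report that it is healthy
kill %1                                   # stop the temporary port-forward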

Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me

Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana

Save the above resource as grafana-virtual-service.yaml and then apply it:

kubectl apply -f ./grafana-virtual-service.yaml

When navigating to http://grafana.istio.example.com in your browser, you should be directed to the Grafana login page.
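
If DNS is not set up yet, you can also reach the dashboard with a port-forward (a sketch assuming the Helm release creates a deployment named flagger-grafana and Grafana listens on its default port 3000):

kubectl -n istio-system port-forward deploy/flagger-grafana 3000:3000
# then open http://localhost:3000 and log in with the admin user and password set above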

Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application on the service mesh and drive the canary analysis and promotion.


Create a test namespace with Istio sidecar injection enabled:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml

Create a deployment and a horizontal pod autoscaler:

kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml

Deploy the load testing service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
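
Before moving on, it is worth checking that the load tester is up (assuming the chart names the deployment after the Helm release):

kubectl -n test rollout status deployment/flagger-loadtester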

Create a canary custom resource (replace example.com with your domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: istio_requests_total
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      threshold: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

Save the above resource as podinfo-canary.yaml and then apply it:

kubectl apply -f ./podinfo-canary.yaml

The analysis above, if successful, will run for five minutes, validating the HTTP metrics every half minute. You can determine the minimum time required to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented here.
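
As a quick sanity check with the values from the manifest above:

# interval * (maxWeight / stepWeight)
# 30s * (50 / 5) = 30s * 10 = 300s, i.e. about five minutes from the first traffic shift to promotion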

After a few seconds, Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo

Open your browser and go to app.istio.example.com; you should see the version number of the demo app.
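
You can also hit the app from the command line and look for the version in the response (a hedged check; the exact response shape depends on the podinfo release):

curl -s http://app.istio.example.com/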

Automated canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, average request duration, and pod health. Based on the KPI analysis, the canary is either promoted or aborted, and the results of the analysis are published to Slack.


A canary deployment is triggered when one of the following objects changes:

  • Deployment PodSpec (container image, command, ports, env, etc.)
  • ConfigMaps mounted as volumes or mapped to environment variables
  • Secrets mounted as volumes or mapped to environment variables

Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1

Flagger detects that the deployment revision has changed and begins to analyze it:

kubectl -n test describe canary/podinfo

Events:

New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test

During the analysis, the canary's progress can be monitored with Grafana.


Please note that if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.

List all the canaries in your cluster:

watch kubectl get canaries --all-namespaces
NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo   Progressing   15       2019-01-16T14:05:07Z
prod        frontend  Succeeded     0        2019-01-15T16:15:07Z
prod        backend   Failed        0        2019-01-14T17:05:07Z

If you have enabled Slack notifications, you will also receive these events as Slack messages.


Automated rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger halts the rollout.

Create a tester pod and run the following commands in it:

kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh

To generate HTTP 500 errors:

watch curl http://podinfo-canary:9898/status/500

To generate latency:

watch curl http://podinfo-canary:9898/delay/1

When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled to zero, and the deployment is marked as failed.
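
You can confirm the final state from the canary resource itself (kubectl get works on the Canary custom resource just like on built-in kinds):

kubectl -n test get canary/podinfo
# the STATUS column should show Failed and WEIGHT should be back to 0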

Canary errors and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:

kubectl -n istio-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test

If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or when the maximum number of failed checks is reached during the analysis.


Conclusion

Running a service mesh such as Istio on top of Kubernetes gives you automatic metrics, logs, and tracing, but deploying workloads still depends on external tooling. Flagger aims to change that by adding progressive delivery capabilities to Istio.

Flagger is compatible with any Kubernetes CI/CD solution, and the canary analysis can easily be extended with webhooks to run system integration/acceptance tests, load tests, or any other custom checks. Since Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux or JenkinsX. If you are using JenkinsX, you can install Flagger with jx addons.

Flagger is supported by Weaveworks and provides canary deployments in Weave Cloud. The project is being tested on GKE, EKS, and bare metal with kubeadm.

If you have suggestions for improving Flagger, please submit an issue or PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!

Thanks to Ray Tsang.

Source: www.habr.com
