Automated canary deployments with Flagger and Istio


CD is recognized as an enterprise software practice and is the natural evolution of well-established CI principles. Yet CD remains quite rare, perhaps due to the complexity of managing it and the fear that failed deployments will affect system availability.

Flagger is an open-source Kubernetes operator that aims to eliminate those complex interactions. It automates the promotion of canary deployments by using Istio traffic shifting and Prometheus metrics to analyze an application's behavior during a controlled rollout.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).

Setting up a Kubernetes cluster

Start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up here to get free credits).

Log in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line tool and set up your project with gcloud init.

Set the default project, compute region, and zone (replace PROJECT_ID with your own project):

gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Enable the GKE service and create a cluster with the HPA and Istio add-ons:

gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE

The command above creates a default node pool consisting of two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally you would isolate the Istio components from your workloads, but there is no easy way to run the Istio pods in a dedicated node pool. The Istio manifests are considered read-only, and GKE will revert any changes, such as node affinity or removing a pod.

Set up credentials for kubectl:

gcloud container clusters get-credentials istio
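To confirm that kubectl now points at the new cluster, you can check the current context and list the nodes (the two n1-standard-2 VMs created above should show up as Ready):

```shell
# Sanity check: the current context should be the GKE istio cluster
kubectl config current-context
kubectl get nodes
```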

Create a cluster admin role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Install the Helm command-line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.

Create a service account and a cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider using SSL between Helm and Tiller. For more information on securing your Helm installation, see docs.helm.sh.

Validate the installation:

kubectl -n istio-system get svc

After a few seconds, GCP will assign an external IP address to the istio-ingressgateway service.
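You can extract that external IP directly with a jsonpath query (a standard Kubernetes field for LoadBalancer services); the output may stay empty for a short while until GCP finishes provisioning:

```shell
# Prints the external IP of the Istio ingress gateway once it is assigned
kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```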

Configuring the Istio ingress gateway

Create a static IP address named istio-gateway using the IP address of the Istio gateway:

export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1

Now you need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):

istio.example.com   A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}

Verify that the wildcard DNS record is working:

watch host test.istio.example.com

Create a generic Istio gateway to expose services outside the service mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Save the above resource as public-gateway.yaml and apply it:

kubectl apply -f ./public-gateway.yaml

No production system should expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, please read the Flagger GKE docs.

Installing Flagger

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses Istio HTTP metrics to run the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml

Add the Flagger Helm repository:

helm repo add flagger https://flagger.app
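After adding the repository, refresh the local chart index so Helm sees the latest Flagger charts:

```shell
helm repo update
```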

Deploy Flagger in the istio-system namespace and enable Slack notifications:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger

You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.

Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me

Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana

Save the above resource as grafana-virtual-service.yaml and apply it:

kubectl apply -f ./grafana-virtual-service.yaml

When you navigate to http://grafana.istio.example.com in a browser, you should be directed to the Grafana login page.
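If your DNS records have not propagated yet, a port-forward is a quick way to reach the dashboard; this sketch assumes the chart installed a service named flagger-grafana listening on port 80, matching the Helm release name used above:

```shell
# Forward local port 3000 to the Grafana service (service name and port are
# assumptions based on the release name); then open http://localhost:3000
kubectl -n istio-system port-forward svc/flagger-grafana 3000:80
```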

Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), and creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application on the service mesh and drive the canary analysis and promotion.


Create a test namespace with Istio sidecar injection enabled:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml

Create a deployment and a horizontal pod autoscaler:

kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml

Deploy the load testing service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test

Create a custom canary resource (replace example.com with your domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: istio_requests_total
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      threshold: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

Save the above resource as podinfo-canary.yaml and apply it:

kubectl apply -f ./podinfo-canary.yaml

The analysis above, if successful, will run for five minutes, checking the HTTP metrics every half-minute. You can determine the minimum time required to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented here.
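With the values from the spec above, the formula works out like this (plain shell arithmetic, in seconds):

```shell
# interval=30s, maxWeight=50, stepWeight=5, taken from the canary spec above
interval=30
maxWeight=50
stepWeight=5
echo "$(( interval * maxWeight / stepWeight )) seconds"   # 300 seconds = 5 minutes
```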

After a few seconds, Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo

Open a browser and navigate to app.istio.example.com; you should see the version number of the demo application.
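If DNS is still propagating, you can exercise the gateway directly by sending the Host header to the static IP created earlier (GATEWAY_IP is the variable exported above):

```shell
# Request the demo app through the Istio ingress gateway without DNS
curl -s -H "Host: app.istio.example.com" "http://${GATEWAY_IP}/"
```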

Automated canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as HTTP request success rate, request duration, and pod health. Based on the KPI analysis, the canary is promoted or aborted, and the results of the analysis are published to Slack.


A canary deployment is triggered when one of the following objects changes:

  • Deployment PodSpec (container image, command, ports, env, etc.)
  • ConfigMaps mounted as volumes or mapped to environment variables
  • Secrets mounted as volumes or mapped to environment variables

Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1

Flagger detects that the deployment revision has changed and starts analyzing it:

kubectl -n test describe canary/podinfo

Events:

New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test

During the analysis, the canary's progress can be monitored with Grafana.


Note that if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.

List all the canaries in your cluster:

watch kubectl get canaries --all-namespaces
NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo   Progressing   15       2019-01-16T14:05:07Z
prod        frontend  Succeeded     0        2019-01-15T16:15:07Z
prod        backend   Failed        0        2019-01-14T17:05:07Z
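For scripting, the status phase of a single canary can be queried directly; the phase values (such as Progressing, Succeeded, Failed) match the STATUS column above:

```shell
kubectl -n test get canary/podinfo -o jsonpath='{.status.phase}'
```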

If you have enabled Slack notifications, you will receive these updates as Slack messages as well.


Automated rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger will stop the rollout.

Create a test pod and run the following commands in it:

kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh

Generate HTTP 500 errors:

watch curl http://podinfo-canary:9898/status/500

Generate latency:

watch curl http://podinfo-canary:9898/delay/1

When the number of failed checks reaches the canary analysis threshold, traffic is routed back to the primary, the canary is scaled to zero, and the deployment is marked as failed.

Canary failures and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:

kubectl -n istio-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test

If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or the maximum number of failed checks is reached during the analysis.


Conclusion

Running a service mesh like Istio on top of Kubernetes provides automatic metrics, logs, and protocols, but workload deployment still depends on external tooling. Flagger aims to change that by adding Istio progressive delivery capabilities.

Flagger is compatible with any Kubernetes CI/CD solution, and the canary analysis can easily be extended with webhooks that run system acceptance tests, load tests, or any other custom checks. Since Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux or JenkinsX. If you are using JenkinsX, you can install Flagger with jx addons.
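As a sketch of such an extension, the analysis in the podinfo canary spec above could gain a second webhook entry that runs a hypothetical acceptance check through the same load-tester service (the cmd value here is purely illustrative; it follows the shape of the load-test webhook shown earlier):

```yaml
    webhooks:
      - name: acceptance-test
        url: http://flagger-loadtester.test/
        timeout: 30s
        metadata:
          cmd: "curl -s http://podinfo-canary.test:9898/healthz"
```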

Flagger is supported by Weaveworks and provides canary deployments in Weave Cloud. The project is being tested on GKE, EKS, and bare metal with kubeadm.

If you have suggestions for improving Flagger, please submit an issue or a PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!

Thanks to Ray Tsang.

Source: www.habr.com
