Automated canary deployments with Flagger and Istio

CD is recognized as an enterprise software practice and is the natural evolution of well-established CI principles. However, CD remains quite rare, perhaps due to the complexity of managing it and the fear that failed deployments will affect system availability.

Flagger is an open source Kubernetes operator that aims to untangle this complexity. It automates the promotion of canary deployments, using Istio traffic shifting and Prometheus metrics to analyze an application's behavior during a controlled rollout.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).

Setting up a Kubernetes cluster

Start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up here for free credits).

Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and set up your project with gcloud init.

Set the default project, compute region, and zone (replace PROJECT_ID with your own project):

gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Enable the GKE service and create a cluster with the HPA and Istio add-ons:

gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE

The above command creates a default node pool with two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally, you would isolate the Istio components from your workloads, but there is no easy way to run the Istio pods on a dedicated node pool. The Istio manifests are considered read-only, and GKE will undo any modifications, such as node affinity or pod anti-affinity.
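For reference, the kind of scheduling constraint GKE would revert looks like this (a hypothetical nodeAffinity stanza pinning pods to a dedicated pool named istio-pool; illustrative only, not something you can apply to the managed Istio manifests):

```yaml
# Illustrative only: GKE reverts edits like this on the managed Istio manifests.
# "istio-pool" is a hypothetical node pool name.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloud.google.com/gke-nodepool
              operator: In
              values:
                - istio-pool
```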

Set up credentials for kubectl:

gcloud container clusters get-credentials istio

Create a cluster admin role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Install the Helm command-line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.

Create a service account and cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider using SSL between Helm and Tiller. For more information on protecting your Helm installation, see docs.helm.sh.

Verify the setup:

kubectl -n istio-system get svc

After a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.

Configuring the Istio ingress gateway

Create a static IP address named istio-gateway using the Istio gateway's IP address:

export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1

You now need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):

istio.example.com   A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}

Verify that the wildcard DNS is working:

watch host test.istio.example.com

Create a generic Istio gateway to expose services outside the service mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Save the above resource as public-gateway.yaml and apply it:

kubectl apply -f ./public-gateway.yaml

No production system should expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, please read the Flagger GKE documentation.

Installing Flagger

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart.

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml
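For context, the heart of the configuration applied above is a scrape job targeting the Istio telemetry service. A minimal sketch (assuming the default istio-telemetry service and its Prometheus port, 42422; not the full config) looks like:

```yaml
# Sketch of a Prometheus scrape job for Istio mesh metrics (illustrative).
scrape_configs:
  - job_name: istio-mesh
    scrape_interval: 5s
    static_configs:
      - targets:
          - istio-telemetry.istio-system:42422
```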

Add the Flagger Helm repository:

helm repo add flagger https://flagger.app

Deploy Flagger in the istio-system namespace with Slack notifications enabled:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger

You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
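As a sketch, if Flagger ran in a namespace other than istio-system (the namespace name below is hypothetical), you would point it at Prometheus via the cross-namespace service DNS name:

```shell
# Sketch: cross-namespace Prometheus address for Flagger (assumed names).
FLAGGER_NS=flagger-system   # hypothetical namespace
PROM_URL="http://prometheus.istio-system.svc.cluster.local:9090"
echo "helm upgrade -i flagger flagger/flagger --namespace=${FLAGGER_NS} --set metricsServer=${PROM_URL}"
```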

Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me

Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana

Save the above resource as grafana-virtual-service.yaml and apply it:

kubectl apply -f ./grafana-virtual-service.yaml

When you navigate to http://grafana.istio.example.com in your browser, you should be directed to the Grafana login page.

Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application on the service mesh and drive the canary analysis and promotion.


Create a test namespace with Istio sidecar injection enabled:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml

Create a deployment and a horizontal pod autoscaler:

kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml

Deploy a load test service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test

Create a canary custom resource (replace example.com with your domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: istio_requests_total
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      threshold: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

Save the above resource as podinfo-canary.yaml and apply it:

kubectl apply -f ./podinfo-canary.yaml

The above analysis, if it succeeds, will run for five minutes, checking the HTTP metrics every half a minute. You can determine the minimum time required to validate and promote a canary deployment using the formula: interval * (maxWeight / stepWeight). The canary CRD fields are documented here.
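Plugging in the values from the Canary spec above, the arithmetic works out like this:

```shell
# Minimum analysis time for the spec above: interval * (maxWeight / stepWeight)
interval=30      # seconds (canaryAnalysis.interval: 30s)
maxWeight=50
stepWeight=5
steps=$((maxWeight / stepWeight))
total=$((interval * steps))
echo "${steps} steps x ${interval}s = ${total}s"   # 10 steps x 30s = 300s (5 minutes)
```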

After a couple of seconds, Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo

Open a browser and navigate to app.istio.example.com; you should see the version number of the demo application.

Automated canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, average request duration, and pod health. Based on the KPI analysis, the canary is promoted or aborted, and the analysis results are published to Slack.
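The promotion loop can be sketched as follows (a simplified shell model, not Flagger's actual implementation; check_kpi is a stand-in for the real Prometheus queries and always reports a healthy success rate here):

```shell
# Simplified model of Flagger's promotion loop: shift traffic in stepWeight
# increments up to maxWeight, gating each step on a KPI check.
stepWeight=5
maxWeight=50
threshold=99          # required HTTP success rate (%)

check_kpi() {
  # Stand-in for the real Prometheus query; always healthy in this sketch.
  echo 100
}

weight=0
while [ "$weight" -lt "$maxWeight" ]; do
  rate=$(check_kpi)
  if [ "$rate" -lt "$threshold" ]; then
    echo "Halt advancement: success rate ${rate}% < ${threshold}%"
    break
  fi
  weight=$((weight + stepWeight))
  echo "Advance canary weight ${weight}"
done
[ "$weight" -ge "$maxWeight" ] && echo "Promotion completed"
```

With a healthy KPI the loop prints ten "Advance" lines and finishes with the promotion, mirroring the Flagger events shown below.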


A canary deployment is triggered when one of the following objects changes:

  • Deployment PodSpec (container image, command, ports, env, etc.)
  • ConfigMaps mounted as volumes or mapped to environment variables
  • Secrets mounted as volumes or mapped to environment variables

Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1

Flagger detects that the deployment revision has changed and starts analyzing it:

kubectl -n test describe canary/podinfo

Events:

New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test

During the analysis, the canary's progress can be tracked with Grafana.


Note that if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.

List all the canaries in your cluster:

watch kubectl get canaries --all-namespaces
NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo   Progressing   15       2019-01-16T14:05:07Z
prod        frontend  Succeeded     0        2019-01-15T16:15:07Z
prod        backend   Failed        0        2019-01-14T17:05:07Z

If you enabled Slack notifications, you will receive the corresponding messages there.


Automated rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to see whether Flagger halts the deployment.

Create a test pod and run the following inside it:

kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh

Generate HTTP 500 errors:

watch curl http://podinfo-canary:9898/status/500

Generate latency:

watch curl http://podinfo-canary:9898/delay/1

When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled to zero, and the deployment is marked as failed.

Canary errors and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:

kubectl -n istio-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test

If you enabled Slack notifications, you will receive a message when the deadline is exceeded or the maximum number of failed checks is reached.


Conclusion

Running a service mesh like Istio on top of Kubernetes provides automatic metrics, logs, and traces, but workload deployment still depends on external tools. Flagger aims to change that by extending Istio with progressive delivery capabilities.

Flagger is compatible with any Kubernetes CI/CD solution, and the canary analysis can easily be extended with webhooks to run system integration/acceptance tests, load tests, or any other custom checks. Since Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux or JenkinsX. If you are using JenkinsX, you can install Flagger with jx addons.

Flagger is supported by Weaveworks and provides canary deployments in Weave Cloud. The project is being tested on GKE, EKS, and bare metal with kubeadm.

If you have a suggestion for improving Flagger, please open an issue or PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!

Thanks to Ray Tsang.

Source: www.habr.com
