Automated canary deployments with Flagger and Istio

Continuous delivery (CD) is accepted as an enterprise software practice and is the natural evolution of well-established continuous integration (CI) principles. Yet CD is still quite rare, perhaps due to the complexity of managing it and the fear that failed deployments will affect system availability.

Flagger is an open source Kubernetes operator that aims to untangle these complexities. It automates the promotion of canary deployments, using Istio traffic shifting and Prometheus metrics to analyze an application's behavior during a controlled rollout.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).

Setting up a Kubernetes cluster

You start by creating a GKE cluster with the Istio add-on (if you do not have a GCP account, you can sign up here to get free credits).

Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and set up your project with gcloud init.

Set the default project, compute region, and zone (replace PROJECT_ID with your project):

gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Enable the GKE service and create a cluster with the HPA and Istio add-ons:

gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE

The command above creates a default node pool with two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally you would isolate the Istio components from your own workloads, but there is no easy way to run the Istio pods in a dedicated node pool: the Istio manifests are considered read-only, and GKE will revert any modification, such as tainting a node or unbinding a pod.

Set up credentials for kubectl:

gcloud container clusters get-credentials istio

Create a cluster admin role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Install the Helm command-line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.

Create a service account and a cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider using SSL between Helm and Tiller. For more information on securing your Helm installation, see docs.helm.sh.

Verify the Istio installation:

kubectl -n istio-system get svc

After a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.
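
If you prefer not to poll manually, you can watch the service until the address appears (an optional convenience, not part of the original walkthrough):

# Watch the istio-ingressgateway service until the EXTERNAL-IP column is populated
kubectl -n istio-system get svc istio-ingressgateway --watch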

Configuring the Istio ingress gateway

Create a static IP address named istio-gateway, using the IP address of the Istio gateway:

export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1

You now need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):

istio.example.com   A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}

Verify that the wildcard DNS entry works:

watch host test.istio.example.com

Create a generic Istio gateway to expose services outside the service mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Save the above resource as public-gateway.yaml and then apply it:

kubectl apply -f ./public-gateway.yaml

No production system should expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, please read the Flagger GKE documentation.

Installing Flagger

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run its canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart.

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml

Add the Flagger Helm repository:

helm repo add flagger https://flagger.app

Deploy Flagger in the istio-system namespace with Slack notifications enabled:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger

You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
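
A quick way to confirm that connectivity is to port-forward the Prometheus service and probe its health endpoint (a sketch; the service name comes from the istio-prometheus manifest applied above):

# Forward the Istio Prometheus service to localhost
kubectl -n istio-system port-forward svc/prometheus 9090:9090 &
# The health endpoint should report that Prometheus is healthy
curl -s http://localhost:9090/-/healthy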

Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me

Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana

Save the above resource as grafana-virtual-service.yaml and then apply it:

kubectl apply -f ./grafana-virtual-service.yaml

When you navigate to http://grafana.istio.example.com in a browser, you should land on the Grafana login page.
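
You can also verify the route from a terminal before opening a browser (an optional check, assuming the wildcard DNS record has propagated):

# The Grafana login page should answer with HTTP 200
curl -s -o /dev/null -w '%{http_code}\n' http://grafana.istio.example.com/login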

Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), and then creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application on the service mesh and drive the canary analysis and promotion.

Create a test namespace with Istio sidecar injection enabled:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml

Create a deployment and a horizontal pod autoscaler:

kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml

Deploy a load testing service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test

Create a canary custom resource (replace example.com with your domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: istio_requests_total
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      threshold: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

Save the above resource as podinfo-canary.yaml and then apply it:

kubectl apply -f ./podinfo-canary.yaml

If it succeeds, the analysis above will run for five minutes, checking the HTTP metrics every half minute. You can determine the minimum time needed to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented here.
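
Plugging the values from the manifest above into that formula makes it concrete:

# interval * (maxWeight / stepWeight) = 30s * (50 / 5)
echo "$(( 30 * (50 / 5) )) seconds"   # prints: 300 seconds, i.e. 5 minutes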

After a few seconds, Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo

Open a browser and go to app.istio.example.com; you should see the version number of the demo application.
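
The same check can be done from a terminal; podinfo's root endpoint should return runtime details, including the version (assuming the DNS record above points at the gateway):

# Query the demo app through the Istio ingress gateway
curl -s http://app.istio.example.com | jq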

Canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, average request duration, and pod health. Based on this KPI analysis, the canary is either promoted or aborted, and the results of the analysis are published to Slack.

A canary deployment is triggered when one of the following objects changes (one such trigger is sketched after the list):

  • The deployment PodSpec (container image, command, ports, env, and so on)
  • ConfigMaps mounted as volumes or mapped to environment variables
  • Secrets mounted as volumes or mapped to environment variables
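
For instance, updating an environment variable in the PodSpec is enough on its own to start a new analysis; a minimal sketch (DEMO_COLOR is a hypothetical variable, not one the demo app reads):

# Any PodSpec change, such as a new env var, triggers a canary analysis
kubectl -n test set env deployment/podinfo DEMO_COLOR=blue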

Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1

Flagger notices that the deployment revision has changed and starts analyzing it:

kubectl -n test describe canary/podinfo

Events:

New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test

During the analysis, canary progress can be tracked in Grafana.

Please note: if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.
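
For example, pushing a second image update while the first analysis is still in progress makes Flagger start over from zero weight (1.4.2 is used here only as a plausible next tag):

# A new revision detected mid-analysis restarts the canary run
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.2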

List all the canaries in your cluster:

watch kubectl get canaries --all-namespaces
NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo   Progressing   15       2019-01-16T14:05:07Z
prod        frontend  Succeeded     0        2019-01-15T16:15:07Z
prod        backend   Failed        0        2019-01-14T17:05:07Z

If you have enabled Slack notifications, these events are also delivered as Slack messages.

Automated rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger halts the rollout.

Create a test pod and run the following inside it:

kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898

kubectl -n test exec -it tester-xx-xx sh

Generate HTTP 500 errors:

watch curl http://podinfo-canary:9898/status/500

Generate latency:

watch curl http://podinfo-canary:9898/delay/1

When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled to zero, and the rollout is marked as failed.
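
You can confirm the final state with the same listing command used earlier (exact column values will vary from run to run):

# The canary should now report a Failed status with its weight reset to 0
kubectl -n test get canary/podinfo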

Canary errors and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:

kubectl -n istio-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test

If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or the maximum number of failed checks is reached during the analysis.

In conclusion

Running a service mesh like Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but deploying workloads still depends on external tooling. Flagger aims to change that by adding progressive delivery capabilities to Istio.

Flagger works with any Kubernetes CI/CD solution, and its canary analysis can easily be extended with webhooks to run system integration/acceptance tests, load tests, or any other custom checks. Because Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux or JenkinsX. If you are using JenkinsX, you can install Flagger via the jx addons.

Flagger is backed by Weaveworks and powers the canary deployments in Weave Cloud. The project is being tested on GKE, EKS, and bare metal with kubeadm.

If you have ideas for improving Flagger, please submit an issue or PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!

Thanks to Ray Tsang.

Source: www.habr.com
