Continuous delivery (CD) is recognized as an established enterprise software practice and is the natural evolution of well-proven CI principles. Yet CD remains quite rare, perhaps because of the management complexity involved and the fear that failed deployments will affect system availability.
Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).
Setting up a Kubernetes cluster
You start by creating a GKE cluster with the Istio add-on (if you do not have a GCP account, you can sign up for one first).
Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line tool and set it up with gcloud init.
Set the default project, compute region, and zone (replace PROJECT_ID with your own project):
gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
Enable the GKE service and create a cluster with the HPA and Istio add-ons:
gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE
The command above creates a default node pool with two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally, the Istio components should be isolated from the workloads, but there is no easy way to run the Istio pods on a dedicated node pool. The Istio manifests are considered read-only, and GKE will revert any modification such as node affinity or pod anti-affinity.
Set up credentials for kubectl:
gcloud container clusters get-credentials istio
Create a cluster admin role binding:
kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"
Install the Helm command-line tool:
brew install kubernetes-helm
Homebrew 2.0 is now also available for Linux.
Create a service account and a cluster role binding for Tiller:
kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
Deploy Tiller in the kube-system namespace:
helm init --service-account tiller
You should consider using SSL between Helm and Tiller. For more information about securing your Helm installation, see the Helm documentation.
Verify the installation:
kubectl -n istio-system get svc
Within a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.
Setting up the Istio ingress gateway
Create a static IP address named istio-gateway using the IP address of the Istio ingress gateway:
export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1
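The jq filter in the GATEWAY_IP export simply pulls the first ingress IP out of the service's load-balancer status. You can verify the filter locally against a fabricated sample payload (the JSON and IP below are made-up documentation values, not real cluster output):

```shell
# Fabricated sample of the relevant part of what
# 'kubectl -n istio-system get svc/istio-ingressgateway -ojson' returns
# once a load balancer has been provisioned (IP is a documentation address).
cat > /tmp/svc.json <<'EOF'
{"status": {"loadBalancer": {"ingress": [{"ip": "192.0.2.10"}]}}}
EOF

# Same filter as in the GATEWAY_IP export above.
IP=$(jq -r '.status.loadBalancer.ingress[0].ip' /tmp/svc.json)
echo "$IP"
```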
You now need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):
istio.example.com A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}
Verify that the wildcard DNS entry is working:
watch host test.istio.example.com
Create a generic Istio gateway to expose services outside the mesh over HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
Save the above resource as public-gateway.yaml and then apply it:
kubectl apply -f ./public-gateway.yaml
No production system should expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, see the Flagger documentation for GKE.
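For orientation only: once you have a TLS certificate in place, an HTTPS server entry on the same gateway could look roughly like the sketch below. The certificate paths assume the default istio-ingressgateway-certs secret mount; your setup may differ.

```yaml
# Sketch only: an HTTPS server block for public-gateway, assuming the TLS
# certificate and key are mounted at /etc/istio/ingressgateway-certs
# (the default mount path for the istio-ingressgateway-certs secret).
servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
      - "*"
```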
Installing Flagger
The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml
Add the Flagger Helm repository:
helm repo add flagger https://flagger.app
Deploy Flagger in the istio-system namespace with Slack notifications enabled:
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger
You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me
Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana
Save the above resource as grafana-virtual-service.yaml and then apply it:
kubectl apply -f ./grafana-virtual-service.yaml
Navigating to http://grafana.istio.example.com in your browser should take you to the Grafana login page.
Deploying applications with Flagger
Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), and then creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application on the mesh and drive the canary analysis and promotion.
Create a test namespace with Istio sidecar injection enabled:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml
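The applied manifest enables automatic Istio sidecar injection for the namespace via a label; a minimal equivalent looks like this (sketch based on the standard Istio injection label, not a copy of the upstream file):

```yaml
# Namespace with automatic Istio sidecar injection enabled.
apiVersion: v1
kind: Namespace
metadata:
  name: test
  labels:
    istio-injection: enabled
```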
Create a deployment and a horizontal pod autoscaler:
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
Deploy the load-testing service to generate traffic during the canary analysis:
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
Create a canary custom resource (replace example.com with your domain):
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: istio_requests_total
        threshold: 99
        interval: 30s
      - name: istio_request_duration_seconds_bucket
        threshold: 500
        interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
Save the above resource as podinfo-canary.yaml and then apply it:
kubectl apply -f ./podinfo-canary.yaml
If it succeeds, the analysis above will run for five minutes, checking the HTTP metrics every half-minute. You can determine the minimum time needed to validate and promote a canary deployment with this formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented in the Flagger documentation.
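Plugging in the values from the canary spec above, the formula works out as follows (a quick shell sanity check, in seconds):

```shell
# interval * (maxWeight / stepWeight) with the values from the spec above:
# 30s * (50 / 5) = 300s, i.e. at least five minutes of analysis.
INTERVAL=30   # seconds
MAX_WEIGHT=50
STEP_WEIGHT=5
echo "$(( INTERVAL * (MAX_WEIGHT / STEP_WEIGHT) )) seconds"
```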
After a few seconds, Flagger will create the canary objects:
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo
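The generated virtualservice.networking.istio.io/podinfo is what Flagger uses to shift traffic between the primary and canary workloads. During an analysis its weighted routes look roughly like the sketch below (the weights shown are illustrative for a 5% step; Flagger owns this object, so do not edit it by hand):

```yaml
# Illustrative route section of the Flagger-managed VirtualService at
# 5% canary weight; exact fields may differ between Flagger versions.
http:
  - route:
      - destination:
          host: podinfo-primary
        weight: 95
      - destination:
          host: podinfo-canary
        weight: 5
```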
Open your browser and go to app.istio.example.com; you should see the version number of the demo application.
Canary analysis and promotion
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, the average request duration, and pod health. Based on the KPI analysis, the canary is promoted or aborted, and the results of the analysis are published to Slack.
A canary deployment is triggered when one of the following objects changes:
- Deployment PodSpec (container image, command, ports, env, etc.)
- ConfigMaps mounted as volumes or mapped to environment variables
- Secrets mounted as volumes or mapped to environment variables
Trigger a canary deployment by updating the container image:
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
Flagger detects that the deployment revision has changed and starts analyzing it:
kubectl -n test describe canary/podinfo
Events:
New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
During the analysis, the canary's progress can be monitored with Grafana:
Please note: if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.
List all the canaries in your cluster:
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-01-16T14:05:07Z
prod frontend Succeeded 0 2019-01-15T16:15:07Z
prod backend Failed 0 2019-01-14T17:05:07Z
If you have enabled Slack notifications, you will receive messages like these:
Automated rollback
During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to test whether Flagger halts the rollout.
Create a test pod and run the following inside it:
kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh
Generate HTTP 500 errors:
watch curl http://podinfo-canary:9898/status/500
Generate latency:
watch curl http://podinfo-canary:9898/delay/1
When the number of failed checks reaches the canary analysis threshold, traffic is routed back to the primary, the canary is scaled to zero, and the rollout is marked as failed.
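The rollback behavior can be pictured as a simple counter: each failed metric check increments it, and reaching the threshold aborts the rollout. A toy shell sketch of that logic (not Flagger's actual code; the check results are hard-coded for illustration):

```shell
# Toy model of Flagger's failed-checks threshold (threshold: 10 in the spec above).
THRESHOLD=10
failed=0
# Simulated results of successive metric checks during one analysis run.
for check in pass pass fail fail fail fail fail fail fail fail fail fail; do
  if [ "$check" = "pass" ]; then
    echo "Advance canary"
  else
    failed=$((failed + 1))
    echo "Halt advancement (failed checks: $failed)"
  fi
  if [ "$failed" -ge "$THRESHOLD" ]; then
    echo "Rolling back: failed checks threshold reached $THRESHOLD"
    break
  fi
done
```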
The canary errors and latency spikes are recorded as Kubernetes events and logged by Flagger in JSON format:
kubectl -n istio-system logs deployment/flagger -f | jq .msg
Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test
If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or when the maximum number of failed checks in the analysis has been reached:
Wrapping up
Running a service mesh such as Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but deploying workloads still depends on external tooling. Flagger aims to change that by adding progressive delivery capabilities to Istio.
Flagger is compatible with any Kubernetes CI/CD solution, and the canary analysis can easily be extended with webhooks to run system integration/acceptance tests, load tests, or any other custom checks.
Flagger is actively supported.
If you have ideas for improving Flagger, please open an issue or a PR on the GitHub project.
Thank you!
Source: www.habr.com