CD is recognized as an enterprise software practice and is a natural evolution of well-established CI principles. Yet CD is still quite rare, possibly because of the complexity of managing it and the fear that failed deployments could affect system availability.
Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).
Creating a Kubernetes cluster
You start by creating a GKE cluster with the Istio add-on (if you do not have a GCP account, you can sign up for a free trial).
Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and set it up with gcloud init.
Set the default project, compute region, and zone (replace PROJECT_ID with your project):
gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
Enable the GKE service and create a cluster with the HPA and Istio add-ons:
gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE
The command above creates a default node pool consisting of two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally, the Istio components should be isolated from the workloads, but there is no easy way to run the Istio pods in a dedicated node pool. The Istio manifests are considered read-only, and GKE will revert any modifications such as node affinity or pod anti-affinity.
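For illustration, this is the kind of node-affinity stanza (standard Kubernetes scheduling syntax; the pool name is hypothetical) that GKE would revert if you added it to one of the managed Istio deployments:

```yaml
# Hypothetical example: pinning a pod to a dedicated node pool.
# GKE reverts edits like this in the managed Istio manifests.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: cloud.google.com/gke-nodepool
              operator: In
              values:
                - istio-pool   # hypothetical dedicated pool
```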
Create credentials for kubectl:
gcloud container clusters get-credentials istio
Create a cluster-admin role binding:
kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"
Install the Helm command-line tool:
brew install kubernetes-helm
Homebrew 2.0 is now also available for Linux.
Create a service account and a cluster role binding for Tiller:
kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
Deploy Tiller in the kube-system namespace:
helm init --service-account tiller
You should consider using SSL between Helm and Tiller. For more information on securing your Helm installation, see the Helm documentation.
Verify the setup:
kubectl -n istio-system get svc
After a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.
Creating the Istio ingress gateway
Create a static IP address named istio-gateway using the IP address of the Istio gateway:
export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1
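The jq filter used above can be tried on a sample payload. The JSON below is a trimmed, hypothetical stand-in for what `kubectl -n istio-system get svc/istio-ingressgateway -ojson` returns (the IP address is made up); only the filter path matters:

```shell
# Trimmed, hypothetical sample of the Service JSON; the jq filter is
# the same one used above to read the load balancer's external IP.
json='{"status":{"loadBalancer":{"ingress":[{"ip":"35.1.2.3"}]}}}'
GATEWAY_IP=$(echo "$json" | jq -r '.status.loadBalancer.ingress[0].ip')
echo "$GATEWAY_IP"
```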
Now you need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):
istio.example.com A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}
Verify that the wildcard DNS entry works:
watch host test.istio.example.com
Create a public Istio gateway to expose services outside the service mesh over HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
Save the above resource as public-gateway.yaml and apply it:
kubectl apply -f ./public-gateway.yaml
No production system should expose services on the Internet without SSL. To secure your Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, see the Flagger documentation.
Installing Flagger
The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Since Flagger uses Istio's HTTP metrics to perform canary analysis, you need to deploy the following Prometheus configuration, similar to the one from the official Istio Helm chart:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml
Add the Flagger Helm repository:
helm repo add flagger https://flagger.app
Deploy Flagger in the istio-system namespace with Slack notifications enabled:
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger
You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me
Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana
Save the above resource as grafana-virtual-service.yaml and apply it:
kubectl apply -f ./grafana-virtual-service.yaml
When navigating to http://grafana.istio.example.com, your browser should take you to the Grafana login page.
Deploying web applications with Flagger
Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), and creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application to the service mesh and drive canary analysis and promotion.
Create a test namespace with Istio sidecar injection enabled:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml
Create a deployment and a horizontal pod autoscaler for it:
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
Deploy the load-testing service to generate traffic during canary analysis:
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
Create a custom canary resource (replace example.com with your domain):
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: istio_requests_total
        threshold: 99
        interval: 30s
      - name: istio_request_duration_seconds_bucket
        threshold: 500
        interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
Save the above resource as podinfo-canary.yaml and apply it:
kubectl apply -f ./podinfo-canary.yaml
If it succeeds, the analysis above will run for five minutes, checking the HTTP metrics every half minute. You can determine the minimum time required to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented in the Flagger docs.
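As a quick sanity check, the formula can be evaluated in the shell with the values from the canary resource above:

```shell
# Quick check of the promotion-time formula using the analysis values
# from the canary resource (interval: 30s, maxWeight: 50, stepWeight: 5).
interval=30   # seconds between metric checks
maxWeight=50  # traffic percentage at which the canary is promoted
stepWeight=5  # traffic percentage added per successful check

steps=$(( maxWeight / stepWeight ))
total=$(( interval * steps ))
echo "at least ${total}s (${steps} checks of ${interval}s each)"
```

With these settings, that is 10 checks of 30 seconds, i.e. the five minutes mentioned above.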
After a few seconds, Flagger will create the canary objects:
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo
Open a browser and go to app.istio.example.com; you should see the version number of the demo application.
Automated canary analysis and promotion
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, the average request duration, and pod health. Based on the KPI analysis, the canary is promoted or aborted, and the results of the analysis are published to Slack.
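The control loop can be sketched in a few lines of shell. This is simplified illustrative logic, not Flagger's implementation: each healthy metric check advances the canary weight by stepWeight, and a run of failed checks aborts the rollout. The numbers mirror the canary analysis settings used in this guide.

```shell
# Simplified sketch of the canary control loop (not Flagger source).
stepWeight=5
maxWeight=50
threshold=10      # max failed checks before rollback
weight=0
failed=0
status=Progressing

# Simulated success rates for each metric check; a real check would
# query Prometheus for the istio_requests_total success rate.
for rate in 100 100 100 100 100 100 100 100 100 100; do
  if [ "$rate" -ge 99 ]; then
    weight=$(( weight + stepWeight ))
    echo "Advance canary weight $weight"
  else
    failed=$(( failed + 1 ))
    echo "Halt advancement: success rate ${rate}% < 99%"
  fi
  if [ "$failed" -ge "$threshold" ]; then
    status=Failed
    break
  fi
  if [ "$weight" -ge "$maxWeight" ]; then
    status=Promoted
    break
  fi
done
echo "$status"
```

With all checks healthy, the loop reaches maxWeight and the canary is promoted, matching the event log shown below.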
A canary deployment is triggered when one of the following objects changes:
- the deployment PodSpec (container image, command, ports, env, etc.)
- ConfigMaps mounted as volumes or mapped to environment variables
- Secrets mounted as volumes or mapped to environment variables
Trigger a canary deployment by updating the container image:
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
Flagger detects that the deployment revision has changed and starts analyzing it:
kubectl -n test describe canary/podinfo
Events:
New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
During the analysis, the canary's progress can be monitored in Grafana:
Note that if new changes are applied to the deployment during the canary analysis, Flagger restarts the analysis phase.
List all the canaries in your cluster:
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-01-16T14:05:07Z
prod frontend Succeeded 0 2019-01-15T16:15:07Z
prod backend Failed 0 2019-01-14T17:05:07Z
If you have enabled Slack notifications, you will receive a message like this:
Automated rollback
During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger stops the deployment.
Create a test pod and run the following inside it:
kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh
Generate HTTP 500 errors:
watch curl http://podinfo-canary:9898/status/500
Generate latency:
watch curl http://podinfo-canary:9898/delay/1
When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled to zero, and the canary deployment is marked as failed.
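The rollback rule can be sketched the same way as the promotion loop. This is assumed logic, not Flagger source: every metric check below the success-rate threshold counts as a failed check, and once the failed-check count reaches the analysis threshold the canary is rolled back and scaled to zero.

```shell
# Simplified sketch of the rollback condition (not Flagger source).
threshold=10
failed=0
canary_replicas=1

# Simulated success rates while synthetic HTTP 500s are generated;
# every rate below 99% counts as one failed check.
for rate in 69 61 55 47 37 30 25 20 15 10; do
  echo "Halt advancement: success rate ${rate}% < 99%"
  failed=$(( failed + 1 ))
done

if [ "$failed" -ge "$threshold" ]; then
  canary_replicas=0
  echo "Rolling back: failed checks threshold reached ($failed)"
fi
```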
Canary errors and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:
kubectl -n istio-system logs deployment/flagger -f | jq .msg
Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test
If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or the maximum number of failed checks in the analysis is reached:
Conclusion
Running a service mesh like Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but deploying workloads still depends on external tools. Flagger aims to change that by extending Istio with progressive delivery capabilities.
Flagger is compatible with any CI/CD solution for Kubernetes, and the canary analysis can easily be extended with webhooks for running additional tests.
Flagger is sponsored by Weaveworks.
If you have suggestions for improving Flagger, submit an issue or a PR on GitHub (stefanprodan/flagger).
Thank you!
Source: www.habr.com