CD is recognized as an enterprise software practice and is the natural evolution of well-established CI principles. Yet CD is still quite rare, probably because of the complexity of managing it and the fear that failed deployments will affect system availability.
Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).
Setting up a Kubernetes cluster
You will start by creating a GKE cluster with the Istio add-on (if you do not have a GCP account, you can sign up for one). Sign in to Google Cloud, create a project and enable billing for it. Then install the gcloud command-line utility and set up your project with gcloud init.
Set the default project, compute region and zone (replace PROJECT_ID with your project):
gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
Enable the GKE service and create a cluster with the HPA and Istio add-ons:
gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE
The command above creates a default node pool consisting of two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally, the Istio components should be isolated from the workloads, but there is no easy way to run the Istio pods on a dedicated node pool. The Istio manifests are considered read-only, and GKE will revert any changes such as node affinity or pod anti-affinity.
Set up credentials for kubectl:
gcloud container clusters get-credentials istio
Create a cluster admin role binding:
kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"
Install the Helm command-line tool:
brew install kubernetes-helm
Homebrew 2.0.0 is now available for Linux as well.
Create a service account and a cluster role binding for Tiller:
kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
Deploy Tiller in the kube-system namespace:
helm init --service-account tiller
You should consider using SSL between Helm and Tiller. For more information on securing your Helm installation, see the Helm documentation.
Validate the setup:
kubectl -n istio-system get svc
After a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.
Configuring the Istio ingress gateway
Create a static IP address named istio-gateway using the IP address of the Istio gateway:
export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1
Next you need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):
istio.example.com A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}
Verify that the DNS wildcard works:
watch host test.istio.example.com
Create a generic Istio gateway to expose services outside the service mesh over HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
Save the above resource as public-gateway.yaml and then apply it:
kubectl apply -f ./public-gateway.yaml
No production system should expose services on the internet without SSL. To secure your Istio ingress gateway with cert-manager, CloudDNS and Let's Encrypt, read the Flagger GKE documentation.
Installing Flagger
The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart.
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml
Add the Flagger Helm repository:
helm repo add flagger https://flagger.app
Deploy Flagger in the istio-system namespace with Slack notifications enabled:
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger
You can install Flagger in any namespace as long as it can reach the Istio Prometheus service on port 9090.
Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me
Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana
Save the above resource as grafana-virtual-service.yaml and then apply it:
kubectl apply -f ./grafana-virtual-service.yaml
When navigating to http://grafana.istio.example.com, your browser should direct you to the Grafana login page.
Deploying web applications with Flagger
Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services and Istio virtual services). These objects expose the application to the service mesh and drive the canary analysis and promotion.
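The objects Flagger generates are named after the canary target. A minimal sketch of that naming convention, assuming a target named podinfo (this is an illustration of the pattern, not Flagger's actual code):

```python
def generated_names(target: str) -> dict[str, list[str]]:
    """Kubernetes objects Flagger derives from a canary target name."""
    return {
        "deployments": [f"{target}-primary"],
        "hpas": [f"{target}-primary"],
        "services": [target, f"{target}-primary", f"{target}-canary"],
        "virtualservices": [target],
    }

names = generated_names("podinfo")
print(names["services"])  # ['podinfo', 'podinfo-primary', 'podinfo-canary']
```

The primary copy receives live traffic, while the `-canary` service targets the pods under analysis.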
Create a test namespace with Istio sidecar injection enabled:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml
Create a deployment and a horizontal pod autoscaler:
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
Deploy the load testing service to generate traffic during the canary analysis:
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
Create a canary custom resource (replace example.com with your domain):
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: istio_requests_total
        threshold: 99
        interval: 30s
      - name: istio_request_duration_seconds_bucket
        threshold: 500
        interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
Save the above resource as podinfo-canary.yaml and then apply it:
kubectl apply -f ./podinfo-canary.yaml
The analysis above, if successful, will run for five minutes, checking the HTTP metrics every half-minute. You can determine the minimum time required to validate and promote a canary deployment using the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented in the Flagger docs.
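Plugging the values from the canary resource above into that formula gives the minimum analysis time; a quick sketch:

```python
# Minimum time to validate and promote a canary:
# interval * (maxWeight / stepWeight)
interval_s = 30     # canaryAnalysis.interval: 30s
max_weight = 50     # canaryAnalysis.maxWeight
step_weight = 5     # canaryAnalysis.stepWeight

steps = max_weight // step_weight    # number of traffic increments
min_duration_s = interval_s * steps  # 30s * 10 = 300s

print(f"{steps} steps, at least {min_duration_s // 60} minutes")  # 10 steps, at least 5 minutes
```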
After a few seconds, Flagger will create the canary objects:
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo
Open your browser and navigate to app.istio.example.com; you should see the version number.
Automated canary analysis and promotion
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, average request duration and pod health. Based on the KPI analysis, the canary is either promoted or aborted, and the analysis results are published to Slack.
A canary deployment is triggered when one of the following objects changes:
- the deployment PodSpec (container image, command, ports, env, etc.)
- ConfigMaps mounted as volumes or mapped to environment variables
- Secrets mounted as volumes or mapped to environment variables
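One way to detect such changes is to compare a deterministic digest of the relevant spec against the last applied one. The helper below is a hypothetical sketch of that idea (Flagger's real change detection works on the full deployment spec and differs in detail):

```python
import hashlib
import json

def spec_hash(pod_spec: dict) -> str:
    """Hash a pod spec deterministically; any field change yields a new digest."""
    canonical = json.dumps(pod_spec, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

# Hypothetical before/after specs: only the image tag changes.
old = {"image": "quay.io/stefanprodan/podinfo:1.4.0", "port": 9898}
new = {"image": "quay.io/stefanprodan/podinfo:1.4.1", "port": 9898}

if spec_hash(new) != spec_hash(old):
    print("New revision detected, starting canary analysis")
```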
Trigger a canary deployment by updating the container image:
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
Flagger detects that the deployment revision has changed and starts analyzing it:
kubectl -n test describe canary/podinfo
Events:
New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
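The weight progression in the events above follows directly from the canary spec (stepWeight: 5 up to maxWeight: 50). A minimal sketch of that schedule, assuming every check passes:

```python
def canary_schedule(step_weight: int, max_weight: int) -> list[int]:
    """Traffic weights Flagger walks through when all checks pass."""
    return list(range(step_weight, max_weight + 1, step_weight))

weights = canary_schedule(5, 50)
print(weights)  # [5, 10, 15, 20, 25, 30, 35, 40, 45, 50]
# After the last step the canary template spec is copied to the primary (promotion).
```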
During the analysis, the canary's progress can be monitored with Grafana:
Please note: if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.
List all the canaries in your cluster:
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-01-16T14:05:07Z
prod frontend Succeeded 0 2019-01-15T16:15:07Z
prod backend Failed 0 2019-01-14T17:05:07Z
If you have enabled Slack notifications, you will receive messages like these:
Automated rollback
During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger will halt the rollout.
Create a tester pod and run the following inside it:
kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh
Generate HTTP 500 errors:
watch curl http://podinfo-canary:9898/status/500
Generate latency:
watch curl http://podinfo-canary:9898/delay/1
When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled to zero and the deployment is marked as failed.
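That rollback decision can be sketched as a simple failed-check counter against the two metric thresholds from the canary spec (success rate ≥ 99%, request duration ≤ 500 ms). The metric samples below are hypothetical values, loosely modeled on the log output that follows:

```python
THRESHOLD = 10           # canaryAnalysis.threshold: max failed checks
MIN_SUCCESS_RATE = 99.0  # istio_requests_total threshold (%)
MAX_DURATION_MS = 500    # istio_request_duration_seconds_bucket threshold

def check(success_rate: float, duration_ms: float) -> bool:
    """A check passes only if both KPIs are within their thresholds."""
    return success_rate >= MIN_SUCCESS_RATE and duration_ms <= MAX_DURATION_MS

# Hypothetical (success_rate %, duration ms) samples, one per 30s interval:
samples = [(99.5, 120), (69.17, 130), (61.39, 140), (99.2, 1515),
           (47.0, 1600), (37.0, 1915), (99.1, 2050), (55.0, 2515),
           (40.0, 900), (30.0, 950), (20.0, 990)]

failed = 0
for rate, duration in samples:
    if not check(rate, duration):
        failed += 1
        if failed >= THRESHOLD:
            print(f"Rolling back: failed checks threshold reached {THRESHOLD}")
            break
```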
Canary errors and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:
kubectl -n istio-system logs deployment/flagger -f | jq .msg
Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test
If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or when the maximum number of failed checks in the analysis is reached:
Conclusion
Running a service mesh like Istio on top of Kubernetes provides automatic metrics, logs and traces, but deploying workloads still depends on external tools. Flagger aims to change that by extending Istio with progressive delivery capabilities.
Flagger is compatible with any CI/CD solution for Kubernetes, and the canary analysis can easily be extended.
Flagger is actively supported. If you have suggestions for improving Flagger, please submit an issue or PR on GitHub.
Thanks!
Source: www.habr.com