Continuous delivery (CD) is a recognized enterprise software practice and a natural evolution of established CI principles. Yet CD remains quite rare, perhaps because of the complexity of the tooling and the fear that a failed release will affect system availability.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).
Setting up a Kubernetes cluster
You start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up first).

Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and configure it with gcloud init.

Set the default project, compute region, and zone (replace PROJECT_ID with your project ID):
gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
Enable the GKE service and create a cluster with the HPA and Istio add-ons:

gcloud services enable container.googleapis.com

K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')

gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE
The above command creates a default node pool with two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally, the Istio components would be isolated from the workloads, but there is no easy way to run the Istio pods on a dedicated node pool: the Istio manifests are considered read-only, and GKE will undo any modifications, such as node affinity or pod anti-affinity.
Set up credentials for kubectl:
gcloud container clusters get-credentials istio
Create a cluster admin role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"
Install the Helm command-line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.
Create a service account and a cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider using SSL between Helm and Tiller. For more information on securing your Helm installation, see the Helm documentation.
Verify that Istio is installed:
kubectl -n istio-system get svc
After a few minutes, GCP should assign an external IP address to the istio-ingressgateway service.
Setting up the Istio ingress gateway

Create a static IP address named istio-gateway using the IP address of the Istio ingress gateway:
export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1
Now you need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):
istio.example.com A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}
Verify that the wildcard DNS entry works:
watch host test.istio.example.com
Create a generic Istio gateway to expose services outside of the mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
Save the above resource as public-gateway.yaml and then apply it:
kubectl apply -f ./public-gateway.yaml
No production system should expose services on the internet without SSL. To secure your Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, please read the Flagger GKE documentation.
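As a rough sketch of what the TLS side looks like once a certificate secret exists, an HTTPS server entry can be added to the Gateway resource above. The secret name and file paths below follow Istio's default ingressgateway-certs convention and are assumptions, not part of this guide's setup:

```yaml
# Hypothetical HTTPS server block for the public-gateway resource.
# Assumes a TLS secret named istio-ingressgateway-certs exists in
# istio-system (e.g. created by cert-manager) and is mounted by the
# ingress gateway pod at Istio's default certificate path.
servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/ingressgateway-certs/tls.key
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
    hosts:
      - "*.istio.example.com"
```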
Installing Flagger

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run its canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml
Add the Flagger Helm repository:

helm repo add flagger https://flagger.app
Deploy Flagger in the istio-system namespace with Slack notifications enabled:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger
You can install Flagger in any namespace, as long as it can talk to the Istio Prometheus service on port 9090.
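For example, a sketch of installing into a dedicated namespace (the flagger namespace and the fully qualified Prometheus URL below are assumptions for illustration):

```shell
# Hypothetical: install Flagger into its own namespace, pointing it
# at the Istio Prometheus service via its cluster-internal DNS name.
kubectl create namespace flagger
helm upgrade -i flagger flagger/flagger \
  --namespace=flagger \
  --set metricsServer=http://prometheus.istio-system.svc.cluster.local:9090
```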
Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me
Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana
Save the above resource as grafana-virtual-service.yaml and then apply it:
kubectl apply -f ./grafana-virtual-service.yaml
When navigating to http://grafana.istio.example.com in your browser, you should be redirected to the Grafana login page.
Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), and creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application on the service mesh and drive the canary analysis and promotion.
Create a test namespace with Istio sidecar injection enabled:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml
Create a deployment and a horizontal pod autoscaler:
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
Deploy the load testing service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
Create a canary custom resource (replace example.com with your domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: istio_requests_total
        threshold: 99
        interval: 30s
      - name: istio_request_duration_seconds_bucket
        threshold: 500
        interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
Save the above resource as podinfo-canary.yaml and then apply it:
kubectl apply -f ./podinfo-canary.yaml
If it succeeds, the above analysis will run for five minutes, checking the HTTP metrics every half-minute. You can determine the minimum time required to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented in the Flagger docs.
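With the values used in the canary resource above (interval: 30s, stepWeight: 5, maxWeight: 50), the formula can be checked with a quick calculation:

```shell
# Minimum promotion time = interval * (maxWeight / stepWeight)
interval=30   # seconds between checks, from the canaryAnalysis spec
stepWeight=5  # traffic percentage added per step
maxWeight=50  # traffic percentage at which promotion happens
steps=$((maxWeight / stepWeight))
total=$((interval * steps))
echo "${steps} steps, ${total}s total"   # 10 steps, 300s total (5 minutes)
```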
After a few seconds, Flagger will create the canary objects:
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo
Open your browser and navigate to app.istio.example.com; you should see the version number of the demo application.
Automated canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, the average request duration, and pod health. Based on the KPI analysis, the canary is either promoted or aborted, and the analysis results are published to Slack.
A canary deployment is triggered when any of the following objects change:
- the deployment PodSpec (container image, command, ports, env, etc.)
- ConfigMaps mounted as volumes or mapped to environment variables
- Secrets mounted as volumes or mapped to environment variables
Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
Flagger detects that the deployment revision has changed and starts analyzing it:
kubectl -n test describe canary/podinfo
Events:
New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
During the analysis, the canary's progress can be monitored with Grafana.

Please note: if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis phase.
List all the canaries in your cluster:
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-01-16T14:05:07Z
prod frontend Succeeded 0 2019-01-15T16:15:07Z
prod backend Failed 0 2019-01-14T17:05:07Z
If you have enabled Slack notifications, you will receive messages like the following:
Automated rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger aborts the rollout.

Create a test pod and run the following commands in it:
kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898

kubectl -n test exec -it tester-xx-xx sh
Generate HTTP 500 errors:
watch curl http://podinfo-canary:9898/status/500
Generate latency:
watch curl http://podinfo-canary:9898/delay/1
When the number of failed checks reaches the analysis threshold, traffic is routed back to the primary, the canary is scaled down to zero, and the rollout is marked as failed.
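With the analysis settings used in this canary (interval: 30s, threshold: 10), the worst-case window between the first failed check and the rollback can be estimated:

```shell
# A failing rollout is aborted once `threshold` failed checks accumulate,
# with one metric check per `interval`; values mirror the Canary spec above.
interval=30   # seconds between metric checks
threshold=10  # failed checks tolerated before rollback
echo "rollback after at most $((interval * threshold))s"  # 300s
```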
Canary errors and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:
kubectl -n istio-system logs deployment/flagger -f | jq .msg
Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test
If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or when the maximum number of failed checks is reached during the analysis:
Wrapping up

Running a service mesh like Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but deploying workloads still depends on external tooling. Flagger aims to change that by extending Istio with progressive delivery capabilities.

Flagger is compatible with any CI/CD solution for Kubernetes, and the canary analysis can easily be extended with webhooks.
Contributing to Flagger

If you have suggestions for improving Flagger, please submit an issue or PR on GitHub.
Thanks!

Source: www.hab.com