Continuous delivery is accepted as an enterprise software practice and is a natural evolution of well-established continuous integration principles. Yet continuous deployment remains notably rare, due to the complexity of managing it and the fear that failed deployments will affect system availability.
What follows is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).
Setting up a Kubernetes cluster
Start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up for one).
Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and configure your project with gcloud init.
Set the default project, compute region, and zone (replace PROJECT_ID with your own project):
gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
Enable the GKE service and create a cluster with the HPA and Istio add-ons:
gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE
The above command creates a default node pool consisting of two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally you would isolate the Istio components from your workloads, but there is no easy way to run the Istio pods in a dedicated node pool: the Istio manifests are treated as read-only, and GKE reverts any modification such as node affinity or pod anti-affinity.
Set up credentials for kubectl:
gcloud container clusters get-credentials istio
Create a cluster admin role binding:
kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"
Install the Helm command-line tool:
brew install kubernetes-helm
Homebrew 2.0.0 is now available on Linux as well.
Create a service account and a cluster role binding for Tiller:
kubectl -n kube-system create sa tiller &&
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
Deploy Tiller in the kube-system namespace:
helm init --service-account tiller
You should consider using SSL between Helm and Tiller. For details on securing your Helm installation, see the Helm documentation.
Verify the setup:
kubectl -n istio-system get svc
After a few seconds, GCP assigns an external IP address to the istio-ingressgateway service.
Configuring the Istio ingress gateway
Create a static IP address named istio-gateway using the IP address of the Istio ingress gateway:
export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1
Now you need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your domain):
istio.example.com A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}
Verify that the DNS wildcard works:
watch host test.istio.example.com
Create a generic Istio gateway to expose services outside the service mesh over HTTP:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
Save the above resource as public-gateway.yaml and apply it:
kubectl apply -f ./public-gateway.yaml
A production system should not expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, see the Flagger documentation.
Installing Flagger
The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses the Istio HTTP metrics to run the canary analysis, you have to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml
Add the Flagger Helm repository:
helm repo add flagger https://flagger.app
Deploy Flagger in the istio-system namespace with Slack notifications enabled:
helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger
Note that Flagger can be installed in any namespace, as long as it can communicate with the Istio Prometheus service on port 9090.
Flagger comes with a Grafana dashboard built for canary analysis. Install Grafana in the istio-system namespace:
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me
Expose Grafana through the public gateway by creating a virtual service (replace example.com with your domain):
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana
Save the above resource as grafana-virtual-service.yaml and apply it:
kubectl apply -f ./grafana-virtual-service.yaml
Navigating to http://grafana.istio.example.com in your browser should take you to the Grafana login page.
Deploying web applications with Flagger
Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then creates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application on the service mesh and drive the canary analysis and promotion.
Create a test namespace with Istio sidecar injection enabled:
REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml
Create a deployment and a horizontal pod autoscaler:
kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml
Deploy the load testing service to generate traffic during the canary analysis:
helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test
Create a canary custom resource (replace example.com with your domain):
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
      - public-gateway.istio-system.svc.cluster.local
    hosts:
      - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
      - name: istio_requests_total
        threshold: 99
        interval: 30s
      - name: istio_request_duration_seconds_bucket
        threshold: 500
        interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
Save the above resource as podinfo-canary.yaml and apply it:
kubectl apply -f ./podinfo-canary.yaml
If it succeeds, the above analysis runs for five minutes, checking the HTTP metrics every half minute. You can determine the minimum time needed to validate and promote a canary deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented in the Flagger reference.
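Plugging the values from the canary spec above into that formula gives the expected duration (a quick sanity check with shell arithmetic; the variable names here are just for illustration):

```shell
# Promotion time for the spec above: interval * (maxWeight / stepWeight)
interval=30    # seconds per analysis step (interval: 30s)
maxWeight=50   # maximum canary traffic weight
stepWeight=5   # weight increase per step
promotion_time=$(( interval * (maxWeight / stepWeight) ))
echo "${promotion_time}"   # 300 seconds, i.e. five minutes
```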
After a few seconds, Flagger creates the canary objects:
# applied
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo
Open app.istio.example.com in your browser; you should see the version number of the demo application.
Automated canary analysis and promotion
Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance metrics such as the HTTP request success rate, average request duration, and pod health. Based on the KPI analysis, the canary is either promoted or aborted, and the result of the analysis is published to Slack.
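The success-rate KPI comes from the Istio request metrics stored in Prometheus. As a rough sketch (the label matchers below are assumptions based on standard Istio telemetry for the podinfo canary in the test namespace, not Flagger's verbatim query), the istio_requests_total check boils down to the percentage of non-5xx responses over the analysis interval:

```
sum(rate(istio_requests_total{
      reporter="destination",
      destination_workload_namespace="test",
      destination_workload="podinfo",
      response_code!~"5.*"}[30s]))
/
sum(rate(istio_requests_total{
      reporter="destination",
      destination_workload_namespace="test",
      destination_workload="podinfo"}[30s]))
* 100
```

If this ratio drops below the metric threshold of 99, the check counts as failed for that interval.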
A canary deployment is triggered when any of the following objects change:
- the deployment PodSpec (container image, command, ports, environment, etc.)
- ConfigMaps mounted as volumes or mapped to environment variables
- Secrets mounted as volumes or mapped to environment variables
Trigger a canary deployment by updating the container image:
kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1
Flagger detects that the deployment revision has changed and starts analysing it:
kubectl -n test describe canary/podinfo
Events:
New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test
During the analysis, you can track the canary's progress with Grafana.
Note that if new changes are applied to the deployment during the canary analysis, Flagger restarts the analysis phase.
List all the canaries in your cluster:
watch kubectl get canaries --all-namespaces
NAMESPACE NAME STATUS WEIGHT LASTTRANSITIONTIME
test podinfo Progressing 15 2019-01-16T14:05:07Z
prod frontend Succeeded 0 2019-01-15T16:15:07Z
prod backend Failed 0 2019-01-14T17:05:07Z
If you've enabled Slack notifications, you should receive messages like these:
Automated rollback
During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to check whether Flagger pauses the rollout.
Create a test pod and exec into it:
kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh
Generate HTTP 500 errors:
watch curl http://podinfo-canary:9898/status/500
Generate latency:
watch curl http://podinfo-canary:9898/delay/1
When the number of failed checks reaches the canary analysis threshold, traffic is routed back to the primary, the canary is scaled to zero, and the rollout is marked as failed.
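The spec values also bound how long a misbehaving canary can keep receiving traffic: one metric check runs per interval, and rollback happens after threshold failed checks. Simple arithmetic on the spec above (not a Flagger guarantee about detection latency) gives the worst case:

```shell
# Worst-case window of sustained failure before rollback:
# `threshold` failed checks, one per `interval`
threshold=10   # failed checks tolerated (threshold: 10)
interval=30    # seconds between checks (interval: 30s)
rollback_window=$(( threshold * interval ))
echo "${rollback_window}"   # 300 seconds of failing checks before rollback
```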
Canary errors and latency spikes are recorded as Kubernetes events and logged by Flagger in JSON format:
kubectl -n istio-system logs deployment/flagger -f | jq .msg
Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test
If you've enabled Slack notifications, you'll receive a message when the progress deadline is exceeded or when the maximum number of failed checks in the analysis is reached.
Wrapping up
Running a service mesh such as Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but workload deployments still depend on external tooling. Flagger aims to change that by building on Istio's capabilities.
Flagger is compatible with any CI/CD solution made for Kubernetes, and the canary analysis can easily be extended with webhooks.
Supporting Flagger
If you have suggestions for improving Flagger, please submit an issue or PR on GitHub.
Thanks!
Source: habr.com