Automated canary deployments with Flagger and Istio

Continuous delivery (CD) is recognized as an enterprise software practice and a natural evolution of well-established continuous integration principles. Yet CD remains quite rare, perhaps due to the complexity of managing it and the fear that failed deployments will affect system availability.

Flagger is an open source Kubernetes operator that aims to eliminate this complexity. It automates the promotion of canary deployments, using Istio traffic shifting and Prometheus metrics to analyse an application's behaviour during a controlled rollout.

Below is a step-by-step guide to setting up and using Flagger on Google Kubernetes Engine (GKE).

Setting up a Kubernetes cluster

Start by creating a GKE cluster with the Istio add-on (if you don't have a GCP account, you can sign up here for free credits).

Sign in to Google Cloud, create a project, and enable billing for it. Install the gcloud command-line utility and configure your project with gcloud init.

Set the default project, compute region, and zone (replace PROJECT_ID with your own project):

gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a

Enable the GKE service and create a cluster with the HPA and Istio add-ons:

gcloud services enable container.googleapis.com
K8S_VERSION=$(gcloud beta container get-server-config --format=json | jq -r '.validMasterVersions[0]')
gcloud beta container clusters create istio \
--cluster-version=${K8S_VERSION} \
--zone=us-central1-a \
--num-nodes=2 \
--machine-type=n1-standard-2 \
--disk-size=30 \
--enable-autorepair \
--no-enable-cloud-logging \
--no-enable-cloud-monitoring \
--addons=HorizontalPodAutoscaling,Istio \
--istio-config=auth=MTLS_PERMISSIVE

The command above will create a default node pool with two n1-standard-2 VMs (vCPU: 2, RAM: 7.5 GB, disk: 30 GB). Ideally you should isolate the Istio components from your workloads, but there is no easy way to run the Istio pods on a dedicated node pool: the Istio manifests are treated as read-only, and GKE will undo any modification, such as node affinity or pod anti-affinity.

Set up credentials for kubectl:

gcloud container clusters get-credentials istio

Create a cluster admin role binding:

kubectl create clusterrolebinding "cluster-admin-$(whoami)" \
--clusterrole=cluster-admin \
--user="$(gcloud config get-value core/account)"

Install the helm command-line tool:

brew install kubernetes-helm

Homebrew 2.0 is now also available for Linux.

Create a service account and a cluster role binding for Tiller:

kubectl -n kube-system create sa tiller && \
kubectl create clusterrolebinding tiller-cluster-rule \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller

Deploy Tiller in the kube-system namespace:

helm init --service-account tiller

You should consider securing the connection between Helm and Tiller with SSL. For more information on protecting your Helm installation, see docs.helm.sh.

Verify the setup:

kubectl -n istio-system get svc

In a few seconds, GCP should assign an external IP address to the istio-ingressgateway service.

Setting up the Istio ingress gateway

Create a static IP address named istio-gateway using the IP address of the Istio gateway:

export GATEWAY_IP=$(kubectl -n istio-system get svc/istio-ingressgateway -ojson | jq -r .status.loadBalancer.ingress[0].ip)
gcloud compute addresses create istio-gateway --addresses ${GATEWAY_IP} --region us-central1

You now need an internet domain and access to your DNS registrar. Add two A records (replace example.com with your own domain):

istio.example.com   A ${GATEWAY_IP}
*.istio.example.com A ${GATEWAY_IP}

Verify that the wildcard DNS entry works:

watch host test.istio.example.com

Create a generic Istio gateway to expose services outside the mesh over HTTP:

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: public-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"

Save the above resource as public-gateway.yaml and then apply it:

kubectl apply -f ./public-gateway.yaml

No production system should expose services on the internet without SSL. To secure the Istio ingress gateway with cert-manager, CloudDNS, and Let's Encrypt, please read the Flagger GKE docs.

Installing Flagger

The GKE Istio add-on does not include a Prometheus instance that scrapes the Istio telemetry service. Because Flagger uses Istio's HTTP metrics to perform the canary analysis, you need to deploy the following Prometheus configuration, similar to the one that ships with the official Istio Helm chart.

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/gke/istio-prometheus.yaml

Add the Flagger Helm repository:

helm repo add flagger https://flagger.app

Deploy Flagger in the istio-system namespace with Slack notifications enabled:

helm upgrade -i flagger flagger/flagger \
--namespace=istio-system \
--set metricsServer=http://prometheus.istio-system:9090 \
--set slack.url=https://hooks.slack.com/services/YOUR-WEBHOOK-ID \
--set slack.channel=general \
--set slack.user=flagger

You can install Flagger in any namespace as long as it can communicate with the Istio Prometheus service on port 9090.
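Flagger's checks ultimately boil down to PromQL queries over that same Istio telemetry. As an illustration only (the exact expressions are internal to Flagger), a success-rate query of roughly this shape can be run against the Prometheus endpoint Flagger is configured with:

```shell
# Build a success-rate query over Istio telemetry. This is the general
# shape only; the exact PromQL issued by Flagger is internal to the operator.
ns=test; workload=podinfo; window=30s
query="sum(rate(istio_requests_total{destination_workload_namespace=\"${ns}\",destination_workload=\"${workload}\",response_code!~\"5.*\"}[${window}])) / sum(rate(istio_requests_total{destination_workload_namespace=\"${ns}\",destination_workload=\"${workload}\"}[${window}])) * 100"
echo "${query}"
# With a port-forward to the in-cluster Prometheus
# (kubectl -n istio-system port-forward svc/prometheus 9090:9090)
# the query could be sent with:
# curl -sG http://localhost:9090/api/v1/query --data-urlencode "query=${query}"
```

Queries like this are also handy for debugging a canary that refuses to advance: if the expression returns no data, Flagger has no metrics to base its decision on.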

Flagger comes with a Grafana dashboard for canary analysis. Install Grafana in the istio-system namespace:

helm upgrade -i flagger-grafana flagger/grafana \
--namespace=istio-system \
--set url=http://prometheus.istio-system:9090 \
--set user=admin \
--set password=change-me

Expose Grafana through the public gateway by creating a virtual service (replace example.com with your own domain):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system
spec:
  hosts:
    - "grafana.istio.example.com"
  gateways:
    - public-gateway.istio-system.svc.cluster.local
  http:
    - route:
        - destination:
            host: flagger-grafana

Save the above resource as grafana-virtual-service.yaml and then apply it:

kubectl apply -f ./grafana-virtual-service.yaml

When navigating to http://grafana.istio.example.com in your browser, you should be redirected to the Grafana login page.

Deploying web applications with Flagger

Flagger takes a Kubernetes deployment and, optionally, a horizontal pod autoscaler (HPA), then generates a series of objects (Kubernetes deployments, ClusterIP services, and Istio virtual services). These objects expose the application to the service mesh and drive the canary analysis and promotion.

Create a test namespace with Istio sidecar injection enabled:

REPO=https://raw.githubusercontent.com/stefanprodan/flagger/master
kubectl apply -f ${REPO}/artifacts/namespaces/test.yaml

Create a deployment and a horizontal pod autoscaler:

kubectl apply -f ${REPO}/artifacts/canaries/deployment.yaml
kubectl apply -f ${REPO}/artifacts/canaries/hpa.yaml

Deploy the load-testing service to generate traffic during the canary analysis:

helm upgrade -i flagger-loadtester flagger/loadtester \
--namespace=test

Create a custom Canary resource (replace example.com with your own domain):

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  progressDeadlineSeconds: 60
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    port: 9898
    gateways:
    - public-gateway.istio-system.svc.cluster.local
    hosts:
    - app.istio.example.com
  canaryAnalysis:
    interval: 30s
    threshold: 10
    maxWeight: 50
    stepWeight: 5
    metrics:
    - name: istio_requests_total
      threshold: 99
      interval: 30s
    - name: istio_request_duration_seconds_bucket
      threshold: 500
      interval: 30s
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

Save the above resource as podinfo-canary.yaml and then apply it:

kubectl apply -f ./podinfo-canary.yaml

If it succeeds, the analysis above will run for five minutes, validating the HTTP metrics every thirty seconds. You can determine the minimum time needed to validate and promote a deployment with the following formula: interval * (maxWeight / stepWeight). The Canary CRD fields are documented here.
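Plugging the values from the spec above into that formula gives the five-minute figure:

```shell
# Minimum canary analysis duration: interval * (maxWeight / stepWeight),
# using the values from the Canary spec above.
interval=30   # seconds between metric checks
maxWeight=50  # maximum traffic percentage routed to the canary
stepWeight=5  # traffic percentage added per step
steps=$((maxWeight / stepWeight))   # 10 traffic-shift steps
duration=$((interval * steps))      # 300 seconds
echo "minimum analysis duration: ${duration}s ($((duration / 60)) minutes)"
```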

After a few seconds, Flagger will create the canary objects:

# applied 
deployment.apps/podinfo
horizontalpodautoscaler.autoscaling/podinfo
canary.flagger.app/podinfo
# generated 
deployment.apps/podinfo-primary
horizontalpodautoscaler.autoscaling/podinfo-primary
service/podinfo
service/podinfo-canary
service/podinfo-primary
virtualservice.networking.istio.io/podinfo

Open your browser and navigate to app.istio.example.com; you should see the version number of the demo app.

Automated canary analysis and promotion

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance indicators such as the HTTP request success rate, the average request duration, and pod health. Based on the KPI analysis, the canary is either promoted or aborted, and the results of the analysis are published to Slack.
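The gist of that control loop can be sketched as follows. This is a heavy simplification: the real operator is written in Go and queries Prometheus for the KPIs, which are stubbed here so the sketch runs standalone; the parameters mirror the Canary spec above.

```shell
# Simplified sketch of Flagger's promotion loop (not the actual implementation).
interval=30; threshold=10; maxWeight=50; stepWeight=5
weight=0; failed=0
while [ "${weight}" -lt "${maxWeight}" ]; do
  success_rate=100   # stub; Flagger measures this from Istio HTTP metrics
  if [ "${success_rate}" -lt 99 ]; then
    # KPI check failed: count it, and roll back once the threshold is hit
    failed=$((failed + 1))
    if [ "${failed}" -ge "${threshold}" ]; then
      echo "rolling back: failed checks threshold ${threshold} reached"
      break
    fi
  else
    # KPI check passed: shift more traffic to the canary
    weight=$((weight + stepWeight))
    echo "advance canary weight ${weight}"
  fi
  # sleep "${interval}"   # omitted so the sketch finishes instantly
done
[ "${weight}" -ge "${maxWeight}" ] && echo "promotion completed"
```

With the stubbed 100% success rate the sketch advances the weight in steps of 5 up to 50 and prints "promotion completed", mirroring the event log shown later in this guide.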

A canary deployment is triggered by changes to any of the following objects:

  • Deployment PodSpec (container image, command, ports, env, etc.)
  • ConfigMaps mounted as volumes or mapped to environment variables
  • Secrets mounted as volumes or mapped to environment variables

Trigger a canary deployment by updating the container image:

kubectl -n test set image deployment/podinfo \
podinfod=quay.io/stefanprodan/podinfo:1.4.1

Flagger detects that the deployment revision has changed and starts analysing it:

kubectl -n test describe canary/podinfo

Events:

New revision detected podinfo.test
Scaling up podinfo.test
Waiting for podinfo.test rollout to finish: 0 of 1 updated replicas are available
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Advance podinfo.test canary weight 20
Advance podinfo.test canary weight 25
Advance podinfo.test canary weight 30
Advance podinfo.test canary weight 35
Advance podinfo.test canary weight 40
Advance podinfo.test canary weight 45
Advance podinfo.test canary weight 50
Copying podinfo.test template spec to podinfo-primary.test
Waiting for podinfo-primary.test rollout to finish: 1 of 2 updated replicas are available
Promotion completed! Scaling down podinfo.test

During the analysis, the canary's progress can be tracked in Grafana:

Note that if new changes are applied to the deployment during the canary analysis, Flagger will restart the analysis.

List all the canaries in your cluster:

watch kubectl get canaries --all-namespaces
NAMESPACE   NAME      STATUS        WEIGHT   LASTTRANSITIONTIME
test        podinfo   Progressing   15       2019-01-16T14:05:07Z
prod        frontend  Succeeded     0        2019-01-15T16:15:07Z
prod        backend   Failed        0        2019-01-14T17:05:07Z

If you have enabled Slack notifications, you will receive messages like these:

Automated rollback

During the canary analysis, you can generate synthetic HTTP 500 errors and high response latency to see whether Flagger halts the deployment.

Create a test pod and run the following inside it:

kubectl -n test run tester \
--image=quay.io/stefanprodan/podinfo:1.2.1 \
-- ./podinfo --port=9898
kubectl -n test exec -it tester-xx-xx sh

Generate HTTP 500 errors:

watch curl http://podinfo-canary:9898/status/500

Generate latency:

watch curl http://podinfo-canary:9898/delay/1

When the number of failed checks reaches the threshold, traffic is routed back to the primary, the canary is scaled down to zero, and the deployment is marked as failed.

Canary failures and latency spikes are logged as Kubernetes events and recorded by Flagger in JSON format:

kubectl -n istio-system logs deployment/flagger -f | jq .msg

Starting canary deployment for podinfo.test
Advance podinfo.test canary weight 5
Advance podinfo.test canary weight 10
Advance podinfo.test canary weight 15
Halt podinfo.test advancement success rate 69.17% < 99%
Halt podinfo.test advancement success rate 61.39% < 99%
Halt podinfo.test advancement success rate 55.06% < 99%
Halt podinfo.test advancement success rate 47.00% < 99%
Halt podinfo.test advancement success rate 37.00% < 99%
Halt podinfo.test advancement request duration 1.515s > 500ms
Halt podinfo.test advancement request duration 1.600s > 500ms
Halt podinfo.test advancement request duration 1.915s > 500ms
Halt podinfo.test advancement request duration 2.050s > 500ms
Halt podinfo.test advancement request duration 2.515s > 500ms
Rolling back podinfo.test failed checks threshold reached 10
Canary failed! Scaling down podinfo.test

If you have enabled Slack notifications, you will receive a message when the progress deadline is exceeded or when the maximum number of failed checks is reached during the analysis:

Wrapping up

Running a service mesh such as Istio on top of Kubernetes gives you automatic metrics, logs, and traces, but deploying workloads still depends on external tooling. Flagger aims to change that by adding progressive delivery capabilities to Istio.

Flagger is compatible with any Kubernetes CI/CD solution, and the canary analysis can easily be extended with webhooks for running system integration/acceptance tests, load tests, or any other custom checks. Since Flagger is declarative and reacts to Kubernetes events, it can be used in GitOps pipelines together with Weave Flux or JenkinsX. If you use JenkinsX, you can install Flagger with the jx add-ons.

Flagger is sponsored by Weaveworks and powers the canary deployments in Weave Cloud. The project has been tested on GKE, EKS, and bare metal with kubeadm.

If you have suggestions for improving Flagger, please submit an issue or a PR on GitHub at stefanprodan/flagger. Contributions are more than welcome!

Thanks to Ray Tsang.

Source: www.habr.com