Autoscaling Kubernetes Applications Using Prometheus and KEDA

Scalability is a key requirement for cloud applications. With Kubernetes, scaling an application is as simple as increasing the number of replicas for the corresponding Deployment or ReplicaSet - but it is a manual process.

Kubernetes allows applications to be scaled automatically (that is, the Pods in a Deployment or ReplicaSet) in a declarative way using the Horizontal Pod Autoscaler specification. The default criterion for automatic scaling is CPU usage metrics (resource metrics), but custom and externally provided metrics can be integrated as well.
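
For reference, the default resource-metric flavour of autoscaling is declared with a plain HorizontalPodAutoscaler; a minimal sketch is shown below. The Deployment name my-app is a placeholder for illustration only and is not part of this article's example.

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa        # placeholder name, for illustration only
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # placeholder Deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out when average CPU usage exceeds 70%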

The Kubernetes aaS team at Mail.ru has translated an article about how to use external metrics to automatically scale a Kubernetes application. To show how everything works, the author uses HTTP access request metrics, collected with the help of Prometheus.

Instead of plain horizontal pod autoscaling, Kubernetes Event Driven Autoscaling (KEDA), an open-source Kubernetes operator, is used. It integrates natively with the Horizontal Pod Autoscaler to provide smooth autoscaling (including to/from zero) for event-driven workloads. The code is available on GitHub.

A brief overview of the setup

(Diagram: autoscaling a Kubernetes application with Prometheus and KEDA)

The diagram gives a brief description of how everything works:

  1. The application exposes HTTP hit metrics in Prometheus format.
  2. Prometheus is configured to collect these metrics.
  3. The Prometheus scaler in KEDA is configured to automatically scale the application based on the number of HTTP hits.

Now I will describe each element in detail.

KEDA and Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit and part of the Cloud Native Computing Foundation. It collects metrics from various sources and stores them as time-series data. To visualize the data you can use Grafana or other visualization tools that work with the Prometheus API.

KEDA supports the concept of a scaler: it acts as a bridge between KEDA and an external system. Each scaler implementation is specific to its target system and extracts data from it. KEDA then uses that data to drive the autoscaling.

Scalers support multiple data sources, for example Kafka, Redis and Prometheus. This means KEDA can be used to automatically scale Kubernetes Deployments using Prometheus metrics as the criteria.

The test application

The Go test application exposes access over HTTP and performs two important tasks:

  1. It uses the Prometheus Go client library to instrument the application and expose the http_requests metric, which holds a hit counter. The endpoint where the Prometheus metrics are available is at the /metrics URI.
    var httpRequestsCounter = promauto.NewCounter(prometheus.CounterOpts{
           Name: "http_requests",
           Help: "number of http requests",
       })
    
  2. In response to a GET request, the application increments the value of a key (access_count) in Redis. This is an easy way to do some work as part of the HTTP handler and also to check the Prometheus metrics: the metric value must be the same as the access_count value in Redis.
    func main() {
           http.Handle("/metrics", promhttp.Handler())
           // /test increments both the Redis counter and the Prometheus counter on every request.
           http.HandleFunc("/test", func(w http.ResponseWriter, r *http.Request) {
               // Increment the Prometheus http_requests counter when the handler returns.
               defer httpRequestsCounter.Inc()
               // Increment the access_count key in Redis.
               count, err := client.Incr(redisCounterName).Result()
               if err != nil {
                   fmt.Println("Unable to increment redis counter", err)
                   os.Exit(1)
               }
               resp := "Accessed on " + time.Now().String() + "\nAccess count " + strconv.Itoa(int(count))
               w.Write([]byte(resp))
           })
           http.ListenAndServe(":8080", nil)
       }
    

The application is deployed to Kubernetes via a Deployment. A ClusterIP Service is also created; it allows the Prometheus server to collect the application's metrics.

Here is the deployment manifest for the application.
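
The actual manifest is available at the link in the original article; purely for orientation, a minimal sketch of what it could look like is given below. The container image name is a placeholder, and the labels are inferred from the Service name and from the Prometheus relabel configuration shown later, so treat this as an approximation rather than the repository's exact manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-prom-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: go-prom-app
  template:
    metadata:
      labels:
        app: go-prom-app
    spec:
      containers:
      - name: go-prom-app
        image: example.registry/go-prom-app:latest   # placeholder image reference
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: go-prom-app-service
  labels:
    run: go-prom-app-service   # the label the Prometheus relabel config matches on
spec:
  type: ClusterIP
  selector:
    app: go-prom-app
  ports:
  - port: 8080
    targetPort: 8080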

The Prometheus server

The Prometheus deployment manifest consists of:

  • ConfigMap - to provide the Prometheus configuration;
  • Deployment - to run Prometheus in the Kubernetes cluster;
  • ClusterIP - a service for accessing the Prometheus UI;
  • ClusterRole, ClusterRoleBinding and ServiceAccount - for automatic discovery of services in Kubernetes (service discovery).

Here is the manifest for running Prometheus.
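
The full manifest is likewise available at the link; for orientation, the service-discovery RBAC part typically looks like the sketch below. These are the standard Prometheus read permissions, bound here to the default ServiceAccount to match the serviceaccount/default configured output further down; it is an approximation, not a verbatim copy of the repository's manifest.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: default
  namespace: default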

KEDA Prometheus ScaledObject

As described above, the scaler acts as a bridge between KEDA and the external system from which the metrics have to be obtained. ScaledObject is a custom resource that needs to be deployed in order to synchronize the Deployment with an event source, in this case Prometheus.

A ScaledObject contains information about scaling the Deployment, event-source metadata (such as connection secrets or a queue name), the polling interval, the cooldown period, and other data. It results in the corresponding autoscaling resource (an HPA definition) that scales the Deployment.

When a ScaledObject is deleted, the corresponding HPA definition is cleaned up.

Here is the ScaledObject definition for our example; it uses the Prometheus scaler:

apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
 name: prometheus-scaledobject
 namespace: default
 labels:
   deploymentName: go-prom-app
spec:
 scaleTargetRef:
   deploymentName: go-prom-app
 pollingInterval: 15
 cooldownPeriod:  30
 minReplicaCount: 1
 maxReplicaCount: 10
 triggers:
 - type: prometheus
   metadata:
      serverAddress: http://prometheus-service.default.svc.cluster.local:9090
     metricName: access_frequency
     threshold: '3'
     query: sum(rate(http_requests[2m]))

Note the following points:

  1. It points to the Deployment named go-prom-app.
  2. The trigger type is Prometheus. The Prometheus server address is specified along with the metric name, the threshold and the PromQL query that will be used. The PromQL query is sum(rate(http_requests[2m])).
  3. According to pollingInterval, KEDA queries Prometheus for the target every fifteen seconds. At least one pod is kept (minReplicaCount), and the maximum number of pods does not exceed maxReplicaCount (ten in this example).

minReplicaCount can be set to zero. In that case KEDA activates the Deployment from zero to one replica and then hands it over to the HPA for further autoscaling. The reverse sequence is also possible, i.e. scaling from one to zero. In this example we did not choose zero, because this is an HTTP service and not an on-demand system.
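
If you do want scale-to-zero behaviour, the only change to the ScaledObject above would be the minReplicaCount field, roughly as in this sketch:

 minReplicaCount: 0   # KEDA scales the Deployment down to zero replicas while the metric source is inactive
 maxReplicaCount: 10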

The magic behind the autoscaling

The threshold is used as the trigger for scaling the Deployment. In our example, the PromQL query sum(rate(http_requests[2m])) returns the aggregated HTTP request rate (requests per second) measured over the last two minutes.

Since the threshold value is three, there will be one pod as long as the value of sum(rate(http_requests[2m])) is less than three. If the value grows, an additional pod is added each time sum(rate(http_requests[2m])) increases by three. For example, if the value is between 12 and 14, the number of pods is four.

Now let's try to set it up!

Prerequisites

All you need is a Kubernetes cluster and a configured kubectl. This example uses a minikube cluster, but you can take any other one. There is a guide for installing a cluster.

Install the latest version (the command below downloads the Linux binary; on a Mac, use the corresponding darwin-amd64 build instead):

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube
sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/

Install kubectl to access the Kubernetes cluster.

Install the latest version on Mac:

curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version

Installing KEDA

There are several ways to deploy KEDA; they are listed in the documentation. I am using the monolithic YAML:

kubectl apply -f https://raw.githubusercontent.com/kedacore/keda/master/deploy/KedaScaleController.yaml

KEDA and its components are installed into the keda namespace. Command to check:

kubectl get pods -n keda

Wait for the KEDA operator to start and enter the Running state. After that, continue.

Installing Redis using Helm

If you don't have Helm installed, use this guide. Command to install it on Mac:

brew install kubernetes-helm
helm init --history-max 200

helm init initializes the command-line interface and also installs Tiller into the Kubernetes cluster.

kubectl get pods -n kube-system | grep tiller

Wait for the Tiller pod to enter the Running state.

Translator's note: the author uses Helm@2, which requires the Tiller server component to be installed. Helm@3 is now current and does not need a server-side component.

After installing Helm, a single command is enough to start Redis:

helm install --name redis-server --set cluster.enabled=false --set usePassword=false stable/redis

Confirm that Redis started successfully:

kubectl get pods/redis-server-master-0

Wait for Redis to go into the Running state.

Deploying the application

Command to deploy:

kubectl apply -f go-app.yaml

//output
deployment.apps/go-prom-app created
service/go-prom-app-service created

Check that everything has started:

kubectl get pods -l=app=go-prom-app

Wait for the pod to enter the Running state.

Deploying the Prometheus server

The Prometheus manifest uses Kubernetes Service Discovery for Prometheus. It allows dynamic discovery of the application pods based on the service label.

kubernetes_sd_configs:
- role: service
relabel_configs:
- source_labels: [__meta_kubernetes_service_label_run]
  regex: go-prom-app-service
  action: keep

To deploy:

kubectl apply -f prometheus.yaml

//output
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/default configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prom-conf created
deployment.extensions/prometheus-deployment created
service/prometheus-service created

Check that everything has started:

kubectl get pods -l=app=prometheus-server

Wait for Prometheus to go into the Running state.

Use kubectl port-forward to access the Prometheus UI (or the API server) at http://localhost:9090.

kubectl port-forward service/prometheus-service 9090

Deploying the KEDA autoscaling configuration

Command to create the ScaledObject:

kubectl apply -f keda-prometheus-scaledobject.yaml

Check the KEDA operator logs:

KEDA_POD_NAME=$(kubectl get pods -n keda -o=jsonpath='{.items[0].metadata.name}')
kubectl logs $KEDA_POD_NAME -n keda

The output looks something like this:

time="2019-10-15T09:38:28Z" level=info msg="Watching ScaledObject:
default/prometheus-scaledobject"
time="2019-10-15T09:38:28Z" level=info msg="Created HPA with 
namespace default and name keda-hpa-go-prom-app"

Check the application pods. One instance must be running, since minReplicaCount equals 1:

kubectl get pods -l=app=go-prom-app

Confirm that the HPA resource was created successfully:

kubectl get hpa

You should see something like this:

NAME                   REFERENCE                TARGETS     MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-go-prom-app   Deployment/go-prom-app   0/3 (avg)   1         10        1          45s

Health check: accessing the application

To access our application's REST endpoint, run:

kubectl port-forward service/go-prom-app-service 8080

You can now access your Go application at http://localhost:8080. To do this, run the command:

curl http://localhost:8080/test

The output looks something like this:

Accessed on 2019-10-21 11:29:10.560385986 +0000 UTC m=+406004.817901246
Access count 1

At this point, also check Redis. You will see that the access_count key has increased to 1:

kubectl exec -it redis-server-master-0 -- redis-cli get access_count
//output
"1"

Make sure the http_requests metric value is the same:

curl http://localhost:8080/metrics | grep http_requests
//output
# HELP http_requests number of http requests
# TYPE http_requests counter
http_requests 1

Generating load

We will use hey, a utility for generating load:

curl -o hey https://storage.googleapis.com/hey-release/hey_darwin_amd64 && chmod a+x hey

You can also download the utility for Linux or Windows.

Run it:

./hey http://localhost:8080/test

By default, the utility sends 200 requests. You can confirm this using the Prometheus metrics as well as Redis.

curl http://localhost:8080/metrics | grep http_requests
//output
# HELP http_requests number of http requests
# TYPE http_requests counter
http_requests 201
kubectl exec -it redis-server-master-0 -- redis-cli get access_count
//output
201

Verify the actual metric value (returned by the PromQL query):

curl -g 'http://localhost:9090/api/v1/query?query=sum(rate(http_requests[2m]))'
//output
{"status":"success","data":{"resultType":"vector","result":[{"metric":{},"value":[1571734214.228,"1.686057971014493"]}]}}

In this case the actual result is 1.686057971014493 and it appears in the value field. This is not enough for scaling, since the threshold we set is 3.

More load!

In a new terminal, monitor the number of application pods:

kubectl get pods -l=app=go-prom-app -w

Let's increase the load using the command:

./hey -n 2000 http://localhost:8080/test

After a while, you will see the HPA scaling the Deployment up and launching new pods. Check your HPA to make sure:

kubectl get hpa
NAME                   REFERENCE                TARGETS         MINPODS   MAXPODS   REPLICAS   AGE
keda-hpa-go-prom-app   Deployment/go-prom-app   1830m/3 (avg)   1         10        6          4m22s

If the load is not sustained, the Deployment is scaled back down to the point where only a single pod is running. If you want to check the actual metric (returned by the PromQL query), use the command:

curl -g 'http://localhost:9090/api/v1/query?query=sum(rate(http_requests[2m]))'

Cleanup

//Delete KEDA
kubectl delete namespace keda
//Delete the app, Prometheus server and KEDA scaled object
kubectl delete -f .
//Delete Redis
helm del --purge redis-server

Conclusion

KEDA lets you automatically scale your Kubernetes Deployments (to/from zero) based on data from external metrics, for example Prometheus metrics, queue length in Redis, or consumer lag in a Kafka topic.

KEDA integrates with the external source and also exposes its metrics through the Metrics Server to the Horizontal Pod Autoscaler.

Good luck!

What else to read:

  1. Best practices and recommendations for running containers and Kubernetes in production environments.
  2. 90+ useful tools for Kubernetes: deployment, management, monitoring, security and more.
  3. Our Kubernetes channel in Telegram.

Source: www.habr.com
