Scalability is a key requirement for cloud applications. With Kubernetes, scaling an application is as simple as increasing the number of replicas for the corresponding Deployment or ReplicaSet, but it is a manual process.

Kubernetes allows applications to be scaled automatically (i.e. the Pods in a Deployment or ReplicaSet) in a declarative way using the Horizontal Pod Autoscaler specification. The default criterion for autoscaling is CPU usage (resource metrics), but custom and externally provided metrics can be integrated as well.

The Kubernetes aaS team at Mail.ru translated an article on how to use external metrics to automatically scale a Kubernetes application. To show how everything works, the author uses HTTP access request metrics, collected with Prometheus.

Instead of horizontally autoscaling the pods directly, Kubernetes Event Driven Autoscaling (KEDA), an open-source Kubernetes operator, is used. It integrates natively with the Horizontal Pod Autoscaler to provide smooth autoscaling (including to/from zero) for event-driven workloads. The code is available on GitHub.
A Brief Overview of the Setup

The diagram shows a short description of how everything works:

The application exposes HTTP hit metrics in the Prometheus format.
Prometheus is configured to collect those metrics.
The Prometheus scaler in KEDA is configured to autoscale the application based on the number of HTTP hits.

Now I will describe each element in detail.
KEDA and Prometheus

Prometheus is an open-source systems monitoring and alerting toolkit and a part of the Cloud Native Computing Foundation. It collects metrics from various sources and stores them as time-series data. To visualize the data you can use Grafana or other visualization tools that work with the Prometheus API.

KEDA supports the concept of a scaler: it acts as a bridge between KEDA and an external system. Each scaler implementation is specific to its target system and extracts data from it. KEDA then uses that data to drive autoscaling.

Scalers support multiple data sources, for example Kafka, Redis, and Prometheus. This means KEDA can be used to automatically scale Kubernetes Deployments using Prometheus metrics as the criteria.

Test Application

The Golang test application is accessed over HTTP and performs two important functions:

It uses the Prometheus Go client library to instrument the application and expose the http_requests metric, which holds a hit count. The endpoint where the Prometheus metrics are available is the /metrics URI.
var httpRequestsCounter = promauto.NewCounter(prometheus.CounterOpts{
    Name: "http_requests",
    Help: "number of http requests",
})
In response to a GET request, the application increments the value of a key (access_count) in Redis. This is an easy way to do some work as part of the HTTP handler and, at the same time, to check the Prometheus metric: its value must be the same as the access_count value in Redis.
The application is deployed to Kubernetes via a Deployment. A Service of type ClusterIP is also created; it allows the Prometheus server to scrape the application's metrics.

The scaler acts as a bridge between KEDA and the external system from which the metrics need to be obtained. ScaledObject is a custom resource that needs to be deployed to synchronize a Deployment with an event source, in this case Prometheus.

The ScaledObject contains the deployment scaling information, the event source metadata (such as connection secrets or a queue name), the polling interval, the cooldown period, and other data. It results in a corresponding autoscaling resource (an HPA definition) that scales the deployment.

When a ScaledObject is deleted, the corresponding HPA definition is cleaned up along with it.

Here is the ScaledObject definition for our example; it uses a Prometheus scaler:
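A sketch of such a definition, assembled from the fields discussed below; the apiVersion, the metricName access_frequency, and the serverAddress are assumptions based on KEDA v1 conventions and on the resource names used elsewhere in this walkthrough (prometheus-scaledobject, go-prom-app, prometheus-service):

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: prometheus-scaledobject
  namespace: default
  labels:
    deploymentName: go-prom-app
spec:
  scaleTargetRef:
    deploymentName: go-prom-app
  pollingInterval: 15
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus-service.default.svc.cluster.local:9090
        metricName: access_frequency
        threshold: "3"
        query: sum(rate(http_requests[2m]))
```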
The trigger type is prometheus. The Prometheus server address is specified along with the metric name, the threshold, and the PromQL query to be used. The PromQL query is sum(rate(http_requests[2m])).

According to pollingInterval, KEDA queries Prometheus for the target every fifteen seconds. At least one pod is kept running (minReplicaCount), and the number of pods never exceeds maxReplicaCount (ten, in this example).

minReplicaCount can also be set to zero. In that case KEDA activates the deployment from zero to one replica and then hands it over to the HPA for further autoscaling. The reverse sequence is also possible, i.e. scaling from one back to zero. We did not choose zero in this example because this is an HTTP service rather than an on-demand system.
The Magic Inside Autoscaling

The threshold is used as the trigger to scale the deployment. In our example, the PromQL query sum(rate(http_requests[2m])) returns the aggregated HTTP request rate (in requests per second), measured over the last two minutes.

Since the threshold value is three, there will be one pod as long as the value of sum(rate(http_requests[2m])) stays below three. As the value grows, an additional pod is added each time sum(rate(http_requests[2m])) increases by another three. For example, when the value reaches 12, four pods are running (ceil(12/3) = 4).
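The rule above can be sketched as a small Go function; the name desiredReplicas and the explicit clamping are illustrative (the real computation is done by the HPA controller, which rounds the ratio of metric value to threshold up):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas mirrors the HPA formula that a KEDA trigger feeds:
// desired = ceil(metricValue / threshold), clamped to the configured
// minReplicaCount / maxReplicaCount bounds.
func desiredReplicas(metricValue, threshold float64, minReplicas, maxReplicas int) int {
	n := int(math.Ceil(metricValue / threshold))
	if n < minReplicas {
		return minReplicas
	}
	if n > maxReplicas {
		return maxReplicas
	}
	return n
}

func main() {
	fmt.Println(desiredReplicas(1.69, 3, 1, 10)) // below the threshold: 1 pod
	fmt.Println(desiredReplicas(12, 3, 1, 10))   // ceil(12/3) = 4 pods
	fmt.Println(desiredReplicas(50, 3, 1, 10))   // capped at maxReplicaCount: 10
}
```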
Now let's try to set it all up!

Prerequisites

All you need is a Kubernetes cluster and a configured kubectl. This example uses a minikube cluster, but you can take any other one. A guide for installing a cluster is available.

Deploy the Prometheus server:
kubectl apply -f prometheus.yaml
//output
clusterrole.rbac.authorization.k8s.io/prometheus created
serviceaccount/default configured
clusterrolebinding.rbac.authorization.k8s.io/prometheus created
configmap/prom-conf created
deployment.extensions/prometheus-deployment created
service/prometheus-service created
Check that everything has started:
kubectl get pods -l=app=prometheus-server
Wait until the Prometheus pod reaches the Running state.

Use kubectl port-forward to access the Prometheus web UI (or API server) at http://localhost:9090.
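For example (the Service name comes from the prometheus.yaml output above; mapping to local port 9090 is an assumption matching Prometheus's default port):

```shell
kubectl port-forward service/prometheus-service 9090:9090
```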
Check the KEDA operator logs:

KEDA_POD_NAME=$(kubectl get pods -n keda -o=jsonpath='{.items[0].metadata.name}')
kubectl logs $KEDA_POD_NAME -n keda

The output looks something like this:
time="2019-10-15T09:38:28Z" level=info msg="Watching ScaledObject:
default/prometheus-scaledobject"
time="2019-10-15T09:38:28Z" level=info msg="Created HPA with
namespace default and name keda-hpa-go-prom-app"
Check the application pods. Exactly one instance must be running, since minReplicaCount equals 1:
kubectl get pods -l=app=go-prom-app
Confirm that the HPA resource was created successfully:
kubectl get hpa
You should see something like this:
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-hpa-go-prom-app Deployment/go-prom-app 0/3 (avg) 1 10 1 45s
Now you can access the Go application at http://localhost:8080. To do this, run the command:
curl http://localhost:8080/test
The output looks something like this:
Accessed on 2019-10-21 11:29:10.560385986 +0000 UTC m=+406004.817901246
Access count 1
At this point, check Redis as well. You will see that the access_count key has increased to 1:
kubectl exec -it redis-server-master-0 -- redis-cli get access_count
//output
"1"
Make sure that the value of the http_requests metric is the same:
curl http://localhost:8080/metrics | grep http_requests
//output
# HELP http_requests number of http requests
# TYPE http_requests counter
http_requests 1
In this case the actual result is 1.686057971014493, and it is shown in the value field. This is not enough to trigger scaling, since the threshold we set is 3.
More Load!

In a new terminal, watch the number of application pods:
kubectl get pods -l=app=go-prom-app -w
Let's increase the load using the hey load generator:
./hey -n 2000 http://localhost:8080/test
After a while, you will see the HPA scale the deployment and launch new pods. Check your HPA to verify:
kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
keda-hpa-go-prom-app Deployment/go-prom-app 1830m/3 (avg) 1 10 6 4m22s
If the load does not persist, the deployment scales back down until only a single pod is running. If you want to check the actual metric (as returned by the PromQL query), use this command:
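One way to do this is to query the Prometheus HTTP API directly through the port-forward opened earlier; the /api/v1/query endpoint is standard Prometheus, with the PromQL brackets URL-encoded (a sketch, not necessarily the exact command from the original):

```shell
curl -s 'http://localhost:9090/api/v1/query?query=sum(rate(http_requests%5B2m%5D))'
```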
When you are done, clean up the resources:

//Delete KEDA
kubectl delete namespace keda
//Delete the app, Prometheus server and KEDA scaled object
kubectl delete -f .
//Delete Redis
helm del --purge redis-server
Conclusion

KEDA lets you automatically scale your Kubernetes deployments (to/from zero) based on data from external metrics: for example, Prometheus metrics, queue length in Redis, or consumer lag on a Kafka topic.

KEDA integrates with the external source and also serves its metrics through the Metrics Server to the Horizontal Pod Autoscaler.