Kubernetes: why is it so important to configure resource management?
As a rule, an application always needs to be provided with dedicated resources in order to run correctly and stably. But what if several applications run on the same hardware? How do you provide each of them with the minimum necessary resources? How do you limit resource consumption? How do you distribute the load between nodes correctly? How do you make the horizontal scaling mechanism work when the application load grows?
You should start with the main types of resources that exist in the system — these are, of course, processor time and RAM. In k8s manifests these resource types are measured in the following units:
CPU — in cores
RAM — in bytes
For each resource, two types of requirements can be set: requests and limits. Requests describes the minimum amount of free node resources needed to run a container (and the pod as a whole), while limits sets a hard ceiling on the resources available to the container.
It is important to understand that a manifest does not have to define both types explicitly, and the behavior will be as follows:
If only the limits of a resource are explicitly specified, then the requests for this resource automatically take a value equal to limits (you can verify this by calling describe on the entity). In other words, the container will in fact be limited to the same amount of resources that it requires to run.
If only the requests are explicitly specified for a resource, then no upper restrictions are set on this resource — i.e. the container is limited only by the resources of the node itself.
It is also possible to configure resource management not only at the level of a specific container, but also at the namespace level, using the following entities:
LimitRange — describes the restriction policy at the container/pod level within a namespace, and is needed to set default limits on containers/pods, to prevent the creation of obviously fat/thin containers (or vice versa), to limit their number, and to define the possible difference between the values of limits and requests
ResourceQuota — describes the restriction policy for all containers in a namespace as a whole, and is used, as a rule, to delimit resources between environments (useful when environments are not strictly demarcated at the node level)
Here are examples of manifests that set resource limits:
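A minimal pod manifest of this kind might look as follows (the pod name and image tag are illustrative; the values match the behavior explained just below):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    resources:
      # minimum free resources the node must have to schedule this container
      requests:
        memory: "1Gi"
        cpu: "200m"
      # hard ceiling on CPU; no memory limit is set on purpose
      limits:
        cpu: "200m"
```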
That is, in this case, to run the container with nginx you will need at least 1G of free RAM and 0.2 CPU on the node, while at most the container can consume 0.2 CPU and all the RAM available on the node.
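A ResourceQuota sketch consistent with the totals explained just below (the object name is illustrative):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: nxs-test
  namespace: default
spec:
  hard:
    # ceilings on the sums of requests and limits across all containers in the ns
    requests.cpu: 300m
    requests.memory: 1Gi
    limits.cpu: 700m
    limits.memory: 2Gi
```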
That is, the sum of all container requests in the default ns cannot exceed 300m for CPU and 1G for RAM, and the sum of all limits cannot exceed 700m for CPU and 2G for RAM.
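A container-level LimitRange sketch consistent with the defaults and bounds explained just below (the object name is illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: nxs-limit-per-container
spec:
  limits:
  - type: Container
    # defaults applied when a container does not set its own values
    defaultRequest:
      cpu: 100m
      memory: 1Gi
    default:
      cpu: "1"
      memory: 2Gi
    # allowed bounds for explicitly set request/limit values
    min:
      cpu: 50m
      memory: 500Mi
    max:
      cpu: "2"
      memory: 4Gi
```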
That is, in the default namespace all containers will by default get a request of 100m for CPU and 1G for RAM, and a limit of 1 CPU and 2G. At the same time, a restriction is also set on the possible values of request/limit for CPU (50m < x < 2) and RAM (500M < x < 4G).
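A pod-level LimitRange sketch consistent with the per-pod ceiling described just below (the object name is illustrative):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: nxs-limit-pod
spec:
  limits:
  - type: Pod
    # ceiling on the total resources of all containers within one pod
    max:
      cpu: "4"
      memory: 1Gi
```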
That is, for each pod in the default ns there will be a limit of 4 vCPU and 1G.
Now I would like to tell you what advantages setting these restrictions can give us.
Load balancing mechanism between nodes
As you know, the k8s component called the scheduler is responsible for distributing pods among nodes, and it operates according to a specific algorithm. This algorithm goes through two stages when choosing the optimal node to launch a pod:
Filtering
Ranking
That is, according to the described policy, nodes are initially selected on which it is possible to launch the pod, based on a set of predicates (including whether the node has enough resources to run the pod — PodFitsResources). Then each of these nodes is awarded points according to priorities (including: the more free resources a node has, the more points it is assigned — LeastResourceAllocation/LeastRequestedPriority/BalancedResourceAllocation), and the pod is launched on the node with the most points (if several nodes satisfy this condition at once, a random one is selected).
At the same time, you need to understand that when assessing the available resources of a node, the scheduler is guided by the data stored in etcd — i.e. by the amount of requested/limit resources of each pod running on this node, but not by the actual resource consumption. This information can be obtained from the output of the command kubectl describe node $NODE. For example:
Here we see all the pods running on a particular node, as well as the resources each pod requests. And here is what the scheduler logs look like when the cronjob-cron-events-1573793820-xt6q9 pod is launched (this information appears in the scheduler log when logging level 10 is set in the launch command arguments, -v=10):
I1115 07:57:21.637791 1 scheduling_queue.go:908] About to try and schedule pod nxs-stage/cronjob-cron-events-1573793820-xt6q9
I1115 07:57:21.637804 1 scheduler.go:453] Attempting to schedule pod: nxs-stage/cronjob-cron-events-1573793820-xt6q9
I1115 07:57:21.638285 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s5 is allowed, Node is running only 16 out of 110 Pods.
I1115 07:57:21.638300 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s6 is allowed, Node is running only 20 out of 110 Pods.
I1115 07:57:21.638322 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s3 is allowed, Node is running only 20 out of 110 Pods.
I1115 07:57:21.638322 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s4 is allowed, Node is running only 17 out of 110 Pods.
I1115 07:57:21.638334 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, Node is running only 16 out of 110 Pods.
I1115 07:57:21.638365 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s12 is allowed, Node is running only 9 out of 110 Pods.
I1115 07:57:21.638334 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s11 is allowed, Node is running only 11 out of 110 Pods.
I1115 07:57:21.638385 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s1 is allowed, Node is running only 19 out of 110 Pods.
I1115 07:57:21.638402 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s2 is allowed, Node is running only 21 out of 110 Pods.
I1115 07:57:21.638383 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, Node is running only 16 out of 110 Pods.
I1115 07:57:21.638335 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, Node is running only 18 out of 110 Pods.
I1115 07:57:21.638408 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s13 is allowed, Node is running only 8 out of 110 Pods.
I1115 07:57:21.638478 1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, existing pods anti-affinity terms satisfied.
I1115 07:57:21.638505 1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, existing pods anti-affinity terms satisfied.
I1115 07:57:21.638577 1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, existing pods anti-affinity terms satisfied.
I1115 07:57:21.638583 1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s7 is allowed, Node is running only 25 out of 110 Pods.
I1115 07:57:21.638932 1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 9
I1115 07:57:21.638946 1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 8
I1115 07:57:21.638961 1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: BalancedResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 9
I1115 07:57:21.638971 1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7
I1115 07:57:21.638975 1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: LeastResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 8
I1115 07:57:21.638990 1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7
I1115 07:57:21.639022 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: TaintTolerationPriority, Score: (10)
I1115 07:57:21.639030 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: TaintTolerationPriority, Score: (10)
I1115 07:57:21.639034 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: TaintTolerationPriority, Score: (10)
I1115 07:57:21.639041 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: NodeAffinityPriority, Score: (0)
I1115 07:57:21.639053 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: NodeAffinityPriority, Score: (0)
I1115 07:57:21.639059 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: NodeAffinityPriority, Score: (0)
I1115 07:57:21.639061 1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: InterPodAffinityPriority, Score: (0)
I1115 07:57:21.639063 1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)
I1115 07:57:21.639073 1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: InterPodAffinityPriority, Score: (0)
I1115 07:57:21.639077 1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)
I1115 07:57:21.639085 1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: InterPodAffinityPriority, Score: (0)
I1115 07:57:21.639088 1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)
I1115 07:57:21.639103 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)
I1115 07:57:21.639109 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)
I1115 07:57:21.639114 1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)
I1115 07:57:21.639127 1 generic_scheduler.go:781] Host nxs-k8s-s10 => Score 100037
I1115 07:57:21.639150 1 generic_scheduler.go:781] Host nxs-k8s-s8 => Score 100034
I1115 07:57:21.639154 1 generic_scheduler.go:781] Host nxs-k8s-s9 => Score 100037
I1115 07:57:21.639267 1 scheduler_binder.go:269] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10"
I1115 07:57:21.639286 1 scheduler_binder.go:279] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10": all PVCs bound and nothing to do
I1115 07:57:21.639333 1 factory.go:733] Attempting to bind cronjob-cron-events-1573793820-xt6q9 to nxs-k8s-s10
Here we see that the scheduler initially filters and generates a list of 3 nodes on which the pod can be launched (nxs-k8s-s8, nxs-k8s-s9, nxs-k8s-s10). It then computes scores on several parameters (including BalancedResourceAllocation and LeastResourceAllocation) for each of these nodes in order to determine the most suitable node. Finally, the pod is scheduled on the node with the highest number of points (here two nodes have the same score of 100037, so a random one is chosen — nxs-k8s-s10).
Conclusion: if a node runs pods for which no restrictions are set, then for k8s (from the point of view of resource consumption) it will be as if there were no such pods on this node at all. Therefore, if you have a pod with a gluttonous process (for example, wowza) and no restrictions are set for it, a situation may arise where this pod has actually eaten all the resources of the node, yet for k8s this node is considered unloaded and will be awarded the same number of points when ranking (specifically, in the points assessing available resources) as a node with no running pods. This can ultimately lead to an uneven distribution of load between nodes.
Pod eviction
As you know, each pod is assigned one of three QoS classes:
Guaranteed — assigned when every container in the pod has a request and a limit specified for both memory and cpu, and these values must match
Burstable — at least one container in the pod has a request and a limit, with request < limit
Best effort — when not a single container in the pod has resource restrictions
At the same time, when a node experiences a lack of resources (disk, memory), the kubelet begins to rank and evict pods according to a specific algorithm that takes into account the priority of the pod and its QoS class. For example, if we are talking about RAM, then based on the QoS class, points are awarded according to the following principle:
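As a sketch of this principle, based on the kubelet's documented oom_score_adj assignment per QoS class (the exact constants may vary between Kubernetes versions):

```
Guaranteed  -> oom_score_adj = -998
BestEffort  -> oom_score_adj = 1000
Burstable   -> oom_score_adj = min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999)
```

The higher the score, the sooner the process is killed under memory pressure, which is why Burstable pods with larger memory requests end up with lower scores.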
That is, given the same priority, the kubelet will first evict pods with the best effort QoS class from the node.
Conclusion: if you want to reduce the probability of a desired pod being evicted from a node when it runs out of resources, then along with the priority, you also need to take care of setting the request/limit for it.
Mechanism for horizontal autoscaling of application pods (HPA)
When the task is to automatically increase and decrease the number of pods depending on resource usage (system metrics — CPU/RAM, or user metrics — rps), a k8s entity such as HPA (Horizontal Pod Autoscaler) can help. Its algorithm is as follows:
The desired value for the resource is determined (desiredMetricValue); for system resources it is set relative to the resource request
The current number of replicas is determined (currentReplicas)
The following formula computes the desired number of replicas (desiredReplicas):
desiredReplicas = [ currentReplicas * ( currentMetricValue / desiredMetricValue ) ]
In this case, scaling will not occur when the coefficient (currentMetricValue / desiredMetricValue) is close to 1 (we can set the permissible tolerance ourselves; by default it is 0.1).
Let's look at how hpa works using the example of the app-test application (described as a Deployment), where the number of replicas must be changed depending on CPU consumption:
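A Deployment sketch consistent with this setup (the image tags and the CPU request values are illustrative assumptions; only the overall shape — two replicas, two containers with CPU requests — is taken from the description below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-test
spec:
  replicas: 2
  selector:
    matchLabels:
      app: app-test
  template:
    metadata:
      labels:
        app: app-test
    spec:
      containers:
      - name: nginx
        image: nginx
        resources:
          requests:
            cpu: 60m    # illustrative value
      - name: nginx-exporter
        image: nginx/nginx-prometheus-exporter
        resources:
          requests:
            cpu: 30m    # illustrative value
```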
That is, we see that the application pod is initially launched in two instances, each of which contains two containers, nginx and nginx-exporter, and each container has defined requests for CPU.
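An HPA manifest matching this setup can be sketched as follows (using the autoscaling/v1 API; the object name is illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: app-test-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app-test
  minReplicas: 2
  maxReplicas: 10
  # target average CPU utilization as a percentage of the pods' requests
  targetCPUUtilizationPercentage: 30
```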
That is, we created an HPA that will watch the app-test Deployment and regulate the number of pods of the application based on the CPU indicator (we expect a pod to consume 30% of the CPU it requests), with the number of replicas in the range 2-10.
Now let's look at the hpa mechanism in operation if we apply load to one of the pods:
# kubectl top pod
NAME                        CPU(cores)   MEMORY(bytes)
app-test-78559f8f44-pgs58   101m         243Mi
app-test-78559f8f44-cj4jz   4m           240Mi
In total we have the following:
The desired value (desiredMetricValue) — according to the hpa settings, we have 30%
The current value (currentMetricValue) — for the calculation, the controller-manager computes the average resource consumption in %, i.e. it conditionally does the following:
Gets the absolute pod metric values from the metrics server, i.e. 101m and 4m
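The rest of the calculation can be sketched with hypothetical numbers: assuming each pod requests 100m of CPU in total (an illustrative figure, not taken from the manifests above), the controller-manager's arithmetic and the resulting desiredReplicas would be:

```python
import math

# Absolute usage per pod, in millicores (from `kubectl top pod`: 101m and 4m)
pod_usage_m = [101, 4]
# Hypothetical total CPU request per pod, in millicores
pod_request_m = 100
# desiredMetricValue: 30% of the request, per the HPA settings
desired_utilization = 0.30

current_replicas = len(pod_usage_m)
# Average absolute usage across replicas: (101 + 4) / 2 = 52.5m
avg_usage_m = sum(pod_usage_m) / current_replicas
# currentMetricValue as a fraction of the request: 52.5 / 100 = 0.525
current_utilization = avg_usage_m / pod_request_m

# desiredReplicas = ceil(currentReplicas * currentMetricValue / desiredMetricValue)
desired_replicas = math.ceil(
    current_replicas * (current_utilization / desired_utilization)
)
print(desired_replicas)  # 4
```

Since 0.525 / 0.30 = 1.75 is well outside the default 0.1 tolerance around 1, the HPA would scale the Deployment from 2 to 4 replicas.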
Having hpa configured alone is not enough to neutralize the negative impact on the system during load surges. For example, according to its settings, the hpa controller decides that the number of replicas needs to be doubled, but the nodes have no free resources to run that many pods (i.e. a node cannot provide the resources requested by the pod's requests), and these pods get stuck in the Pending state.
In this case, if the provider has a corresponding IaaS/PaaS offering (for example, GKE/GCE, AKS, EKS, etc.), a tool like Cluster Autoscaler can help. It allows you to set the maximum and minimum number of nodes in the cluster and automatically adjust the current number of nodes (by calling the cloud provider's API to order/remove a node) when the cluster lacks resources and pods cannot be scheduled (i.e. are in the Pending state).
Conclusion: to be able to autoscale nodes, you need to set requests in pod containers so that k8s can correctly assess the load on the nodes and accordingly report that there are no resources left in the cluster to launch the next pod.
Conclusion
It should be noted that setting container resource limits is not a requirement for an application to run successfully, but it is still better to do so, for the following reasons:
For more accurate scheduler operation in terms of load balancing between k8s nodes
To reduce the likelihood of a "pod eviction" event occurring
For horizontal autoscaling of application pods (HPA) to work
For horizontal autoscaling of nodes (Cluster Autoscaling) with cloud providers