Kubernetes: why is it important to configure system resource management?

As a rule, there is always a need to provide an application with a dedicated pool of resources for its correct and stable operation. But what if several applications are running on the same capacity? How do you provide each of them with the minimum required resources? How can you limit resource consumption? How do you distribute the load between nodes? How do you make sure the horizontal scaling mechanism kicks in when the application load grows?

You need to start with the main types of resources that exist in the system - these are, of course, CPU time and RAM. In k8s these resource types are measured in the following units:

  • CPU - in cores
  • RAM - in bytes

Moreover, for each resource it is possible to set two kinds of requirements - requests and limits. Requests describe the minimum amount of free resources a node must have to run a container (and the pod as a whole), while limits set a hard cap on the resources available to the container.

It is important to understand that a manifest does not have to define both kinds explicitly; the behavior will be as follows:

  • If only the limits of a resource are explicitly specified, then the requests for this resource automatically take a value equal to the limits (you can verify this by calling describe on the entity). That is, in effect the container will be limited to the same amount of resources that it requires to run. A minimal illustration is given right after this list.
  • If only the requests are explicitly specified for a resource, then no upper restriction is set on this resource - that is, the container is limited only by the resources of the node itself.
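As a small hedged illustration of the first case (the container name and values here are made up, not taken from the original examples), a spec that sets only limits ends up with identical requests, which you can confirm with kubectl describe pod:

    containers:
    - name: app            # hypothetical container name
      image: nginx
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        # no requests specified: the API server copies the limits into the requests,
        # so `kubectl describe pod <pod>` will show Requests equal to Limits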

Resource management can also be configured not only at the level of an individual container, but also at the namespace level, using the following entities:

  • LimitRange - describes the restriction policy at the container/pod level within an ns; it is needed to set default limits for containers/pods, to prevent the creation of obviously fat containers/pods (or the opposite), to cap their number, and to define the allowed difference between the values in limits and requests
  • ResourceQuota - describes the restriction policy for all containers in an ns and is used, as a rule, to delimit resources between environments (useful when environments are not strictly separated at the node level)

Below are examples of manifests that set resource limits:

  • At the level of a specific container:

    containers:
    - name: app-nginx
      image: nginx
      resources:
        requests:
          memory: 1Gi
        limits:
          cpu: 200m

    That is, in this case, to run a container with nginx you will need at least 1Gi of free RAM and 0.2 CPU on the node, while at most the container can consume 0.2 CPU and all the RAM available on the node.

  • At the level of an entire ns:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: nxs-test
    spec:
      hard:
        requests.cpu: 300m
        requests.memory: 1Gi
        limits.cpu: 700m
        limits.memory: 2Gi

    That is, the sum of all container requests in the default ns cannot exceed 300m of CPU and 1Gi of RAM, and the sum of all limits cannot exceed 700m of CPU and 2Gi of RAM.

  • Default limits for containers in an ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-per-container
    spec:
      limits:
      - type: Container
        defaultRequest:
          cpu: 100m
          memory: 1Gi
        default:
          cpu: 1
          memory: 2Gi
        min:
          cpu: 50m
          memory: 500Mi
        max:
          cpu: 2
          memory: 4Gi

    That is, in the default namespace, every container will get a request of 100m CPU and 1Gi of RAM and a limit of 1 CPU and 2Gi of RAM by default. In addition, a restriction is also placed on the possible values in request/limit for CPU (50m < x < 2) and RAM (500Mi < x < 4Gi).

  • Pod-level restrictions in an ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-pod
    spec:
      limits:
      - type: Pod
        max:
          cpu: 4
          memory: 1Gi

    That is, for each pod in the default ns a limit of 4 vCPU and 1Gi will be imposed.
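After applying the manifests above, it can be useful to check how much of the quota is already consumed and which defaults are in effect; a hedged sketch using standard kubectl commands (the namespace here is assumed to be default):

    # show current usage vs. the hard limits of the quota
    kubectl describe resourcequota nxs-test -n default

    # show the default requests/limits and min/max enforced by the LimitRange
    kubectl describe limitrange nxs-limit-per-container -n default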

Now I would like to show what advantages setting these restrictions can give us.

Load balancing mechanism between nodes

As you know, the k8s component responsible for distributing pods across nodes is the scheduler, which operates according to a specific algorithm. This algorithm goes through two stages when choosing the optimal node to launch on:

  1. Filtering
  2. Ranking

That is, according to the described model, nodes are first selected on which it is possible to launch the pod, based on a set of predicates (including a check of whether the node has enough resources to run the pod - PodFitsResources); then each of these nodes is awarded points according to priorities (among others, the more free resources a node has, the more points it gets - LeastResourceAllocation/LeastRequestedPriority/BalancedResourceAllocation), and the pod is launched on the node with the most points (if several nodes satisfy this condition at once, a random one is chosen).

At the same time, you need to understand that when assessing a node's available resources, the scheduler is guided by the data stored in etcd - that is, by the amount of requested/limit resources of each pod running on that node, but not by the actual resource consumption. This information can be obtained from the output of the kubectl describe node $NODE command, for example:

# kubectl describe nodes nxs-k8s-s1
..
Non-terminated Pods:         (9 in total)
  Namespace                  Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                         ------------  ----------  ---------------  -------------  ---
  ingress-nginx              nginx-ingress-controller-754b85bf44-qkt2t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-flannel-26bl4                           150m (0%)     300m (1%)   64M (0%)         500M (1%)      233d
  kube-system                kube-proxy-exporter-cb629                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-proxy-x9fsc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                nginx-proxy-k8s-worker-s1                    25m (0%)      300m (1%)   32M (0%)         512M (1%)      233d
  nxs-monitoring             alertmanager-main-1                          100m (0%)     100m (0%)   425Mi (1%)       25Mi (0%)      233d
  nxs-logging                filebeat-lmsmp                               100m (0%)     0 (0%)      100Mi (0%)       200Mi (0%)     233d
  nxs-monitoring             node-exporter-v4gdq                          112m (0%)     122m (0%)   200Mi (0%)       220Mi (0%)     233d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests           Limits
  --------           --------           ------
  cpu                487m (3%)          822m (5%)
  memory             15856217600 (2%)  749976320 (3%)
  ephemeral-storage  0 (0%)             0 (0%)

Here we see all the pods running on a specific node, as well as the resources requested by each pod. And here is what the scheduler logs look like when the cronjob-cron-events-1573793820-xt6q9 pod is launched (this information will appear in the scheduler log if you set the 10th logging level in the startup command arguments, -v=10):

I1115 07:57:21.637791       1 scheduling_queue.go:908] About to try and schedule pod nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                           
I1115 07:57:21.637804       1 scheduler.go:453] Attempting to schedule pod: nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                                    
I1115 07:57:21.638285       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s5 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638300       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s6 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s3 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s4 is allowed, Node is running only 17 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, Node is running only 16 out of 110 Pods.                                                                              
I1115 07:57:21.638365       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s12 is allowed, Node is running only 9 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s11 is allowed, Node is running only 11 out of 110 Pods.                                                                              
I1115 07:57:21.638385       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s1 is allowed, Node is running only 19 out of 110 Pods.                                                                               
I1115 07:57:21.638402       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s2 is allowed, Node is running only 21 out of 110 Pods.                                                                               
I1115 07:57:21.638383       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638335       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, Node is running only 18 out of 110 Pods.                                                                               
I1115 07:57:21.638408       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s13 is allowed, Node is running only 8 out of 110 Pods.                                                                               
I1115 07:57:21.638478       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, existing pods anti-affinity terms satisfied.                                                                         
I1115 07:57:21.638505       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638577       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638583       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s7 is allowed, Node is running only 25 out of 110 Pods.                                                                               
I1115 07:57:21.638932       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 9        
I1115 07:57:21.638946       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 8           
I1115 07:57:21.638961       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: BalancedResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 9        
I1115 07:57:21.638971       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7        
I1115 07:57:21.638975       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: LeastResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 8           
I1115 07:57:21.638990       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7           
I1115 07:57:21.639022       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: TaintTolerationPriority, Score: (10)                                                                                                        
I1115 07:57:21.639030       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639034       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639041       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: NodeAffinityPriority, Score: (0)                                                                                                            
I1115 07:57:21.639053       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639059       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639061       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: InterPodAffinityPriority, Score: (0)                                                                                                                   
I1115 07:57:21.639063       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                                   
I1115 07:57:21.639073       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639077       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639085       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639088       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639103       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                         
I1115 07:57:21.639109       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639114       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639127       1 generic_scheduler.go:781] Host nxs-k8s-s10 => Score 100037                                                                                                                                                                            
I1115 07:57:21.639150       1 generic_scheduler.go:781] Host nxs-k8s-s8 => Score 100034                                                                                                                                                                             
I1115 07:57:21.639154       1 generic_scheduler.go:781] Host nxs-k8s-s9 => Score 100037                                                                                                                                                                             
I1115 07:57:21.639267       1 scheduler_binder.go:269] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10"                                                                                                               
I1115 07:57:21.639286       1 scheduler_binder.go:279] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10": all PVCs bound and nothing to do                                                                             
I1115 07:57:21.639333       1 factory.go:733] Attempting to bind cronjob-cron-events-1573793820-xt6q9 to nxs-k8s-s10

Here we see that the scheduler first filters and builds a list of 3 nodes on which the pod can be launched (nxs-k8s-s8, nxs-k8s-s9, nxs-k8s-s10). It then calculates scores based on several parameters (including BalancedResourceAllocation and LeastResourceAllocation) for each of these nodes in order to determine the most suitable one. In the end, the pod is scheduled on the node with the highest score (here two nodes at once share the same top score of 100037, so a random one of them is chosen - nxs-k8s-s10).
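For reference, the per-resource scores in the log above can be roughly reproduced with the classic priority formulas of that scheduler generation (this is a sketch of the idea, not an exact reimplementation). For nxs-k8s-s10, with a capacity of 39900 millicores / 66620178432 bytes and a total request of 2343 millicores / 9640186880 bytes:

    LeastRequestedPriority = (cpuScore + memScore) / 2
      cpuScore = (39900 - 2343) * 10 / 39900                     ≈ 9
      memScore = (66620178432 - 9640186880) * 10 / 66620178432   ≈ 8
      => score 8

    BalancedResourceAllocation = 10 - |cpuFraction - memFraction| * 10
      cpuFraction = 2343 / 39900               ≈ 0.06
      memFraction = 9640186880 / 66620178432   ≈ 0.14
      => score ≈ 9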

Conclusion: if a node runs pods for which no restrictions are set, then for k8s (from the point of view of resource consumption) this is equivalent to there being no such pods on the node at all. Therefore, if you conditionally have a pod with a gluttonous process (for example, wowza) and no restrictions are set for it, a situation may arise where the pod has actually eaten up all the node's resources, but for k8s this node is considered unloaded and, when ranking (specifically in the points assessing available resources), it will receive the same number of points as a node that has no running pods at all - which can ultimately lead to an uneven distribution of the load between nodes.

Pod eviction

As you know, each pod is assigned one of 3 QoS classes:

  1. guaranteed - assigned when a request and a limit for memory and cpu are specified for every container in the pod, and these values must match (a minimal example follows this list)
  2. burstable - at least one container in the pod has a request and a limit, with request < limit
  3. best effort - when not a single container in the pod has resources specified
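As a hedged illustration (the container name and values are invented), a resources block that yields the guaranteed class could look like this - requests and limits are present for both cpu and memory and are equal:

    containers:
    - name: app             # hypothetical container name
      image: nginx
      resources:
        requests:
          cpu: 200m
          memory: 256Mi
        limits:
          cpu: 200m         # equal to the request
          memory: 256Mi     # equal to the request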

At the same time, when a node runs short of resources (disk, memory), the kubelet starts ranking and evicting pods according to a specific algorithm that takes into account the pod's priority and its QoS class. For example, if we are talking about RAM, then based on the QoS class, points are awarded according to the following principle:

  • Guaranteed: -998
  • BestEffort: 1000
  • Burstable: min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999) - a worked example follows this list
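For instance (an assumed example, not taken from the original text), for a burstable pod whose containers request 1Gi of memory on a node with 4Gi of RAM, the formula works out as follows:

    1000 - (1000 * 1073741824) / 4294967296 = 1000 - 250 = 750
    min(max(2, 750), 999) = 750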

That is, with equal priority, the kubelet will first evict pods with the best effort QoS class from the node.

Conclusion: if you want to reduce the probability of the pod you care about being evicted from the node when it runs out of resources, then along with its priority you also need to take care of setting its request/limit.

Horizontal autoscaling mechanism for application pods (HPA)

When the task is to automatically increase and decrease the number of pods depending on resource usage (system metrics - CPU/RAM, or user metrics - rps), a k8s entity such as HPA (Horizontal Pod Autoscaler) serves this purpose. Its algorithm is as follows:

  1. The current readings of the observed resource are determined (currentMetricValue)
  2. The desired values for the resource are determined (desiredMetricValue), which for system resources are set via the request
  3. The current number of replicas is determined (currentReplicas)
  4. The following formula calculates the desired number of replicas (desiredReplicas)
    desiredReplicas = [ currentReplicas * ( currentMetricValue / desiredMetricValue )]

In this case, scaling will not happen when the ratio (currentMetricValue / desiredMetricValue) is close to 1 (we can set the permissible tolerance ourselves; by default it is 0.1).
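As far as I know, this default of 0.1 is controlled by a kube-controller-manager flag; the value below is only an illustration of overriding it:

    # assumed example: raise the tolerance to 20%
    kube-controller-manager ... --horizontal-pod-autoscaler-tolerance=0.2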

Let's look at how hpa works using the example of the app-test application (described as a Deployment), where we want to change the number of replicas depending on CPU consumption:

  • Application manifest

    kind: Deployment
    apiVersion: apps/v1beta2
    metadata:
      name: app-test
    spec:
      selector:
        matchLabels:
          app: app-test
      replicas: 2
      template:
        metadata:
          labels:
            app: app-test
        spec:
          containers:
          - name: nginx
            image: registry.nixys.ru/generic-images/nginx
            imagePullPolicy: Always
            resources:
              requests:
                cpu: 60m
            ports:
            - name: http
              containerPort: 80
          - name: nginx-exporter
            image: nginx/nginx-prometheus-exporter
            resources:
              requests:
                cpu: 30m
            ports:
            - name: nginx-exporter
              containerPort: 9113
            args:
            - -nginx.scrape-uri
            - http://127.0.0.1:80/nginx-status

    That is, we see that the application pod is initially launched in two instances, each of which contains two containers - nginx and nginx-exporter - and each container has its CPU request specified.

  • HPA manifest

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: app-test-hpa
    spec:
      maxReplicas: 10
      minReplicas: 2
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: app-test
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 30

    That is, we created an hpa that will watch the Deployment app-test and regulate the number of pods of the application based on the cpu metric (we expect a pod to consume 30% of the CPU it requests), with the number of replicas kept in the range of 2-10.

    Now let's look at how the hpa mechanism works if we apply load to one of the pods:

    # kubectl top pod
    NAME                          CPU(cores)   MEMORY(bytes)
    app-test-78559f8f44-pgs58     101m         243Mi
    app-test-78559f8f44-cj4jz     4m           240Mi

In total we have the following:

  • Desired value (desiredMetricValue) - according to the hpa settings we have 30%
  • Current value (currentMetricValue) - for the calculation, the controller-manager computes the average resource consumption in %, i.e. conditionally it does the following:
    1. Receives the absolute values of the pod metrics from the metric server, i.e. 101m and 4m
    2. Calculates the average absolute value, i.e. (101m + 4m) / 2 = 53m
    3. Gets the absolute value of the desired resource consumption (for this, the requests of all containers are summed up): 60m + 30m = 90m
    4. Calculates the average percentage of CPU consumption relative to the pod's request, i.e. 53m / 90m * 100% = 59%

Now we have everything we need to decide whether the number of replicas should change; to do this, we calculate the ratio:

ratio = 59% / 30% = 1.96

That is, the number of replicas should be increased by ~2 times, up to [2 * 1.96] = 4.

Conclusion: As you can see, for this mechanism to work, a necessary condition is the presence of requests for all containers in the observed pod.

Horizontal autoscaling mechanism for nodes (Cluster Autoscaler)

To neutralize the negative impact on the system during load surges, having a configured hpa is not enough. For example, according to the settings in the hpa, the controller-manager decides that the number of replicas needs to be doubled, but the nodes have no free resources to run that many pods (i.e. a node cannot provide the resources requested by the pods), and these pods switch to the Pending state.

In this case, if the provider has a corresponding IaaS/PaaS (for example, GKE/GCE, AKS, EKS, etc.), a tool such as Cluster Autoscaler can help. It allows you to set the maximum and minimum number of nodes in the cluster and automatically adjust the current number of nodes (by calling the cloud provider's API to order/remove a node) when there is a shortage of resources in the cluster and the pods cannot be scheduled (are in the Pending state).
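As a sketch of how such boundaries might be set (the cluster and node-pool names here are invented), on GKE this could look roughly like this:

    # enable node autoscaling for a node pool with 2..10 nodes
    gcloud container clusters update my-cluster \
      --enable-autoscaling --min-nodes=2 --max-nodes=10 \
      --node-pool=default-pool --zone=europe-west1-b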

Conclusion: To be able to autoscale nodes, you need to set requests in the pods' containers so that k8s can correctly assess the load on the nodes and accordingly report that there are no resources left in the cluster to launch the next pod.

Conclusion

It should be noted that setting container resource limits is not a prerequisite for an application to run successfully, but it is still better to do so, for the following reasons:

  1. For more accurate operation of the scheduler when balancing the load between k8s nodes
  2. To reduce the likelihood of a "pod eviction" event occurring
  3. For horizontal autoscaling of application pods (HPA) to work
  4. For horizontal autoscaling of nodes (Cluster Autoscaler) with cloud providers

Source: www.habr.com
