Kubernetes: why is it so important to configure system resource management?

As a rule, there is always a need to provide an application with a dedicated pool of resources for its correct and stable operation. But what if several applications run on the same capacity? How do you provide each of them with the minimum necessary resources? How can you limit resource consumption? How do you distribute the load correctly between nodes? How do you make sure the horizontal scaling mechanism kicks in when the application load grows?

You need to start with the main types of resources that exist in the system; these are, of course, CPU time and RAM. In k8s these resource types are measured in the following units:

  • CPU - in cores
  • RAM - in bytes

Moreover, for each resource it is possible to set two types of requirements: requests and limits. Requests describe the minimum amount of free resources a node must have to run the container (and the pod as a whole), while limits set a hard cap on the resources available to the container.

It is important to understand that the manifest does not have to define both types explicitly; the behavior will be as follows:

  • If only the limits of a resource are explicitly specified, the requests for this resource automatically take a value equal to the limits (you can verify this by calling describe on the entity). I.e. the container will in fact be capped at the same amount of resources that it requires to run.
  • If only the requests are explicitly specified for a resource, then no upper restriction is set on this resource, i.e. the container is limited only by the resources of the node itself.
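The defaulting behavior described above can be sketched in Python (a simplified illustration only, not the actual API server logic; the function name is made up):

```python
# Simplified model of how container resource fields are filled in:
# if only limits are set, requests default to the same values; if only
# requests are set, no upper bound is applied.
def effective_resources(requests=None, limits=None):
    """Return the (requests, limits) pair as they would be recorded."""
    if requests is None and limits is not None:
        requests = dict(limits)  # requests default to limits
    return requests, limits

# Only limits given -> requests are copied from them.
print(effective_resources(limits={"cpu": "200m", "memory": "1Gi"}))
```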

It is also possible to configure resource management not only at the level of a specific container, but also at the namespace level, using the following entities:

  • LimitRange - describes the restriction policy at the container/pod level in a ns and is needed to set default limits for containers/pods, as well as to prevent the creation of obviously fat containers/pods (or the opposite), limit their number, and constrain the possible difference between the values in limits and requests
  • ResourceQuota - describes the restriction policy for all containers in a ns in aggregate and is used, as a rule, to delimit resources between environments (useful when environments are not strictly separated at the node level)

Below are examples of manifests that set resource limits:

  • At the level of a specific container:

    containers:
    - name: app-nginx
      image: nginx
      resources:
        requests:
          memory: 1Gi
        limits:
          cpu: 200m

    I.e. in this case, to run the container with nginx, the node must have at least 1Gi of free RAM and 0.2 CPU, while at most the container can consume 0.2 CPU and all the available RAM of the node.

  • At the level of a whole ns:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: nxs-test
    spec:
      hard:
        requests.cpu: 300m
        requests.memory: 1Gi
        limits.cpu: 700m
        limits.memory: 2Gi

    I.e. the sum of the requests of all containers in the default ns cannot exceed 300m of CPU and 1Gi of RAM, and the sum of all limits cannot exceed 700m of CPU and 2Gi of RAM.

  • Default limits for containers in a ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-per-container
    spec:
      limits:
        - type: Container
          defaultRequest:
            cpu: 100m
            memory: 1Gi
          default:
            cpu: 1
            memory: 2Gi
          min:
            cpu: 50m
            memory: 500Mi
          max:
            cpu: 2
            memory: 4Gi

    I.e. in the default namespace, for every container, request will be set to 100m of CPU and 1Gi of RAM, and limit to 1 CPU and 2Gi. At the same time, a restriction is also set on the possible values in request/limit for CPU (50m < x < 2) and RAM (500Mi < x < 4Gi).

  • Pod-level restrictions in a ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-pod
    spec:
      limits:
        - type: Pod
          max:
            cpu: 4
            memory: 1Gi

    I.e. every pod in the default ns will have a limit of 4 vCPU and 1Gi.

Now I would like to tell you what advantages setting these restrictions can give us.

Load balancing mechanism between nodes

As you know, the k8s component responsible for distributing pods among nodes is the scheduler, which works according to a specific algorithm. This algorithm goes through two stages when choosing the optimal node to launch a pod on:

  1. Filtering
  2. Scoring

I.e. according to the configured policy, the nodes on which it is possible to launch the pod are selected first, based on a set of predicates (including a check of whether the node has enough resources to run the pod - PodFitsResources). Then each of these nodes is awarded points according to priorities (among others, the more free resources a node has, the more points it gets - LeastResourceAllocation/LeastRequestedPriority/BalancedResourceAllocation), and the pod is launched on the node with the most points (if several nodes satisfy this condition at once, a random one is chosen).
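The two phases can be illustrated with a sketch (a toy model only, not the real kube-scheduler; the scoring here merely mimics the spirit of LeastRequestedPriority for a single resource):

```python
import random

# Toy model of the scheduler's two phases: filter nodes that can fit the
# pod's request, then score the survivors and pick the best (random tie-break).
def schedule(pod_request, nodes):
    # Phase 1: filtering (analogous to the PodFitsResources predicate).
    # Note it compares against *requested*, not actually used, resources.
    feasible = [n for n in nodes if n["capacity"] - n["requested"] >= pod_request]
    if not feasible:
        return None  # the pod would stay Pending
    # Phase 2: scoring - the more unrequested capacity remains, the higher
    # the score (in the spirit of LeastRequestedPriority).
    def score(n):
        return (n["capacity"] - n["requested"] - pod_request) / n["capacity"]
    best = max(score(n) for n in feasible)
    return random.choice([n for n in feasible if score(n) == best])["name"]

nodes = [
    {"name": "nxs-k8s-s8", "capacity": 4000, "requested": 3500},  # millicores
    {"name": "nxs-k8s-s9", "capacity": 4000, "requested": 1000},
]
print(schedule(500, nodes))  # the less-requested node wins: nxs-k8s-s9
```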

At the same time, you need to understand that when assessing the available resources of a node, the scheduler is guided by the data stored in etcd, i.e. by the amount of requested/limited resources of each pod running on this node, and not by the actual resource consumption. This information can be obtained from the output of the command kubectl describe node $NODE, for example:

# kubectl describe nodes nxs-k8s-s1
..
Non-terminated Pods:         (9 in total)
  Namespace                  Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                         ------------  ----------  ---------------  -------------  ---
  ingress-nginx              nginx-ingress-controller-754b85bf44-qkt2t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-flannel-26bl4                           150m (0%)     300m (1%)   64M (0%)         500M (1%)      233d
  kube-system                kube-proxy-exporter-cb629                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-proxy-x9fsc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                nginx-proxy-k8s-worker-s1                    25m (0%)      300m (1%)   32M (0%)         512M (1%)      233d
  nxs-monitoring             alertmanager-main-1                          100m (0%)     100m (0%)   425Mi (1%)       25Mi (0%)      233d
  nxs-logging                filebeat-lmsmp                               100m (0%)     0 (0%)      100Mi (0%)       200Mi (0%)     233d
  nxs-monitoring             node-exporter-v4gdq                          112m (0%)     122m (0%)   200Mi (0%)       220Mi (0%)     233d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests           Limits
  --------           --------           ------
  cpu                487m (3%)          822m (5%)
  memory             15856217600 (2%)  749976320 (3%)
  ephemeral-storage  0 (0%)             0 (0%)

Here we see all the pods running on a particular node, as well as the resources requested by each pod. And here is what the scheduler logs look like when the pod cronjob-cron-events-1573793820-xt6q9 is launched (this information appears in the scheduler log when you set the logging level to 10 in the startup command arguments: -v=10):

I1115 07:57:21.637791       1 scheduling_queue.go:908] About to try and schedule pod nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                           
I1115 07:57:21.637804       1 scheduler.go:453] Attempting to schedule pod: nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                                    
I1115 07:57:21.638285       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s5 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638300       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s6 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s3 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s4 is allowed, Node is running only 17 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, Node is running only 16 out of 110 Pods.                                                                              
I1115 07:57:21.638365       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s12 is allowed, Node is running only 9 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s11 is allowed, Node is running only 11 out of 110 Pods.                                                                              
I1115 07:57:21.638385       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s1 is allowed, Node is running only 19 out of 110 Pods.                                                                               
I1115 07:57:21.638402       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s2 is allowed, Node is running only 21 out of 110 Pods.                                                                               
I1115 07:57:21.638383       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638335       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, Node is running only 18 out of 110 Pods.                                                                               
I1115 07:57:21.638408       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s13 is allowed, Node is running only 8 out of 110 Pods.                                                                               
I1115 07:57:21.638478       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, existing pods anti-affinity terms satisfied.                                                                         
I1115 07:57:21.638505       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638577       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638583       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s7 is allowed, Node is running only 25 out of 110 Pods.                                                                               
I1115 07:57:21.638932       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 9        
I1115 07:57:21.638946       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 8           
I1115 07:57:21.638961       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: BalancedResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 9        
I1115 07:57:21.638971       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7        
I1115 07:57:21.638975       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: LeastResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 8           
I1115 07:57:21.638990       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7           
I1115 07:57:21.639022       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: TaintTolerationPriority, Score: (10)                                                                                                        
I1115 07:57:21.639030       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639034       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639041       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: NodeAffinityPriority, Score: (0)                                                                                                            
I1115 07:57:21.639053       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639059       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639061       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: InterPodAffinityPriority, Score: (0)                                                                                                                   
I1115 07:57:21.639063       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                                   
I1115 07:57:21.639073       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639077       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639085       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639088       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639103       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                         
I1115 07:57:21.639109       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639114       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639127       1 generic_scheduler.go:781] Host nxs-k8s-s10 => Score 100037                                                                                                                                                                            
I1115 07:57:21.639150       1 generic_scheduler.go:781] Host nxs-k8s-s8 => Score 100034                                                                                                                                                                             
I1115 07:57:21.639154       1 generic_scheduler.go:781] Host nxs-k8s-s9 => Score 100037                                                                                                                                                                             
I1115 07:57:21.639267       1 scheduler_binder.go:269] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10"                                                                                                               
I1115 07:57:21.639286       1 scheduler_binder.go:279] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10": all PVCs bound and nothing to do                                                                             
I1115 07:57:21.639333       1 factory.go:733] Attempting to bind cronjob-cron-events-1573793820-xt6q9 to nxs-k8s-s10

Here we see that the scheduler initially filters and generates a list of 3 nodes on which the pod can be launched (nxs-k8s-s8, nxs-k8s-s9, nxs-k8s-s10). It then calculates scores based on several parameters (including BalancedResourceAllocation and LeastResourceAllocation) for each of these nodes in order to determine the most suitable one. In the end, the pod is scheduled on the node with the highest number of points (here two nodes at once have the same score of 100037, so a random one is chosen - nxs-k8s-s10).

Conclusion: if a node runs pods for which no restrictions are set, then for k8s (from the point of view of resource consumption) this is equivalent to there being no such pods on this node at all. Therefore, if you, say, have a pod with a gluttonous process (for example, wowza) and no restrictions are set for it, a situation may arise where this pod has actually eaten all the resources of the node, yet for k8s this node is considered unloaded, and during scheduling it will be awarded the same number of points (specifically in the points that assess available resources) as a node that has no running pods at all, which may ultimately lead to uneven load distribution between nodes.

Pod eviction

As you know, each pod is assigned one of 3 QoS classes:

  1. Guaranteed - assigned when every container in the pod has a request and a limit specified for memory and cpu, and these values match
  2. Burstable - at least one container in the pod has a request and a limit, with request < limit
  3. BestEffort - when not a single container in the pod has any restrictions
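The classification can be sketched as follows (simplified: the real rules check requests and limits for both cpu and memory in every container):

```python
# Simplified QoS classification of a pod from its containers' resources.
def qos_class(containers):
    """containers: list of dicts with optional 'requests' and 'limits' keys."""
    if containers and all(
        c.get("requests") and c.get("limits") and c["requests"] == c["limits"]
        for c in containers
    ):
        return "Guaranteed"
    if any(c.get("requests") or c.get("limits") for c in containers):
        return "Burstable"
    return "BestEffort"

# request < limit in the only container -> Burstable
print(qos_class([{"requests": {"cpu": "100m"}, "limits": {"cpu": "200m"}}]))
```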

At the same time, when a node experiences a shortage of resources (disk, memory), kubelet starts ranking and evicting pods according to a specific algorithm that takes into account the priority of the pod and its QoS class. For example, when it comes to RAM, scores are assigned based on the QoS class according to the following principle:

  • Guaranteed: -998
  • BestEffort: 1000
  • Burstable: min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999)

I.e. with equal priority, kubelet will first evict pods with the BestEffort QoS class from the node.
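The scoring rules above can be expressed directly (integer arithmetic is an assumption made here for illustration; a higher score means the process is killed earlier under memory pressure):

```python
# oom score assignment by QoS class, per the formulas in the text.
def oom_score_adj(qos, memory_request_bytes=0, machine_memory_bytes=1):
    if qos == "Guaranteed":
        return -998
    if qos == "BestEffort":
        return 1000
    # Burstable: the larger the memory request relative to the node's
    # capacity, the lower the score (less likely to be evicted first).
    return min(max(2, 1000 - (1000 * memory_request_bytes) // machine_memory_bytes), 999)

gib = 1024 ** 3
# A pod requesting 1Gi on a 4Gi node: 1000 - 1000/4 = 750
print(oom_score_adj("Burstable", 1 * gib, 4 * gib))  # 750
```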

Conclusion: if you want to reduce the probability of a valuable pod being evicted from a node in the event of a resource shortage, then along with its priority, you also need to take care of setting its request/limit.

Mechanism for horizontal autoscaling of application pods (HPA)

When the task is to automatically increase and decrease the number of pods depending on resource usage (system metrics such as CPU/RAM, or user metrics such as rps), a k8s entity called HPA (Horizontal Pod Autoscaler) serves this purpose. Its algorithm is as follows:

  1. The current reading of the observed metric is determined (currentMetricValue)
  2. The desired value of the metric is determined (desiredMetricValue), which for system resources is set using request
  3. The current number of replicas is determined (currentReplicas)
  4. The following formula calculates the desired number of replicas (desiredReplicas):
    desiredReplicas = [ currentReplicas * ( currentMetricValue / desiredMetricValue ) ]

In this case, scaling will not occur when the coefficient (currentMetricValue / desiredMetricValue) is close to 1 (we can set the permissible tolerance ourselves; by default it is 0.1).
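The formula and the tolerance check can be sketched as follows (simplified; the real controller also handles pod readiness, missing metrics, and the min/max replica bounds):

```python
import math

# HPA replica calculation: scale only when the ratio leaves the tolerance
# band around 1.0 (0.1 by default), otherwise keep the current count.
def desired_replicas(current_replicas, current_metric, desired_metric, tolerance=0.1):
    ratio = current_metric / desired_metric
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # within tolerance: no scaling
    return math.ceil(current_replicas * ratio)

print(desired_replicas(2, 59, 30))  # 59% observed vs 30% target -> 4
```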

Let's look at how hpa works using the example of an app-test application (described as a Deployment), where it is necessary to change the number of replicas depending on CPU consumption:

  • Application manifest:

    kind: Deployment
    apiVersion: apps/v1beta2
    metadata:
      name: app-test
    spec:
      selector:
        matchLabels:
          app: app-test
      replicas: 2
      template:
        metadata:
          labels:
            app: app-test
        spec:
          containers:
          - name: nginx
            image: registry.nixys.ru/generic-images/nginx
            imagePullPolicy: Always
            resources:
              requests:
                cpu: 60m
            ports:
            - name: http
              containerPort: 80
          - name: nginx-exporter
            image: nginx/nginx-prometheus-exporter
            resources:
              requests:
                cpu: 30m
            ports:
            - name: nginx-exporter
              containerPort: 9113
            args:
            - -nginx.scrape-uri
            - http://127.0.0.1:80/nginx-status

    I.e. we see that the application pod is initially launched in two instances, each containing two containers, nginx and nginx-exporter, and each of them has CPU requests specified.

  • HPA manifest:

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: app-test-hpa
    spec:
      maxReplicas: 10
      minReplicas: 2
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: app-test
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 30

    I.e. we created an hpa that will monitor the app-test Deployment and adjust the number of pods of the application based on the cpu metric (we expect a pod to consume 30% of the CPU it requests), with the number of replicas in the range of 2-10.

    Now let's look at how the hpa mechanism works if we apply load to one of the pods:

    # kubectl top pod
    NAME                            CPU(cores)   MEMORY(bytes)
    app-test-78559f8f44-pgs58       101m         243Mi
    app-test-78559f8f44-cj4jz       4m           240Mi

Altogether we have the following:

  • The desired value (desiredMetricValue) - according to the hpa settings, we have 30%
  • The current value (currentMetricValue) - for the calculation, controller-manager computes the average resource consumption in %, i.e. roughly does the following:
    1. Gets the absolute values of the pod metrics from the metrics server, i.e. 101m and 4m
    2. Calculates the average absolute value, i.e. (101m + 4m) / 2 = 53m
    3. Gets the absolute value of the desired resource consumption (for this, the requests of all containers are summed up): 60m + 30m = 90m
    4. Calculates the average percentage of CPU consumption relative to the pod's request, i.e. 53m / 90m * 100% = 59%

Now we have everything we need to determine whether the number of replicas should be changed; to do this, we calculate the coefficient:

ratio = 59% / 30% = 1.96

I.e. the number of replicas should be increased by ~2 times, to [2 * 1.96] = 4 (rounding up).
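The whole calculation can be reproduced in Python (rounding the average up, as the text does):

```python
import math

pod_usage_m = [101, 4]      # CPU per pod from `kubectl top pod`, millicores
pod_request_m = 60 + 30     # summed container CPU requests per pod: 90m

avg_usage_m = math.ceil(sum(pod_usage_m) / len(pod_usage_m))  # 53m
utilization = avg_usage_m / pod_request_m * 100               # ~59%
ratio = utilization / 30                                      # target: 30%
new_replicas = math.ceil(2 * ratio)                           # 4

print(round(utilization), new_replicas)  # 59 4
```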

Conclusion: as you can see, a necessary condition for this mechanism to work is the presence of requests on all containers in the observed pod.

Mechanism for horizontal autoscaling of nodes (Cluster Autoscaler)

To neutralize the negative impact on the system during load surges, having a configured hpa is not enough. For example, according to its settings, the hpa controller-manager decides that the number of replicas needs to be increased 2 times, but the nodes have no free resources to run that many pods (i.e. no node can provide the resources specified in the pod's requests), and these pods switch to the Pending state.

In this case, if the provider has a corresponding IaaS/PaaS (for example, GKE/GCE, AKS, EKS, etc.), a tool like Cluster Autoscaler can help us. It allows you to set the maximum and minimum number of nodes in the cluster and automatically adjust the current number of nodes (by calling the cloud provider's API to order/remove a node) when the cluster lacks resources and the pods cannot be scheduled (are in the Pending state).

Conclusion: to be able to autoscale nodes, it is necessary to set requests in the pod containers so that k8s can correctly assess the load on the nodes and accordingly report that there are no resources in the cluster to launch the next pod.

Conclusion

It should be noted that setting container resource limits is not a requirement for the application to run successfully, but it is still better to do so, for the following reasons:

  1. For more accurate operation of the scheduler in terms of load balancing between k8s nodes
  2. To reduce the likelihood of a "pod eviction" event
  3. For horizontal autoscaling of application pods (HPA) to work
  4. For horizontal autoscaling of nodes (Cluster Autoscaler) with cloud providers

Source: www.habr.com
