Kubernetes: why is it important to configure system resource management?

As a rule, there is always a need to provide an application with a dedicated pool of resources for its correct and stable operation. But what if several applications are running on the same capacity? How do you provide each of them with the minimum resources it needs? How can you limit resource consumption? How do you distribute the load correctly between nodes? How do you make sure the horizontal scaling mechanism works when the application load grows?


You need to start with the main types of resources that exist in the system, namely CPU time and RAM. In k8s manifests these resource types are measured in the following units:

  • CPU - in cores
  • RAM - in bytes

In addition, for each resource it is possible to set two types of requirements: requests and limits. Requests describe the minimum amount of free node resources required to run a container (and the pod as a whole), while limits set a hard cap on the resources available to the container.

It is important to understand that a manifest does not have to define both types explicitly, and the behavior will be as follows:

  • If only the limits of a resource are explicitly specified, the requests for this resource automatically take a value equal to the limits (you can verify this by querying the pod entity). That is, in fact, the container will be restricted to the same amount of resources it requires to run.
  • If only the requests are explicitly specified for a resource, no upper restriction is set on this resource - that is, the container is limited only by the resources of the node itself.

It is also possible to configure resource management not at the level of an individual container, but at the namespace level using the following entities:

  • LimitRange - describes the restriction policy at the container/pod level inside the ns and is used to set default limits on containers/pods, as well as to prevent the creation of obviously fat containers (or the opposite), limit their number, and define the allowed difference between the values of limits and requests
  • ResourceQuota - describes the restriction policy for all containers in the ns and is used, as a rule, to delimit resources between environments (useful when environments are not strictly separated at the node level)

Below are examples of manifests that set resource limits:

  • At the level of an individual container:

    containers:
    - name: app-nginx
      image: nginx
      resources:
        requests:
          memory: 1Gi
        limits:
          cpu: 200m

    That is, in this case, to run the container with nginx, you will need at least 1Gi of free RAM and 0.2 CPU on the node, while at most the container can consume 0.2 CPU and all the available RAM on the node.

  • At the level of an entire ns:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: nxs-test
    spec:
      hard:
        requests.cpu: 300m
        requests.memory: 1Gi
        limits.cpu: 700m
        limits.memory: 2Gi

    That is, the sum of the requests of all containers in the default ns must not exceed 300m for CPU and 1Gi for RAM, and the sum of all limits must not exceed 700m for CPU and 2Gi for RAM.

  • Default limits for containers in the ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-per-container
    spec:
      limits:
      - type: Container
        defaultRequest:
          cpu: 100m
          memory: 1Gi
        default:
          cpu: 1
          memory: 2Gi
        min:
          cpu: 50m
          memory: 500Mi
        max:
          cpu: 2
          memory: 4Gi

    That is, in the default namespace, all containers will by default get a request of 100m for CPU and 1Gi for RAM, and a limit of 1 CPU and 2Gi. At the same time, a restriction is set on the possible request/limit values for CPU (50m < x < 2) and RAM (500Mi < x < 4Gi).

  • Pod-level restrictions in the ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-pod
    spec:
      limits:
      - type: Pod
        max:
          cpu: 4
          memory: 1Gi

    That is, each pod in the default ns will have a limit of 4 vCPU and 1Gi.

Now I would like to tell you what benefits setting these restrictions can give us.

Load balancing mechanism between nodes

As you know, the k8s component called the scheduler is responsible for distributing pods among nodes, and it works according to a specific algorithm. This algorithm goes through two stages when choosing the optimal node to launch a pod on:

  1. Filtering
  2. Scoring

That is, according to the described policy, nodes are first selected on which it is possible to launch the pod based on a set of predicates (including a check of whether the node has enough resources to run the pod - PodFitsResources); then each of these nodes is awarded points according to priorities (among others: the more free resources a node has, the more points it gets - LeastResourceAllocation/LeastRequestedPriority/BalancedResourceAllocation), and the pod is launched on the node with the most points (if several nodes satisfy this condition at once, a random one is chosen).
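As a rough illustration (not the scheduler's actual code), the "more free requested resources, more points" idea behind LeastRequestedPriority can be sketched like this, assuming the classic formula of averaging the free-capacity fraction over CPU and memory:

```python
# Illustrative sketch of LeastRequestedPriority-style scoring: a node's
# score grows with the share of capacity NOT yet claimed by requests,
# averaged over CPU (millicores) and memory (bytes).
def least_requested_score(capacity_cpu, requested_cpu,
                          capacity_mem, requested_mem, max_score=10):
    def fraction(capacity, requested):
        if requested > capacity:
            return 0  # overcommitted resource contributes nothing
        return (capacity - requested) * max_score / capacity
    return (fraction(capacity_cpu, requested_cpu) +
            fraction(capacity_mem, requested_mem)) / 2

gib = 2**30
# A lightly requested node outscores a heavily requested one:
empty = least_requested_score(4000, 200, 8 * gib, 1 * gib)   # high score
busy = least_requested_score(4000, 3500, 8 * gib, 6 * gib)   # low score
```

Note that only *requested* amounts enter the formula, which is exactly why a pod without requests is invisible to this scoring.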

At the same time, you need to understand that the scheduler, when assessing the available resources of a node, is guided by the data stored in etcd, i.e. by the amount of requested/limit resources of each pod running on this node, but not by actual resource consumption. This information can be obtained from the output of the command kubectl describe node $NODE, for example:

# kubectl describe nodes nxs-k8s-s1
..
Non-terminated Pods:         (9 in total)
  Namespace                  Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                         ------------  ----------  ---------------  -------------  ---
  ingress-nginx              nginx-ingress-controller-754b85bf44-qkt2t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-flannel-26bl4                           150m (0%)     300m (1%)   64M (0%)         500M (1%)      233d
  kube-system                kube-proxy-exporter-cb629                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-proxy-x9fsc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                nginx-proxy-k8s-worker-s1                    25m (0%)      300m (1%)   32M (0%)         512M (1%)      233d
  nxs-monitoring             alertmanager-main-1                          100m (0%)     100m (0%)   425Mi (1%)       25Mi (0%)      233d
  nxs-logging                filebeat-lmsmp                               100m (0%)     0 (0%)      100Mi (0%)       200Mi (0%)     233d
  nxs-monitoring             node-exporter-v4gdq                          112m (0%)     122m (0%)   200Mi (0%)       220Mi (0%)     233d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests           Limits
  --------           --------           ------
  cpu                487m (3%)          822m (5%)
  memory             15856217600 (2%)  749976320 (3%)
  ephemeral-storage  0 (0%)             0 (0%)

Here we see all the pods running on a specific node, as well as the resources requested by each pod. And here is what the scheduler logs look like when the pod cronjob-cron-events-1573793820-xt6q9 is launched (this information appears in the scheduler log when logging level 10 is set in the startup command arguments, -v=10):

Log:

I1115 07:57:21.637791       1 scheduling_queue.go:908] About to try and schedule pod nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                           
I1115 07:57:21.637804       1 scheduler.go:453] Attempting to schedule pod: nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                                    
I1115 07:57:21.638285       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s5 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638300       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s6 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s3 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s4 is allowed, Node is running only 17 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, Node is running only 16 out of 110 Pods.                                                                              
I1115 07:57:21.638365       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s12 is allowed, Node is running only 9 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s11 is allowed, Node is running only 11 out of 110 Pods.                                                                              
I1115 07:57:21.638385       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s1 is allowed, Node is running only 19 out of 110 Pods.                                                                               
I1115 07:57:21.638402       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s2 is allowed, Node is running only 21 out of 110 Pods.                                                                               
I1115 07:57:21.638383       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638335       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, Node is running only 18 out of 110 Pods.                                                                               
I1115 07:57:21.638408       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s13 is allowed, Node is running only 8 out of 110 Pods.                                                                               
I1115 07:57:21.638478       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, existing pods anti-affinity terms satisfied.                                                                         
I1115 07:57:21.638505       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638577       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638583       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s7 is allowed, Node is running only 25 out of 110 Pods.                                                                               
I1115 07:57:21.638932       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 9        
I1115 07:57:21.638946       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 8           
I1115 07:57:21.638961       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: BalancedResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 9        
I1115 07:57:21.638971       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7        
I1115 07:57:21.638975       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: LeastResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 8           
I1115 07:57:21.638990       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7           
I1115 07:57:21.639022       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: TaintTolerationPriority, Score: (10)                                                                                                        
I1115 07:57:21.639030       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639034       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639041       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: NodeAffinityPriority, Score: (0)                                                                                                            
I1115 07:57:21.639053       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639059       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639061       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: InterPodAffinityPriority, Score: (0)                                                                                                                   
I1115 07:57:21.639063       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                                   
I1115 07:57:21.639073       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639077       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639085       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639088       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639103       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                         
I1115 07:57:21.639109       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639114       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639127       1 generic_scheduler.go:781] Host nxs-k8s-s10 => Score 100037                                                                                                                                                                            
I1115 07:57:21.639150       1 generic_scheduler.go:781] Host nxs-k8s-s8 => Score 100034                                                                                                                                                                             
I1115 07:57:21.639154       1 generic_scheduler.go:781] Host nxs-k8s-s9 => Score 100037                                                                                                                                                                             
I1115 07:57:21.639267       1 scheduler_binder.go:269] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10"                                                                                                               
I1115 07:57:21.639286       1 scheduler_binder.go:279] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10": all PVCs bound and nothing to do                                                                             
I1115 07:57:21.639333       1 factory.go:733] Attempting to bind cronjob-cron-events-1573793820-xt6q9 to nxs-k8s-s10

Here we see that the scheduler first filters and produces a list of 3 nodes on which launch is possible (nxs-k8s-s8, nxs-k8s-s9, nxs-k8s-s10). It then calculates scores based on several parameters (including BalancedResourceAllocation and LeastResourceAllocation) for each of these nodes in order to determine the most suitable one. In the end, the pod is scheduled on the node with the highest number of points (here two nodes at once have the same score of 100037, so a random one is chosen - nxs-k8s-s10).

Conclusion: if a pod runs without any restrictions set, then for k8s (from the point of view of resource consumption) it is as if there were no such pod on this node at all. Therefore, if you have a pod with a gluttonous process (for example, wowza) and no restrictions are set for it, a situation may arise where this pod actually eats up all the resources of the node, while for k8s this node is considered unloaded and during scoring it is awarded the same number of points (specifically in the points assessing available resources) as a node that has no working pods at all, which ultimately can lead to uneven distribution of load between nodes.

Pod eviction

As you know, each pod is assigned one of 3 QoS classes:

  1. Guaranteed - assigned when each container in the pod has a request and a limit specified for both memory and cpu, and these values must match
  2. Burstable - at least one container in the pod has a request and a limit, with request < limit
  3. BestEffort - when not a single container in the pod has its resources limited in any way

At the same time, when a node experiences a lack of resources (disk, memory), kubelet begins to rank and evict pods according to a specific algorithm that takes into account the priority of the pod and its QoS class. For example, if we are talking about RAM, then based on the QoS class, points are awarded according to the following principle:

  • Guaranteed: -998
  • BestEffort: 1000
  • Burstable: min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999)
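The scoring above can be mirrored in a short sketch (illustrative only; the real values are assigned by kubelet as oom_score_adj on the container processes):

```python
# Sketch of the OOM scoring described above: higher scores are killed
# first, so BestEffort pods (1000) go before Burstable ones, and
# Guaranteed pods (-998) go last.
def oom_score_adj(qos_class, memory_request_bytes=0,
                  machine_memory_capacity_bytes=1):
    if qos_class == "Guaranteed":
        return -998
    if qos_class == "BestEffort":
        return 1000
    # Burstable: the larger the memory request relative to the node's
    # capacity, the lower the score, i.e. the later the pod is evicted.
    return min(max(2, 1000 - (1000 * memory_request_bytes)
                      // machine_memory_capacity_bytes), 999)

gib = 2**30
# A Burstable pod requesting 1GiB on a 4GiB node:
score = oom_score_adj("Burstable", 1 * gib, 4 * gib)  # 750
```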

That is, with equal priority, kubelet will first evict pods with the BestEffort QoS class from the node.

Conclusion: if you want to reduce the likelihood of a desired pod being evicted from the node when it runs out of resources, then along with the priority, you also need to take care of setting the request/limit for it.

Mechanism for horizontal autoscaling of application pods (HPA)

When the task is to automatically increase or decrease the number of pods depending on resource usage (system resources such as CPU/RAM, or user-defined metrics such as rps), a k8s entity called HPA (Horizontal Pod Autoscaler) can solve it. Its algorithm is as follows:

  1. The current readings of the observed metric are determined (currentMetricValue)
  2. The desired values for the metric are determined (desiredMetricValue), which for system resources are set via request
  3. The current number of replicas is determined (currentReplicas)
  4. The following formula calculates the desired number of replicas (desiredReplicas):
    desiredReplicas = [currentReplicas * (currentMetricValue / desiredMetricValue)]

In this case, scaling does not occur when the ratio (currentMetricValue / desiredMetricValue) is close to 1 (we can set the permissible tolerance ourselves; by default it is 0.1).
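The formula and the tolerance behavior above can be sketched as follows (function and parameter names are illustrative, not the controller's actual API):

```python
import math

# Sketch of the HPA replica calculation: scale by the ratio of current
# to desired metric value, rounding up, unless the ratio is within the
# default 0.1 tolerance of 1 (in which case nothing changes).
def desired_replicas(current_replicas, current_metric_value,
                     desired_metric_value, tolerance=0.1):
    ratio = current_metric_value / desired_metric_value
    if abs(ratio - 1.0) <= tolerance:
        return current_replicas  # close enough to target: no scaling
    return math.ceil(current_replicas * ratio)

desired_replicas(2, 59, 30)  # scale up: 2 * 1.97 rounded up -> 4
desired_replicas(4, 31, 30)  # ratio ~1.03, within tolerance -> stays 4
```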

Let's look at how hpa works using the example of the app-test application (described as a Deployment), where we need to change the number of replicas depending on CPU consumption:

  • Application manifest

    kind: Deployment
    apiVersion: apps/v1beta2
    metadata:
      name: app-test
    spec:
      selector:
        matchLabels:
          app: app-test
      replicas: 2
      template:
        metadata:
          labels:
            app: app-test
        spec:
          containers:
          - name: nginx
            image: registry.nixys.ru/generic-images/nginx
            imagePullPolicy: Always
            resources:
              requests:
                cpu: 60m
            ports:
            - name: http
              containerPort: 80
          - name: nginx-exporter
            image: nginx/nginx-prometheus-exporter
            resources:
              requests:
                cpu: 30m
            ports:
            - name: nginx-exporter
              containerPort: 9113
            args:
            - -nginx.scrape-uri
            - http://127.0.0.1:80/nginx-status

    That is, we see that the application pod is initially launched in two instances, each containing two containers, nginx and nginx-exporter, and each container has a CPU request specified.

  • HPA manifest

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: app-test-hpa
    spec:
      maxReplicas: 10
      minReplicas: 2
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: app-test
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 30

    That is, we created an hpa that will monitor the app-test Deployment and adjust the number of pods of the application based on the cpu metric (we expect a pod to consume 30% of the CPU it requests), with the number of replicas in the range 2-10.

    Now let's look at the hpa mechanism in action if we apply load to one of the pods:

    # kubectl top pod
    NAME                        CPU(cores)   MEMORY(bytes)
    app-test-78559f8f44-pgs58   101m         243Mi
    app-test-78559f8f44-cj4jz   4m           240Mi

In total, we have the following:

  • Desired value (desiredMetricValue) - according to the hpa settings, we have 30%
  • Current value (currentMetricValue) - for the calculation, the controller-manager computes the average resource consumption in %, i.e. it does the following:
    1. Gets the absolute values of the pod metrics from the metrics server, i.e. 101m and 4m
    2. Calculates the average absolute value, i.e. (101m + 4m) / 2 = 53m
    3. Gets the absolute value of the desired resource consumption (for this, the requests of all containers are summed): 60m + 30m = 90m
    4. Calculates the average percentage of CPU consumption relative to the request pod, i.e. 53m / 90m * 100% = 59%

Now we have everything we need to determine whether the number of replicas should be changed; to do this, we calculate the ratio:

ratio = 59% / 30% = 1.96

That is, the number of replicas should be increased by ~2 times, to [2 * 1.96] = 4.
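The whole worked example can be reproduced in a few lines (an illustrative sketch of the arithmetic; the controller-manager performs this internally):

```python
import math

# Reproducing the calculation above: average measured CPU across the
# replicas, divided by the summed container requests per pod, gives the
# average utilization compared against the 30% target.
measured_m = [101, 4]                      # kubectl top pod, millicores
requests_m = 60 + 30                       # nginx + nginx-exporter requests
avg_m = sum(measured_m) / len(measured_m)  # 52.5m (the article rounds to 53m)
utilization = avg_m / requests_m * 100     # ~58-59%
ratio = utilization / 30                   # target is 30%
replicas = math.ceil(2 * ratio)            # 2 current replicas -> 4
```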

Conclusion: as you can see, for this mechanism to work, a necessary condition is the presence of requests for all containers in the observed pod.

Mechanism for horizontal autoscaling of nodes (Cluster Autoscaler)

To neutralize the negative impact on the system during load surges, having a configured hpa is not enough. For example, according to its settings, the hpa controller manager may decide that the number of replicas needs to be doubled, but the nodes have no free resources to run that many pods (i.e. a node cannot provide the requested resources to the pod) and these pods go into the Pending state.

In this case, if the provider has a corresponding IaaS/PaaS (for example, GKE/GCE, AKS, EKS, etc.), a tool like Node Autoscaler can help. It allows you to set the maximum and minimum number of nodes in the cluster and automatically adjust the current number of nodes (by calling the cloud provider's API to order/remove a node) when there is a lack of resources in the cluster and pods cannot be scheduled (are in the Pending state).

Conclusion: to be able to autoscale nodes, it is necessary to set requests in the pod containers so that k8s can correctly assess the load on the nodes and accordingly report that there are no resources in the cluster to launch the next pod.

Conclusion

It should be noted that setting container resource limits is not a prerequisite for an application to run successfully, but it is still better to do so for the following reasons:

  1. For more accurate operation of the scheduler in terms of load balancing between k8s nodes
  2. To reduce the likelihood of a "pod eviction" event occurring
  3. For horizontal autoscaling of application pods (HPA) to work
  4. For horizontal autoscaling of nodes (Cluster Autoscaling) with cloud providers


Source: will.com
