Kubernetes: why is it so important to configure system resource management?

As a rule, there is always a need to provide an application with a dedicated pool of resources for its correct and stable operation. But what if several applications are running on the same capacity? How do you provide each of them with the minimum necessary resources? How can you limit resource consumption? How do you correctly distribute the load between nodes? How do you make sure the horizontal scaling mechanism works if the application load increases?

You need to start with the main types of resources that exist in the system — these are, of course, CPU time and RAM. In k8s manifests these resource types are measured in the following units (a few equivalent notations are sketched right after the list):

  • CPU — in cores
  • RAM — in bytes
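
Both resources also accept convenient suffix notation, which is worth knowing before writing manifests; a few equivalent spellings as a hedged sketch (the values are arbitrary):

    resources:
      requests:
        cpu: "0.5"       # half a core, the same as 500m
        memory: 128Mi    # binary suffix: 128 * 2^20 bytes
      limits:
        cpu: 500m        # millicores: 1000m = 1 core
        memory: 1G       # decimal suffix: 10^9 bytes (1Gi would be 2^30)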

In addition, for each resource it is possible to set two kinds of requirements — requests and limits. Requests describe the minimum amount of free resources a node must have to run a container (and the pod as a whole), while limits set a hard cap on the resources available to the container.

It is important to understand that a manifest does not have to define both kinds explicitly, and the behavior will be as follows:

  • If only the limits of a resource are explicitly specified, then the requests for this resource automatically take a value equal to the limits (you can verify this by calling describe on the entity). I.e., in fact, the container will be limited to the same amount of resources that it requires to run (a minimal sketch of this case follows the list).
  • If only the requests are explicitly specified for a resource, then no upper restrictions are set on this resource — i.e., the container is limited only by the resources of the node itself.
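
As a hedged illustration of the first case, here is a minimal pod spec (the name is hypothetical) that sets only limits; after creation, kubectl describe pod limits-only-demo should show requests equal to those limits:

    apiVersion: v1
    kind: Pod
    metadata:
      name: limits-only-demo       # hypothetical name, for illustration
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          limits:                  # only limits are set...
            cpu: 250m
            memory: 256Mi
          # ...so requests default to the same values: cpu 250m, memory 256Mi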

It is also possible to configure resource management not only at the level of a specific container, but also at the namespace level, using the following entities:

  • LimitRange — describes the restriction policy at the container/pod level within an ns; it is needed to describe the default limits on containers/pods, to prevent the creation of obviously fat containers/pods (or vice versa), to cap their number, and to define the possible difference between the values of limits and requests
  • ResourceQuota — describes the restriction policy for all containers in an ns as a whole and is used, as a rule, to delimit resources between environments (useful when environments are not strictly separated at the node level)

Below are examples of manifests that set resource limits:

  • At the level of a specific container:

    containers:
    - name: app-nginx
      image: nginx
      resources:
        requests:
          memory: 1Gi
        limits:
          cpu: 200m

    I.e., in this case, to run a container with nginx, the node will need at least 1Gi of free RAM and 0.2 CPU, while at most the container can consume 0.2 CPU and all the RAM available on the node.

  • At the level of an entire ns:

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: nxs-test
    spec:
      hard:
        requests.cpu: 300m
        requests.memory: 1Gi
        limits.cpu: 700m
        limits.memory: 2Gi

    I.e., the sum of the requests of all containers in the default ns cannot exceed 300m of CPU and 1Gi of RAM, and the total sum of all limits — 700m of CPU and 2Gi of RAM.
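
    Once such a quota is created, consumption against it can be checked at any moment; a hedged sketch of the check (the Used values are illustrative):

    # kubectl describe resourcequota nxs-test
    Name:            nxs-test
    Namespace:       default
    Resource         Used  Hard
    --------         ----  ----
    limits.cpu       0     700m
    limits.memory    0     2Gi
    requests.cpu     0     300m
    requests.memory  0     1Gi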

  • Default limits for containers in an ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-per-container
    spec:
      limits:
      - type: Container
        defaultRequest:
          cpu: 100m
          memory: 1Gi
        default:
          cpu: 1
          memory: 2Gi
        min:
          cpu: 50m
          memory: 500Mi
        max:
          cpu: 2
          memory: 4Gi

    I.e., in the default namespace, for all containers, the requests will be set to 100m for CPU and 1Gi for RAM, and the limits — to 1 CPU and 2Gi. At the same time, a restriction is also set on the possible request/limit values for CPU (50m < x < 2) and RAM (500Mi < x < 4Gi).

  • Pod-level restrictions in an ns:

    apiVersion: v1
    kind: LimitRange
    metadata:
      name: nxs-limit-pod
    spec:
      limits:
      - type: Pod
        max:
          cpu: 4
          memory: 1Gi

    I.e., for each pod in the default ns there will be a limit of 4 vCPU and 1Gi.

Now I would like to tell you what advantages setting these restrictions can give us.

Load balancing mechanism between nodes

As you know, the k8s component responsible for distributing pods among nodes is the scheduler, which works according to a specific algorithm. This algorithm goes through two stages when choosing the optimal node to launch a pod:

  1. Filtering
  2. Scoring

I.e., according to the described policy, nodes are first selected on which it is possible to launch the pod, based on a set of predicates (including a check of whether the node has enough resources to run the pod — PodFitsResources). Then each of these nodes is awarded points according to priorities (among others: the more free resources a node has, the more points it is assigned — LeastResourceAllocation/LeastRequestedPriority/BalancedResourceAllocation), and the pod is launched on the node with the most points (if several nodes satisfy this condition at once, a random one is chosen).
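
To see the filtering stage in action, it is enough to request more than any node can offer; below is a hedged sketch (the pod name is hypothetical and the request is deliberately oversized):

    apiVersion: v1
    kind: Pod
    metadata:
      name: oversized-demo         # hypothetical name, for illustration
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            cpu: "64"              # assuming no node has 64 free cores,
            memory: 512Gi          # PodFitsResources rejects every node

Such a pod fails the PodFitsResources predicate on every node and remains in the Pending state until a suitable node appears.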

At the same time, you need to understand that when assessing the available resources of a node, the scheduler is guided by the data stored in etcd — i.e., by the amount of requested/limited resources of each pod running on this node, but not by the actual resource consumption. This information can be obtained from the output of the command kubectl describe node $NODE, for example:

# kubectl describe nodes nxs-k8s-s1
..
Non-terminated Pods:         (9 in total)
  Namespace                  Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                  ----                                         ------------  ----------  ---------------  -------------  ---
  ingress-nginx              nginx-ingress-controller-754b85bf44-qkt2t    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-flannel-26bl4                           150m (0%)     300m (1%)   64M (0%)         500M (1%)      233d
  kube-system                kube-proxy-exporter-cb629                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                kube-proxy-x9fsc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         233d
  kube-system                nginx-proxy-k8s-worker-s1                    25m (0%)      300m (1%)   32M (0%)         512M (1%)      233d
  nxs-monitoring             alertmanager-main-1                          100m (0%)     100m (0%)   425Mi (1%)       25Mi (0%)      233d
  nxs-logging                filebeat-lmsmp                               100m (0%)     0 (0%)      100Mi (0%)       200Mi (0%)     233d
  nxs-monitoring             node-exporter-v4gdq                          112m (0%)     122m (0%)   200Mi (0%)       220Mi (0%)     233d
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests           Limits
  --------           --------           ------
  cpu                487m (3%)          822m (5%)
  memory             15856217600 (2%)  749976320 (3%)
  ephemeral-storage  0 (0%)             0 (0%)

Here we see all the pods running on the specific node, as well as the resources that each pod requests. And here is what the scheduler log looks like when the cronjob-cron-events-1573793820-xt6q9 pod is launched (this information will appear in the scheduler log if you set the 10th logging level in the startup command arguments, -v=10):

Log:

I1115 07:57:21.637791       1 scheduling_queue.go:908] About to try and schedule pod nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                           
I1115 07:57:21.637804       1 scheduler.go:453] Attempting to schedule pod: nxs-stage/cronjob-cron-events-1573793820-xt6q9                                                                                                                                                    
I1115 07:57:21.638285       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s5 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638300       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s6 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s3 is allowed, Node is running only 20 out of 110 Pods.                                                                               
I1115 07:57:21.638322       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s4 is allowed, Node is running only 17 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, Node is running only 16 out of 110 Pods.                                                                              
I1115 07:57:21.638365       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s12 is allowed, Node is running only 9 out of 110 Pods.                                                                               
I1115 07:57:21.638334       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s11 is allowed, Node is running only 11 out of 110 Pods.                                                                              
I1115 07:57:21.638385       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s1 is allowed, Node is running only 19 out of 110 Pods.                                                                               
I1115 07:57:21.638402       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s2 is allowed, Node is running only 21 out of 110 Pods.                                                                               
I1115 07:57:21.638383       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, Node is running only 16 out of 110 Pods.                                                                               
I1115 07:57:21.638335       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, Node is running only 18 out of 110 Pods.                                                                               
I1115 07:57:21.638408       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s13 is allowed, Node is running only 8 out of 110 Pods.                                                                               
I1115 07:57:21.638478       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s10 is allowed, existing pods anti-affinity terms satisfied.                                                                         
I1115 07:57:21.638505       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s8 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638577       1 predicates.go:1369] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s9 is allowed, existing pods anti-affinity terms satisfied.                                                                          
I1115 07:57:21.638583       1 predicates.go:829] Schedule Pod nxs-stage/cronjob-cron-events-1573793820-xt6q9 on Node nxs-k8s-s7 is allowed, Node is running only 25 out of 110 Pods.                                                                               
I1115 07:57:21.638932       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 9        
I1115 07:57:21.638946       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 2343 millicores 9640186880 memory bytes, score 8           
I1115 07:57:21.638961       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: BalancedResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 9        
I1115 07:57:21.638971       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: BalancedResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7        
I1115 07:57:21.638975       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: LeastResourceAllocation, capacity 39900 millicores 66620170240 memory bytes, total request 4107 millicores 11307422720 memory bytes, score 8           
I1115 07:57:21.638990       1 resource_allocation.go:78] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: LeastResourceAllocation, capacity 39900 millicores 66620178432 memory bytes, total request 5847 millicores 24333637120 memory bytes, score 7           
I1115 07:57:21.639022       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: TaintTolerationPriority, Score: (10)                                                                                                        
I1115 07:57:21.639030       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639034       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: TaintTolerationPriority, Score: (10)                                                                                                         
I1115 07:57:21.639041       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: NodeAffinityPriority, Score: (0)                                                                                                            
I1115 07:57:21.639053       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639059       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: NodeAffinityPriority, Score: (0)                                                                                                             
I1115 07:57:21.639061       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: InterPodAffinityPriority, Score: (0)                                                                                                                   
I1115 07:57:21.639063       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                                   
I1115 07:57:21.639073       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639077       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639085       1 interpod_affinity.go:237] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: InterPodAffinityPriority, Score: (0)                                                                                                                    
I1115 07:57:21.639088       1 selector_spreading.go:146] cronjob-cron-events-1573793820-xt6q9 -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                                    
I1115 07:57:21.639103       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s10: SelectorSpreadPriority, Score: (10)                                                                                                         
I1115 07:57:21.639109       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s8: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639114       1 generic_scheduler.go:726] cronjob-cron-events-1573793820-xt6q9_nxs-stage -> nxs-k8s-s9: SelectorSpreadPriority, Score: (10)                                                                                                          
I1115 07:57:21.639127       1 generic_scheduler.go:781] Host nxs-k8s-s10 => Score 100037                                                                                                                                                                            
I1115 07:57:21.639150       1 generic_scheduler.go:781] Host nxs-k8s-s8 => Score 100034                                                                                                                                                                             
I1115 07:57:21.639154       1 generic_scheduler.go:781] Host nxs-k8s-s9 => Score 100037                                                                                                                                                                             
I1115 07:57:21.639267       1 scheduler_binder.go:269] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10"                                                                                                               
I1115 07:57:21.639286       1 scheduler_binder.go:279] AssumePodVolumes for pod "nxs-stage/cronjob-cron-events-1573793820-xt6q9", node "nxs-k8s-s10": all PVCs bound and nothing to do                                                                             
I1115 07:57:21.639333       1 factory.go:733] Attempting to bind cronjob-cron-events-1573793820-xt6q9 to nxs-k8s-s10

Here we see that initially the scheduler filters and forms a list of 3 nodes on which the pod can be launched (nxs-k8s-s8, nxs-k8s-s9, nxs-k8s-s10). Then it calculates scores based on several parameters (including BalancedResourceAllocation, LeastResourceAllocation) for each of these nodes in order to determine the most suitable one. In the end, the pod is scheduled on the node with the highest number of points (here two nodes at once have the same number of points, 100037, so a random one is chosen — nxs-k8s-s10).

Takeaway: if a node runs pods for which no restrictions are set, then for k8s (from the point of view of resource consumption) it will be as if there were no such pods on this node at all. Therefore, if you have, say, a pod with a gluttonous process (for example, wowza) and no restrictions are set for it, a situation may arise where the pod has actually eaten up all the resources of the node, yet for k8s this node is considered unloaded and will be awarded the same number of points during scoring (specifically, in the points assessing available resources) as a node with no running pods at all — which can ultimately lead to an uneven distribution of the load between nodes.

Pod eviction

As you know, each pod is assigned one of 3 QoS classes:

  1. guaranteed — assigned when a request and a limit are specified for memory and CPU for every container in the pod, and these values must match (a minimal example follows this list)
  2. burstable — at least one container in the pod has a request and a limit, with request < limit
  3. best effort — when not a single container in the pod has its resources limited
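
For instance, here is a minimal sketch of a spec that lands in the guaranteed class (the pod name is hypothetical; requests equal limits for every container):

    apiVersion: v1
    kind: Pod
    metadata:
      name: guaranteed-demo        # hypothetical name, for illustration
    spec:
      containers:
      - name: app
        image: nginx
        resources:
          requests:
            cpu: 100m
            memory: 256Mi
          limits:                  # equal to the requests => Guaranteed
            cpu: 100m
            memory: 256Mi

Removing the resources section from every container would instead put the pod into the best effort class.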

At the same time, when a node experiences a lack of resources (disk, memory), kubelet begins to rank and evict pods according to a specific algorithm that takes into account the pod's priority and its QoS class. For example, if we are talking about RAM, then based on the QoS class, points are awarded according to the following principle (a worked example follows the list):

  • Guaranteed: -998
  • BestEffort: 1000
  • Burstable: min(max(2, 1000 - (1000 * memoryRequestBytes) / machineMemoryCapacityBytes), 999)
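
To make the burstable formula concrete, take an assumed pod that requests 1 GiB of RAM on a node with 4 GiB of capacity (the numbers are purely illustrative):

    min(max(2, 1000 - (1000 * 1073741824) / 4294967296), 999)
      = min(max(2, 1000 - 250), 999)
      = min(750, 999)
      = 750

The larger the share of node memory a pod requests, the lower this score, and the less likely the pod is to be evicted first.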

I.e., with the same priority, kubelet will first evict pods with the best effort QoS class from the node.

Takeaway: if you want to reduce the chance of the desired pod being evicted from the node when it runs short of resources, then along with the priority you also need to take care of setting the request/limit for the pod.

Mechanism for horizontal autoscaling of application pods (HPA)

When the task is to automatically increase and decrease the number of pods depending on resource usage (system — CPU/RAM, or user — rps), a k8s entity such as HPA (Horizontal Pod Autoscaler) comes to the rescue. Its algorithm is as follows:

  1. The current readings of the observed resource are determined (currentMetricValue)
  2. The desired values for the resource are determined (desiredMetricValue), which for system resources are set using requests
  3. The current number of replicas is determined (currentReplicas)
  4. The following formula calculates the desired number of replicas (desiredReplicas):
    desiredReplicas = [currentReplicas * (currentMetricValue / desiredMetricValue)]

In this case, scaling will not occur when the coefficient (currentMetricValue / desiredMetricValue) is close to 1 (we can set the permissible tolerance ourselves; by default it is 0.1).
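
For example, with a target of 30%, a measured average of 32% stays within the default tolerance, while 34% does not (the numbers are illustrative):

    ratio = 32% / 30% = 1.07  ->  |1.07 - 1| < 0.1, replica count unchanged
    ratio = 34% / 30% = 1.13  ->  |1.13 - 1| > 0.1, scaling is triggered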

Let's look at how hpa works using the example of the app-test application (described as a Deployment), where it is necessary to change the number of replicas depending on CPU consumption:

  • Application manifest

    kind: Deployment
    apiVersion: apps/v1beta2
    metadata:
      name: app-test
    spec:
      selector:
        matchLabels:
          app: app-test
      replicas: 2
      template:
        metadata:
          labels:
            app: app-test
        spec:
          containers:
          - name: nginx
            image: registry.nixys.ru/generic-images/nginx
            imagePullPolicy: Always
            resources:
              requests:
                cpu: 60m
            ports:
            - name: http
              containerPort: 80
          - name: nginx-exporter
            image: nginx/nginx-prometheus-exporter
            resources:
              requests:
                cpu: 30m
            ports:
            - name: nginx-exporter
              containerPort: 9113
            args:
            - -nginx.scrape-uri
            - http://127.0.0.1:80/nginx-status

    I.e., we see that the application pod is initially launched in two instances, each containing two containers — nginx and nginx-exporter — and CPU requests are specified for each of them.

  • HPA manifest

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: app-test-hpa
    spec:
      maxReplicas: 10
      minReplicas: 2
      scaleTargetRef:
        apiVersion: extensions/v1beta1
        kind: Deployment
        name: app-test
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 30

    I.e., we created an hpa that will monitor the app-test Deployment and adjust the number of pods of the application based on the CPU indicator (we expect that a pod should consume 30% of the CPU it requests), with the number of replicas in the range of 2-10.
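
    After applying this manifest, the current state of the autoscaler can be watched with kubectl; a hedged sketch of the idle state (the TARGETS and AGE values are illustrative):

     # kubectl get hpa app-test-hpa
     NAME           REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
     app-test-hpa   Deployment/app-test   5%/30%    2         10        2          1m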

    Now let's look at the hpa operation mechanism if we apply a load to one of the pods:

     # kubectl top pod
     NAME                          CPU(cores)   MEMORY(bytes)
     app-test-78559f8f44-pgs58     101m         243Mi
     app-test-78559f8f44-cj4jz     4m           240Mi

In total we have the following:

  • The desired value (desiredMetricValue) — according to the hpa settings, we have 30%
  • The current value (currentMetricValue) — for the calculation, controller-manager computes the average value of resource consumption in %, i.e., conditionally does the following:
    1. Receives the absolute values of pod metrics from the metrics server, i.e., 101m and 4m
    2. Calculates the average absolute value, i.e., (101m + 4m) / 2 = 53m
    3. Gets the absolute value of the desired resource consumption (for this, the requests of all containers are summed up): 60m + 30m = 90m
    4. Calculates the average percentage of CPU consumption relative to the pod's requests, i.e., 53m / 90m * 100% = 59%

Now we have everything we need to determine whether the number of replicas needs to be changed; to do this, we calculate the coefficient:

ratio = 59% / 30% = 1.96

I.e., the number of replicas should be increased by ~2 times, to [2 * 1.96] = 4.

Conclusion: As you can see, for this mechanism to work, a necessary condition is the presence of requests for all containers in the observed pod.

Mechanism for horizontal autoscaling of nodes (Cluster Autoscaler)

In order to neutralize the negative impact on the system during load surges, having a configured hpa is not enough. For example, according to its settings, the hpa controller manager decides that the number of replicas needs to be doubled, but the nodes have no free resources to run that many pods (i.e., a node cannot provide the resources specified in the pod's requests), and these pods switch to the Pending state.

In this case, if the provider has a corresponding IaaS/PaaS (for example, GKE/GCE, AKS, EKS, etc.), a tool such as Node Autoscaler comes to the rescue. It allows you to set the maximum and minimum number of nodes in the cluster and automatically adjust the current number of nodes (by calling the cloud provider's API to order/remove a node) when there is a lack of resources in the cluster and pods cannot be scheduled (i.e., are in the Pending state).

Conclusion: To be able to autoscale nodes, it is necessary to set requests in the pod containers so that k8s can correctly assess the load on the nodes and accordingly report that there are no resources in the cluster to launch the next pod.

Conclusion

It should be noted that setting container resource limits is not a prerequisite for the application to run successfully, but it is still better to do so, for the following reasons:

  1. For more accurate operation of the scheduler in terms of load balancing between k8s nodes
  2. To reduce the likelihood of a "pod eviction" event occurring
  3. For the horizontal autoscaling of application pods (HPA) to work
  4. For the horizontal autoscaling of nodes (Cluster Autoscaler) at cloud providers

Source: www.habr.com
