Building an additional kube-scheduler with a custom set of scheduling policies


Kube-scheduler is an integral component of Kubernetes, responsible for scheduling pods on nodes in accordance with certain policies. Usually, while a Kubernetes cluster is running, we don't have to think about exactly which policies are used to schedule pods, since the default kube-scheduler's policy set suits most everyday workloads. However, there are situations when it is important to fine-tune the pod placement process, and there are two ways to accomplish this:

  1. Create a kube-scheduler with a custom set of rules
  2. Write your own scheduler and teach it to work with API server requests

In this article, I will describe the implementation of the first option to solve the problem of uneven pod scheduling on one of our projects.

A brief overview of how kube-scheduler works

It is worth noting that kube-scheduler is not responsible for placing pods directly; it is only responsible for determining the node on which to place a pod. In other words, the result of kube-scheduler's work is a node name, which it returns to the API server in response to a scheduling request, and that is where its job ends.

First, kube-scheduler compiles a list of nodes on which the pod can be scheduled, according to the predicates policies (filtering). Next, each node from this list is awarded a certain number of points according to the priorities policies (scoring). Finally, the node with the maximum number of points is selected; if several nodes share the maximum score, one of them is chosen at random. A list and descriptions of the predicates (filtering) and priorities (scoring) policies can be found in the documentation.

Description of the problem

Despite the large number of different Kubernetes clusters maintained at Nixys, we first ran into the problem of pod scheduling only recently, when one of our projects needed to run a large number of periodic tasks (~100 CronJob entities). To simplify the description of the problem as much as possible, let's take one microservice as an example, within which a cron task is launched once a minute, creating some load on the CPU. Three nodes with absolutely identical characteristics were allocated for running the cron tasks (24 vCPUs on each).

At the same time, it is impossible to say exactly how long a given CronJob will take to run, since the volume of input data is constantly changing. On average, during normal kube-scheduler operation, each node runs 3-4 instances of the task, which create about 20-30% of the CPU load on each node:


The problem itself is that, from time to time, cron task pods stopped being scheduled on one of the three nodes. That is, at some point in time not a single pod was planned for one of the nodes, while 6-8 copies of the task were running on the other two nodes, creating roughly 40-60% of the CPU load:


The problem recurred with varying frequency and occasionally correlated with the moment a new version of the code was rolled out.
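
To see exactly how each node was being scored, we needed much more verbose scheduler logs. A minimal sketch of raising the verbosity, assuming a kubeadm-style cluster where the default scheduler runs as a static pod (kubelet restarts it automatically once the manifest changes; the flag set shown here is illustrative):

# /etc/kubernetes/manifests/kube-scheduler.yaml (fragment)
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --v=10   # maximum verbosity: per-priority node scores appear in the logs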

By increasing the kube-scheduler logging level to 10 (-v=10), we started recording how many points each node received during evaluation. During normal scheduling, the following information can be seen in the logs:

resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node01 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node03 => Score 100043

That is, judging by the information obtained from the logs, each of the nodes received an equal number of final points, and a random one was selected for scheduling. At the moment of problematic scheduling, the logs looked like this:

resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9 
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node03 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node01 => Score 100038

As you can see, one of the nodes received fewer final points than the others, and therefore scheduling was carried out only for the two nodes with the maximum score. Thus, we became firmly convinced that the problem lies precisely in pod scheduling.

The further algorithm for solving the problem was obvious to us: analyze the logs, understand which priority left the node without points and, if necessary, adjust the policies of the default kube-scheduler. However, here we ran into two significant difficulties:

  1. At the maximum logging level (10), points are reflected only for some priorities. In the log excerpt above, you can see that for all the priorities that do appear in the logs, the nodes score the same number of points during normal and problematic scheduling, yet the final result in the problematic case differs. Thus, we can conclude that for some priorities the scoring happens "behind the scenes", and we have no way of telling which priority left the node without points. We described this problem in detail in an issue in the Kubernetes repository on Github. At the time of writing, a response was received from the developers that logging support would be added in the Kubernetes v1.15, 1.16 and 1.17 updates.
  2. There is no easy way to understand which specific set of policies kube-scheduler is currently working with. Yes, the documentation lists the policies, but it contains no information about which specific weights are assigned to each of the priorities policies. You can see the weights, or edit the policies of the default kube-scheduler, only in the source code.
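
Since the weights live only in the source, one way to inspect them is to grep a checkout of the Kubernetes tree. The paths below follow the v1.14 source layout and should be treated as an assumption, not a stable interface:

git clone --branch v1.14.7 --depth 1 https://github.com/kubernetes/kubernetes
# the default predicates/priorities and their weights are registered under algorithmprovider
grep -R "LeastRequestedPriority" kubernetes/pkg/scheduler/algorithmprovider/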

It is worth noting that at one point we did manage to record that a node failed to receive points according to the ImageLocalityPriority policy, which awards points to a node if it already has the image required to run the application. That is, at the moment a new version of the application was rolled out, the cron task managed to start on two of the nodes, which downloaded the new image from the docker registry, and so those two nodes received a higher final score relative to the third.

As I wrote above, the logs do not show information about the evaluation of the ImageLocalityPriority policy, so in order to test our assumption we dumped the image with the new version of the application onto the third node, after which scheduling worked correctly. It was precisely because of the ImageLocalityPriority policy that the scheduling problem was observed rather rarely; more often it was associated with something else. Because we could not fully debug each of the policies in the default kube-scheduler's priority list, we needed flexible management of pod scheduling policies.
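
A minimal sketch of that check: pre-pulling the new image on the lagging node so that ImageLocalityPriority can no longer set it apart (the registry path and tag here are hypothetical):

# run on the node that was missing the image
docker pull registry.example.com/project/cronjob-worker:v2.0.1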

Problem statement

We wanted the solution to the problem to be as targeted as possible, that is, the main entities of Kubernetes (here we mean the default kube-scheduler) should remain unchanged. We did not want to solve the problem in one place and create it in another. Thus, we arrived at the two options for solving the problem announced in the introduction to the article: creating an additional scheduler or writing our own. The key requirement for scheduling cron tasks is to distribute the load evenly across the three nodes. This requirement can be satisfied by the existing kube-scheduler policies, so there is no point in writing our own scheduler to solve our problem.

Instructions for creating and deploying an additional kube-scheduler are described in the documentation. However, it seemed to us that the Deployment entity was not enough to ensure fault tolerance for such a critical service as kube-scheduler, so we decided to deploy the new kube-scheduler as a Static Pod that would be monitored directly by Kubelet. Thus, we have the following requirements for the new kube-scheduler:

  1. The service must be deployed as a Static Pod on all cluster masters
  2. Fault tolerance must be provided in case the active pod with kube-scheduler is unavailable
  3. The main priority during scheduling should be the amount of resources available on the node (LeastRequestedPriority)

Implementing the solution

It is worth noting right away that we will carry out all the work in Kubernetes v1.14.7, since that is the version used in the project. Let's start by writing a manifest for our new kube-scheduler. We'll take the default manifest (/etc/kubernetes/manifests/kube-scheduler.yaml) as a base and bring it to the following form:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: kube-scheduler-cron
  namespace: kube-system
spec:
  containers:
  - command:
    - /usr/local/bin/kube-scheduler
    - --address=0.0.0.0
    - --port=10151
    - --secure-port=10159
    - --config=/etc/kubernetes/scheduler-custom.conf
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --v=2
    image: gcr.io/google-containers/kube-scheduler:v1.14.7
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10151
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler-cron-container
    resources:
      requests:
        cpu: '0.1'
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kube-config
      readOnly: true
    - mountPath: /etc/localtime
      name: localtime
      readOnly: true
    - mountPath: /etc/kubernetes/scheduler-custom.conf
      name: scheduler-config
      readOnly: true
    - mountPath: /etc/kubernetes/scheduler-custom-policy-config.json
      name: policy-config
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kube-config
  - hostPath:
      path: /etc/localtime
    name: localtime
  - hostPath:
      path: /etc/kubernetes/scheduler-custom.conf
      type: FileOrCreate
    name: scheduler-config
  - hostPath:
      path: /etc/kubernetes/scheduler-custom-policy-config.json
      type: FileOrCreate
    name: policy-config

Briefly about the key changes:

  1. Changed the pod and container name to kube-scheduler-cron
  2. Specified ports 10151 and 10159, since the hostNetwork: true option is set and we cannot use the same ports as the default kube-scheduler (10251 and 10259)
  3. Using the --config parameter, specified the configuration file the service should be started with
  4. Configured mounting of the configuration file (scheduler-custom.conf) and the scheduling policy file (scheduler-custom-policy-config.json) from the host

Don't forget that our kube-scheduler will need the same permissions as the default one. Edit its cluster role:

kubectl edit clusterrole system:kube-scheduler

...
   resourceNames:
    - kube-scheduler
    - kube-scheduler-cron
...
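
As a quick sanity check that the role now covers both names (a hedged example; output formatting may vary across kubectl versions):

kubectl get clusterrole system:kube-scheduler -o yaml | grep -A 3 resourceNames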

Now let's talk about what should be contained in the configuration file and in the scheduling policy file:

  • Configuration file (scheduler-custom.conf)
    To obtain the default kube-scheduler configuration, you should use the --write-config-to parameter described in the documentation (a possible invocation is sketched at the end of this subsection). We will place the resulting configuration in the file /etc/kubernetes/scheduler-custom.conf and reduce it to the following form:

apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: kube-scheduler-cron
bindTimeoutSeconds: 600
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10151
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  lockObjectName: kube-scheduler-cron
  lockObjectNamespace: kube-system
  renewDeadline: 10s
  resourceLock: endpoints
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10151
percentageOfNodesToScore: 0
algorithmSource:
   policy:
     file:
       path: "/etc/kubernetes/scheduler-custom-policy-config.json"

Briefly about the key changes:

  1. We set schedulerName to the name of our service, kube-scheduler-cron.
  2. In the lockObjectName parameter you also need to set the name of our service, and make sure that the leaderElect parameter is set to true (if you have a single master node, you can set it to false).
  3. In the algorithmSource parameter we specify the path to the file describing the scheduling policies.

It is worth taking a closer look at the second point, where we edit the parameters under the leaderElection key. To ensure fault tolerance, we have enabled (leaderElect) the election of a leader (master) among the pods of our kube-scheduler, using a single endpoint for them (resourceLock) named kube-scheduler-cron (lockObjectName) in the kube-system namespace (lockObjectNamespace). How Kubernetes ensures high availability of its main components (including kube-scheduler) is covered in this article.
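
As promised above, a minimal sketch of dumping the running defaults with --write-config-to. The flag itself is documented for v1.14; invoking the binary via the control-plane image and the mount layout shown here are assumptions:

# run the scheduler binary once, only to dump its effective default configuration
docker run --rm -v /etc/kubernetes:/etc/kubernetes \
  gcr.io/google-containers/kube-scheduler:v1.14.7 \
  /usr/local/bin/kube-scheduler \
  --write-config-to=/etc/kubernetes/scheduler-custom.conf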

  • Scheduling policy file (scheduler-custom-policy-config.json)
    As I wrote earlier, we can only find out which specific policies the default kube-scheduler works with by analyzing its code. That is, we cannot obtain a file with the default kube-scheduler's scheduling policies in the same way as we did the configuration file. Let's describe the scheduling policies we are interested in in the file /etc/kubernetes/scheduler-custom-policy-config.json as follows:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "GeneralPredicates"
    }
  ],
  "priorities": [
    {
      "name": "ServiceSpreadingPriority",
      "weight": 1
    },
    {
      "name": "EqualPriority",
      "weight": 1
    },
    {
      "name": "LeastRequestedPriority",
      "weight": 1
    },
    {
      "name": "NodePreferAvoidPodsPriority",
      "weight": 10000
    },
    {
      "name": "NodeAffinityPriority",
      "weight": 1
    }
  ],
  "hardPodAffinitySymmetricWeight" : 10,
  "alwaysCheckAllPredicates" : false
}

Thus, kube-scheduler first compiles a list of nodes on which a pod can be scheduled according to the GeneralPredicates policy (which includes the PodFitsResources, PodFitsHostPorts, HostName, and MatchNodeSelector policies). Then each node is evaluated according to the set of policies in the priorities array. To fulfill the conditions of our task, we considered such a set of policies to be the optimal solution. Let me remind you that the set of policies, with their detailed descriptions, is available in the documentation. To accomplish your own task, you can simply change the set of policies used and assign appropriate weights to them.
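For example, if free resources were meant to dominate the decision even more strongly, one could raise the weight of that single priority. A hypothetical variation of one entry in the priorities array, not what we actually run:

    {
      "name": "LeastRequestedPriority",
      "weight": 10
    }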

Let's name the manifest of the new kube-scheduler that we created at the beginning of the chapter kube-scheduler-custom.yaml and place it at the path /etc/kubernetes/manifests on the three master nodes. If everything is done correctly, Kubelet will launch a pod on each node, and in the logs of our new kube-scheduler we will see information that our policy file has been applied successfully:

Creating scheduler from configuration: {{ } [{GeneralPredicates <nil>}] [{ServiceSpreadingPriority 1 <nil>} {EqualPriority 1 <nil>} {LeastRequestedPriority 1 <nil>} {NodePreferAvoidPodsPriority 10000 <nil>} {NodeAffinityPriority 1 <nil>}] [] 10 false}
Registering predicate: GeneralPredicates
Predicate type GeneralPredicates already registered, reusing.
Registering priority: ServiceSpreadingPriority
Priority type ServiceSpreadingPriority already registered, reusing.
Registering priority: EqualPriority
Priority type EqualPriority already registered, reusing.
Registering priority: LeastRequestedPriority
Priority type LeastRequestedPriority already registered, reusing.
Registering priority: NodePreferAvoidPodsPriority
Priority type NodePreferAvoidPodsPriority already registered, reusing.
Registering priority: NodeAffinityPriority
Priority type NodeAffinityPriority already registered, reusing.
Creating scheduler with fit predicates 'map[GeneralPredicates:{}]' and priority functions 'map[EqualPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} ServiceSpreadingPriority:{}]'
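
One way to confirm that a pod is up on every master (the component: scheduler label comes from our manifest; the exact output will vary):

kubectl -n kube-system get pods -l component=scheduler -o wide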

All that remains is to indicate in the spec of our CronJob that all requests to schedule its pods should be processed by our new kube-scheduler:

...
  jobTemplate:
    spec:
      template:
        spec:
          schedulerName: kube-scheduler-cron
...
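
To double-check which scheduler actually placed a pod, the Scheduled event names its source. A hedged example, using the project-stage namespace from the log excerpts above:

kubectl -n project-stage get events --field-selector reason=Scheduled | grep kube-scheduler-cron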

Results

In the end, we got an additional kube-scheduler with a unique set of scheduling policies, whose operation is monitored directly by the kubelet. In addition, we have set up the election of a new leader among the pods of our kube-scheduler in case the old leader becomes unavailable for any reason.

Regular applications and services continue to be scheduled through the default kube-scheduler, while all cron tasks have been completely moved over to the new one. The load created by the cron tasks is now distributed evenly across all nodes. Considering that most of the cron tasks run on the same nodes as the project's main applications, this has significantly reduced the risk of pods being relocated due to a lack of resources. Since introducing the additional kube-scheduler, problems with uneven scheduling of cron tasks have not arisen again.


Source: www.habr.com
