Creating an additional kube-scheduler with a custom set of scheduling rules

Kube-scheduler is an integral component of Kubernetes, responsible for scheduling pods across nodes in accordance with the specified policies. During the normal operation of a Kubernetes cluster we usually do not have to think about which policies are used to schedule pods, since the default kube-scheduler policy set suits most everyday workloads. However, there are situations where fine-grained control over pod placement matters, and there are two ways to accomplish this:

  1. Create an additional kube-scheduler with a custom set of rules
  2. Write your own scheduler and teach it to work with API server requests

In this article, I will describe the implementation of the first option to solve the problem of uneven pod scheduling on one of our projects.

A brief introduction to how kube-scheduler works

It is worth emphasizing that kube-scheduler is not responsible for directly placing pods - it is only responsible for determining the node on which a pod should be placed. In other words, the result of kube-scheduler's work is the name of a node, which it returns to the API server in response to a scheduling request, and that is where its job ends.

First, kube-scheduler compiles a list of nodes on which the pod can be scheduled, in accordance with the predicate policies. Next, each node in this list receives a certain number of points in accordance with the priority policies. Finally, the node with the highest number of points is selected. If several nodes share the same maximum score, a random one is chosen. A list and description of the predicate (filtering) and priority (scoring) policies can be found in the documentation.
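
The two-phase selection above can be sketched in a few lines of Python (a simplified illustration with made-up predicate and priority functions, not the actual Go implementation):

```python
import random

def schedule(pod, nodes, predicates, priorities):
    """Pick a node for a pod: filter by predicates, then score by weighted priorities."""
    # Phase 1: filtering - keep only the nodes that pass every predicate.
    feasible = [n for n in nodes if all(pred(pod, n) for pred in predicates)]
    if not feasible:
        return None  # no suitable node: the pod stays Pending
    # Phase 2: scoring - each priority function returns 0..10, multiplied by its weight.
    scores = {n: sum(w * prio(pod, n) for prio, w in priorities) for n in feasible}
    best = max(scores.values())
    # Nodes sharing the maximum score are tie-broken at random.
    return random.choice([n for n, s in scores.items() if s == best])
```

With a LeastRequestedPriority-style scoring function, the node with the most free resources wins; when all nodes tie, as in the logs shown below, the choice is random.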

Description of the problem

Despite the large number of different Kubernetes clusters maintained at Nixys, we first encountered the problem of pod scheduling only recently, when one of our projects needed to run a large number of periodic tasks (~100 CronJob entities). To simplify the description of the problem as much as possible, we will take as an example one microservice, within which a cron task is launched once a minute, creating some load on the CPU. Three nodes with absolutely identical characteristics (24 vCPUs each) were allocated to run the cron tasks.

At the same time, it is impossible to say precisely how long a CronJob will take to execute, since the volume of input data changes constantly. On average, during normal operation of kube-scheduler, each node runs 3-4 task instances, which create ~20-30% of the load on each node's CPU:

The problem itself is that sometimes the cron task pods stopped being scheduled to one of the three nodes. That is, at some point in time, not a single pod was scheduled to one of the nodes, while on the other two nodes 6-8 copies of the task were running, creating ~40-60% of the CPU load:

The problem recurred at completely random intervals and sometimes correlated with the moment a new version of the code was rolled out.

By increasing the kube-scheduler logging level to level 10 (-v=10), we started recording how many points each node received during the evaluation process. During normal scheduling, the following information could be seen in the log:

resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node01 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node03 => Score 100043

That is, judging by the information obtained from the logs, each of the nodes received an equal number of final points, and a random one was selected for scheduling. At the time of the problematic scheduling, the logs looked like this:

resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9 
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node03 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node01 => Score 100038

From which it can be seen that one of the nodes received fewer final points than the others, and therefore scheduling was performed only to the two nodes that received the maximum score. Thus, we were firmly convinced that the problem lay precisely in the scheduling of the pods.

The further algorithm for solving the problem was obvious to us - analyze the logs, understand which priority caused the node to miss out on points and, if necessary, adjust the default kube-scheduler policies. However, here we ran into two significant difficulties:

  1. At the maximum logging level (10), points awarded for only some of the priorities are reflected. In the excerpt of the logs above, you can see that for all the priorities shown in the logs, the nodes score the same number of points during both normal and problematic scheduling, yet the final result in the problematic case differs. Thus, we can conclude that for some priorities, scoring takes place "behind the scenes", and we have no way of understanding which priority caused the node to miss out on points. We described this problem in detail in an issue in the Kubernetes repository on GitHub. At the time of writing, a response was received from the developers that logging support would be added in the Kubernetes v1.15, 1.16, and 1.17 updates.
  2. There is no easy way to understand which set of policies kube-scheduler is currently working with. Yes, the documentation lists them, but it does not contain information about what specific weights are assigned to each of the priority policies. You can see the weights or edit the default kube-scheduler policies only in the source code.

It is worth noting that we once managed to record that a node did not receive points according to the ImageLocalityPriority policy, which awards points to a node if it already has the image required to run the application. That is, at the moment a new version of the application was rolled out, the cron task managed to run on two nodes, pulling the new image from the docker registry to them, and thus those two nodes received a higher final score relative to the third.

As I wrote above, the logs show no information about the evaluation of the ImageLocalityPriority policy, so in order to test our assumption, we copied the image with the new version of the application onto the third node, after which scheduling worked correctly. It was precisely because of the ImageLocalityPriority policy that the scheduling problem was observed rather rarely; more often it was associated with something else. Since we could not fully debug each of the policies in the default kube-scheduler priority list, we needed flexible management of pod scheduling policies.

Formulation of the problem

We wanted the solution to be as targeted as possible, that is, the main Kubernetes entities (here we mean the default kube-scheduler) should remain unchanged. We did not want to solve a problem in one place and create it in another. So we ended up with the two options for solving the problem announced in the introduction to the article - creating an additional scheduler or writing our own. The key requirement for scheduling cron tasks is to distribute the load evenly across the three nodes. This requirement can be satisfied by the existing kube-scheduler policies, so there is no need to write our own scheduler to solve our problem.

Instructions for creating and deploying an additional kube-scheduler are described in the documentation. However, it seemed to us that the Deployment entity was not enough to ensure fault tolerance for such a critical service as kube-scheduler, so we decided to deploy the new kube-scheduler as a Static Pod, which would be monitored directly by Kubelet. This gives us the following requirements for the new kube-scheduler:

  1. The service must be deployed as a Static Pod on all cluster masters
  2. Fault tolerance must be provided in case the active pod with kube-scheduler becomes unavailable
  3. The main priority when scheduling should be the amount of available resources on the node (LeastRequestedPriority)

Implementation

It is worth noting right away that we will carry out all the work in Kubernetes v1.14.7, since this is the version used on the project. Let's start by writing a manifest for our new kube-scheduler. Let's take the default manifest (/etc/kubernetes/manifests/kube-scheduler.yaml) as a basis and bring it to the following form:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: kube-scheduler-cron
  namespace: kube-system
spec:
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --port=10151
        - --secure-port=10159
        - --config=/etc/kubernetes/scheduler-custom.conf
        - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
        - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
        - --v=2
        image: gcr.io/google-containers/kube-scheduler:v1.14.7
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10151
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-scheduler-cron-container
        resources:
          requests:
            cpu: '0.1'
        volumeMounts:
        - mountPath: /etc/kubernetes/scheduler.conf
          name: kube-config
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom.conf
          name: scheduler-config
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom-policy-config.json
          name: policy-config
          readOnly: true
      hostNetwork: true
      priorityClassName: system-cluster-critical
      volumes:
      - hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate
        name: kube-config
      - hostPath:
          path: /etc/localtime
        name: localtime
      - hostPath:
          path: /etc/kubernetes/scheduler-custom.conf
          type: FileOrCreate
        name: scheduler-config
      - hostPath:
          path: /etc/kubernetes/scheduler-custom-policy-config.json
          type: FileOrCreate
        name: policy-config

Briefly about the main changes:

  1. Changed the pod and container name to kube-scheduler-cron
  2. Specified the use of ports 10151 and 10159, since the option hostNetwork: true is set and we cannot use the same ports as the default kube-scheduler (10251 and 10259)
  3. Using the --config parameter, specified the configuration file with which the service should be started
  4. Configured mounting of the configuration file (scheduler-custom.conf) and the scheduling policy file (scheduler-custom-policy-config.json) from the host

Do not forget that our kube-scheduler will need rights similar to the default one. Edit its cluster role:

kubectl edit clusterrole system:kube-scheduler

...
   resourceNames:
    - kube-scheduler
    - kube-scheduler-cron
...

Now let's talk about what exactly should go into the configuration file and the scheduling policy file:

  • Configuration file (scheduler-custom.conf)
    To obtain the default kube-scheduler configuration, you need to use the --write-config-to parameter described in the documentation. We will place the resulting configuration in the file /etc/kubernetes/scheduler-custom.conf and reduce it to the following form:

apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: kube-scheduler-cron
bindTimeoutSeconds: 600
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10151
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  lockObjectName: kube-scheduler-cron
  lockObjectNamespace: kube-system
  renewDeadline: 10s
  resourceLock: endpoints
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10151
percentageOfNodesToScore: 0
algorithmSource:
   policy:
     file:
       path: "/etc/kubernetes/scheduler-custom-policy-config.json"

Briefly about the main changes:

  1. Set schedulerName to the name of our service, kube-scheduler-cron.
  2. In the lockObjectName parameter, you also need to set the name of our service and make sure that the leaderElect parameter is set to true (if you have a single master node, you can set it to false).
  3. In the algorithmSource parameter, specified the path to the file with the description of the scheduling policies.
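
As mentioned above, the starting point for this configuration file is produced by the scheduler binary itself via its dump-and-exit flag (the output path here is just an example):

```shell
# Dump the effective default kube-scheduler configuration to a file and exit
kube-scheduler --write-config-to /tmp/kube-scheduler-default.conf
```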

The second point, where we edit the parameters of the leaderElection key, deserves a closer look. To ensure fault tolerance, we enabled (leaderElect) the process of electing a leader (master) among the pods of our kube-scheduler, using a single Endpoint for them (resourceLock) named kube-scheduler-cron (lockObjectName) in the kube-system namespace (lockObjectNamespace). How Kubernetes ensures high availability of the main components (including kube-scheduler) can be found in the article.
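
Since resourceLock is set to endpoints, the replicas coordinate through an Endpoints object in kube-system, and the current lock holder is recorded in an annotation on it. An abridged, illustrative view of that object (the holder identity and timestamps will differ in a real cluster):

```yaml
# kubectl -n kube-system get endpoints kube-scheduler-cron -o yaml (abridged)
apiVersion: v1
kind: Endpoints
metadata:
  name: kube-scheduler-cron
  namespace: kube-system
  annotations:
    control-plane.alpha.kubernetes.io/leader: >-
      {"holderIdentity":"master01_...","leaseDurationSeconds":15,"renewTime":"..."}
```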

  • Scheduling policy file (scheduler-custom-policy-config.json)
    As I wrote earlier, we can find out which default policies kube-scheduler works with only by analyzing its code. That is, we cannot obtain a file with the kube-scheduler scheduling policies the same way as the configuration file. Let's describe the scheduling policies we are interested in in the file /etc/kubernetes/scheduler-custom-policy-config.json as follows:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "GeneralPredicates"
    }
  ],
  "priorities": [
    {
      "name": "ServiceSpreadingPriority",
      "weight": 1
    },
    {
      "name": "EqualPriority",
      "weight": 1
    },
    {
      "name": "LeastRequestedPriority",
      "weight": 1
    },
    {
      "name": "NodePreferAvoidPodsPriority",
      "weight": 10000
    },
    {
      "name": "NodeAffinityPriority",
      "weight": 1
    }
  ],
  "hardPodAffinitySymmetricWeight" : 10,
  "alwaysCheckAllPredicates" : false
}

Thus, kube-scheduler first compiles a list of nodes on which a pod can be scheduled according to the GeneralPredicates policy (which includes the set of PodFitsResources, PodFitsHostPorts, HostName, and MatchNodeSelector policies). Then each node is evaluated in accordance with the set of policies in the priorities array. To fulfill the conditions of our task, we considered such a set of policies the optimal solution. Let me remind you that a set of policies with detailed descriptions is available in the documentation. To accomplish your own task, you can simply change the set of policies used and assign appropriate weights to them.

Let's name the manifest of the new kube-scheduler, which we created at the beginning of the chapter, kube-scheduler-custom.yaml and place it in the path /etc/kubernetes/manifests on the three master nodes. If everything is done correctly, Kubelet will launch a pod on each node, and in the logs of our new kube-scheduler we will see confirmation that our policy file has been applied successfully:

Creating scheduler from configuration: {{ } [{GeneralPredicates <nil>}] [{ServiceSpreadingPriority 1 <nil>} {EqualPriority 1 <nil>} {LeastRequestedPriority 1 <nil>} {NodePreferAvoidPodsPriority 10000 <nil>} {NodeAffinityPriority 1 <nil>}] [] 10 false}
Registering predicate: GeneralPredicates
Predicate type GeneralPredicates already registered, reusing.
Registering priority: ServiceSpreadingPriority
Priority type ServiceSpreadingPriority already registered, reusing.
Registering priority: EqualPriority
Priority type EqualPriority already registered, reusing.
Registering priority: LeastRequestedPriority
Priority type LeastRequestedPriority already registered, reusing.
Registering priority: NodePreferAvoidPodsPriority
Priority type NodePreferAvoidPodsPriority already registered, reusing.
Registering priority: NodeAffinityPriority
Priority type NodeAffinityPriority already registered, reusing.
Creating scheduler with fit predicates 'map[GeneralPredicates:{}]' and priority functions 'map[EqualPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} ServiceSpreadingPriority:{}]'

Now all that remains is to indicate in the spec of our CronJob that all requests to schedule its pods should be processed by our new kube-scheduler:

...
 jobTemplate:
    spec:
      template:
        spec:
          schedulerName: kube-scheduler-cron
...
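
For completeness, here is a minimal CronJob sketch showing where schedulerName sits in the object; the name and image are hypothetical:

```yaml
apiVersion: batch/v1beta1        # CronJob API version in Kubernetes v1.14
kind: CronJob
metadata:
  name: cronjob-example          # hypothetical name
  namespace: project-stage
spec:
  schedule: "* * * * *"          # once a minute, as in the example above
  jobTemplate:
    spec:
      template:
        spec:
          schedulerName: kube-scheduler-cron  # route the pods to the new scheduler
          restartPolicy: Never
          containers:
          - name: job
            image: registry.example.com/project/job:1.0  # hypothetical image
```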

Conclusion

In the end, we got an additional kube-scheduler with a unique set of scheduling policies, whose operation is monitored directly by kubelet. In addition, we have set up the election of a new leader among the pods of our kube-scheduler in case the old leader becomes unavailable for some reason.

Regular applications and services continue to be scheduled through the default kube-scheduler, and all cron tasks have been completely transferred to the new one. The load created by the cron tasks is now distributed evenly across all nodes. Considering that most of the cron tasks run on the same nodes as the project's main applications, this has significantly reduced the risk of pods being evicted due to lack of resources. After introducing the additional kube-scheduler, problems with uneven scheduling of cron tasks no longer occurred.

Source: www.habr.com
