Creating an additional kube-scheduler with a custom set of scheduling rules

The kube-scheduler is an integral component of Kubernetes, responsible for scheduling pods onto nodes according to the configured policies. During day-to-day operation of a Kubernetes cluster we usually do not have to think about which policies are used to schedule pods, since the default set of kube-scheduler policies is fine for most workloads. However, there are situations when it really matters exactly how pods are allocated, and there are two ways to accomplish this:

  1. Create a kube-scheduler with a custom set of rules
  2. Write your own scheduler and teach it to work with API server requests

In this article I will describe how we implemented the first option to solve the problem of uneven pod scheduling on one of our projects.

A brief introduction to how kube-scheduler works

It is worth noting that kube-scheduler is not responsible for actually placing pods — it only determines the node a pod should land on. In other words, the result of kube-scheduler's work is a node name, which it returns to the API server in response to a scheduling request, and that is where its job ends.

First, kube-scheduler compiles a list of nodes on which the pod can be scheduled according to the predicate policies. Next, each node from this list receives a certain number of points according to the priority policies. Finally, the node with the maximum number of points is selected; if several nodes share the maximum score, one of them is chosen at random. A list and description of the predicate (filtering) and priority (scoring) policies can be found in the documentation.
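
To make the two phases a bit more tangible, here is a rough sketch of that filter-then-score-then-pick logic in Go. This is not the real kube-scheduler code: Node, Pod and the predicate/priority types below are simplified stand-ins for the actual data structures.

package main

import (
	"fmt"
	"math/rand"
)

type Node struct{ Name string }
type Pod struct{ Name string }

// A predicate filters nodes out; a priority gives each remaining node 0-10 points.
type Predicate func(Pod, Node) bool
type Priority struct {
	Score  func(Pod, Node) int
	Weight int
}

// schedule mimics the two phases: filter by predicates, then score by weighted
// priorities and pick the best node, breaking ties randomly.
func schedule(pod Pod, nodes []Node, preds []Predicate, prios []Priority) (Node, error) {
	// Phase 1: keep only the nodes that pass every predicate.
	var feasible []Node
	for _, n := range nodes {
		ok := true
		for _, p := range preds {
			if !p(pod, n) {
				ok = false
				break
			}
		}
		if ok {
			feasible = append(feasible, n)
		}
	}
	if len(feasible) == 0 {
		return Node{}, fmt.Errorf("no feasible node for pod %s", pod.Name)
	}
	// Phase 2: weighted sum of priority scores; remember all nodes sharing the maximum.
	var best []Node
	bestScore := -1
	for _, n := range feasible {
		score := 0
		for _, pr := range prios {
			score += pr.Weight * pr.Score(pod, n)
		}
		if score > bestScore {
			best, bestScore = []Node{n}, score
		} else if score == bestScore {
			best = append(best, n)
		}
	}
	return best[rand.Intn(len(best))], nil
}

func main() {
	nodes := []Node{{"Node01"}, {"Node02"}, {"Node03"}}
	alwaysFits := func(Pod, Node) bool { return true }
	flatScore := func(Pod, Node) int { return 10 }
	n, _ := schedule(Pod{"cronjob-x"}, nodes, []Predicate{alwaysFits}, []Priority{{flatScore, 1}})
	fmt.Println("chosen node:", n.Name)
}

When every priority returns the same score for every node, the final choice is effectively a coin toss — keep that in mind for the story below.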

Description of the problem

Despite the large number of different Kubernetes clusters maintained at Nixys, we first ran into the pod scheduling problem only when one of our projects needed to run a large number of periodic tasks (~100 CronJob entities). To keep the description as simple as possible, we will take as an example a single microservice whose cron task is launched once a minute and creates some CPU load. Three nodes with identical characteristics (24 vCPU each) were allocated to run these cron tasks.

At the same time, it was impossible to say precisely how long a CronJob would take to run, since the volume of input data changed constantly. On average, during normal kube-scheduler operation, each node ran 3-4 task instances, which created ~20-30% load on the CPU of each node:

[Graph: CPU load of the three nodes during normal scheduling]

The problem itself was that, from time to time, the cron task pods simply stopped being scheduled on one of the three nodes. That is, at some point not a single pod was planned for one of the nodes, while on the other two nodes 6-8 copies of the task were running, creating ~40-60% CPU load:

[Graph: CPU load of the three nodes during the problematic scheduling]

The problem recurred at random intervals and occasionally correlated with the roll-out of a new version of the code.

By raising the kube-scheduler logging level to 10 (--v=10), we started recording how many points each node gained during the evaluation process. During normal scheduling, the following information could be seen in the logs:

resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node01 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node03 => Score 100043

That is, judging by the information obtained from the logs, each of the nodes gained an equal number of final points and one of them was chosen at random for scheduling. At the time of the problematic scheduling, the logs looked like this:

resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9 
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node03 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node01 => Score 100038

Here one of the nodes gained fewer final points than the others, and therefore scheduling happened only on the two nodes with the maximum score. So we were now certain that the problem lay precisely in the scheduling of the pods.
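
For reference, the scores of 9 that both log excerpts show for BalancedResourceAllocation and LeastResourceAllocation (the log name of LeastRequestedPriority) are consistent with the formulas these priorities use. The sketch below paraphrases that arithmetic in plain Go — it is a simplification of the real scheduler code, but it reproduces the numbers from the Node01 line of the problematic excerpt above.

package main

import (
	"fmt"
	"math"
)

// leastRequestedScore: the fewer resources are already requested on a node,
// the higher the score (0-10) for that resource.
func leastRequestedScore(requested, capacity int64) int64 {
	if capacity == 0 || requested > capacity {
		return 0
	}
	return (capacity - requested) * 10 / capacity
}

// balancedAllocationScore: the closer the CPU and memory utilisation fractions
// are to each other, the higher the score (0-10).
func balancedAllocationScore(cpuReq, cpuCap, memReq, memCap int64) int64 {
	cpuFraction := float64(cpuReq) / float64(cpuCap)
	memFraction := float64(memReq) / float64(memCap)
	if cpuFraction >= 1 || memFraction >= 1 {
		return 0
	}
	return int64((1 - math.Abs(cpuFraction-memFraction)) * 10)
}

func main() {
	// Values taken from the Node01 line of the problematic log excerpt above.
	cpuCap, memCap := int64(23900), int64(67167186944)
	cpuReq, memReq := int64(987), int64(3322833920)

	least := (leastRequestedScore(cpuReq, cpuCap) + leastRequestedScore(memReq, memCap)) / 2
	fmt.Println("LeastRequestedPriority:    ", least)                                                   // 9
	fmt.Println("BalancedResourceAllocation:", balancedAllocationScore(cpuReq, cpuCap, memReq, memCap)) // 9
}

In other words, the priorities we can see in the log behave exactly as expected; the difference in the final score had to come from somewhere else.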

The further plan for solving the problem was obvious to us: analyze the logs, understand which priority the node had not received points for and, if necessary, adjust the policies of the default kube-scheduler. However, here we faced two significant difficulties:

  1. Even at the maximum logging level (10), the points gained are reflected only for some of the priorities. In the log excerpts above you can see that for all the priorities that do appear in the log, the nodes gain the same number of points during both normal and problematic scheduling, yet the final result differs in the problematic case. We can therefore conclude that for some priorities the scoring happens "behind the scenes", and we have no way of knowing which priority a node failed to gain points for. We described this problem in detail in an issue in the Kubernetes repository on GitHub. At the time of writing, the developers had replied that logging support would be added in the Kubernetes v1.15, 1.16, and 1.17 updates.
  2. There is no easy way to understand which specific set of policies kube-scheduler is currently working with. Yes, the documentation lists them, but it does not contain information about what weights are assigned to each of the priority policies. You can see the weights or edit the policies of the default kube-scheduler only in the source code.

It is worth noting that we did once manage to record that a node was not receiving points under the ImageLocalityPriority policy, which awards points to a node if it already has the image required to run the application. That is, at the moment a new version of the application was rolled out, the cron task managed to start on two nodes, pulling the new image from the docker registry onto them, and thus those two nodes received a higher final score than the third.
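
The idea behind ImageLocalityPriority can be sketched as follows: a node earns more points the more bytes of the pod's container images it already holds locally, clamped to the usual 0-10 range. The thresholds below are illustrative rather than the exact constants from the scheduler sources, but the shape of the calculation is what matters for this story.

package main

import "fmt"

// Illustrative thresholds: below the minimum the locality bonus is 0,
// above the maximum it is capped at 10.
const (
	minImageBytes = 23 * 1024 * 1024
	maxImageBytes = 1000 * 1024 * 1024
)

func imageLocalityScore(localImageBytes int64) int64 {
	switch {
	case localImageBytes <= minImageBytes:
		return 0
	case localImageBytes >= maxImageBytes:
		return 10
	default:
		return (localImageBytes - minImageBytes) * 10 / (maxImageBytes - minImageBytes)
	}
}

func main() {
	// A node that already holds a ~500 MB application image gets a few extra
	// points compared to a node that still has to pull it from the registry.
	fmt.Println("image already present:", imageLocalityScore(500*1024*1024))
	fmt.Println("image absent:         ", imageLocalityScore(0))
}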

As I wrote above, the logs show no information about the evaluation of ImageLocalityPriority, so to verify our assumption we pushed the image with the new version of the application onto the third node, after which scheduling worked correctly. It was precisely because of the ImageLocalityPriority policy that the scheduling problem was observed rather rarely; more often it was connected with something else. Since we could not properly debug each of the policies in the default kube-scheduler's priority list, we needed a flexible way to manage pod scheduling policies.

Problem statement

We wanted the solution to be as targeted as possible, i.e. the main Kubernetes entities (here meaning the default kube-scheduler) should remain unchanged: we did not want to solve the problem in one place only to create it in another. Thus we arrived at the two options for solving the problem announced in the introduction — creating an additional scheduler or writing our own. The key requirement for scheduling the cron tasks is to distribute the load evenly across the three nodes. This requirement can be satisfied by the existing kube-scheduler policies, so there was no point in writing our own scheduler for this task.

Instructions for creating and deploying an additional kube-scheduler are described in the documentation. However, it seemed to us that a Deployment entity was not enough to guarantee fault tolerance for a service as critical as kube-scheduler, so we decided to deploy the new kube-scheduler as a Static Pod, monitored directly by kubelet. As a result, we had the following requirements for the new kube-scheduler:

  1. The service must be deployed as a Static Pod on all cluster masters
  2. Fault tolerance must be provided in case the active pod with kube-scheduler becomes unavailable
  3. The main priority during scheduling should be the amount of resources available on the node (LeastRequestedPriority)

Implementing the solution

It is worth noting right away that we will be doing all the work in Kubernetes v1.14.7, because this is the version used on the project. Let's start by writing a manifest for our new kube-scheduler. We will take the default manifest (/etc/kubernetes/manifests/kube-scheduler.yaml) as the basis and bring it to the following form:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: kube-scheduler-cron
  namespace: kube-system
spec:
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --port=10151
        - --secure-port=10159
        - --config=/etc/kubernetes/scheduler-custom.conf
        - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
        - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
        - --v=2
        image: gcr.io/google-containers/kube-scheduler:v1.14.7
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10151
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-scheduler-cron-container
        resources:
          requests:
            cpu: '0.1'
        volumeMounts:
        - mountPath: /etc/kubernetes/scheduler.conf
          name: kube-config
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom.conf
          name: scheduler-config
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom-policy-config.json
          name: policy-config
          readOnly: true
      hostNetwork: true
      priorityClassName: system-cluster-critical
      volumes:
      - hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate
        name: kube-config
      - hostPath:
          path: /etc/localtime
        name: localtime
      - hostPath:
          path: /etc/kubernetes/scheduler-custom.conf
          type: FileOrCreate
        name: scheduler-config
      - hostPath:
          path: /etc/kubernetes/scheduler-custom-policy-config.json
          type: FileOrCreate
        name: policy-config

Briefly, the main changes:

  1. Changed the pod and container name to kube-scheduler-cron
  2. Specified ports 10151 and 10159, since the hostNetwork: true option is set and we cannot use the same ports as the default kube-scheduler (10251 and 10259)
  3. Using the --config parameter, specified the configuration file the service should be started with
  4. Configured mounting of the configuration file (scheduler-custom.conf) and the scheduling policy file (scheduler-custom-policy-config.json) from the host

Don't forget that our kube-scheduler needs the same rights as the default one. Edit its cluster role:

kubectl edit clusterrole system:kube-scheduler

...
   resourceNames:
    - kube-scheduler
    - kube-scheduler-cron
...

Now let's talk about what exactly should go into the configuration file and the scheduling policy file:

  • Configuration file (scheduler-custom.conf)
    To obtain the default kube-scheduler configuration, use the --write-config-to parameter described in the documentation. We will place the resulting configuration in the file /etc/kubernetes/scheduler-custom.conf and reduce it to the following form:

apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: kube-scheduler-cron
bindTimeoutSeconds: 600
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10151
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  lockObjectName: kube-scheduler-cron
  lockObjectNamespace: kube-system
  renewDeadline: 10s
  resourceLock: endpoints
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10151
percentageOfNodesToScore: 0
algorithmSource:
   policy:
     file:
       path: "/etc/kubernetes/scheduler-custom-policy-config.json"

Briefly, the main changes:

  1. We set schedulerName to the name of our service, kube-scheduler-cron.
  2. In the lockObjectName parameter we also set the name of our service and make sure that the leaderElect parameter is set to true (if you have a single master node, you can set it to false).
  3. We specified the path to the file describing the scheduling policies in the algorithmSource parameter.

The second point, where we edit the parameters of the leaderElection key, deserves a closer look. To ensure fault tolerance, we have enabled (leaderElect) the election of a leader (master) among the pods of our kube-scheduler, using a single endpoint for them (resourceLock) named kube-scheduler-cron (lockObjectName) in the kube-system namespace (lockObjectNamespace). How Kubernetes ensures high availability of its core components (including kube-scheduler) is covered in a separate article.
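
For reference, this is roughly how those parameters map onto the leader-election helper from client-go, which kube-scheduler uses internally. The sketch below is not taken from the kube-scheduler sources — it is a minimal, hand-written illustration (the kubeconfig path, identity handling and error handling are simplified assumptions), but the lock name, namespace, lock type and timings correspond to the keys from scheduler-custom.conf.

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/scheduler.conf")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	hostname, _ := os.Hostname()

	// resourceLock: endpoints, lockObjectName and lockObjectNamespace from the config file.
	lock := &resourcelock.EndpointsLock{
		EndpointsMeta: metav1.ObjectMeta{Namespace: "kube-system", Name: "kube-scheduler-cron"},
		Client:        client.CoreV1(),
		LockConfig:    resourcelock.ResourceLockConfig{Identity: hostname},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // leaseDuration
		RenewDeadline: 10 * time.Second, // renewDeadline
		RetryPeriod:   2 * time.Second,  // retryPeriod
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { log.Println("this replica is now the active scheduler") },
			OnStoppedLeading: func() { log.Println("lost the lock, standing by") },
		},
	})
}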

  • Scheduling policy file (scheduler-custom-policy-config.json)
    As I wrote earlier, we can only find out which specific policies the default kube-scheduler works with by analyzing its code; in other words, we cannot obtain a file with its scheduling policies the same way we obtained the configuration file. Let's describe the scheduling policies we are interested in in the file /etc/kubernetes/scheduler-custom-policy-config.json as follows:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "GeneralPredicates"
    }
  ],
  "priorities": [
    {
      "name": "ServiceSpreadingPriority",
      "weight": 1
    },
    {
      "name": "EqualPriority",
      "weight": 1
    },
    {
      "name": "LeastRequestedPriority",
      "weight": 1
    },
    {
      "name": "NodePreferAvoidPodsPriority",
      "weight": 10000
    },
    {
      "name": "NodeAffinityPriority",
      "weight": 1
    }
  ],
  "hardPodAffinitySymmetricWeight" : 10,
  "alwaysCheckAllPredicates" : false
}

Thus, kube-scheduler first compiles a list of nodes on which a pod can be scheduled according to the GeneralPredicates policy (which includes the PodFitsResources, PodFitsHostPorts, HostName, and MatchNodeSelector policies). Each node is then evaluated according to the set of policies in the priorities array. For the conditions of our task we considered this set of policies to be the optimal solution. Let me remind you that the full set of policies with detailed descriptions is available in the documentation. To accomplish your own task, you can simply change the set of policies used and assign appropriate weights to them.
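
As a side note, this weighted sum is also what produces the final host scores like "Host Node01 => Score 100043" in the logs earlier: judging by the v1.14 sources, the default provider registers NodePreferAvoidPodsPriority with weight 10000, so a node without the avoid-pods annotation gets 100000 points from that priority alone, even though it never appears in the log. The sketch below reproduces that arithmetic; the 5 points attributed to ImageLocalityPriority are purely illustrative, standing in for the "behind the scenes" priorities whose scores we could not see.

package main

import "fmt"

// finalScore mirrors how the scheduler combines priorities: a plain weighted sum.
func finalScore(scores, weights map[string]int) int {
	total := 0
	for name, s := range scores {
		total += weights[name] * s
	}
	return total
}

func main() {
	weights := map[string]int{
		"NodePreferAvoidPodsPriority": 10000, // default weight in the v1.14 sources
		"LeastRequestedPriority":      1,     // logged as LeastResourceAllocation
		"BalancedResourceAllocation":  1,
		"SelectorSpreadPriority":      1,
		"TaintTolerationPriority":     1,
		"NodeAffinityPriority":        1,
		"InterPodAffinityPriority":    1,
		"ImageLocalityPriority":       1,
	}
	// Scores from the "normal" log excerpt; the two priorities that never show up
	// in the log are filled in so that the total matches the observed 100043.
	scores := map[string]int{
		"NodePreferAvoidPodsPriority": 10, // not logged
		"LeastRequestedPriority":      9,
		"BalancedResourceAllocation":  9,
		"SelectorSpreadPriority":      10,
		"TaintTolerationPriority":     10,
		"NodeAffinityPriority":        0,
		"InterPodAffinityPriority":    0,
		"ImageLocalityPriority":       5, // not logged, illustrative
	}
	fmt.Println("final:", finalScore(scores, weights)) // 100043
}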

Let's name the manifest of our new kube-scheduler, created at the beginning of this chapter, kube-scheduler-custom.yaml and place it at /etc/kubernetes/manifests on the three master nodes. If everything is done correctly, kubelet will launch a pod on each node, and in the logs of our new kube-scheduler we will see that our policy file has been applied successfully:

Creating scheduler from configuration: {{ } [{GeneralPredicates <nil>}] [{ServiceSpreadingPriority 1 <nil>} {EqualPriority 1 <nil>} {LeastRequestedPriority 1 <nil>} {NodePreferAvoidPodsPriority 10000 <nil>} {NodeAffinityPriority 1 <nil>}] [] 10 false}
Registering predicate: GeneralPredicates
Predicate type GeneralPredicates already registered, reusing.
Registering priority: ServiceSpreadingPriority
Priority type ServiceSpreadingPriority already registered, reusing.
Registering priority: EqualPriority
Priority type EqualPriority already registered, reusing.
Registering priority: LeastRequestedPriority
Priority type LeastRequestedPriority already registered, reusing.
Registering priority: NodePreferAvoidPodsPriority
Priority type NodePreferAvoidPodsPriority already registered, reusing.
Registering priority: NodeAffinityPriority
Priority type NodeAffinityPriority already registered, reusing.
Creating scheduler with fit predicates 'map[GeneralPredicates:{}]' and priority functions 'map[EqualPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} ServiceSpreadingPriority:{}]'

Now all that remains is to specify in the spec of our CronJob that all requests to schedule its pods should be handled by our new kube-scheduler:

...
 jobTemplate:
    spec:
      template:
        spec:
          schedulerName: kube-scheduler-cron
...

Conclusion

In the end, we got an additional kube-scheduler with its own set of scheduling policies, whose operation is monitored directly by kubelet. In addition, we set up the election of a new leader among the pods of our kube-scheduler in case the old leader becomes unavailable for some reason.

Regular applications and services continue to be scheduled through the default kube-scheduler, while all cron tasks have been completely moved over to the new one. The load created by the cron tasks is now distributed evenly across all nodes. Considering that most of the cron tasks run on the same nodes as the project's main applications, this has significantly reduced the risk of pods being evicted due to a lack of resources. Since the additional kube-scheduler was introduced, the problem of uneven scheduling of cron tasks has not come up again.

Source: www.habr.com
