Creating an additional kube-scheduler with a custom set of scheduling rules

Kube-scheduler is an essential Kubernetes component, responsible for scheduling pods across nodes in accordance with specified policies. Usually, while operating a Kubernetes cluster, we don't have to think about which policies are used to schedule pods, since the default kube-scheduler's set of policies is suitable for most everyday tasks. However, there are situations when it is important for us to fine-tune the process of allocating pods, and there are two ways to accomplish this:

  1. Create a kube-scheduler with a custom set of rules
  2. Write your own scheduler and teach it to work with API server requests

In this article, I will describe the implementation of the first option to solve the problem of uneven scheduling of pods on one of our projects.

A brief introduction to how kube-scheduler works

It is worth noting specifically that kube-scheduler is not responsible for placing pods directly; it is only responsible for determining the node on which to place a pod. In other words, the result of kube-scheduler's work is the name of a node, which it returns to the API server in response to a scheduling request, and that is where its job ends.

First, kube-scheduler compiles a list of nodes on which the pod can be scheduled according to the predicates policies. Next, each node from this list receives a certain number of points according to the priorities policies. Finally, the node with the maximum number of points is selected. If several nodes are tied at the maximum score, a random one is chosen. A list and descriptions of the predicates (filtering) and priorities (scoring) policies can be found in the documentation.
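The filter-then-score flow described above can be sketched in a few lines. This is a simplified illustration with stand-in policies, not actual kube-scheduler code; the node names and numbers are made up:

```python
import random

def schedule(pod, nodes, predicates, priorities):
    # Filtering: keep only nodes that pass every predicate policy.
    feasible = [n for n in nodes if all(p(pod, n) for p in predicates)]
    if not feasible:
        return None  # no feasible node: the pod stays Pending

    # Scoring: each priority returns 0..10 points, multiplied by its weight.
    scores = {
        n: sum(weight * prio(pod, n) for prio, weight in priorities)
        for n in feasible
    }

    # Selection: pick the best node; ties are broken at random.
    best = max(scores.values())
    return random.choice([n for n, s in scores.items() if s == best])

# Toy policies: a predicate that checks free CPU, and a priority that
# prefers the node with the most free CPU (LeastRequestedPriority-style).
free_cpu = {"Node01": 20, "Node02": 22, "Node03": 22}
fits = lambda pod, n: free_cpu[n] >= pod["cpu"]
least_requested = lambda pod, n: round(free_cpu[n] / 24 * 10)

node = schedule({"cpu": 1}, list(free_cpu), [fits], [(least_requested, 1)])
```

Here Node02 and Node03 tie at the top score, so the sketch picks one of them at random, mirroring the tie-breaking behaviour described above.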

Description of the problem

Despite the large number of diverse Kubernetes clusters we maintain at Nixys, we first encountered the problem of pod scheduling only recently, when one of our projects needed to run a large number of periodic tasks (~100 CronJob objects). To simplify the description of the problem as much as possible, we will take as an example one microservice, within which a cron task is launched once per minute, creating some load on the CPU. Three nodes with identical characteristics (24 vCPUs each) were allocated to run the cron tasks.

At the same time, it is impossible to say exactly how long a CronJob will take to execute, since the volume of input data changes constantly. On average, during normal operation of kube-scheduler, each node runs 3-4 instances of the job, which create ~20-30% of the load on each node's CPU:

[Screenshot: CPU load distributed evenly across the three nodes]

The problem itself is that sometimes pods stopped being scheduled on one of the three nodes. That is, at some point in time, not a single pod was scheduled on one of the nodes, while 6-8 copies of the task were running on the other two nodes, creating ~40-60% of the CPU load:

[Screenshot: uneven CPU load, one idle node and two heavily loaded nodes]

The problem recurred with absolutely random frequency and occasionally correlated with the moment a new version of the code was rolled out.

By increasing the kube-scheduler logging level to 10 (-v=10), we began to record how many points each node received during the evaluation process. During normal scheduling, the following information could be seen in the logs:

resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node01 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node03 => Score 100043

That is, judging by the information obtained from the logs, each of the nodes received an equal number of final points, and a random one was chosen for scheduling. At the time of the problematic scheduling, the logs looked like this:

resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9 
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node03 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node01 => Score 100038

From this it can be seen that one of the nodes scored fewer final points than the others, and therefore scheduling was performed only on the two nodes with the maximum score. Thus, we were convinced that the problem lies precisely in the scheduling of the pods.

The further algorithm for solving the problem seemed obvious to us: analyze the logs, understand which priority a node did not receive points for and, if necessary, adjust the policies of the default kube-scheduler. However, here we ran into two significant difficulties:

  1. At the maximum logging level (10), points are reflected only for some priorities. In the log excerpt above, you can see that for all the priorities that do appear in the logs, the nodes score the same number of points during both normal and problematic scheduling, yet the final result in the problematic case differs. Thus, we can conclude that for some priorities, scoring happens "behind the scenes", and we have no way to understand which priority a node did not receive points for. We described this problem in detail in an issue in the Kubernetes repository on GitHub. At the time of writing, we received a response from the developers that logging support would be added in the Kubernetes v1.15, 1.16, and 1.17 updates.
  2. There is no easy way to understand which specific set of policies kube-scheduler is currently working with. Yes, the documentation lists them, but it contains no information about what specific weights are assigned to each of the priority policies. You can see the weights or edit the policies of the default kube-scheduler only in the source code.
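To make the first point concrete: the final host score is a weighted sum of per-priority scores. Assuming the upstream defaults for this version (weight 1 for the ordinary priorities, weight 10000 for NodePreferAvoidPodsPriority, which is our reading of the source rather than anything stated in the logs), the scores that do appear in the normal-scheduling excerpt fall short of the logged total of 100043:

```python
# Scores visible in the "normal" log excerpt for each node (weight 1 each).
visible = {
    "BalancedResourceAllocation": 9,
    "LeastRequestedPriority": 9,
    "NodeAffinityPriority": 0,
    "InterPodAffinityPriority": 0,
    "TaintTolerationPriority": 10,
    "SelectorSpreadPriority": 10,
}
# NodePreferAvoidPodsPriority scores 10 on an unremarkable node and has
# a default weight of 10000, which explains the 100000 part of the total.
node_prefer_avoid = 10000 * 10

visible_total = node_prefer_avoid + sum(visible.values())  # 100038
logged_total = 100043  # what generic_scheduler.go actually reported

# Points unaccounted for: scored "behind the scenes" by priorities
# that never reach the log even at -v=10.
hidden = logged_total - visible_total
print(hidden)  # -> 5
```

Five points per node remain unexplained by the visible log lines, which is exactly the "behind the scenes" scoring described above.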

It is worth noting that we were once able to record that a node received no points according to the ImageLocalityPriority policy, which awards points to a node if it already has the image needed to run the application. That is, when a new version of the application was rolled out, the cron task managed to run on two nodes, which downloaded the new image from the docker registry, and so those two nodes received a higher final score than the third.

As written above, we do not see information about the evaluation of the ImageLocalityPriority policy in the logs, so to check our assumption we dumped the image with the new version of the application onto the third node, after which scheduling worked correctly. It was precisely because of the ImageLocalityPriority policy that the scheduling problem was observed rather rarely; more often it was associated with something else. Since we could not fully debug each of the policies in the default kube-scheduler's list of priorities, we needed flexible management of pod scheduling policies.

Formulating the task

We wanted the solution to the problem to be as targeted as possible, that is, the core entities of Kubernetes (here we mean the default kube-scheduler) should remain unchanged. We did not want to solve a problem in one place and create one in another. Thus, we arrived at the two options for solving the problem announced in the introduction to the article: creating an additional scheduler or writing our own. The key requirement for scheduling cron tasks is to distribute the load evenly across the three nodes. This requirement can be satisfied by the existing kube-scheduler policies, so to solve our problem there is no point in writing our own scheduler.

Instructions for creating and deploying an additional kube-scheduler are described in the documentation. However, it seemed to us that a Deployment was not enough to ensure fault tolerance for such a critical service as kube-scheduler, so we decided to deploy the new kube-scheduler as a Static Pod, which would be monitored directly by kubelet. Thus, we have the following requirements for the new kube-scheduler:

  1. The service must be deployed as a Static Pod on all cluster masters
  2. Fault tolerance must be provided in case the active pod with kube-scheduler becomes unavailable
  3. The main priority when scheduling should be the amount of available resources on the node (LeastRequestedPriority)

Implementing the solution

It is worth noting right away that we will perform all the work in Kubernetes v1.14.7, because this is the version used on the project. Let's start by writing a manifest for our new kube-scheduler. Let's take the default manifest (/etc/kubernetes/manifests/kube-scheduler.yaml) as a basis and bring it to the following form:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: kube-scheduler-cron
  namespace: kube-system
spec:
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --port=10151
        - --secure-port=10159
        - --config=/etc/kubernetes/scheduler-custom.conf
        - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
        - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
        - --v=2
        image: gcr.io/google-containers/kube-scheduler:v1.14.7
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10151
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-scheduler-cron-container
        resources:
          requests:
            cpu: '0.1'
        volumeMounts:
        - mountPath: /etc/kubernetes/scheduler.conf
          name: kube-config
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom.conf
          name: scheduler-config
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom-policy-config.json
          name: policy-config
          readOnly: true
      hostNetwork: true
      priorityClassName: system-cluster-critical
      volumes:
      - hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate
        name: kube-config
      - hostPath:
          path: /etc/localtime
        name: localtime
      - hostPath:
          path: /etc/kubernetes/scheduler-custom.conf
          type: FileOrCreate
        name: scheduler-config
      - hostPath:
          path: /etc/kubernetes/scheduler-custom-policy-config.json
          type: FileOrCreate
        name: policy-config

Briefly about the main changes:

  1. Changed the pod and container name to kube-scheduler-cron
  2. Specified the use of ports 10151 and 10159, since hostNetwork: true is set and we cannot use the same ports as the default kube-scheduler (10251 and 10259)
  3. Specified, via the --config parameter, the configuration file the service should be started with
  4. Configured mounting of the configuration file (scheduler-custom.conf) and the scheduling policy file (scheduler-custom-policy-config.json) from the host

Do not forget that our kube-scheduler will need rights similar to the default one. Edit its cluster role:

kubectl edit clusterrole system:kube-scheduler

...
   resourceNames:
    - kube-scheduler
    - kube-scheduler-cron
...

Now let's talk about what should be contained in the configuration file and the scheduling policy file:

  • Configuration file (scheduler-custom.conf)
    To obtain the default kube-scheduler configuration, use the --write-config-to parameter described in the documentation. We will place the resulting configuration in the file /etc/kubernetes/scheduler-custom.conf and reduce it to the following form:

apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: kube-scheduler-cron
bindTimeoutSeconds: 600
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10151
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  lockObjectName: kube-scheduler-cron
  lockObjectNamespace: kube-system
  renewDeadline: 10s
  resourceLock: endpoints
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10151
percentageOfNodesToScore: 0
algorithmSource:
   policy:
     file:
       path: "/etc/kubernetes/scheduler-custom-policy-config.json"

Briefly about the main changes:

  1. We set schedulerName to the name of our service, kube-scheduler-cron.
  2. In the lockObjectName parameter you also need to set the name of our service, and make sure the leaderElect parameter is set to true (if you have a single master node, you can set it to false).
  3. Specified the path to the file describing the scheduling policies in the algorithmSource parameter.

It is worth dwelling on the second point, where we edit the parameters of the leaderElection key. To ensure fault tolerance, we enabled (leaderElect) the election of a leader (master) among the pods of our kube-scheduler, using a single Endpoint shared between them (resourceLock), named kube-scheduler-cron (lockObjectName), in the kube-system namespace (lockObjectNamespace). How Kubernetes ensures high availability of the core components (including kube-scheduler) is covered in a separate article.

  • Scheduling policy file (scheduler-custom-policy-config.json)
    As we wrote earlier, we can find out which specific policies the default kube-scheduler works with only by analyzing its code. That is, we cannot obtain a file with the default kube-scheduler's scheduling policies the same way as the configuration file. Let's describe the scheduling policies we are interested in in the file /etc/kubernetes/scheduler-custom-policy-config.json as follows:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "GeneralPredicates"
    }
  ],
  "priorities": [
    {
      "name": "ServiceSpreadingPriority",
      "weight": 1
    },
    {
      "name": "EqualPriority",
      "weight": 1
    },
    {
      "name": "LeastRequestedPriority",
      "weight": 1
    },
    {
      "name": "NodePreferAvoidPodsPriority",
      "weight": 10000
    },
    {
      "name": "NodeAffinityPriority",
      "weight": 1
    }
  ],
  "hardPodAffinitySymmetricWeight" : 10,
  "alwaysCheckAllPredicates" : false
}

Thus, kube-scheduler first compiles a list of nodes on which a pod can be scheduled according to the GeneralPredicates policy (which includes the PodFitsResources, PodFitsHostPorts, HostName, and MatchNodeSelector policies). Then each node is evaluated according to the set of policies in the priorities array. To fulfill the requirements of our task, we considered such a set of policies to be the optimal solution. Let me remind you that a set of policies with their detailed descriptions is available in the documentation. To accomplish your own task, you can simply change the set of policies used and assign appropriate weights to them.
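To see why LeastRequestedPriority matches our even-distribution requirement: its documented per-resource score is (capacity - requested) * 10 / capacity, so emptier nodes score higher. Plugging in the CPU requests from the problematic log excerpt above (a CPU-only simplification; the real priority also averages in a memory term):

```python
def least_requested_cpu_score(capacity_millicores, requested_millicores):
    # LeastRequestedPriority scores a resource as
    # (capacity - requested) * 10 / capacity.
    return (capacity_millicores - requested_millicores) * 10 / capacity_millicores

requests = {"Node01": 987, "Node02": 1587, "Node03": 1087}  # from the log
scores = {n: least_requested_cpu_score(23900, r) for n, r in requests.items()}

# Node01 (least loaded) scores highest and Node02 (most loaded) lowest,
# so new cron pods gravitate towards the emptiest node.
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)  # -> ['Node01', 'Node03', 'Node02']
```

With weight 1 and rounding to integer 0..10 scores, the differences are small per pod, but over ~100 CronJobs they steadily push new pods towards the least-loaded node.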

Let's name the manifest of the new kube-scheduler, which we created at the beginning of the chapter, kube-scheduler-custom.yaml and place it in /etc/kubernetes/manifests on the three master nodes. If everything is done correctly, kubelet will launch a pod on each node, and in the logs of our new kube-scheduler we will see information that our policy file was applied successfully:

Creating scheduler from configuration: {{ } [{GeneralPredicates <nil>}] [{ServiceSpreadingPriority 1 <nil>} {EqualPriority 1 <nil>} {LeastRequestedPriority 1 <nil>} {NodePreferAvoidPodsPriority 10000 <nil>} {NodeAffinityPriority 1 <nil>}] [] 10 false}
Registering predicate: GeneralPredicates
Predicate type GeneralPredicates already registered, reusing.
Registering priority: ServiceSpreadingPriority
Priority type ServiceSpreadingPriority already registered, reusing.
Registering priority: EqualPriority
Priority type EqualPriority already registered, reusing.
Registering priority: LeastRequestedPriority
Priority type LeastRequestedPriority already registered, reusing.
Registering priority: NodePreferAvoidPodsPriority
Priority type NodePreferAvoidPodsPriority already registered, reusing.
Registering priority: NodeAffinityPriority
Priority type NodeAffinityPriority already registered, reusing.
Creating scheduler with fit predicates 'map[GeneralPredicates:{}]' and priority functions 'map[EqualPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} ServiceSpreadingPriority:{}]'

Now all that remains is to specify in the CronJob spec that all requests to schedule its pods must be processed by our new kube-scheduler:

...
 jobTemplate:
    spec:
      template:
        spec:
          schedulerName: kube-scheduler-cron
...

Conclusion

As a result, we got an additional kube-scheduler with a unique set of scheduling policies, whose operation is monitored directly by kubelet. In addition, we set up the election of a new leader among the pods of our kube-scheduler in case the old leader becomes unavailable for any reason.

Regular applications and services continue to be scheduled through the default kube-scheduler, and all cron tasks have been completely transferred to the new one. The load created by the cron tasks is now distributed evenly across all nodes. Considering that most of the cron tasks run on the same nodes as the project's main applications, this has significantly reduced the risk of pods being evicted due to lack of resources. After introducing the additional kube-scheduler, problems with uneven scheduling of cron tasks no longer arose.

Source: www.habr.com
