Creating an additional kube-scheduler with a custom set of scheduling rules


Kube-scheduler is an important component of Kubernetes, responsible for scheduling pods across nodes in accordance with specified policies. Often, while operating a Kubernetes cluster, we don't have to think about which policies are used to schedule pods, since the policy set of the default kube-scheduler is suitable for most everyday tasks. However, there are situations when it is important to fine-tune how pods are allocated, and there are two ways to accomplish this:

  1. Create a kube-scheduler with a custom set of rules
  2. Write your own scheduler and teach it to work with API server requests

In this article I will describe the implementation of the first option to solve the problem of uneven scheduling of pods on one of our projects.

A brief introduction to how kube-scheduler works

It is worth noting that kube-scheduler is not responsible for placing pods directly; it is only responsible for deciding which node a pod should run on. In other words, the result of kube-scheduler's work is the name of a node, which it returns to the API server in response to a scheduling request, and that is where its job ends.

First, kube-scheduler compiles a list of nodes on which the pod can be scheduled, according to the predicates policies. Next, each node from this list receives a certain number of points according to the priorities policies. Finally, the node with the highest number of points is selected; if several nodes share the maximum score, a random one is chosen. A list and description of the predicates and priorities policies can be found in the documentation.
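
For illustration, the node chosen by the scheduler can be read back from the pod object itself; a small sketch, with purely hypothetical namespace and pod names:

# the scheduler's decision ends up in the pod spec (names are illustrative)
kubectl -n my-namespace get pod my-pod -o jsonpath='{.spec.nodeName}'

# the same decision is also visible as a "Scheduled" event
kubectl -n my-namespace get events --field-selector reason=Scheduled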

Description of the problem

Despite the large number of different Kubernetes clusters maintained at Nixys, we first ran into the problem of pod scheduling only recently, when one of our projects needed to run a large number of periodic tasks (~100 CronJob entities). To simplify the description of the problem as much as possible, let's take as an example one microservice within which a cron task is launched once a minute, creating some load on the CPU. Three nodes with absolutely identical characteristics were allocated to run the cron task (24 vCPUs on each).

At the same time, it was impossible to say exactly how long a CronJob run would take, since the volume of input data changes constantly. On average, during normal kube-scheduler operation, each node ran 3-4 job instances, which created roughly 20-30% of the load on each node's CPU:

[Figure: CPU load on the nodes during normal scheduling]

The problem itself was that, at times, cron job pods simply stopped being scheduled onto one of the three nodes. That is, at some point not a single pod was scheduled on one of the nodes, while 6-8 copies of the task were running on the other two nodes, creating roughly 40-60% of the CPU load:

[Figure: CPU load on the nodes during the problematic scheduling]

The problem recurred with random frequency and occasionally correlated with the moment a new version of the code was rolled out.
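
The skew itself is easy to observe with ordinary kubectl tooling; a sketch of such a check (kubectl top requires metrics-server, and the namespace is the one that appears in the scheduler logs below):

# CPU usage per node (requires metrics-server)
kubectl top nodes

# number of running cron pods per node (NODE is the 7th column of a namespaced listing)
kubectl -n project-stage get pods -o wide --field-selector status.phase=Running \
  | awk '/cronjob/ {print $7}' | sort | uniq -c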

By raising the kube-scheduler logging level to 10 (-v=10), we began recording how many points each node received during the evaluation process. During normal scheduling, the following information could be seen in the logs:

resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node01 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node03 => Score 100043

That is, judging by the information from the logs, each of the nodes received an equal number of final points and a random one was chosen for scheduling. At the time of the problematic scheduling, the logs looked like this:

resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9 
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node03 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node01 => Score 100038

You can see that one of the nodes scored fewer final points than the others, and therefore scheduling was carried out only between the two nodes that scored the maximum. Thus, we were firmly convinced that the problem lay precisely in the scheduling of the pods.

The further algorithm for solving the problem seemed obvious to us: analyze the logs, work out by which priority the node did not receive points and, if necessary, adjust the policies of the default kube-scheduler. However, here we ran into two significant difficulties:

  1. Even at the maximum logging level (10), points are reflected only for some of the priorities. In the log excerpt above you can see that for every priority that does appear in the logs, the nodes score the same number of points during both normal and problematic scheduling, yet the final result differs in the problematic case. Thus we can conclude that for some priorities the scoring happens "behind the scenes", and we have no way of telling for which priority a node did not receive points. We described this problem in detail in an issue in the Kubernetes repository on GitHub. At the time of writing, the developers replied that logging support would be added in the Kubernetes v1.15, 1.16 and 1.17 updates.
  2. There was no simple way to understand exactly which set of policies the default kube-scheduler currently works with. Yes, the documentation lists them, but it contains no information about which specific weights are assigned to each of the priority policies. You can see the weights, or change the policies of the default kube-scheduler, only in the source code (a sketch of locating them in the source tree follows this list).
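
For reference, in the v1.14 source tree the default predicates and priorities are registered under pkg/scheduler/algorithmprovider/defaults; a sketch of locating them (the path may differ in other versions):

# shallow-clone the release used on the project and grep the default registrations
git clone --depth 1 --branch v1.14.7 https://github.com/kubernetes/kubernetes.git
grep -Rn "LeastRequestedPriority\|BalancedResourceAllocation" \
  kubernetes/pkg/scheduler/algorithmprovider/defaults/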

It is worth noting that we once managed to record a case where a node did not receive points because of the ImageLocalityPriority policy, which awards points to a node if it already has the image needed to run the application. That is, at the moment a new version of the application was rolled out, the cron task managed to start on two nodes, which pulled the new image from the docker registry, and so those two nodes received a higher final score than the third.

As I wrote above, the logs contain no information about the evaluation of the ImageLocalityPriority policy, so to check our assumption we dumped the image with the new version of the application onto the third node, after which scheduling worked correctly. It was precisely because of the ImageLocalityPriority policy that the scheduling problem was observed rather rarely; more often it was connected with something else. Since we could not fully debug each of the policies in the priority list of the default kube-scheduler, we needed flexible management of pod scheduling policies.
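
The check itself amounted to pulling the new image onto the lagging node by hand; a sketch, with a purely illustrative image reference:

# run on the node that stopped receiving pods; the registry path and tag are hypothetical
docker pull registry.example.com/project/cron-worker:new-release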

Formulation of the problem

We wanted the solution to be as localized as possible, that is, the main entities of Kubernetes (here we mean the default kube-scheduler) should remain unchanged. We did not want to solve a problem in one place and create it in another. So we arrived at the two options for solving the problem announced at the beginning of the article: creating an additional scheduler or writing our own. The key requirement for scheduling the cron tasks is to spread the load evenly across the three nodes. This requirement can be satisfied by existing kube-scheduler policies, so there is no point in writing our own scheduler to solve our problem.

Instructions for creating and deploying an additional kube-scheduler are described in the documentation. However, it seemed to us that a Deployment entity is not enough to guarantee fault tolerance for such a critical service as kube-scheduler, so we decided to deploy the new kube-scheduler as a Static Pod that would be watched directly by Kubelet. This gives us the following requirements for the new kube-scheduler:

  1. The service must be deployed as a Static Pod on all cluster masters
  2. Fault tolerance must be provided in case the active pod with kube-scheduler becomes unavailable
  3. The main priority during scheduling must be the amount of resources available on the node (LeastRequestedPriority)

Implementing the solution

It is worth noting right away that all the work below was done on Kubernetes v1.14.7, because that is the version used on the project. Let's start by writing a manifest for our new kube-scheduler. We take the default manifest (/etc/kubernetes/manifests/kube-scheduler.yaml) as a basis and bring it to the following form:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: kube-scheduler-cron
  namespace: kube-system
spec:
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --port=10151
        - --secure-port=10159
        - --config=/etc/kubernetes/scheduler-custom.conf
        - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
        - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
        - --v=2
        image: gcr.io/google-containers/kube-scheduler:v1.14.7
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10151
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-scheduler-cron-container
        resources:
          requests:
            cpu: '0.1'
        volumeMounts:
        - mountPath: /etc/kubernetes/scheduler.conf
          name: kube-config
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom.conf
          name: scheduler-config
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom-policy-config.json
          name: policy-config
          readOnly: true
      hostNetwork: true
      priorityClassName: system-cluster-critical
      volumes:
      - hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate
        name: kube-config
      - hostPath:
          path: /etc/localtime
        name: localtime
      - hostPath:
          path: /etc/kubernetes/scheduler-custom.conf
          type: FileOrCreate
        name: scheduler-config
      - hostPath:
          path: /etc/kubernetes/scheduler-custom-policy-config.json
          type: FileOrCreate
        name: policy-config

Briefly, the main changes:

  1. Changed the name of the pod and container to kube-scheduler-cron
  2. Specified ports 10151 and 10159, since the hostNetwork: true option is set and we cannot use the same ports as the default kube-scheduler (10251 and 10259); see the port check sketch after this list
  3. Using the --config parameter, specified the configuration file the service must be started with
  4. Configured mounting of the configuration file (scheduler-custom.conf) and the scheduling policy file (scheduler-custom-policy-config.json) from the host
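
Since the pod shares the host network with the default scheduler, it is worth making sure the chosen ports are actually free on the masters; a quick sketch:

# 10251/10259 are held by the default kube-scheduler, 10151/10159 should be free
ss -tlnp | grep -E '10151|10159|10251|10259'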

Do not forget that our kube-scheduler will need rights similar to those of the default one. Edit its cluster role:

kubectl edit clusterrole system:kube-scheduler

...
   resourceNames:
    - kube-scheduler
    - kube-scheduler-cron
...

Now let's talk about what exactly should go into the configuration file and the scheduling policy file:

  • Configuration file (scheduler-custom.conf)
    To obtain the default kube-scheduler configuration, you can use the --write-config-to parameter described in the documentation. We place the resulting configuration in the file /etc/kubernetes/scheduler-custom.conf and reduce it to the form shown after the sketch below:
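
A minimal sketch of dumping the defaults with that flag on a master node (the exact invocation may differ slightly between versions):

# writes the default KubeSchedulerConfiguration to the file and exits
kube-scheduler --write-config-to=/etc/kubernetes/scheduler-custom.conf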

apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: kube-scheduler-cron
bindTimeoutSeconds: 600
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10151
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  lockObjectName: kube-scheduler-cron
  lockObjectNamespace: kube-system
  renewDeadline: 10s
  resourceLock: endpoints
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10151
percentageOfNodesToScore: 0
algorithmSource:
   policy:
     file:
       path: "/etc/kubernetes/scheduler-custom-policy-config.json"

Briefly, the main changes:

  1. We set schedulerName to the name of our service, kube-scheduler-cron.
  2. In the lockObjectName parameter you also need to set the name of our service and make sure the leaderElect parameter is set to true (if you have a single master node, you can set it to false).
  3. Specified the path to the file with the description of the scheduling policies in the algorithmSource parameter.

The second point deserves a closer look: there we adjust the parameters of the leaderElection key. To ensure fault tolerance, we enabled (leaderElect) the election of a leader (master) among the pods of our kube-scheduler, using for them a single endpoint (resourceLock) named kube-scheduler-cron (lockObjectName) in the kube-system namespace (lockObjectNamespace). How Kubernetes ensures high availability of its main components (including kube-scheduler) is covered in the documentation.
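
With resourceLock: endpoints, the current leader is recorded as an annotation on the Endpoints object named in lockObjectName; a sketch of checking who holds the lock (the annotation key is the one used by client-go leader election of that era):

# holderIdentity inside this annotation names the pod that currently acts as leader
kubectl -n kube-system get endpoints kube-scheduler-cron \
  -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'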

  • Scheduling policy file (scheduler-custom-policy-config.json)
    As I wrote earlier, we can only find out which specific policies the default kube-scheduler works with by analyzing its code; that is, we cannot obtain a file with the default scheduling policies in the same way as the configuration file. Let's describe the scheduling policies we are interested in in the file /etc/kubernetes/scheduler-custom-policy-config.json as follows:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "GeneralPredicates"
    }
  ],
  "priorities": [
    {
      "name": "ServiceSpreadingPriority",
      "weight": 1
    },
    {
      "name": "EqualPriority",
      "weight": 1
    },
    {
      "name": "LeastRequestedPriority",
      "weight": 1
    },
    {
      "name": "NodePreferAvoidPodsPriority",
      "weight": 10000
    },
    {
      "name": "NodeAffinityPriority",
      "weight": 1
    }
  ],
  "hardPodAffinitySymmetricWeight" : 10,
  "alwaysCheckAllPredicates" : false
}

So, kube-scheduler first compiles the list of nodes on which a pod can be scheduled according to the GeneralPredicates policy (which includes the PodFitsResources, PodFitsHostPorts, HostName, and MatchNodeSelector policies). Then each node is evaluated according to the set of policies in the priorities array. To fulfil the conditions of our task, we considered such a set of policies to be the optimal solution. Let me remind you that the full set of policies, with detailed descriptions, is available in the documentation. To accomplish your own task, you can simply change the set of policies used and assign appropriate weights to them.
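
Before copying the file to the masters, it may be worth sanity-checking the JSON syntax so that the scheduler pod does not end up in a restart loop (a sketch; jq is assumed to be installed):

# pretty-prints the policy file and fails loudly on a syntax error
jq . /etc/kubernetes/scheduler-custom-policy-config.json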

Let's name the manifest of the new kube-scheduler that we wrote at the beginning of this chapter kube-scheduler-custom.yaml and place it at the path /etc/kubernetes/manifests on the three master nodes. If everything is done correctly, Kubelet will launch a pod on each node, and in the logs of our new kube-scheduler we will see that our policy file has been applied successfully:

Creating scheduler from configuration: {{ } [{GeneralPredicates <nil>}] [{ServiceSpreadingPriority 1 <nil>} {EqualPriority 1 <nil>} {LeastRequestedPriority 1 <nil>} {NodePreferAvoidPodsPriority 10000 <nil>} {NodeAffinityPriority 1 <nil>}] [] 10 false}
Registering predicate: GeneralPredicates
Predicate type GeneralPredicates already registered, reusing.
Registering priority: ServiceSpreadingPriority
Priority type ServiceSpreadingPriority already registered, reusing.
Registering priority: EqualPriority
Priority type EqualPriority already registered, reusing.
Registering priority: LeastRequestedPriority
Priority type LeastRequestedPriority already registered, reusing.
Registering priority: NodePreferAvoidPodsPriority
Priority type NodePreferAvoidPodsPriority already registered, reusing.
Registering priority: NodeAffinityPriority
Priority type NodeAffinityPriority already registered, reusing.
Creating scheduler with fit predicates 'map[GeneralPredicates:{}]' and priority functions 'map[EqualPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} ServiceSpreadingPriority:{}]'
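
At this point it is worth verifying that a scheduler pod is running on every master and that the health endpoint responds on the non-secure port from the manifest; a sketch:

# static pods created from the manifest, one per master
kubectl -n kube-system get pods -o wide | grep kube-scheduler-cron

# on a master node: probe the port set in the manifest
curl -s http://127.0.0.1:10151/healthz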

All that remains is to indicate in the spec of our CronJob that all requests to schedule its pods must be processed by our new kube-scheduler:

...
 jobTemplate:
    spec:
      template:
        spec:
          schedulerName: kube-scheduler-cron
...
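
Once a few cron pods have started, it is easy to check that they were bound by the new scheduler and not the default one; a sketch (the namespace is the one from the logs above):

# scheduling events; with -o wide the SOURCE column shows which scheduler bound each pod
kubectl -n project-stage get events --field-selector reason=Scheduled -o wide

# confirm the cron pods carry the new scheduler name in their spec
kubectl -n project-stage get pods -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.schedulerName}{"\n"}{end}' | grep cronjob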

Conclusion

In the end, we got an additional kube-scheduler with a unique set of scheduling policies, whose operation is monitored directly by the kubelet. In addition, we set up the election of a new leader among the pods of our kube-scheduler in case the old leader becomes unavailable for some reason.

Regular applications and services are still scheduled through the default kube-scheduler, while all cron tasks have been completely moved to the new one. The load created by the cron tasks is now distributed evenly across all nodes. Considering that most of the cron tasks run on the same nodes as the project's main applications, this has significantly reduced the risk of pods being moved due to a lack of resources. After introducing the additional kube-scheduler, problems with uneven scheduling of the cron tasks no longer arose.
