Creating an additional kube-scheduler with a custom set of scheduling policies

Kube-scheduler is an integral component of Kubernetes, responsible for scheduling pods across nodes according to the specified policies. Usually, while operating a Kubernetes cluster, we don't have to think about which policies are used to schedule pods, since the default kube-scheduler's set of policies suits most everyday workloads. However, there are situations when it is important to fine-tune how pods are allocated, and there are two ways to accomplish this:

  1. Create a kube-scheduler with a custom set of rules
  2. Write your own scheduler and teach it to work with API server requests

In this article, I will describe the implementation of the first option to solve the problem of uneven pod scheduling on one of our projects.

A short introduction to how kube-scheduler works

It is worth noting specifically that kube-scheduler is not responsible for directly placing pods - it is only responsible for determining the node on which to place a pod. In other words, the result of kube-scheduler's work is the name of a node, which it returns to the API server in response to a scheduling request, and that is where its job ends.

First, kube-scheduler compiles a list of nodes on which the pod can be scheduled, according to the predicate policies. Then each node on this list receives a certain number of points according to the priority policies. Finally, the node with the maximum number of points is chosen; if several nodes share the maximum score, one of them is chosen at random. A list and description of the predicate (filtering) and priority (scoring) policies can be found in the documentation.
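In practice, the most direct way to see the scheduler's decision for a given pod is its spec.nodeName field, which is set once scheduling succeeds. A quick way to check it (the pod name and namespace here are taken from the log excerpts later in the article and are purely illustrative):

kubectl -n project-stage get pod cronjob-1574828880-mn7m4 -o jsonpath='{.spec.nodeName}'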

Description of the problem

Despite the large number of different Kubernetes clusters maintained at Nixys, we only recently encountered the problem of pod scheduling for the first time, when one of our projects needed to run a large number of periodic tasks (~100 CronJob entities). To simplify the description of the problem as much as possible, let's take as an example a single microservice, in which a cron job is launched once a minute and creates some load on the CPU. Three nodes with absolutely identical characteristics were allocated to run the cron job (24 vCPU on each).

At the same time, it is impossible to say precisely how long a CronJob run will take, because the volume of input data is constantly changing. On average, during normal operation of kube-scheduler, each node runs 3-4 job instances, which create ~20-30% load on the CPU of each node:

[Figure: CPU load on the three nodes during normal scheduling]

The problem itself is that, from time to time, cron job pods stopped being scheduled on one of the three nodes. That is, at some point in time, not a single pod was planned for one of the nodes, while 6-8 copies of the job were running on the other two nodes, creating ~40-60% CPU load:

[Figure: CPU load on the three nodes while one node is being skipped by the scheduler]

The problem recurred with completely random frequency and occasionally coincided with the rollout of a new version of the code.
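To understand how each node was being scored, we needed more detail than the default scheduler logs out of the box. On a kubeadm-based cluster this is a small edit to the static pod manifest of the default scheduler; a sketch of the relevant fragment, assuming the standard /etc/kubernetes/manifests/kube-scheduler.yaml path and leaving all other flags untouched:

# Fragment of /etc/kubernetes/manifests/kube-scheduler.yaml; kubelet restarts
# the pod automatically once the file is saved.
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    # ...other default flags left unchanged...
    - --v=10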

Having raised the kube-scheduler logging level to 10 (-v=10), we started recording how many points each node scored during the evaluation process. During normal scheduling, the following information appears in the logs:

resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node01 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100043                                                                                                                                                                        
generic_scheduler.go:781] Host Node03 => Score 100043

That is, judging by the information obtained from the logs, each of the nodes scored an equal number of final points and one of them was chosen at random for scheduling. At the time of the problematic scheduling, the logs looked like this:

resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9 
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node03: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node02: InterPodAffinityPriority, Score: (0)                                                                                                        
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node01: InterPodAffinityPriority, Score: (0)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node03: SelectorSpreadPriority, Score: (10)                                                                                                        
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node02: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: TaintTolerationPriority, Score: (10)                                                                                   
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node01: SelectorSpreadPriority, Score: (10)                                                                                                        
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: TaintTolerationPriority, Score: (10)                                                                                   
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: NodeAffinityPriority, Score: (0)                                                                                       
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: SelectorSpreadPriority, Score: (10)                                                                                    
generic_scheduler.go:781] Host Node03 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node02 => Score 100041                                                                                                                                                                        
generic_scheduler.go:781] Host Node01 => Score 100038

Here it can be seen that one of the nodes scored fewer final points than the others, and therefore scheduling was carried out only onto the two nodes with the maximum score. So we were definitively convinced that the problem lay precisely in the scheduling of the pods.

The further algorithm for solving the problem was obvious to us - analyze the logs, figure out which priority the node did not get points for and, if necessary, adjust the policies of the default kube-scheduler. However, here we ran into two significant difficulties:

  1. At the maximum logging level (10), points scored for only some of the priorities are reflected. In the excerpt from the logs above, you can see that for all the priorities that appear in the logs, the nodes score the same number of points during normal and problematic scheduling, yet the final result in the case of problematic scheduling is different. Thus, we can conclude that for some priorities the scoring happens "behind the scenes", and we have no way to understand which priority the node did not get points for. We described this problem in detail in an issue in the Kubernetes repository on GitHub. At the time of writing, the developers replied that logging support will be added in the Kubernetes v1.15, 1.16 and 1.17 updates.
  2. There is no easy way to understand which specific set of policies kube-scheduler is currently working with. Yes, the documentation lists them, but it contains no information about which specific weights are assigned to each of the priority policies. You can see the weights or modify the policies of the default kube-scheduler only in its source code.

It is worth noting that we did once manage to record that a node did not receive points according to the ImageLocalityPriority policy, which awards points to a node if it already has the image needed to run the application. That is, at the moment a new version of the application was rolled out, the cron job managed to run on two nodes, which downloaded the new image from the Docker registry, and so those two nodes received a higher final score relative to the third.

As I wrote above, the logs show no information about the evaluation of the ImageLocalityPriority policy, so in order to check our assumption, we pulled the image with the new application version onto the third node, after which scheduling worked correctly. It was precisely because of the ImageLocalityPriority policy that the scheduling problem was observed fairly rarely; more often it was associated with something else. Since we could not fully debug each of the policies in the default kube-scheduler's priority list, we needed flexible management of the pod scheduling policies.
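The original text doesn't show exactly how the image was delivered to the third node; with a Docker-based container runtime it would look roughly like this (the image name and tag are hypothetical):

# On the third node: pull the new application image manually so that
# ImageLocalityPriority no longer favours the other two nodes.
docker pull registry.example.com/project/app:new-version
docker images | grep project/app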

Problem statement

We wanted the solution to be as targeted as possible, that is, the core Kubernetes entities (here we mean the default kube-scheduler) should remain unchanged. We did not want to solve a problem in one place and create it in another. So we ended up with two options for solving the problem, which were announced in the introduction to the article - create an additional scheduler or write our own. The main requirement for scheduling the cron jobs is to distribute the load evenly across the three nodes. This requirement can be satisfied with the existing kube-scheduler policies, so there is no point in writing our own scheduler for our problem.

Instructions for creating and deploying an additional kube-scheduler are described in the documentation. However, it seemed to us that a Deployment was not enough to ensure fault tolerance for such a critical service as kube-scheduler, so we decided to deploy the new kube-scheduler as a Static Pod that would be monitored directly by Kubelet. Thus, we have the following requirements for the new kube-scheduler:

  1. The service must be deployed as a Static Pod on all cluster masters
  2. Fault tolerance must be provided in case the active pod with kube-scheduler becomes unavailable
  3. The main priority when scheduling should be the amount of resources available on the node (LeastRequestedPriority)

Implementing the solution

It is worth noting right away that we will do all the work on Kubernetes v1.14.7, since this is the version used on the project. Let's start by writing a manifest for our new kube-scheduler. We'll take the default manifest (/etc/kubernetes/manifests/kube-scheduler.yaml) as a base and bring it to the following form:

apiVersion: v1
kind: Pod
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: kube-scheduler-cron
  namespace: kube-system
spec:
      containers:
      - command:
        - /usr/local/bin/kube-scheduler
        - --address=0.0.0.0
        - --port=10151
        - --secure-port=10159
        - --config=/etc/kubernetes/scheduler-custom.conf
        - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
        - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
        - --v=2
        image: gcr.io/google-containers/kube-scheduler:v1.14.7
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            host: 127.0.0.1
            path: /healthz
            port: 10151
            scheme: HTTP
          initialDelaySeconds: 15
          timeoutSeconds: 15
        name: kube-scheduler-cron-container
        resources:
          requests:
            cpu: '0.1'
        volumeMounts:
        - mountPath: /etc/kubernetes/scheduler.conf
          name: kube-config
          readOnly: true
        - mountPath: /etc/localtime
          name: localtime
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom.conf
          name: scheduler-config
          readOnly: true
        - mountPath: /etc/kubernetes/scheduler-custom-policy-config.json
          name: policy-config
          readOnly: true
      hostNetwork: true
      priorityClassName: system-cluster-critical
      volumes:
      - hostPath:
          path: /etc/kubernetes/scheduler.conf
          type: FileOrCreate
        name: kube-config
      - hostPath:
          path: /etc/localtime
        name: localtime
      - hostPath:
          path: /etc/kubernetes/scheduler-custom.conf
          type: FileOrCreate
        name: scheduler-config
      - hostPath:
          path: /etc/kubernetes/scheduler-custom-policy-config.json
          type: FileOrCreate
        name: policy-config

Briefly about the main changes:

  1. Renamed the pod and the container to kube-scheduler-cron
  2. Specified ports 10151 and 10159, since hostNetwork: true is set and we cannot use the same ports as the default kube-scheduler (10251 and 10259); a quick check is shown right after this list
  3. Using the --config parameter, specified the configuration file the service should start with
  4. Mounted the configuration file (scheduler-custom.conf) and the scheduling policy file (scheduler-custom-policy-config.json) from the host
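One way to confirm that the chosen ports do not clash with the default kube-scheduler (a hedged check, assuming ss is available on the master):

# The default kube-scheduler should be listening on 10251/10259,
# our kube-scheduler-cron instance on 10151/10159.
ss -tlnp | grep kube-scheduler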

Don't forget that our kube-scheduler will need permissions similar to the default one's. Edit its cluster role:

kubectl edit clusterrole system:kube-scheduler

...
   resourceNames:
    - kube-scheduler
    - kube-scheduler-cron
...

Now let's talk about what should go into the configuration file and the scheduling policy file:

  • Configuration file (scheduler-custom.conf)
    To obtain the default kube-scheduler configuration, you need to use the --write-config-to parameter described in the documentation (a command sketch is shown after this subsection). We will put the resulting configuration into the file /etc/kubernetes/scheduler-custom.conf and reduce it to the following form:

apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: kube-scheduler-cron
bindTimeoutSeconds: 600
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10151
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  lockObjectName: kube-scheduler-cron
  lockObjectNamespace: kube-system
  renewDeadline: 10s
  resourceLock: endpoints
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10151
percentageOfNodesToScore: 0
algorithmSource:
   policy:
     file:
       path: "/etc/kubernetes/scheduler-custom-policy-config.json"

Briefly about the main changes:

  1. We set schedulerName to the name of our service, kube-scheduler-cron.
  2. In the lockObjectName parameter you also need to set the name of our service and make sure that the leaderElect parameter is set to true (if you have a single master node, you can set it to false).
  3. We specified the path to the file describing the scheduling policies in the algorithmSource parameter.

The second point, where we edit the parameters of the leaderElection key, is worth a closer look. To ensure fault tolerance, we enabled (leaderElect) the election of a leader (master) among the pods of our kube-scheduler, using a single endpoint for them (resourceLock) named kube-scheduler-cron (lockObjectName) in the kube-system namespace (lockObjectNamespace). How Kubernetes ensures high availability of the core components (including kube-scheduler) is covered in a separate article.
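As promised above, a sketch of dumping the default configuration with --write-config-to; the output path is arbitrary, and the command assumes the kube-scheduler binary is available on the master (or you run it inside the scheduler container):

kube-scheduler --write-config-to=/tmp/kube-scheduler-default.conf
cat /tmp/kube-scheduler-default.conf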

  • Scheduling policy file (scheduler-custom-policy-config.json)
    As I wrote earlier, we can only find out which specific policies the default kube-scheduler works with by analyzing its code. That is, we cannot get a file with the scheduling policies of the default kube-scheduler the same way we can get a configuration file. Let's describe the scheduling policies we are interested in in the file /etc/kubernetes/scheduler-custom-policy-config.json as follows:

{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "GeneralPredicates"
    }
  ],
  "priorities": [
    {
      "name": "ServiceSpreadingPriority",
      "weight": 1
    },
    {
      "name": "EqualPriority",
      "weight": 1
    },
    {
      "name": "LeastRequestedPriority",
      "weight": 1
    },
    {
      "name": "NodePreferAvoidPodsPriority",
      "weight": 10000
    },
    {
      "name": "NodeAffinityPriority",
      "weight": 1
    }
  ],
  "hardPodAffinitySymmetricWeight" : 10,
  "alwaysCheckAllPredicates" : false
}

So, kube-scheduler first compiles a list of nodes on which a pod can be scheduled according to the GeneralPredicates policy (which includes the PodFitsResources, PodFitsHostPorts, HostName and MatchNodeSelector policies). Then each node is scored according to the set of policies in the priorities array. To meet the requirements of our task, we considered this set of policies to be the optimal solution. Let me remind you that the full set of policies with detailed descriptions is available in the documentation. To accomplish your own task, you can simply change the set of policies used and assign them appropriate weights.
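For example, if resource availability should matter even more relative to the other priorities, one could simply raise the weight of LeastRequestedPriority in the same file (an illustrative fragment, not the configuration we actually used):

    {
      "name": "LeastRequestedPriority",
      "weight": 10
    }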

Let's name the manifest of the new kube-scheduler, which we created at the beginning of this chapter, kube-scheduler-custom.yaml and place it at the path /etc/kubernetes/manifests on the three master nodes. If everything is done correctly, Kubelet will launch a pod on each node, and in the logs of our new kube-scheduler we will see that our policy file has been applied successfully:

Creating scheduler from configuration: {{ } [{GeneralPredicates <nil>}] [{ServiceSpreadingPriority 1 <nil>} {EqualPriority 1 <nil>} {LeastRequestedPriority 1 <nil>} {NodePreferAvoidPodsPriority 10000 <nil>} {NodeAffinityPriority 1 <nil>}] [] 10 false}
Registering predicate: GeneralPredicates
Predicate type GeneralPredicates already registered, reusing.
Registering priority: ServiceSpreadingPriority
Priority type ServiceSpreadingPriority already registered, reusing.
Registering priority: EqualPriority
Priority type EqualPriority already registered, reusing.
Registering priority: LeastRequestedPriority
Priority type LeastRequestedPriority already registered, reusing.
Registering priority: NodePreferAvoidPodsPriority
Priority type NodePreferAvoidPodsPriority already registered, reusing.
Registering priority: NodeAffinityPriority
Priority type NodeAffinityPriority already registered, reusing.
Creating scheduler with fit predicates 'map[GeneralPredicates:{}]' and priority functions 'map[EqualPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} ServiceSpreadingPriority:{}]'
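If the static pods came up correctly, they should also be visible as mirror pods in the kube-system namespace, one per master (a quick check):

kubectl -n kube-system get pods -o wide | grep kube-scheduler-cron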

Now all that remains is to indicate in the spec of our CronJob that all requests to schedule its pods should be handled by our new kube-scheduler (a fuller skeleton is sketched after this fragment):

...
 jobTemplate:
    spec:
      template:
        spec:
          schedulerName: kube-scheduler-cron
...
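For context, a hypothetical minimal CronJob skeleton showing where schedulerName sits relative to the rest of the spec; the name, schedule and image are illustrative, and batch/v1beta1 is the CronJob API version for Kubernetes v1.14:

apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: cronjob-example
  namespace: project-stage
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          # All pods created by this CronJob will be scheduled by kube-scheduler-cron
          schedulerName: kube-scheduler-cron
          restartPolicy: Never
          containers:
          - name: worker
            image: registry.example.com/project/worker:latest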

Conclusion

In the end, we got an additional kube-scheduler with a unique set of scheduling policies, whose operation is monitored directly by kubelet. In addition, we set up the election of a new leader among the pods of our kube-scheduler in case the old leader becomes unavailable for any reason.

Regular applications and services continue to be scheduled by the default kube-scheduler, while all cron jobs have been moved entirely to the new one. The load created by the cron jobs is now distributed evenly across all the nodes. Considering that most of the cron jobs run on the same nodes as the project's main applications, this has significantly reduced the risk of pods being evicted due to lack of resources. After introducing the additional kube-scheduler, the problem of uneven scheduling of cron jobs has not come up again.


Source: www.habr.com
