Kube-scheduler is an essential component of Kubernetes, responsible for scheduling pods across the cluster's nodes according to defined policies. During normal operation of a Kubernetes cluster we usually do not have to think about which policies are used to schedule pods, since the default kube-scheduler's policy set is suitable for most everyday workloads. However, there are situations where it is important to fine-tune how pods are distributed, and there are two ways to accomplish this:
- Create a kube-scheduler with a custom set of policies
- Write your own scheduler and teach it to handle API server requests
In this article, I will describe the implementation of the first option to solve the problem of uneven scheduling of pods on one of our projects.
A brief introduction to how kube-scheduler works
It is worth noting in particular that kube-scheduler is not responsible for launching pods directly; it is only responsible for determining the node on which to place a pod. In other words, the result of kube-scheduler's work is the name of a node, which it returns to the API server in response to a scheduling request, and that is where its job ends.
First, kube-scheduler compiles a list of nodes on which the pod can be scheduled, according to the predicates policies. Next, each node from this list is awarded a certain number of points according to the priorities policies. Finally, the node with the maximum number of points is selected; if several nodes share the top score, a random one is chosen. A list and description of the predicates (filtering) and priorities (scoring) policies can be found in the documentation.
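The two-phase process described above can be sketched roughly as follows. This is a simplified illustration, not the actual kube-scheduler code: the node data, the single-resource predicate, and the scoring function are invented for the example.

```python
import random

# Hypothetical nodes: allocatable CPU (millicores) and CPU already requested by pods.
nodes = {
    "Node01": {"capacity": 23900, "requested": 1687},
    "Node02": {"capacity": 23900, "requested": 1347},
    "Node03": {"capacity": 23900, "requested": 1387},
}

pod_request = 100  # millicores requested by the pod being scheduled

def fits(node):
    # Predicate phase: filter out nodes that cannot host the pod at all.
    return node["capacity"] - node["requested"] >= pod_request

def score(node):
    # Priority phase: more free resources -> more points, on a 0..10 scale.
    return (node["capacity"] - node["requested"]) * 10 // node["capacity"]

feasible = {name: n for name, n in nodes.items() if fits(n)}
scores = {name: score(n) for name, n in feasible.items()}
best = max(scores.values())
# Ties are broken randomly among the top-scoring nodes.
winner = random.choice([name for name, s in scores.items() if s == best])
print(winner, scores)
```

With the numbers above all three nodes end up with the same score, so the winner is picked at random, which matches the behavior we will see in the logs below.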
Description of the problem
Although a considerable number of diverse Kubernetes clusters are maintained at Nixys, we first ran into the pod-scheduling problem only recently, when one of our projects needed to run a large number of periodic tasks (~100 CronJob entities). To simplify the description of the problem as much as possible, we will take as an example a single microservice, within which a cron task is launched once a minute, creating some load on the CPU. Three nodes with identical characteristics (24 vCPUs each) were allocated to run the cron tasks.
At the same time, it is impossible to say exactly how long a given CronJob will take to execute, since the volume of input data changes constantly. On average, during normal operation of kube-scheduler, each node runs 3-4 task instances, which create ~20-30% of the load on that node's CPU:
The problem itself is that sometimes the cron task pods stopped being scheduled on one of the three nodes altogether. That is, at some point in time, not a single pod was planned for one of the nodes, while on the other two nodes 6-8 copies of the task were running, creating ~40-60% of the CPU load:
The problem recurred at irregular intervals and occasionally correlated with the moment a new version of the code was rolled out.
By increasing the kube-scheduler logging level to 10 (-v=10), we began recording how many points each node gained during evaluation. During normal scheduling, the following information could be seen in the logs:
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1387 millicores 4161694720 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1347 millicores 4444810240 memory bytes, score 9
resource_allocation.go:78] cronjob-1574828880-mn7m4 -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1687 millicores 4790840320 memory bytes, score 9
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: NodeAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: NodeAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: NodeAffinityPriority, Score: (0)
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node01: InterPodAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: TaintTolerationPriority, Score: (10)
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node02: InterPodAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: TaintTolerationPriority, Score: (10)
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node01: SelectorSpreadPriority, Score: (10)
interpod_affinity.go:237] cronjob-1574828880-mn7m4 -> Node03: InterPodAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: TaintTolerationPriority, Score: (10)
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node02: SelectorSpreadPriority, Score: (10)
selector_spreading.go:146] cronjob-1574828880-mn7m4 -> Node03: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node01: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node02: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574828880-mn7m4_project-stage -> Node03: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:781] Host Node01 => Score 100043
generic_scheduler.go:781] Host Node02 => Score 100043
generic_scheduler.go:781] Host Node03 => Score 100043
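When digging through many such dumps, it is convenient to extract the per-host totals programmatically. A small sketch (the regular expression is an assumption based on the log format shown above):

```python
import re

# A few "Host ... => Score ..." lines, as emitted by generic_scheduler.go at -v=10.
log = """\
generic_scheduler.go:781] Host Node01 => Score 100043
generic_scheduler.go:781] Host Node02 => Score 100043
generic_scheduler.go:781] Host Node03 => Score 100043
"""

# Collect the final score assigned to each node.
scores = {m.group(1): int(m.group(2))
          for m in re.finditer(r"Host (\S+) => Score (\d+)", log)}
print(scores)
```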
That is, judging by the information obtained from the logs, each of the nodes gained an equal number of final points and a random one was selected for scheduling. At the time of the problematic scheduling, the logs looked like this:
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node02: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1587 millicores 4581125120 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: BalancedResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node01: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 987 millicores 3322833920 memory bytes, score 9
resource_allocation.go:78] cronjob-1574211360-bzfkr -> Node03: LeastResourceAllocation, capacity 23900 millicores 67167186944 memory bytes, total request 1087 millicores 3532549120 memory bytes, score 9
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node03: InterPodAffinityPriority, Score: (0)
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node02: InterPodAffinityPriority, Score: (0)
interpod_affinity.go:237] cronjob-1574211360-bzfkr -> Node01: InterPodAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: TaintTolerationPriority, Score: (10)
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node03: SelectorSpreadPriority, Score: (10)
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node02: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: TaintTolerationPriority, Score: (10)
selector_spreading.go:146] cronjob-1574211360-bzfkr -> Node01: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: NodeAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node03: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: TaintTolerationPriority, Score: (10)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node02: NodeAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: NodeAffinityPriority, Score: (0)
generic_scheduler.go:726] cronjob-1574211360-bzfkr_project-stage -> Node01: SelectorSpreadPriority, Score: (10)
generic_scheduler.go:781] Host Node03 => Score 100041
generic_scheduler.go:781] Host Node02 => Score 100041
generic_scheduler.go:781] Host Node01 => Score 100038
From this it can be seen that one of the nodes gained fewer final points than the others, and therefore scheduling was performed only on the two nodes with the top score. Thus, we became convinced that the problem lay precisely in the scheduling of the pods.
The further algorithm for solving the problem was obvious to us: analyze the logs, work out by which priority the node did not receive points and, if necessary, adjust the policies of the default kube-scheduler. However, here we ran into two significant difficulties:
- Even at the maximum logging level (10), points gained for only some priorities are reflected. In the excerpt of the logs above, you can see that for all the priorities that do appear in the logs, the nodes score the same number of points during both normal and problematic scheduling, yet the final result in the problematic case differs. We can therefore conclude that for some priorities the scoring happens "behind the scenes", and we have no way of telling for which priority a node did not get points. We described this problem in detail in an issue in the Kubernetes repository on GitHub. At the time of writing, we received a response from the developers that logging support would be added in the Kubernetes v1.15, 1.16 and 1.17 updates.
- There is no easy way to understand which set of policies kube-scheduler is currently working with. Yes, the documentation lists them, but it does not contain information about what specific weights are assigned to each of the priorities policies. You can see the weights, or edit the policies of the default kube-scheduler, only in the source code.
It is worth noting that we were once able to record that a node did not receive points according to the ImageLocalityPriority policy, which awards points to a node if it already has the image required to run the application. That is, at the moment a new version of the application was rolled out, the cron task managed to run on two nodes, which downloaded the new image from the docker registry, and so those two nodes received a higher final score relative to the third.
As I wrote above, the logs show no information about the evaluation of the ImageLocalityPriority policy, so to check our assumption we dumped the image with the new version of the application onto the third node, after which scheduling worked correctly. It was precisely because of the ImageLocalityPriority policy that the scheduling problem was observed rather rarely; more often it was associated with something else. Since we could not fully debug each of the policies in the priority list of the default kube-scheduler, we needed flexible management of pod-scheduling policies.
Problem statement
We wanted the solution to be as targeted as possible, i.e. the main entities of Kubernetes (here we mean the default kube-scheduler) should remain unchanged. We did not want to fix the problem in one place only to create it in another. So we arrived at the two options for solving it that were announced in the introduction of the article: creating an additional scheduler or writing our own. The key requirement for scheduling the cron tasks is to distribute the load evenly across the three nodes. This requirement can be satisfied by the existing kube-scheduler policies, so there is no point in writing our own scheduler to solve our problem.
Instructions for creating and deploying an additional kube-scheduler are described in the documentation. In our case, the new service had to meet the following requirements:
- The service must be deployed as a Static Pod on all cluster masters
- Fault tolerance must be provided in case the active pod with kube-scheduler becomes unavailable
- The main priority when scheduling should be the amount of resources available on the node (LeastRequestedPriority)
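For a single resource, the idea behind LeastRequestedPriority can be sketched as follows. This is a simplification: the real scheduler averages the score over CPU and memory, and the function below is an illustration, not its actual code.

```python
def least_requested(capacity_milli: int, requested_milli: int) -> int:
    """Simplified LeastRequestedPriority for one resource:
    ((capacity - requested) * 10) / capacity, as an integer 0..10.
    The emptier the node, the higher the score."""
    if capacity_milli == 0:
        return 0
    return (capacity_milli - requested_milli) * 10 // capacity_milli

print(least_requested(24000, 0))      # idle 24-vCPU node -> 10
print(least_requested(24000, 12000))  # half-loaded node  -> 5
```

This is exactly the property we need: a node that currently runs fewer cron task copies gets more points and therefore receives the next pod.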
Implementation
It should be noted right away that we will do all the work in Kubernetes v1.14.7, because this is the version used in the project. Let's start by writing a manifest for our new kube-scheduler. We will take the default manifest (/etc/kubernetes/manifests/kube-scheduler.yaml) as a basis and bring it to the following form:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: kube-scheduler-cron
  namespace: kube-system
spec:
  containers:
  - command:
    - /usr/local/bin/kube-scheduler
    - --address=0.0.0.0
    - --port=10151
    - --secure-port=10159
    - --config=/etc/kubernetes/scheduler-custom.conf
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --v=2
    image: gcr.io/google-containers/kube-scheduler:v1.14.7
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10151
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler-cron-container
    resources:
      requests:
        cpu: '0.1'
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kube-config
      readOnly: true
    - mountPath: /etc/localtime
      name: localtime
      readOnly: true
    - mountPath: /etc/kubernetes/scheduler-custom.conf
      name: scheduler-config
      readOnly: true
    - mountPath: /etc/kubernetes/scheduler-custom-policy-config.json
      name: policy-config
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kube-config
  - hostPath:
      path: /etc/localtime
    name: localtime
  - hostPath:
      path: /etc/kubernetes/scheduler-custom.conf
      type: FileOrCreate
    name: scheduler-config
  - hostPath:
      path: /etc/kubernetes/scheduler-custom-policy-config.json
      type: FileOrCreate
    name: policy-config
Briefly about the main changes:
- Changed the name of the pod and container to kube-scheduler-cron
- Specified the use of ports 10151 and 10159, since the option hostNetwork: true is set and we cannot use the same ports as the default kube-scheduler (10251 and 10259)
- Using the --config parameter, specified the configuration file with which the service should be started
- Configured mounting of the configuration file (scheduler-custom.conf) and the scheduling policy file (scheduler-custom-policy-config.json) from the host
Don't forget that our kube-scheduler will need the same permissions as the default one. Edit its cluster role:
kubectl edit clusterrole system:kube-scheduler
...
resourceNames:
- kube-scheduler
- kube-scheduler-cron
...
Now let's talk about what the configuration file and the scheduling policy file should contain:
- Configuration file (scheduler-custom.conf)
To obtain the default kube-scheduler configuration, you can use the --write-config-to parameter described in the documentation. We will place the resulting configuration in the file /etc/kubernetes/scheduler-custom.conf and reduce it to the following form:
apiVersion: kubescheduler.config.k8s.io/v1alpha1
kind: KubeSchedulerConfiguration
schedulerName: kube-scheduler-cron
bindTimeoutSeconds: 600
clientConnection:
  acceptContentTypes: ""
  burst: 100
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/scheduler.conf
  qps: 50
disablePreemption: false
enableContentionProfiling: false
enableProfiling: false
failureDomains: kubernetes.io/hostname,failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region
hardPodAffinitySymmetricWeight: 1
healthzBindAddress: 0.0.0.0:10151
leaderElection:
  leaderElect: true
  leaseDuration: 15s
  lockObjectName: kube-scheduler-cron
  lockObjectNamespace: kube-system
  renewDeadline: 10s
  resourceLock: endpoints
  retryPeriod: 2s
metricsBindAddress: 0.0.0.0:10151
percentageOfNodesToScore: 0
algorithmSource:
  policy:
    file:
      path: "/etc/kubernetes/scheduler-custom-policy-config.json"
Briefly about the main changes:
- Set schedulerName to the name of our kube-scheduler-cron service.
- In the lockObjectName parameter you also need to set the name of our service, and make sure that the leaderElect parameter is set to true (if you have a single master node, you can set it to false).
- Specified the path to the file with the description of the scheduling policies in the algorithmSource parameter.
The second point is worth a closer look, where we edit the keys of the leaderElection parameter. To ensure fault tolerance, we enabled (leaderElect) the election of a leader (master) among the pods of our kube-scheduler, using a single endpoint shared by them (resourceLock) named kube-scheduler-cron (lockObjectName) in the kube-system namespace (lockObjectNamespace). How Kubernetes ensures high availability of the main components (including kube-scheduler) is described in the documentation.
- Scheduling policy file (scheduler-custom-policy-config.json)
As I wrote earlier, we can find out which specific policies the default kube-scheduler works with only by analyzing its code. That is, we cannot obtain a file with the scheduling policies of the default kube-scheduler in the same way as the configuration file. Let's describe the scheduling policies we are interested in in the /etc/kubernetes/scheduler-custom-policy-config.json file as follows:
{
  "kind": "Policy",
  "apiVersion": "v1",
  "predicates": [
    {
      "name": "GeneralPredicates"
    }
  ],
  "priorities": [
    {
      "name": "ServiceSpreadingPriority",
      "weight": 1
    },
    {
      "name": "EqualPriority",
      "weight": 1
    },
    {
      "name": "LeastRequestedPriority",
      "weight": 1
    },
    {
      "name": "NodePreferAvoidPodsPriority",
      "weight": 10000
    },
    {
      "name": "NodeAffinityPriority",
      "weight": 1
    }
  ],
  "hardPodAffinitySymmetricWeight": 10,
  "alwaysCheckAllPredicates": false
}
Thus, kube-scheduler first compiles a list of nodes on which a pod can be scheduled according to the GeneralPredicates policy (which includes the set of PodFitsResources, PodFitsHostPorts, HostName, and MatchNodeSelector policies). Then each node is evaluated according to the set of policies in the priorities array. To fulfill the conditions of our task, we considered such a set of policies to be the optimal solution. Let me remind you that the set of policies, with detailed descriptions, is available in the documentation.
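The final score of a node is then, in essence, a weighted sum over the priorities array. A simplified illustration of how the weights from the policy file above combine (the per-policy scores for the node are invented for the example):

```python
# Weights taken from the policy file above.
weights = {
    "ServiceSpreadingPriority": 1,
    "EqualPriority": 1,
    "LeastRequestedPriority": 1,
    "NodePreferAvoidPodsPriority": 10000,
    "NodeAffinityPriority": 1,
}

# Hypothetical per-policy scores (each on a 0..10 scale) for one node.
node_scores = {
    "ServiceSpreadingPriority": 10,
    "EqualPriority": 1,
    "LeastRequestedPriority": 9,
    "NodePreferAvoidPodsPriority": 10,
    "NodeAffinityPriority": 0,
}

def final_score(per_policy):
    # Weighted sum: this is why NodePreferAvoidPodsPriority (weight 10000)
    # dominates the totals, as in the "Host ... => Score 1000xx" log lines.
    return sum(weights[p] * s for p, s in per_policy.items())

print(final_score(node_scores))  # 100020
```

The huge weight on NodePreferAvoidPodsPriority explains why all the final host scores in the logs sit around 100000: that one policy dwarfs the others unless a node is explicitly marked to be avoided.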
Let's name the manifest of the new kube-scheduler, which we created at the beginning of the chapter, kube-scheduler-custom.yaml and place it in the path /etc/kubernetes/manifests on the three master nodes. If everything is done correctly, Kubelet will launch a pod on each node, and in the logs of our kube-scheduler we will see that our policy file was applied successfully:
Creating scheduler from configuration: {{ } [{GeneralPredicates <nil>}] [{ServiceSpreadingPriority 1 <nil>} {EqualPriority 1 <nil>} {LeastRequestedPriority 1 <nil>} {NodePreferAvoidPodsPriority 10000 <nil>} {NodeAffinityPriority 1 <nil>}] [] 10 false}
Registering predicate: GeneralPredicates
Predicate type GeneralPredicates already registered, reusing.
Registering priority: ServiceSpreadingPriority
Priority type ServiceSpreadingPriority already registered, reusing.
Registering priority: EqualPriority
Priority type EqualPriority already registered, reusing.
Registering priority: LeastRequestedPriority
Priority type LeastRequestedPriority already registered, reusing.
Registering priority: NodePreferAvoidPodsPriority
Priority type NodePreferAvoidPodsPriority already registered, reusing.
Registering priority: NodeAffinityPriority
Priority type NodeAffinityPriority already registered, reusing.
Creating scheduler with fit predicates 'map[GeneralPredicates:{}]' and priority functions 'map[EqualPriority:{} LeastRequestedPriority:{} NodeAffinityPriority:{} NodePreferAvoidPodsPriority:{} ServiceSpreadingPriority:{}]'
Now all that remains is to indicate in the spec of our CronJob that all requests to schedule its pods should be processed by our kube-scheduler:
...
jobTemplate:
  spec:
    template:
      spec:
        schedulerName: kube-scheduler-cron
...
Conclusion
In the end, we got an additional kube-scheduler with a unique set of scheduling policies, whose work is monitored directly by kubelet. In addition, we set up the election of a new leader among the pods of our kube-scheduler in case the old leader becomes unavailable for any reason.
Regular applications and services are still scheduled through the default kube-scheduler, while all cron tasks have been moved entirely to the new one. The load created by the cron tasks is now distributed evenly across all the nodes. Considering that most of the cron tasks run on the same nodes as the project's main applications, this has significantly reduced the risk of pods being evicted due to a lack of resources. After introducing the additional kube-scheduler, problems with uneven scheduling of the cron tasks no longer arose.
Source: www.habr.com