6 entertaining system bugs when operating Kubernetes [and their solutions]

Over the years of operating Kubernetes in production, we have accumulated plenty of entertaining stories about how bugs in various system components led to unpleasant and/or incomprehensible consequences for the operation of containers and pods. In this article we have put together a selection of the most common or most interesting ones. Even if you are never lucky enough to run into such situations, reading short detective stories like these, especially "first-hand" ones, is always interesting, isn't it?..

Story 1. Supercronic and hanging Docker

On one of the clusters we periodically observed a frozen Docker, which disrupted the normal operation of the cluster. At the same time, the following was seen in the Docker logs:

level=error msg="containerd: start init process" error="exit status 2: "runtime/cgo: pthread_create failed: No space left on device
SIGABRT: abort
PC=0x7f31b811a428 m=0

goroutine 0 [idle]:

goroutine 1 [running]:
runtime.systemstack_switch() /usr/local/go/src/runtime/asm_amd64.s:252 fp=0xc420026768 sp=0xc420026760
runtime.main() /usr/local/go/src/runtime/proc.go:127 +0x6c fp=0xc4200267c0 sp=0xc420026768
runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200267c8 sp=0xc4200267c0

goroutine 17 [syscall, locked to thread]:
runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1

…

What interests us most in this error is the message: pthread_create failed: No space left on device. A quick study of the documentation explained that Docker could not fork a process, which is why it periodically froze.

Monitoring showed a picture that matched what was happening, and a similar situation was observed on other nodes.

On those same nodes we see:

root@kube-node-1 ~ # ps auxfww | grep curl -c
19782
root@kube-node-1 ~ # ps auxfww | grep curl | head
root     16688  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     17398  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     16852  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      9473  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      4664  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     30571  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     24113  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     16475  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      7176  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      1090  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>

It turned out that this behaviour is a consequence of a pod working with supercronic (a Go utility we use to run cron jobs in pods):

 _ docker-containerd-shim 833b60bb9ff4c669bb413b898a5fd142a57a21695e5dc42684235df907825567 /var/run/docker/libcontainerd/833b60bb9ff4c669bb413b898a5fd142a57a21695e5dc42684235df907825567 docker-runc
|   _ /usr/local/bin/supercronic -json /crontabs/cron
|       _ /usr/bin/newrelic-daemon --agent --pidfile /var/run/newrelic-daemon.pid --logfile /dev/stderr --port /run/newrelic.sock --tls --define utilization.detect_aws=true --define utilization.detect_azure=true --define utilization.detect_gcp=true --define utilization.detect_pcf=true --define utilization.detect_docker=true
|       |   _ /usr/bin/newrelic-daemon --agent --pidfile /var/run/newrelic-daemon.pid --logfile /dev/stderr --port /run/newrelic.sock --tls --define utilization.detect_aws=true --define utilization.detect_azure=true --define utilization.detect_gcp=true --define utilization.detect_pcf=true --define utilization.detect_docker=true -no-pidfile
|       _ [newrelic-daemon] <defunct>
|       _ [curl] <defunct>
|       _ [curl] <defunct>
|       _ [curl] <defunct>
…

The problem is this: when a task runs in supercronic, the process spawned by it cannot terminate correctly and turns into a zombie.

Note: to be more precise, the processes are spawned by the cron tasks, but supercronic is not an init system and cannot "reap" the processes spawned by its children. When SIGHUP or SIGTERM signals are raised, they are not passed on to the child processes, so the children do not terminate and remain in zombie status. You can read more about all this, for example, in an article like this one.

There are several ways to solve the problem:

  1. As a temporary workaround, increase the number of PIDs available in the system at any one time:
           /proc/sys/kernel/pid_max (since Linux 2.5.34)
                  This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). PIDs greater than this value are not allocated;
                  thus, the value in this file also acts as a system-wide limit on the total number of processes and threads. The default value for this file, 32768, results in the
                  same range of PIDs as on earlier kernels
  2. Or run the tasks in supercronic not directly, but via the same tini, which can terminate processes correctly and does not breed zombies (see the sketch right after this list).
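Both options can be sketched roughly as follows; the sysctl value and the Dockerfile fragment are illustrative rather than taken from our actual setup:

# temporary workaround: raise the system-wide PID limit so zombies exhaust it more slowly
echo 4194303 > /proc/sys/kernel/pid_max

# proper fix: make tini PID 1 so that it reaps the children spawned by cron tasks;
# a hypothetical Dockerfile fragment:
#   RUN apt-get update && apt-get install -y tini
#   ENTRYPOINT ["/usr/bin/tini", "--", "/usr/local/bin/supercronic", "-json", "/crontabs/cron"]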

Story 2. "Zombies" when deleting a cgroup

The kubelet started consuming a lot of CPU:

Nobody likes that, so we armed ourselves with perf and started dealing with the problem. The results of the investigation were as follows:

  • The kubelet spends more than a third of its CPU time pulling memory data from all cgroups.
  • In the kernel developers' mailing list you can find a discussion of the problem. In short, it comes down to this: various tmpfs files and other similar things are not fully removed from the system when a cgroup is deleted; these are the so-called memcg zombies. Sooner or later they will be evicted from the page cache, but there is plenty of memory on the server and the kernel sees no point in wasting time on deleting them. That is why they keep piling up. Why does this happen at all? This is a server with cron jobs that constantly create new jobs, and with them new pods. New cgroups are created for the containers in them, and these cgroups are soon deleted.
  • Why does cAdvisor in the kubelet waste so much time? This is easy to see by running the simplest check, time cat /sys/fs/cgroup/memory/memory.stat. If on a healthy machine this operation takes 0.01 seconds, then on the problematic cron02 it takes 1.2 seconds. The thing is that cAdvisor, which reads data from sysfs very slowly, tries to account for the memory used in the zombie cgroups as well (see the quick check right after this list).
  • To remove the zombies forcibly, we tried clearing the caches as recommended in LKML: sync; echo 3 > /proc/sys/vm/drop_caches, but the kernel turned out to be trickier and hung the machine.
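That timing check alone is a handy way to see whether a node is affected; a rough diagnostic, assuming cgroup v1 with the memory controller mounted at the standard path:

# noticeably slower (over a second instead of ~0.01 s) when zombie memcgs have piled up
time cat /sys/fs/cgroup/memory/memory.stat > /dev/null

# how many memory cgroups the kernel currently reports
grep '^memory' /proc/cgroups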

What can be done? The problem was fixed (see the commit, and the release message for a description) by upgrading the Linux kernel to version 4.16.

Story 3. systemd and mounts

Once again the kubelet was consuming too many resources on some nodes, but this time it was eating too much memory:

It turned out that there is a problem in the systemd shipped with Ubuntu 16.04, and it shows up when managing the mounts created for subPath connections from ConfigMaps or Secrets. After a pod finishes its work, the systemd service and its mount remain in the system. Over time, a huge number of them accumulate. There are even issues on this topic:

  1. #5916;
  2. kubernetes #57345.

... the last of which refers to the PR in systemd: #7811 (the issue in systemd is #7798).

The problem no longer exists in Ubuntu 18.04, but if you want to keep using Ubuntu 16.04, you may find our solution on this topic useful.
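On an affected node the accumulation is easy to see; this is essentially the same check that the cleanup script further below relies on:

# count the leftover subPath mount units that systemd is still tracking
systemctl list-units | grep subpath | grep "run-" | wc -l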

So we made the following DaemonSet:

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: systemd-slices-cleaner
  name: systemd-slices-cleaner
  namespace: kube-system
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: systemd-slices-cleaner
  template:
    metadata:
      labels:
        app: systemd-slices-cleaner
    spec:
      containers:
      - command:
        - /usr/local/bin/supercronic
        - -json
        - /app/crontab
        image: private-registry.org/systemd-slices-cleaner/systemd-slices-cleaner:v0.1.0
        imagePullPolicy: Always
        name: systemd-slices-cleaner
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - name: systemd
          mountPath: /run/systemd/private
        - name: docker
          mountPath: /run/docker.sock
        - name: systemd-etc
          mountPath: /etc/systemd
        - name: systemd-run
          mountPath: /run/systemd/system/
        - name: lsb-release
          mountPath: /etc/lsb-release-host
      imagePullSecrets:
      - name: antiopa-registry
      priorityClassName: cluster-low
      tolerations:
      - operator: Exists
      volumes:
      - name: systemd
        hostPath:
          path: /run/systemd/private
      - name: docker
        hostPath:
          path: /run/docker.sock
      - name: systemd-etc
        hostPath:
          path: /etc/systemd
      - name: systemd-run
        hostPath:
          path: /run/systemd/system/
      - name: lsb-release
        hostPath:
          path: /etc/lsb-release

... which uses this script:

#!/bin/bash

# we will work only on xenial
hostrelease="/etc/lsb-release-host"
test -f ${hostrelease} && grep xenial ${hostrelease} > /dev/null || exit 0

# sleeping max 30 minutes to dispense load on kube-nodes
sleep $((RANDOM % 1800))

stoppedCount=0
# counting actual subpath units in systemd
countBefore=$(systemctl list-units | grep subpath | grep "run-" | wc -l)
# let's go check each unit
for unit in $(systemctl list-units | grep subpath | grep "run-" | awk '{print $1}'); do
  # finding the drop-in description file for the unit (to find out which docker container spawned it)
  DropFile=$(systemctl status ${unit} | grep Drop | awk -F': ' '{print $2}')
  # reading uuid for docker container from description file
  DockerContainerId=$(cat ${DropFile}/50-Description.conf | awk '{print $5}' | cut -d/ -f6)
  # checking container status (running or not)
  checkFlag=$(docker ps | grep -c ${DockerContainerId})
  # if container not running, we will stop unit
  if [[ ${checkFlag} -eq 0 ]]; then
    echo "Stopping unit ${unit}"
    # stopping the unit
    systemctl stop $unit
    # just counter for logs
    ((stoppedCount++))
    # logging current progress
    echo "Stopped ${stoppedCount} systemd units out of ${countBefore}"
  fi
done

... and runs it every 5 minutes using the supercronic mentioned earlier. Its Dockerfile looks like this:

FROM ubuntu:16.04
COPY rootfs /
WORKDIR /app
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y gnupg curl apt-transport-https software-properties-common wget
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    apt-get update && \
    apt-get install -y docker-ce=17.03.0*
RUN wget https://github.com/aptible/supercronic/releases/download/v0.1.6/supercronic-linux-amd64 -O \
    /usr/local/bin/supercronic && chmod +x /usr/local/bin/supercronic
ENTRYPOINT ["/bin/bash", "-c", "/usr/local/bin/supercronic -json /app/crontab"]
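For completeness, the crontab consumed by supercronic could look something like this; the script name is hypothetical, in our image it comes from the rootfs copied above:

# /app/crontab: run the cleanup script every 5 minutes
*/5 * * * * /bin/bash /app/clean-systemd-units.sh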

Story 4. Contention when scheduling pods

We noticed the following: if a pod is placed on a node and its image is being pulled for a very long time, then another pod that "lands" on the same node simply does not start pulling the new pod's image. Instead, it waits until the previous pod's image has been pulled. As a result, a pod that has already been scheduled, and whose image could have been downloaded in just a minute, sits in the containerCreating status for a long time.

The events look something like this:

Normal  Pulling    8m    kubelet, ip-10-241-44-128.ap-northeast-1.compute.internal  pulling image "registry.example.com/infra/openvpn/openvpn:master"

It turns out that a single image from a slow registry can block deployments on the node.

Unfortunately, there are not many ways out of the situation:

  1. Try using your own Docker Registry directly in the cluster or directly alongside the cluster (for example, GitLab Registry, Nexus, etc.);
  2. Use utilities such as kraken.
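The serialization itself comes from a kubelet default; whether it is safe to relax depends on the Docker version and storage driver in use, so treat the flag below purely as a sketch of the knob involved rather than a recommendation:

# by default the kubelet pulls images one at a time (--serialize-image-pulls=true);
# allowing parallel pulls keeps one slow registry from blocking every other pod on the node,
# but is discouraged with older Docker versions or the aufs storage backend
kubelet ... --serialize-image-pulls=false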

Story 5. Nodes hanging due to lack of memory

While running various applications, we also encountered a situation where a node completely stops being accessible: SSH does not respond, all the monitoring daemons fall off, and afterwards there is nothing (or almost nothing) anomalous in the logs.

I will illustrate it with pictures using the example of one node where MongoDB was running.

This is what atop looked like before the crash:

And this is what it looked like after the crash:

In monitoring there is also a sharp jump at which the node stops being available:

So, it is clear from the screenshots that:

  1. The RAM on the machine is close to running out;
  2. There is a sharp jump in RAM consumption, after which access to the whole machine is abruptly lost;
  3. A large task arrives at Mongo, which forces the DBMS process to use more memory and to actively read from disk.

It turns out that when Linux runs out of free memory (memory pressure sets in) and there is no swap, then before the OOM killer arrives, a juggling act can start between evicting pages from the page cache and writing them back to disk. This is done by kswapd, which bravely frees up as many memory pages as possible for subsequent allocation.

Unfortunately, with a large I/O load combined with a small amount of free memory, kswapd becomes the bottleneck of the entire system, because all allocations (page faults) of memory pages in the system are tied to it. This can go on for a very long time if the processes refuse to give up memory and instead stay balanced at the very edge of the OOM-killer abyss.

The natural question is: why does the OOM killer arrive so late? In its current iteration, the OOM killer is extremely dumb: it kills a process only when an attempt to allocate a memory page fails, i.e. when a page fault fails. This does not happen for quite a long time, because kswapd bravely keeps freeing memory pages, dumping the page cache (essentially all of the disk I/O in the system) back to disk. You can read more details, including a description of the steps needed to get rid of such problems in the kernel, here.
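On a live node this state is fairly easy to spot before it is too late; an illustrative set of commands:

# is kswapd eating CPU?
ps -eo pid,comm,%cpu | grep kswapd

# si/so (swap) stay at zero while bi/bo (disk I/O) and wa (I/O wait) explode
vmstat 1 5

# page-scan/reclaim counters grow rapidly under memory pressure
grep -E 'pgscan|pgsteal' /proc/vmstat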

This behaviour should improve with Linux kernel 4.6+.

Story 6. Pods stuck in the Pending state

In some clusters with a really large number of pods running, we began to notice that most of them "hang" for a very long time in the Pending state, even though the Docker containers themselves are already running on the nodes and can be worked with manually.

Moreover, there is nothing wrong in describe:

  Type    Reason                  Age                From                     Message
  ----    ------                  ----               ----                     -------
  Normal  Scheduled               1m                 default-scheduler        Successfully assigned sphinx-0 to ss-dev-kub07
  Normal  SuccessfulAttachVolume  1m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-6aaad34f-ad10-11e8-a44c-52540035a73b"
  Normal  SuccessfulMountVolume   1m                 kubelet, ss-dev-kub07    MountVolume.SetUp succeeded for volume "sphinx-config"
  Normal  SuccessfulMountVolume   1m                 kubelet, ss-dev-kub07    MountVolume.SetUp succeeded for volume "default-token-fzcsf"
  Normal  SuccessfulMountVolume   49s (x2 over 51s)  kubelet, ss-dev-kub07    MountVolume.SetUp succeeded for volume "pvc-6aaad34f-ad10-11e8-a44c-52540035a73b"
  Normal  Pulled                  43s                kubelet, ss-dev-kub07    Container image "registry.example.com/infra/sphinx-exporter/sphinx-indexer:v1" already present on machine
  Normal  Created                 43s                kubelet, ss-dev-kub07    Created container
  Normal  Started                 43s                kubelet, ss-dev-kub07    Started container
  Normal  Pulled                  43s                kubelet, ss-dev-kub07    Container image "registry.example.com/infra/sphinx/sphinx:v1" already present on machine
  Normal  Created                 42s                kubelet, ss-dev-kub07    Created container
  Normal  Started                 42s                kubelet, ss-dev-kub07    Started container

After some digging, we assumed that the kubelet simply does not have time to send all the information about the state of the pods and the liveness/readiness probes to the API server.
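Client-side throttling like this usually also shows up in the kubelet logs, so a quick check along these lines can support the guess (assuming the kubelet runs under systemd):

# client-go logs a warning when its rate limiter delays a request noticeably
journalctl -u kubelet | grep -i 'Throttling request took'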

And having studied the help, we found the following parameters:

--kube-api-qps - QPS to use while talking with kubernetes apiserver (default 5)
--kube-api-burst  - Burst to use while talking with kubernetes apiserver (default 10) 
--event-qps - If > 0, limit event creations per second to this value. If 0, unlimited. (default 5)
--event-burst - Maximum size of a bursty event records, temporarily allows event records to burst to this number, while still not exceeding event-qps. Only used if --event-qps > 0 (default 10) 
--registry-qps - If > 0, limit registry pull QPS to this value.
--registry-burst - Maximum size of bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding registry-qps. Only used if --registry-qps > 0 (default 10)

As you can see, the default values are rather small, and in 90% of cases they cover all needs... In our case, however, this was not enough. Therefore, we set the following values:

--event-qps=30 --event-burst=40 --kube-api-burst=40 --kube-api-qps=30 --registry-qps=30 --registry-burst=40
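Where exactly these flags live depends on how the kubelet is deployed; as a sketch, on a kubeadm-provisioned node they can go into the extra-args file read by the kubelet's systemd unit:

# /etc/default/kubelet (or /etc/sysconfig/kubelet on RPM-based distributions)
KUBELET_EXTRA_ARGS="--event-qps=30 --event-burst=40 --kube-api-burst=40 --kube-api-qps=30 --registry-qps=30 --registry-burst=40"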

After restarting the kubelets with these settings, we saw the following picture on the graph of calls to the API server:

... and yes, everything started flying!

PS

For their help in collecting these bugs and preparing this article, I express my deep gratitude to the numerous engineers of our company, and especially to my colleague from our R&D team, Andrey Klimentyev (zuzzas).

Source: www.habr.com
