6 entertaining system bugs in the operation of Kubernetes [and their solutions]

Over the years of running Kubernetes in production, we have accumulated many entertaining stories of how bugs in various system components led to unpleasant and/or baffling consequences for the operation of containers and pods. In this article we have put together a selection of the most common and most curious ones. Even if you are never lucky enough to run into such situations yourself, reading short detective stories like these, especially "first-hand" ones, is always entertaining, isn't it?..

Story 1. Supercronic and a hung Docker

On one of the clusters we periodically got a frozen Docker that interfered with the normal functioning of the cluster. Meanwhile, the following was observed in the Docker logs:

level=error msg="containerd: start init process" error="exit status 2: "runtime/cgo: pthread_create failed: No space left on device
SIGABRT: abort
PC=0x7f31b811a428 m=0

goroutine 0 [idle]:

goroutine 1 [running]:
runtime.systemstack_switch() /usr/local/go/src/runtime/asm_amd64.s:252 fp=0xc420026768 sp=0xc420026760
runtime.main() /usr/local/go/src/runtime/proc.go:127 +0x6c fp=0xc4200267c0 sp=0xc420026768
runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1 fp=0xc4200267c8 sp=0xc4200267c0

goroutine 17 [syscall, locked to thread]:
runtime.goexit() /usr/local/go/src/runtime/asm_amd64.s:2086 +0x1

…

What interests us most in this error is the message pthread_create failed: No space left on device. A quick study of the documentation explained that Docker could not fork a process, which is why it periodically froze.

In monitoring, the following picture corresponds to what is happening:

[image: monitoring graph]

A similar situation is observed on other nodes:

[images: monitoring graphs from other nodes]

On these same nodes we see:

root@kube-node-1 ~ # ps auxfww | grep curl -c
19782
root@kube-node-1 ~ # ps auxfww | grep curl | head
root     16688  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     17398  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     16852  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      9473  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      4664  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     30571  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     24113  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root     16475  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      7176  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>
root      1090  0.0  0.0      0     0 ?        Z    Feb06   0:00      |       _ [curl] <defunct>

It turned out that this behavior is a consequence of pods using supercronic, a Go utility that we use to run cron jobs in pods:

 _ docker-containerd-shim 833b60bb9ff4c669bb413b898a5fd142a57a21695e5dc42684235df907825567 /var/run/docker/libcontainerd/833b60bb9ff4c669bb413b898a5fd142a57a21695e5dc42684235df907825567 docker-runc
|   _ /usr/local/bin/supercronic -json /crontabs/cron
|       _ /usr/bin/newrelic-daemon --agent --pidfile /var/run/newrelic-daemon.pid --logfile /dev/stderr --port /run/newrelic.sock --tls --define utilization.detect_aws=true --define utilization.detect_azure=true --define utilization.detect_gcp=true --define utilization.detect_pcf=true --define utilization.detect_docker=true
|       |   _ /usr/bin/newrelic-daemon --agent --pidfile /var/run/newrelic-daemon.pid --logfile /dev/stderr --port /run/newrelic.sock --tls --define utilization.detect_aws=true --define utilization.detect_azure=true --define utilization.detect_gcp=true --define utilization.detect_pcf=true --define utilization.detect_docker=true -no-pidfile
|       _ [newrelic-daemon] <defunct>
|       _ [curl] <defunct>
|       _ [curl] <defunct>
|       _ [curl] <defunct>
…

The problem is this: when a task is executed in supercronic, the process spawned by it cannot terminate correctly and turns into a zombie.

Note: to be more precise, the processes are spawned by the cron tasks, but supercronic is not an init system and cannot "adopt" the processes its children spawned. When SIGHUP or SIGTERM signals are raised, they are not passed on to the child processes; as a result, the child processes never finish and remain in the zombie state. You can read more about all this in, for example, this article.

There are a couple of ways to solve the problem:

  1. As a temporary workaround, increase the maximum number of PIDs in the system:
           /proc/sys/kernel/pid_max (since Linux 2.5.34)
                  This file specifies the value at which PIDs wrap around (i.e., the value in this file is one greater than the maximum PID). PIDs greater than this value are not allocated; thus, the value in this file also acts as a system-wide limit on the total number of processes and threads. The default value for this file, 32768, results in the same range of PIDs as on earlier kernels.
  2. Or run the tasks in supercronic not directly but via tini, which is able to terminate processes correctly and does not spawn zombies (see the Dockerfile sketch after this list).
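
The workaround from item 1 is a one-liner, e.g. sysctl -w kernel.pid_max=4194304. For item 2, here is a minimal Dockerfile sketch; the tini version and the way it is combined with the supercronic invocation from the process tree above are our illustrative choices, not the original setup:

FROM ubuntu:16.04
# tini runs as PID 1 and reaps the zombie children that supercronic,
# not being an init system, cannot adopt by itself
ADD https://github.com/krallin/tini/releases/download/v0.19.0/tini /tini
ADD https://github.com/aptible/supercronic/releases/download/v0.1.6/supercronic-linux-amd64 /usr/local/bin/supercronic
RUN chmod +x /tini /usr/local/bin/supercronic
ENTRYPOINT ["/tini", "--", "/usr/local/bin/supercronic", "-json", "/crontabs/cron"]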

Story 2. Zombies when deleting a cgroup

Kubelet started to consume a lot of CPU:

[graph: kubelet CPU consumption]

Nobody likes that, so we armed ourselves with perf and started to investigate. The results of the investigation were as follows:

  • Kubelet spends more than a third of its CPU time pulling memory data from all cgroups:

    [screenshot: perf output]

  • In the kernel developers' mailing list you can find a discussion of this problem. In short, it comes down to this: various tmpfs files and other similar things are not removed from the system completely when a cgroup is deleted. These are the so-called memcg zombies. Sooner or later they do get evicted from the page cache, but there is plenty of memory on the server and the kernel sees no point in wasting time on deleting them, so they keep piling up. Why does this happen at all? This server runs cron jobs that constantly create new jobs, and new pods along with them. New cgroups get created for the containers in those pods and are deleted again shortly after.
  • Why does cAdvisor inside kubelet waste so much time? This is easy to see by simply running time cat /sys/fs/cgroup/memory/memory.stat. On a healthy machine the operation takes 0.01 seconds, while on the problematic cron02 it takes 1.2 seconds. The thing is that cAdvisor, whose sysfs reads are very slow, also tries to account for the memory used in the zombie cgroups.
  • To forcibly remove the zombies we tried clearing the caches, as recommended on LKML (sync; echo 3 > /proc/sys/vm/drop_caches), but the kernel turned out to be more complicated than that and hung the machine. (Both commands appear, with caveats, in the snippet after this list.)
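
A quick way to check a node for this problem, based on the two commands above (the timings are the ones we observed and are only illustrative):

# On a healthy machine this returns almost instantly (~0.01 s); on an
# affected node it can take over a second, because the read also walks
# the accounting data of the dead (offlined) memory cgroups:
time cat /sys/fs/cgroup/memory/memory.stat

# Forcing cache eviction would also evict the zombie memcgs, but on the
# affected kernels this hung our machine; do not run it on production nodes:
sync; echo 3 > /proc/sys/vm/drop_caches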

What can be done about it? The problem is fixed (see the commit and the description in the release announcement) by upgrading the Linux kernel to version 4.16.

Story 3. Systemd and its mounts

Once again, kubelet consumes too many resources on some nodes, but this time it is eating too much memory:

[graph: kubelet memory consumption]

It turned out that there is a problem in the systemd used in Ubuntu 16.04, and it occurs when managing the mounts created for subPath mounts from ConfigMaps or Secrets. After the pod finishes its work, the systemd service and its service mount remain in the system. Over time, a huge number of them accumulate. There are even issues on this topic:

  1. #5916;
  2. kubernetes#57345.

... the last of which refers to a PR in systemd: #7811 (the issue in systemd is #7798).

The problem no longer exists in Ubuntu 18.04, but if you want to keep using Ubuntu 16.04, our workaround on this topic may prove useful.

So we made the following DaemonSet:

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: systemd-slices-cleaner
  name: systemd-slices-cleaner
  namespace: kube-system
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: systemd-slices-cleaner
  template:
    metadata:
      labels:
        app: systemd-slices-cleaner
    spec:
      containers:
      - command:
        - /usr/local/bin/supercronic
        - -json
        - /app/crontab
        image: private-registry.org/systemd-slices-cleaner/systemd-slices-cleaner:v0.1.0
        imagePullPolicy: Always
        name: systemd-slices-cleaner
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - name: systemd
          mountPath: /run/systemd/private
        - name: docker
          mountPath: /run/docker.sock
        - name: systemd-etc
          mountPath: /etc/systemd
        - name: systemd-run
          mountPath: /run/systemd/system/
        - name: lsb-release
          mountPath: /etc/lsb-release-host
      imagePullSecrets:
      - name: antiopa-registry
      priorityClassName: cluster-low
      tolerations:
      - operator: Exists
      volumes:
      - name: systemd
        hostPath:
          path: /run/systemd/private
      - name: docker
        hostPath:
          path: /run/docker.sock
      - name: systemd-etc
        hostPath:
          path: /etc/systemd
      - name: systemd-run
        hostPath:
          path: /run/systemd/system/
      - name: lsb-release
        hostPath:
          path: /etc/lsb-release

... and it uses the following script:

#!/bin/bash

# we will work only on xenial
hostrelease="/etc/lsb-release-host"
test -f ${hostrelease} && grep xenial ${hostrelease} > /dev/null || exit 0

# sleep for up to 30 minutes to spread the load across the kube nodes
sleep $((RANDOM % 1800))

stoppedCount=0
# counting actual subpath units in systemd
countBefore=$(systemctl list-units | grep subpath | grep "run-" | wc -l)
# let's go check each unit
for unit in $(systemctl list-units | grep subpath | grep "run-" | awk '{print $1}'); do
  # find the drop-in description file for the unit (to identify the Docker container that spawned this unit)
  DropFile=$(systemctl status ${unit} | grep Drop | awk -F': ' '{print $2}')
  # reading uuid for docker container from description file
  DockerContainerId=$(cat ${DropFile}/50-Description.conf | awk '{print $5}' | cut -d/ -f6)
  # checking container status (running or not)
  checkFlag=$(docker ps | grep -c ${DockerContainerId})
  # if container not running, we will stop unit
  if [[ ${checkFlag} -eq 0 ]]; then
    echo "Stopping unit ${unit}"
    # stopping the unit
    systemctl stop $unit
    # just counter for logs
    ((stoppedCount++))
    # logging current progress
    echo "Stopped ${stoppedCount} systemd units out of ${countBefore}"
  fi
done

... and it runs every 5 minutes with the help of the supercronic mentioned earlier. Its Dockerfile looks like this:

FROM ubuntu:16.04
COPY rootfs /
WORKDIR /app
RUN apt-get update && \
    apt-get upgrade -y && \
    apt-get install -y gnupg curl apt-transport-https software-properties-common wget
RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable" && \
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
    apt-get update && \
    apt-get install -y docker-ce=17.03.0*
RUN wget https://github.com/aptible/supercronic/releases/download/v0.1.6/supercronic-linux-amd64 -O \
    /usr/local/bin/supercronic && chmod +x /usr/local/bin/supercronic
ENTRYPOINT ["/bin/bash", "-c", "/usr/local/bin/supercronic -json /app/crontab"]
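
The crontab itself is not shown above. Assuming the cleaner script is saved as /app/clean-subpath-units.sh (a file name we made up for this sketch), a minimal /app/crontab would be:

# run the subPath-unit cleaner every 5 minutes
*/5 * * * * /bin/bash /app/clean-subpath-units.sh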

Story 4. Concurrency when scheduling pods

We noticed a curious thing: if a pod is placed on a node and its image is being pulled for a very long time, then another pod that "lands" on the same node simply does not start pulling its own image. Instead, it waits until the previous pod's image has been pulled. As a result, a pod that has already been scheduled, and whose image could be downloaded in just a minute, sits for a long time in the containerCreating state.

The events will look something like this:

Normal  Pulling    8m    kubelet, ip-10-241-44-128.ap-northeast-1.compute.internal  pulling image "registry.example.com/infra/openvpn/openvpn:master"

It turns out that a single image from a slow registry can block the rollout of pods onto a node (a note on the kubelet behavior behind this follows the list of workarounds below).

Unfortunately, there are not many ways out of this situation:

  1. Try to use your Docker Registry directly in the cluster or directly alongside it (for example, GitLab Registry, Nexus, etc.);
  2. use utilities such as kraken.
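
For reference, the serialization itself is the kubelet's default behavior: it pulls images one at a time per node and exposes a flag for that. We list it purely as background, not as a recommendation we applied in this story; in particular, the kubelet documentation advises against disabling serialization on nodes where Docker runs an aufs storage backend:

# kubelet flag; defaults to true, i.e. images are pulled one at a time per node
--serialize-image-pulls=false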

Story 5. Nodes hanging due to lack of memory

While running various applications, we also encountered a situation where a node completely ceases to be accessible: SSH does not respond, all the monitoring daemons fall off, and then there is nothing (or almost nothing) anomalous in the logs.

I will walk you through it in pictures, using the example of one node where MongoDB was running.

Here is what top shows before the crash:

[screenshot: top before the crash]

And here is what it looks like after the crash:

[screenshot: top after the crash]

Monitoring also shows a sharp spike, at which the node stops being available:

[graph: the spike in monitoring]

So, from the screenshots it is clear that:

  1. The RAM on the machine is close to running out;
  2. There is a sharp jump in RAM consumption, after which access to the entire machine is abruptly cut off;
  3. A large task arrives at Mongo, which forces the DBMS process to use more memory and to read actively from disk.

It turns out that if Linux runs out of free memory (memory pressure sets in) and there is no swap, then before the OOM killer arrives, a balancing act can set in between throwing pages out into the page cache and writing them back to disk. This is done by kswapd, which valiantly frees up as many memory pages as possible for subsequent reuse.

Unfortunately, under a heavy I/O load combined with a small amount of free memory, kswapd becomes the bottleneck of the entire system, because all allocations (page faults) of memory pages in the system pass through it. This can go on for a very long time if the processes do not give up their memory but sit fixed at the very edge of the OOM-killer abyss. A few commands for recognizing this state on a live node are sketched below.
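
What this looks like in practice, sketched from the symptoms described above (the exact counter names vary slightly between kernel versions):

# kswapd pinned at ~100% CPU is the classic red flag
top -b -n 1 | grep kswapd

# rapidly growing page scan/steal counters while free memory stays near zero
grep -E 'pgscan|pgsteal' /proc/vmstat

# no swap means anonymous pages cannot be offloaded at all
grep -i swaptotal /proc/meminfo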

A natural question: why does the OOM killer arrive so late? In its current iteration, the OOM killer is extremely simple-minded: it kills a process only when an attempt to allocate a memory page fails, i.e. when a page fault cannot be served. This does not happen for quite a long time, because kswapd valiantly keeps freeing memory pages by flushing the page cache (which is, in fact, all of the disk I/O in the system) back to disk. You can read a more detailed description of the steps needed to eliminate such problems in the kernel here.

This behavior should improve with Linux kernel 4.6+.

Story 6. Pods stuck in the Pending state

In some clusters with a really large number of running pods, we began to notice that most of them "hang" in the Pending state for a very long time, even though the Docker containers themselves are already running on the nodes and can be worked with manually.

Moreover, there is nothing wrong in describe:

  Type    Reason                  Age                From                     Message
  ----    ------                  ----               ----                     -------
  Normal  Scheduled               1m                 default-scheduler        Successfully assigned sphinx-0 to ss-dev-kub07
  Normal  SuccessfulAttachVolume  1m                 attachdetach-controller  AttachVolume.Attach succeeded for volume "pvc-6aaad34f-ad10-11e8-a44c-52540035a73b"
  Normal  SuccessfulMountVolume   1m                 kubelet, ss-dev-kub07    MountVolume.SetUp succeeded for volume "sphinx-config"
  Normal  SuccessfulMountVolume   1m                 kubelet, ss-dev-kub07    MountVolume.SetUp succeeded for volume "default-token-fzcsf"
  Normal  SuccessfulMountVolume   49s (x2 over 51s)  kubelet, ss-dev-kub07    MountVolume.SetUp succeeded for volume "pvc-6aaad34f-ad10-11e8-a44c-52540035a73b"
  Normal  Pulled                  43s                kubelet, ss-dev-kub07    Container image "registry.example.com/infra/sphinx-exporter/sphinx-indexer:v1" already present on machine
  Normal  Created                 43s                kubelet, ss-dev-kub07    Created container
  Normal  Started                 43s                kubelet, ss-dev-kub07    Started container
  Normal  Pulled                  43s                kubelet, ss-dev-kub07    Container image "registry.example.com/infra/sphinx/sphinx:v1" already present on machine
  Normal  Created                 42s                kubelet, ss-dev-kub07    Created container
  Normal  Started                 42s                kubelet, ss-dev-kub07    Started container

After some digging, we made the assumption that the kubelet simply does not manage to send all the information about the state of the pods and their liveness/readiness probes to the API server.

And after studying the kubelet's help output, we found the following parameters:

--kube-api-qps - QPS to use while talking with kubernetes apiserver (default 5)
--kube-api-burst  - Burst to use while talking with kubernetes apiserver (default 10) 
--event-qps - If > 0, limit event creations per second to this value. If 0, unlimited. (default 5)
--event-burst - Maximum size of a bursty event records, temporarily allows event records to burst to this number, while still not exceeding event-qps. Only used if --event-qps > 0 (default 10) 
--registry-qps - If > 0, limit registry pull QPS to this value.
--registry-burst - Maximum size of bursty pulls, temporarily allows pulls to burst to this number, while still not exceeding registry-qps. Only used if --registry-qps > 0 (default 10)

Clearly, the default values are quite small, and in 90% of cases they cover all needs... In our case, however, they were not enough. So we set the following values:

--event-qps=30 --event-burst=40 --kube-api-burst=40 --kube-api-qps=30 --registry-qps=30 --registry-burst=40

... and restarted the kubelets, after which we saw the following picture on the graphs of calls to the API server:

[graph: kubelet calls to the API server after the restart]

... and yes, everything started to fly!
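
As an aside, on newer Kubernetes versions the same limits can be set declaratively through the kubelet configuration file instead of CLI flags. A sketch using the kubelet.config.k8s.io/v1beta1 field names (this API postdates the events of this story, so double-check it against your version):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# equivalents of the flags above
kubeAPIQPS: 30
kubeAPIBurst: 40
eventRecordQPS: 30
eventBurst: 40
registryPullQPS: 30
registryBurst: 40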

PS

Many thanks to the numerous engineers of our company, and especially to my colleague from our R&D team, Andrey Klimentyev (zuzzas), for their help in collecting the bugs and preparing this article.

Source: www.habr.com
