A working example of attaching Ceph-based storage to a Kubernetes cluster

The Container Storage Interface (CSI) is a unified interface between Kubernetes and storage systems. We have already talked about it briefly, and today we will take a closer look at the combination of CSI and Ceph: we will show how to attach Ceph storage to a Kubernetes cluster.
The article provides real examples, although slightly simplified for easier understanding. We do not cover installing and configuring the Ceph and Kubernetes clusters themselves.

Wondering how it works?


So, you have a Kubernetes cluster at hand, deployed, for example, with kubespray. There is a Ceph cluster running nearby; you could also deploy it, for example, with this set of playbooks. I hope there is no need to mention that for production the network between them should have a bandwidth of at least 10 Gbit/s.

If you have all of this, let's go!

First, let's go to one of the Ceph cluster nodes and check that everything is in order:

ceph health
ceph -s
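On a healthy cluster, ceph health answers with a single HEALTH_OK line; anything like HEALTH_WARN or HEALTH_ERR is worth sorting out before continuing:

HEALTH_OK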

Next, we'll quickly create a pool for RBD disks:

ceph osd pool create kube 32
ceph osd pool application enable kube rbd
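Optionally, the new pool can also be initialized for RBD use; this is the standard rbd tooling step recommended before creating the first image, not something specific to this setup:

rbd pool init kube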

Let's move on to the Kubernetes cluster. There, first of all, we will install the Ceph CSI driver for RBD. We will install it, as you would expect, via Helm.
Add the repository with the chart and get the set of variables of the ceph-csi-rbd chart:

helm repo add ceph-csi https://ceph.github.io/csi-charts
helm inspect values ceph-csi/ceph-csi-rbd > cephrbd.yml

Now we need to fill in the cephrbd.yml file. To do this, find out the cluster ID and the monitors' IP addresses in Ceph:

ceph fsid  # this is how we find out the clusterID
ceph mon dump  # and this is how we see the monitors' IP addresses
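For orientation, an abridged and illustrative example of what these commands return, using the same example values that appear in the manifests below (the monitor names are hypothetical); the fsid is the clusterID, and each monitor line lists its v2 (3300) and v1 (6789) endpoints:

# ceph fsid
bcd0d202-fba8-4352-b25d-75c89258d5ab

# ceph mon dump (abridged)
0: [v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0] mon.node01
1: [v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0] mon.node02
2: [v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0] mon.node03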

We put the obtained values into the cephrbd.yml file. At the same time, we enable the creation of PSPs (Pod Security Policies). The options in the nodeplugin and provisioner sections are already in the file and can be adjusted as shown below:

csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"
      - "v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0"
      - "v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0"

nodeplugin:
  podSecurityPolicy:
    enabled: true

provisioner:
  podSecurityPolicy:
    enabled: true

Next, all that remains for us is to install the chart into the Kubernetes cluster:

helm upgrade -i ceph-csi-rbd ceph-csi/ceph-csi-rbd -f cephrbd.yml -n ceph-csi-rbd --create-namespace
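Before moving on, it's worth checking that the driver pods actually started; the namespace is the one we passed to Helm above:

kubectl -n ceph-csi-rbd get pods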

Great, the RBD driver works!
Let's create a new StorageClass in Kubernetes. This again requires a little work on the Ceph side.

We create a new user in Ceph and grant it rights to write to the kube pool:

ceph auth get-or-create client.rbdkube mon 'profile rbd' osd 'profile rbd pool=kube'

Now let's check that the access key exists:

ceph auth get-key client.rbdkube

The command will output something like this:

AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A==

Let's add this value to a Secret in the Kubernetes cluster, where we need it as userKey:

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # The key values correspond to the user name and its key, as configured in
  # the Ceph cluster. The user ID must have access to the pool
  # specified in the storage class
  userID: rbdkube
  userKey: <user-key>

And we create the secret:

kubectl apply -f secret.yaml

Next, we need a StorageClass manifest, something like this:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
   name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
   clusterID: <cluster-id>
   pool: kube

   imageFeatures: layering

   # These secrets must contain credentials for authorization
   # to your pool.
   csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
   csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
   csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
   csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
   csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
   csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd

   csi.storage.k8s.io/fstype: ext4

reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard

You need to fill in the clusterID, which we already found out earlier with ceph fsid, and then apply this manifest to the Kubernetes cluster:

kubectl apply -f storageclass.yaml
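A quick sanity check that the new storage class is registered (kubectl get sc is the short form):

kubectl get storageclass csi-rbd-sc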

To check how the clusters work together, let's create the following PVC (Persistent Volume Claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc

Let's take a quick look at how Kubernetes created the requested volume in Ceph:

kubectl get pvc
kubectl get pv

Everything looks great! And what does it look like on the Ceph side?
We list the volumes in the pool and view the information about our volume:

rbd ls -p kube
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653  # here, of course, the volume ID will be different: the one returned by the previous command

Now let's see how resizing an RBD volume works.
Change the volume size in the pvc.yaml manifest to 2Gi.
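A minimal sketch of the edited fragment of pvc.yaml; only the storage request changes, the rest of the manifest stays as above:

  resources:
    requests:
      storage: 2Gi

Then apply the updated manifest: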

kubectl apply -f pvc.yaml

Let's wait for the changes to take effect, then look at the volume size again.

rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653

kubectl get pv
kubectl get pvc

We can see that the PVC size has not changed. To find out why, we can ask Kubernetes for the YAML description of the PVC:

kubectl get pvc rbd-pvc -o yaml

And here is the problem:

message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
type: FileSystemResizePending

In other words, the disk has grown, but the file system on it has not.
To grow the file system, the volume has to be mounted. In our case, the created PVC/PV is not currently used anywhere.

We can create a test Pod, for example like this:

---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
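Save the manifest and create the Pod (the file name pod.yaml is just an assumption about how you saved it):

kubectl apply -f pod.yaml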

And now let's look at the PVC again:

kubectl get pvc

The size has changed, and everything is fine.
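If you want to double-check from inside the Pod as well, df will show the grown file system on the mount path from the manifest above; this is a generic check, nothing driver-specific:

kubectl exec csi-rbd-demo-pod -- df -h /data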

In the first part we worked with an RBD block device (short for RADOS Block Device), but that does not work if several microservices need to use the same disk at the same time. CephFS is much better suited for working with files than with disk images.
Using our example Ceph and Kubernetes clusters, we will configure CSI and the other entities needed to work with CephFS.

Let's get the values of the new Helm chart that we need:

helm inspect values ceph-csi/ceph-csi-cephfs > cephfs.yml

Again, we need to fill in the cephfs.yml file. As before, the Ceph commands will help:

ceph fsid
ceph mon dump

Fill in the file with values like these:

csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "172.18.8.5:6789"
      - "172.18.8.6:6789"
      - "172.18.8.7:6789"

nodeplugin:
  httpMetrics:
    enabled: true
    containerPort: 8091
  podSecurityPolicy:
    enabled: true

provisioner:
  replicaCount: 1
  podSecurityPolicy:
    enabled: true

Note that the monitor addresses are specified in the simple address:port form. To mount CephFS on a node, these addresses are passed to the kernel module, which does not yet know how to work with the v2 monitor protocol.
We change the httpMetrics port (Prometheus goes there to collect metrics) so that it does not conflict with nginx-proxy, which is installed by Kubespray. You may not need this.

Install the Helm chart into the Kubernetes cluster:

helm upgrade -i ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -f cephfs.yml -n ceph-csi-cephfs --create-namespace
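As with the RBD driver, check that the provisioner and nodeplugin pods came up:

kubectl -n ceph-csi-cephfs get pods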

Let's go back to the Ceph cluster to create a separate user there. The documentation states that the CephFS provisioner needs cluster administrator access rights, but we will create a separate fs user with limited rights:

ceph auth get-or-create client.fs mon 'allow r' mgr 'allow rw' mds 'allow rws' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'

And let's take a quick look at its access key; we will need it later:

ceph auth get-key client.fs

Let's create a separate Secret and StorageClass.
Nothing new here, we have already seen this in the RBD example:

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Required for dynamically provisioned volumes
  adminID: fs
  adminKey: <output of the previous command>

Apply the manifest:

kubectl apply -f secret.yaml

And now, a separate StorageClass:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>

  # The name of the CephFS file system in which the volume will be created
  fsName: cephfs

  # (Π½Π΅ΠΎΠ±ΡΠ·Π°Ρ‚Π΅Π»ΡŒΠ½ΠΎ) ΠŸΡƒΠ» Ceph, Π² ΠΊΠΎΡ‚ΠΎΡ€ΠΎΠΌ Π±ΡƒΠ΄ΡƒΡ‚ Ρ…Ρ€Π°Π½ΠΈΡ‚ΡŒΡΡ Π΄Π°Π½Π½Ρ‹Π΅ Ρ‚ΠΎΠΌΠ°
  # pool: cephfs_data

  # (Π½Π΅ΠΎΠ±ΡΠ·Π°Ρ‚Π΅Π»ΡŒΠ½ΠΎ) Π Π°Π·Π΄Π΅Π»Π΅Π½Π½Ρ‹Π΅ запятыми ΠΎΠΏΡ†ΠΈΠΈ монтирования для Ceph-fuse
  # Π½Π°ΠΏΡ€ΠΈΠΌΠ΅Ρ€:
  # fuseMountOptions: debug

  # (Π½Π΅ΠΎΠ±ΡΠ·Π°Ρ‚Π΅Π»ΡŒΠ½ΠΎ) Π Π°Π·Π΄Π΅Π»Π΅Π½Π½Ρ‹Π΅ запятыми ΠΎΠΏΡ†ΠΈΠΈ монтирования CephFS для ядра
  # Π‘ΠΌ. man mount.ceph Ρ‡Ρ‚ΠΎΠ±Ρ‹ ΡƒΠ·Π½Π°Ρ‚ΡŒ список этих ΠΎΠΏΡ†ΠΈΠΉ. НапримСр:
  # kernelMountOptions: readdir_max_bytes=1048576,norbytes

  # The secrets must contain credentials for the Ceph admin and/or user.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs

  # (Π½Π΅ΠΎΠ±ΡΠ·Π°Ρ‚Π΅Π»ΡŒΠ½ΠΎ) Π”Ρ€Π°ΠΉΠ²Π΅Ρ€ ΠΌΠΎΠΆΠ΅Ρ‚ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒ Π»ΠΈΠ±ΠΎ ceph-fuse (fuse), 
  # Π»ΠΈΠ±ΠΎ ceph kernelclient (kernel).
  # Если Π½Π΅ ΡƒΠΊΠ°Π·Π°Π½ΠΎ, Π±ΡƒΠ΄Π΅Ρ‚ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒΡΡ ΠΌΠΎΠ½Ρ‚ΠΈΡ€ΠΎΠ²Π°Π½ΠΈΠ΅ Ρ‚ΠΎΠΌΠΎΠ² ΠΏΠΎ ΡƒΠΌΠΎΠ»Ρ‡Π°Π½ΠΈΡŽ,
  # это опрСдСляСтся поиском ceph-fuse ΠΈ mount.ceph
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug

Fill in the clusterID here and apply it in Kubernetes:

kubectl apply -f storageclass.yaml


For testing, as in the previous example, let's create a PVC:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc

And check that the PVC/PV appeared:

kubectl get pvc
kubectl get pv

If you want to look at the files and directories in CephFS, you can mount this file system somewhere, for example as shown below.

Let's go to one of the Ceph cluster nodes and do the following:

# Mount point
mkdir -p /mnt/cephfs

# Create a file with the administrator key
ceph auth get-key client.admin >/etc/ceph/secret.key

# Add an entry to /etc/fstab
# !! Change the IP address to the address of our node
echo "172.18.8.6:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev    0       2" >> /etc/fstab

mount /mnt/cephfs
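A quick check that the mount worked; listing the file system root should also show the volumes directory under which ceph-csi keeps its dynamically provisioned subvolumes (the layout is an assumption about the ceph-csi defaults):

df -h /mnt/cephfs
ls /mnt/cephfs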

Of course, mounting the FS on a Ceph node like this is only suitable for training purposes, which is what we do on our Slurm courses. I don't think anyone would do this in production; there is too high a risk of accidentally deleting important files.

And finally, let's look at how volume resizing works in the case of CephFS. Let's go back to Kubernetes and edit our PVC manifest, increasing the size there to, say, 7Gi.

Apply the edited file:

kubectl apply -f pvc.yaml

Let's look at the directory to see how the quota has changed:

getfattr -n ceph.quota.max_bytes <data-directory>

For this command to work, you may need to install the attr package on your system.
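The <data-directory> above is the CSI subvolume backing our PV. Assuming the mount from the previous step at /mnt/cephfs and the default ceph-csi subvolume layout (an assumption, not something this article configures), one way to locate it is:

find /mnt/cephfs/volumes -maxdepth 3 -type d -name 'csi-vol-*'

and then run the getfattr command above against the directory it returns.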

The eyes are afraid, but the hands do the work

All these incantations and long YAML manifests may look complicated on the surface, but in practice Slurm students get the hang of them quite quickly.
In this article we did not go deep into the woods; there is official documentation for that. If you are interested in the details of setting up Ceph storage with a Kubernetes cluster, these links will help:

General Kubernetes principles of working with volumes
RBD documentation
Integrating RBD and Kubernetes from a Ceph perspective
Integrating RBD and Kubernetes from a CSI perspective
General CephFS documentation
Integrating CephFS and Kubernetes from a CSI perspective

In the Slurm Kubernetes Base course, you can go a bit further and deploy a real application in Kubernetes that uses CephFS as file storage. Using GET/POST requests, you will be able to upload files to Ceph and retrieve them.

And if you are more interested in data storage, sign up for the new Ceph course. While beta testing is in progress, the course is available at a discount and you can influence its content.

Author of the article: Alexander Shvalov, an engineer at Southbridge, Certified Kubernetes Administrator, and an author and developer of Slurm courses.

Source: www.habr.com