A practical example of connecting Ceph-based storage to a Kubernetes cluster

The Container Storage Interface (CSI) is a unified interface between Kubernetes and storage systems. We have already touched on it briefly, and today we will take a closer look at the combination of CSI and Ceph: we will show how to connect Ceph storage to a Kubernetes cluster.
The article provides real, if slightly simplified, examples for ease of understanding. Installing and configuring the Ceph and Kubernetes clusters themselves is out of scope here.

Wondering how it all works?

So, you have a Kubernetes cluster at hand, deployed, for example, with Kubespray. There is a Ceph cluster running nearby; you could install it, for example, with this set of playbooks. I hope there is no need to mention that for production use the network between them should have a bandwidth of at least 10 Gbit/s.

If you have all of that, let's go!

First, let's log in to one of the Ceph cluster nodes and check that everything is in order:

ceph health
ceph -s

Next, let's create a pool for RBD disks right away:

ceph osd pool create kube 32
ceph osd pool application enable kube rbd
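
To make sure the pool exists and has been tagged for the rbd application, you can list the pools in detail (an optional sanity check, not required by the walkthrough itself):

ceph osd pool ls detail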

Let's move on to the Kubernetes cluster. There, first of all, we will install the Ceph CSI driver for RBD. As you would expect, we will install it via Helm.
We add the repository with the chart and fetch the set of variables for the ceph-csi-rbd chart:

helm repo add ceph-csi https://ceph.github.io/csi-charts
helm inspect values ceph-csi/ceph-csi-rbd > cephrbd.yml

Now you need to fill in the cephrbd.yml file. To do this, find the cluster ID and the IP addresses of the monitors in Ceph:

ceph fsid  # this is how we find out the clusterID
ceph mon dump  # and this is how we see the monitors' IP addresses

We put the obtained values into the cephrbd.yml file. At the same time, we enable the creation of PSPs (Pod Security Policies). The options in the nodeplugin and provisioner sections are already in the file; they can be adjusted as shown below:

csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"
      - "v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0"
      - "v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0"

nodeplugin:
  podSecurityPolicy:
    enabled: true

provisioner:
  podSecurityPolicy:
    enabled: true

Next, all that remains is to install the chart into the Kubernetes cluster:

helm upgrade -i ceph-csi-rbd ceph-csi/ceph-csi-rbd -f cephrbd.yml -n ceph-csi-rbd --create-namespace
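
Before moving on, it is worth checking that the driver pods actually started (the namespace is the one we just created):

kubectl -n ceph-csi-rbd get pods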

Great, the RBD driver is up and running!
Now let's create a new StorageClass in Kubernetes. This requires a little more tinkering with Ceph.

We create a new user in Ceph and grant it write access to the kube pool:

ceph auth get-or-create client.rbdkube mon 'profile rbd' osd 'profile rbd pool=kube'
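
If you want to double-check the capabilities that were assigned, you can print the user entry back (read-only; nothing is modified):

ceph auth get client.rbdkube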

Now let's display its access key:

ceph auth get-key client.rbdkube

The command will output something like this:

AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A==

Let's add this value to a Secret in the Kubernetes cluster, where it is needed as userKey:

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # The key values correspond to the user name and key as specified
  # in the Ceph cluster. The user ID must have access to the pool
  # specified in the storage class
  userID: rbdkube
  userKey: <user-key>

And we create our secret:

kubectl apply -f secret.yaml

Next, we need a StorageClass manifest that looks something like this:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: kube

  imageFeatures: layering

  # These secrets must contain the credentials used to access
  # your pool.
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd

  csi.storage.k8s.io/fstype: ext4

reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard

Here we need to fill in the clusterID, which we already learned from the ceph fsid command, and apply this manifest to the Kubernetes cluster:

kubectl apply -f storageclass.yaml

To check that everything works together, let's create the following PVC (Persistent Volume Claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
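
Apply it, assuming the manifest above is saved as pvc.yaml (the same file we will edit later when resizing):

kubectl apply -f pvc.yaml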

Right away, let's see how Kubernetes created the requested volume in Ceph:

kubectl get pvc
kubectl get pv

Everything is fine! And what does this look like on the Ceph side?
We list the volumes in the pool and view the information about our volume:

rbd ls -p kube
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653  # your volume ID will of course differ; use the one returned by the previous command

Now let's see how resizing an RBD volume works.
Change the volume size in the pvc.yaml manifest to 2Gi and apply it:

kubectl apply -f pvc.yaml

Let's wait for the changes to take effect and check the volume size again.

rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653

kubectl get pv
kubectl get pvc

We see that the PVC size has not changed. To find out why, we can ask Kubernetes for the YAML description of the PVC:

kubectl get pvc rbd-pvc -o yaml

Here is the problem:

message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
type: FileSystemResizePending

That is, the disk has grown, but the file system on it has not.
To grow the file system, the volume has to be mounted. In our case, the created PVC/PV is not currently used by anything.

We can create a test Pod, for example like this:

---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
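
Apply the Pod so that the volume is actually mounted and the file system resize can complete (assuming the manifest is saved as pod.yaml, a name of our own choosing):

kubectl apply -f pod.yaml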

And now let's look at the PVC:

kubectl get pvc

The size has changed; everything is fine.

In the first part, we worked with RBD (short for RADOS Block Device), a block device, but that approach does not work if different microservices need to use the same disk simultaneously. CephFS is much better suited for working with files rather than disk images.
Using our example Ceph and Kubernetes clusters, we will configure CSI and the other components required to work with CephFS.

Let's get the values we need from the new Helm chart:

helm inspect values ceph-csi/ceph-csi-cephfs > cephfs.yml

Again, you need to fill in the cephfs.yml file. As before, the Ceph commands will help:

ceph fsid
ceph mon dump

Fill in the file with values like this:

csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "172.18.8.5:6789"
      - "172.18.8.6:6789"
      - "172.18.8.7:6789"

nodeplugin:
  httpMetrics:
    enabled: true
    containerPort: 8091
  podSecurityPolicy:
    enabled: true

provisioner:
  replicaCount: 1
  podSecurityPolicy:
    enabled: true

Note that the monitor addresses are specified in the simple address:port form. To mount CephFS on a node, these addresses are passed to the kernel module, which does not yet know how to work with the v2 monitor protocol.
We change the port for httpMetrics (Prometheus will scrape metrics there) so that it does not conflict with nginx-proxy, which is installed by Kubespray. You may not need this.

Install the Helm chart into the Kubernetes cluster:

helm upgrade -i ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -f cephfs.yml -n ceph-csi-cephfs --create-namespace
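
As with the RBD driver, you can verify that the CephFS driver pods came up (the namespace is the one created above):

kubectl -n ceph-csi-cephfs get pods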

Let's head back to the Ceph cluster and create a dedicated user there. The documentation states that the CephFS provisioner requires cluster administrator access rights, but we will create a separate user, fs, with limited rights:

ceph auth get-or-create client.fs mon 'allow r' mgr 'allow rw' mds 'allow rws' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'

And let's grab its access key right away; we will need it later:

ceph auth get-key client.fs

Let's create separate Secret and StorageClass objects.
Nothing new here; we have already seen this in the RBD example:

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Required for dynamically provisioned volumes
  adminID: fs
  adminKey: <output of the previous command>

Apply the manifest:

kubectl apply -f secret.yaml

And now, a separate StorageClass:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>

  # Name of the CephFS file system in which the volume will be created
  fsName: cephfs

  # (optional) Ceph pool in which the volume data will be stored
  # pool: cephfs_data

  # (optional) Comma-separated mount options for ceph-fuse,
  # for example:
  # fuseMountOptions: debug

  # (optional) Comma-separated CephFS kernel mount options.
  # See man mount.ceph for the list of these options. For example:
  # kernelMountOptions: readdir_max_bytes=1048576,norbytes

  # The secrets must contain the credentials of a Ceph admin and/or user.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs

  # (optional) The driver can use either ceph-fuse (fuse)
  # or the ceph kernel client (kernel).
  # If not specified, the default volume mounter is used,
  # determined by probing for ceph-fuse and mount.ceph.
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug

Here we fill in the clusterID and apply it in Kubernetes:

kubectl apply -f storageclass.yaml

Verification

To check, as in the previous example, let's create a PVC:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc
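
Apply it, again assuming the manifest is saved as pvc.yaml, the name we have been using throughout:

kubectl apply -f pvc.yaml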

And check that the PVC/PV have appeared:

kubectl get pvc
kubectl get pv

If you want to look at the files and directories inside CephFS, you can mount this file system somewhere, for example as shown below.

Let's go to one of the Ceph cluster nodes and perform the following actions:

# Mount point
mkdir -p /mnt/cephfs

# Create a file with the administrator key
ceph auth get-key client.admin >/etc/ceph/secret.key

# Add an entry to /etc/fstab
# !! Replace the IP address with the address of your node
echo "172.18.8.6:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev    0       2" >> /etc/fstab

mount /mnt/cephfs
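
A quick way to confirm the mount succeeded, using nothing Ceph-specific:

df -h /mnt/cephfs
ls /mnt/cephfs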

Of course, mounting the FS on a Ceph node like this is only suitable for training purposes, which is what we do on our Slurm courses. I don't think anyone would do this in production; there is too high a risk of accidentally overwriting important files.

And finally, let's check how volume resizing works in the case of CephFS. Let's go back to Kubernetes and edit our PVC manifest, increasing the size to, say, 7Gi.

Apply the edited file:

kubectl apply -f pvc.yaml

Let's look at the mounted directory to see how the quota has changed:

getfattr -n ceph.quota.max_bytes <data-directory>

For this command to work, you may need to install the attr package on your system.
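
If you mounted CephFS at /mnt/cephfs as shown above, one way to locate the per-volume data directories is simply to browse the subvolume tree; the exact layout depends on the CSI driver version, so treat this as a hint rather than a guarantee:

find /mnt/cephfs -maxdepth 4 -type d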

The eyes are afraid, but the hands do the work

All these incantations and long YAML manifests look complicated on the surface, but in practice Slurm students get the hang of them quite quickly.
In this article we didn't go deep into the weeds; there is official documentation for that. If you are interested in the details of setting up Ceph storage with a Kubernetes cluster, these links will help:

General principles of Kubernetes volumes
RBD documentation
Integrating RBD and Kubernetes from the Ceph perspective
Integrating RBD and Kubernetes from the CSI perspective
General CephFS documentation
Integrating CephFS and Kubernetes from the CSI perspective

In the Slurm Kubernetes Base course you can go a bit further and deploy a real application in Kubernetes that will use CephFS as file storage. Via GET/POST requests you will be able to send files to Ceph and retrieve them back.

And if you are more interested in data storage as such, sign up for the new Ceph course. While the beta test is running, the course is available at a discount and you can influence its content.

Author: Alexander Shvalov, engineer at Southbridge, Certified Kubernetes Administrator, author and developer of Slurm courses.

source: www.habr.com