The Container Storage Interface (CSI) is a unified interface between Kubernetes and storage systems. We have already discussed it briefly.
This article gives practical examples, somewhat simplified for ease of understanding. We will not cover deploying and configuring the Ceph and Kubernetes clusters themselves.
Wondering how it all works?
So, you have a Kubernetes cluster at your fingertips, deployed, for example,
If you have all of that, let's go!
First, let's go to one of the Ceph nodes and check that everything is in order:
ceph health
ceph -s
Next, let's create a pool for RBD disks:
ceph osd pool create kube 32
ceph osd pool application enable kube rbd
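On recent Ceph releases, the documentation also recommends initializing a newly created pool before RBD uses it; `rbd pool init` does that (on older releases this step may be unnecessary):

```shell
# Prepare the new pool for use by RBD (safe to run once after creation)
rbd pool init kube
# Confirm the application tag was set on the pool
ceph osd pool application get kube
```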
Now let's move to the Kubernetes cluster. First of all, we'll install the Ceph CSI driver for RBD. As you might expect, we'll install it via Helm.
We add the repository with the chart and fetch the set of variables for the ceph-csi-rbd chart:
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm inspect values ceph-csi/ceph-csi-rbd > cephrbd.yml
Now you need to fill in the cephrbd.yml file. To do this, find out the cluster ID and the IP addresses of the monitors in Ceph:
ceph fsid # this is how we find the clusterID
ceph mon dump # and this shows the monitor IP addresses
We put the obtained values into the cephrbd.yml file. While we're at it, we can enable the creation of PSPs (Pod Security Policies). The options in the nodeplugin and provisioner sections are already in the file and can be adjusted as shown below:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"
      - "v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0"
      - "v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0"

nodeplugin:
  podSecurityPolicy:
    enabled: true

provisioner:
  podSecurityPolicy:
    enabled: true
After that, all that's left is to install the chart in the Kubernetes cluster.
helm upgrade -i ceph-csi-rbd ceph-csi/ceph-csi-rbd -f cephrbd.yml -n ceph-csi-rbd --create-namespace
Great, the RBD driver works!
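Before going further, it's worth making sure the driver components actually came up (pod names will differ from cluster to cluster):

```shell
# All ceph-csi-rbd pods (provisioner and per-node plugins) should be Running
kubectl -n ceph-csi-rbd get pods
# The CSI driver itself should be registered in the cluster
kubectl get csidrivers.storage.k8s.io
```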
Let's create a new StorageClass in Kubernetes. This again requires a little work on the Ceph side.
We create a new user in Ceph and grant it the rights to write to the kube pool:
ceph auth get-or-create client.rbdkube mon 'profile rbd' osd 'profile rbd pool=kube'
Now let's look at its access key:
ceph auth get-key client.rbdkube
The command will output something like this:
AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A==
We add this value to a Secret in the Kubernetes cluster, where it is needed as the user key:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # The key values correspond to the user name and key as defined
  # in the Ceph cluster. The user ID must have access to the pool
  # specified in the storage class
  userID: rbdkube
  userKey: <user-key>
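A note on the stringData field used above: unlike data, it accepts the raw key as-is, without base64 encoding. If you ever use the data field instead, the key must be encoded first. A small illustration with the sample key shown earlier:

```shell
# stringData accepts the raw value; the data field would require base64.
key='AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A=='
# Encode as you would for a data: field (-n / %s avoids a trailing newline)
encoded=$(printf '%s' "$key" | base64)
echo "$encoded"
# Decoding round-trips back to the original key
printf '%s' "$encoded" | base64 -d
```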
And we create our Secret:
kubectl apply -f secret.yaml
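As an alternative to the YAML manifest, the same Secret can be created straight from the Ceph output, so the key never lands in a file. This is a sketch; it assumes both kubectl and the ceph CLI are usable from the same shell, which may not be the case in your setup:

```shell
# Create the Secret directly from the CLI; kubectl handles the encoding
kubectl -n ceph-csi-rbd create secret generic csi-rbd-secret \
  --from-literal=userID=rbdkube \
  --from-literal=userKey="$(ceph auth get-key client.rbdkube)"
```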
Next, we need a StorageClass manifest that looks something like this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: kube
  imageFeatures: layering
  # These secrets must contain the credentials
  # for access to your pool.
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
You need to fill in clusterID, which we learned earlier with the ceph fsid command, and then apply this manifest to the Kubernetes cluster:
kubectl apply -f storageclass.yaml
To check that everything works together, let's create the following PVC (Persistent Volume Claim):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
Let's immediately check that Kubernetes has created the requested volume in Ceph:
kubectl get pvc
kubectl get pv
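For scripted checks, jsonpath output is handy; the PVC should report the Bound phase once provisioning has finished (a sketch):

```shell
# Should print "Bound" once the volume has been provisioned
kubectl get pvc rbd-pvc -o jsonpath='{.status.phase}{"\n"}'
# The name of the PV backing this claim
kubectl get pvc rbd-pvc -o jsonpath='{.spec.volumeName}{"\n"}'
```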
It looks like everything is fine! But what does it look like on the Ceph side?
We get the list of volumes in the pool and view the details of our volume:
rbd ls -p kube
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653 # your volume ID will of course be different; use the one printed by the previous command
Now let's see how resizing an RBD volume works.
Change the volume size in the pvc.yaml manifest to 2Gi and apply it:
kubectl apply -f pvc.yaml
Let's wait for the change to propagate and look at the volume size again.
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653
kubectl get pv
kubectl get pvc
We can see that the PVC size has not changed. To find out why, we can ask Kubernetes for the YAML description of the PVC:
kubectl get pvc rbd-pvc -o yaml
And here is the problem:
message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
type: FileSystemResizePending
In other words, the disk has grown, but the file system on it has not.
To expand the file system, the volume needs to be mounted. In our case, the PVC/PV we created is not currently used anywhere.
We can create a test Pod, for example like this:
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
And now let's look at the PVC:
kubectl get pvc
The size has changed; everything is fine.
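You can also confirm the resize from inside the pod itself; the file system mounted at /data should now show roughly the expanded size:

```shell
# df inside the running test pod; the size should reflect the expanded volume
kubectl exec csi-rbd-demo-pod -- df -h /data
```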
In the first part we worked with the RBD block device (RBD stands for Rados Block Device), but that approach won't work if several microservices need to use the same disk simultaneously. CephFS is much better suited to working with files rather than disk images.
Using our example Ceph and Kubernetes clusters, we'll configure CSI and the other necessary entities to work with CephFS.
Let's get the values we need from the new Helm chart:
helm inspect values ceph-csi/ceph-csi-cephfs > cephfs.yml
Once again you need to fill in a file, cephfs.yml. As before, the Ceph commands will help:
ceph fsid
ceph mon dump
Fill in the file with values like this:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "172.18.8.5:6789"
      - "172.18.8.6:6789"
      - "172.18.8.7:6789"

nodeplugin:
  httpMetrics:
    enabled: true
    containerPort: 8091
  podSecurityPolicy:
    enabled: true

provisioner:
  replicaCount: 1
  podSecurityPolicy:
    enabled: true
Note that the monitor addresses are given in the simple address:port form. To mount CephFS on a node, these addresses are passed to the kernel module, which does not yet know how to work with the v2 monitor protocol.
We change the port for httpMetrics (Prometheus will scrape metrics there) so that it does not conflict with nginx-proxy, which Kubespray installs. You may not need this.
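With many monitors, picking the v1 endpoints out of the ceph mon dump output by hand gets tedious. Here is a small sketch of filtering them with standard text tools; the dump below is a typical abbreviated sample, not output from a real cluster:

```shell
# A typical (abbreviated) `ceph mon dump` output, saved for parsing:
cat > mon_dump.txt <<'EOF'
0: [v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0] mon.a
1: [v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0] mon.b
2: [v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0] mon.c
EOF

# Keep only the legacy v1 addr:port pairs, which the kernel client understands
grep -oE 'v1:[0-9.]+:[0-9]+' mon_dump.txt | sed 's/^v1://'
# 172.18.8.5:6789
# 172.18.8.6:6789
# 172.18.8.7:6789
```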
Install the Helm chart in the Kubernetes cluster:
helm upgrade -i ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -f cephfs.yml -n ceph-csi-cephfs --create-namespace
Let's go back to the Ceph cluster and create a separate user there. The documentation states that the CephFS provisioner needs cluster administrator access rights, but we will create a separate fs user with limited rights:
ceph auth get-or-create client.fs mon 'allow r' mgr 'allow rw' mds 'allow rws' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'
And let's immediately look at its access key; we'll need it shortly:
ceph auth get-key client.fs
Now let's create a Secret and a StorageClass.
Nothing new here; we already saw this with the RBD example:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Required for dynamically provisioned volumes
  adminID: fs
  adminKey: <output of the previous command>
Apply the manifest:
kubectl apply -f secret.yaml
And now, a separate StorageClass:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  # Name of the CephFS file system in which the volume will be created
  fsName: cephfs
  # (optional) The Ceph pool in which the volume data will be stored
  # pool: cephfs_data
  # (optional) Comma-separated mount options for ceph-fuse,
  # for example:
  # fuseMountOptions: debug
  # (optional) Comma-separated CephFS kernel mount options.
  # See man mount.ceph for the list of options. For example:
  # kernelMountOptions: readdir_max_bytes=1048576,norbytes
  # The secrets must contain the credentials of the Ceph admin and/or user.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
  # (optional) The driver can use either ceph-fuse (fuse)
  # or the ceph kernel client (kernel).
  # If not specified, the default mounter is chosen
  # by probing for ceph-fuse and mount.ceph
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
Here, too, we fill in clusterID and apply the manifest to Kubernetes:
kubectl apply -f storageclass.yaml
To check that it works, as in the previous example, let's create a PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc
And check that the PVC/PV have appeared:
kubectl get pvc
kubectl get pv
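On the Ceph side, the volumes created by the CephFS driver show up as subvolumes; by default ceph-csi places them in the subvolume group named csi (this assumes a reasonably recent Ceph release with the fs subvolume commands):

```shell
# List the subvolumes created by the CSI driver in the "cephfs" file system
ceph fs subvolume ls cephfs csi
# Detailed info on one of them (substitute a name from the listing)
# ceph fs subvolume info cephfs <subvolume-name> csi
```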
If you want to look at the files and directories in CephFS, you can mount the file system somewhere, for example as shown below.
Let's go to one of the Ceph nodes and perform the following steps:
# Mount point
mkdir -p /mnt/cephfs
# Create a file with the administrator key
ceph auth get-key client.admin >/etc/ceph/secret.key
# Add an entry to /etc/fstab
# !! Change the IP address to the address of your node
echo "172.18.8.6:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev 0 2" >> /etc/fstab
mount /mnt/cephfs
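A quick check that the mount succeeded:

```shell
# A ceph entry should appear among the mounted file systems
mount | grep /mnt/cephfs
# And the mount should report the size of the file system
df -h /mnt/cephfs
```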
Of course, mounting the FS directly on a Ceph node like this is only suitable for training purposes, which is what we do on our
Finally, let's check how volume resizing works in the case of CephFS. Back in Kubernetes, let's edit our PVC manifest and increase the size there, for example, to 7Gi.
Apply the edited file:
kubectl apply -f pvc.yaml
Let's look at the mounted directory and see how the quota has changed:
getfattr -n ceph.quota.max_bytes <data-directory>
For this command to work, you may need to install the attr package on your system.
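getfattr reports the quota in plain bytes, so a quick conversion helps to compare it with the requested PVC size (7Gi in our example; the byte value below is what 7Gi works out to, used here for illustration):

```shell
# ceph.quota.max_bytes for a 7Gi volume: 7 * 1024^3 bytes
quota_bytes=7516192768
# Convert to GiB to compare against the PVC spec
echo "$((quota_bytes / 1024 / 1024 / 1024)) GiB"
# 7 GiB
```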
The eyes are afraid, but the hands do the work
All these spells and long YAML manifests look complicated on the surface, but in practice Slurm students get the hang of them quickly.
In this article we didn't go deep into the weeds; there is official documentation for that. If you're interested in the details of setting up Ceph storage with a Kubernetes cluster, these links will help:
On the Slurm course
And if you're interested in data storage, sign up for
Author of the article: Alexander Shvalov, practicing engineer
Source: www.habr.com