Container Storage Interface (CSI) is a unified interface between Kubernetes and storage systems. We have already talked about it briefly.
This article gives real-world examples, though they are slightly simplified for easier understanding. We will not cover installing and configuring the Ceph and Kubernetes clusters themselves.
Wondering how it all works?
So, you have a Kubernetes cluster at hand, deployed, for example, with Kubespray, and a Ceph cluster running nearby.
If you have all of that, let's go!
First of all, let's go to one of the nodes of the Ceph cluster and check that everything is in order:
ceph health
ceph -s
Next, we will create a pool for RBD disks:
ceph osd pool create kube 32
ceph osd pool application enable kube rbd
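The 32 at the end of the pool create command is the number of placement groups (pg_num). A common rule of thumb — a heuristic, not something this article prescribes — is (number of OSDs × 100) / replica count, rounded up to a power of two. A quick sketch with assumed numbers:

```shell
# Rough pg_num estimate: (OSDs * 100) / replicas, rounded up to a power of two.
# osds and replicas below are assumed values for illustration.
osds=3
replicas=3
target=$((osds * 100 / replicas))
pg=1
while [ "$pg" -lt "$target" ]; do
  pg=$((pg * 2))
done
echo "pg_num: $pg"   # prints: pg_num: 128
```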
Let's move over to the Kubernetes cluster. There, first of all, we will install the Ceph CSI driver for RBD. We will install it, as you would expect, via Helm.
We add the repository with the chart and get the set of values for the ceph-csi-rbd chart:
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm inspect values ceph-csi/ceph-csi-rbd > cephrbd.yml
Now you need to fill in the cephrbd.yml file. To do this, find out the cluster ID and the IP addresses of the monitors in Ceph:
ceph fsid # this is how we find out the clusterID
ceph mon dump # and this is how we see the monitor IP addresses
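The clusterID can also be pulled out with a bit of shell. This is a sketch against sample output (the sample text below is an assumption of the typical dump format, not output from a real cluster):

```shell
# Extract the fsid (clusterID) from `ceph mon dump`-style output.
# On a real node you would pipe `ceph mon dump` instead of the sample here.
dump='epoch 3
fsid bcd0d202-fba8-4352-b25d-75c89258d5ab
0: [v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0] mon.ceph01'
cluster_id=$(printf '%s\n' "$dump" | awk '/^fsid/ {print $2}')
echo "clusterID: $cluster_id"   # prints: clusterID: bcd0d202-fba8-4352-b25d-75c89258d5ab
```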
We enter the obtained values into the cephrbd.yml file. Along the way, we enable the creation of PSPs (Pod Security Policies). The options in the nodeplugin and provisioner sections are already in the file; they can be set as shown below:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"
      - "v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0"
      - "v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0"
nodeplugin:
  podSecurityPolicy:
    enabled: true
provisioner:
  podSecurityPolicy:
    enabled: true
After that, all that remains is to install the chart into the Kubernetes cluster.
helm upgrade -i ceph-csi-rbd ceph-csi/ceph-csi-rbd -f cephrbd.yml -n ceph-csi-rbd --create-namespace
Great, the RBD driver works!
Let's create a new StorageClass in Kubernetes. This again requires a little tinkering on the Ceph side.
We create a new user in Ceph and grant it the rights to write to the kube pool:
ceph auth get-or-create client.rbdkube mon 'profile rbd' osd 'profile rbd pool=kube'
Now let's display the access key we have just created:
ceph auth get-key client.rbdkube
The command will output something like this:
AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A==
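As an aside (not required by the setup): a Ceph key is a base64 string that decodes to 28 bytes, so a mangled copy-paste is easy to catch with a quick check:

```shell
# Sanity-check a copied Ceph key: it should be valid base64 decoding to 28 bytes.
key='AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A=='
len=$(printf '%s' "$key" | base64 -d | wc -c)
echo "decoded length: $len"
```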
Let's add this value to a Secret in the Kubernetes cluster, where it is needed as userKey:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # The key values correspond to the user name and its key, as defined
  # in the Ceph cluster. The user ID must have access to the pool
  # specified in the storage class.
  userID: rbdkube
  userKey: <user-key>
And we create our Secret:
kubectl apply -f secret.yaml
Next, we need a StorageClass looking something like this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: kube
  imageFeatures: layering
  # These secrets must contain the credentials
  # for access to your pool.
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
Here we need to fill in clusterID, which we already found out with ceph fsid, and apply this manifest to the Kubernetes cluster:
kubectl apply -f storageclass.yaml
To check that the clusters work together, let's create the following PVC (PersistentVolumeClaim):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
Let's immediately check that Kubernetes has created the requested volume in Ceph:
kubectl get pvc
kubectl get pv
Everything looks great! And what does it look like on the Ceph side?
We list the volumes in the pool and view the information about our volume:
rbd ls -p kube
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653 # the volume ID here will of course be different — use the one printed by the previous command
Now let's see how resizing an RBD volume works.
Change the volume size in the pvc.yaml manifest to 2Gi and apply it:
kubectl apply -f pvc.yaml
Let's wait for the changes to take effect and look at the volume size again.
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653
kubectl get pv
kubectl get pvc
We see that the PVC size has not changed. To find out why, we can ask Kubernetes for the YAML description of the PVC:
kubectl get pvc rbd-pvc -o yaml
Here is the problem:
  message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
  type: FileSystemResizePending
That is, the disk has grown, but the file system on it has not.
To grow the file system, the volume has to be mounted. In our case, the created PVC/PV is not being used anywhere at the moment.
We can create a test Pod, for example like this:
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
And now let's look at the PVC:
kubectl get pvc
The size has changed; everything is fine.
In the first part we worked with the RBD block device (short for Rados Block Device), but this will not work if several different microservices need to use the same disk simultaneously. CephFS, which works with files rather than disk images, is much better suited for that.
Using our example Ceph and Kubernetes clusters, we will configure CSI and the other entities needed to work with CephFS.
Let's get the values from the new Helm chart we need:
helm inspect values ceph-csi/ceph-csi-cephfs > cephfs.yml
Again you need to fill in the cephfs.yml file. As before, the Ceph commands will help:
ceph fsid
ceph mon dump
Fill in the file with values like these:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "172.18.8.5:6789"
      - "172.18.8.6:6789"
      - "172.18.8.7:6789"
nodeplugin:
  httpMetrics:
    enabled: true
    containerPort: 8091
  podSecurityPolicy:
    enabled: true
provisioner:
  replicaCount: 1
  podSecurityPolicy:
    enabled: true
Note that the monitor addresses are specified in the simple address:port form. To mount CephFS on a node, these addresses are passed to the kernel module, which does not yet know how to work with the v2 monitor protocol.
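If you are copying the monitor list from the RBD config earlier, each v2/v1 entry can be reduced to the legacy form, for example with sed (a sketch; adjust the pattern to your actual entries):

```shell
# Convert a messenger-v2 style monitor entry to the plain address:port form
# that the kernel client understands.
entry='v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0'
legacy=$(printf '%s' "$entry" | sed -E 's|.*v1:([0-9.]+:[0-9]+)/0|\1|')
echo "$legacy"   # prints: 172.18.8.5:6789
```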
We change the port for httpMetrics (Prometheus will go there to collect metrics) so that it does not conflict with nginx-proxy, which is installed by Kubespray. You may not need to do this.
Install the Helm chart into the Kubernetes cluster:
helm upgrade -i ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -f cephfs.yml -n ceph-csi-cephfs --create-namespace
Let's go back to the Ceph cluster and create a separate user there. The documentation states that the CephFS provisioner needs cluster administrator access rights, but we will create a separate fs user with more limited rights:
ceph auth get-or-create client.fs mon 'allow r' mgr 'allow rw' mds 'allow rws' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'
And right away let's look at its access key — we will need it later:
ceph auth get-key client.fs
Let's create separate Secret and StorageClass objects.
Nothing new here — we already saw this in the RBD example:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Required for dynamically provisioned volumes
  adminID: fs
  adminKey: <output of the previous command>
Apply the manifest:
kubectl apply -f secret.yaml
And now — a separate StorageClass:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  # Name of the CephFS file system in which the volume will be created
  fsName: cephfs
  # (optional) The Ceph pool in which volume data will be stored
  # pool: cephfs_data
  # (optional) Comma-separated mount options for ceph-fuse
  # for example:
  # fuseMountOptions: debug
  # (optional) Comma-separated CephFS kernel mount options
  # See man mount.ceph for the list of these options. For example:
  # kernelMountOptions: readdir_max_bytes=1048576,norbytes
  # The secrets must contain the credentials of a Ceph admin and/or user.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
  # (optional) The driver can use either ceph-fuse (fuse)
  # or the ceph kernel client (kernel).
  # If not specified, the default volume mounter is used,
  # determined by probing for ceph-fuse and mount.ceph
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
Let's fill in clusterID here and apply the manifest in Kubernetes:
kubectl apply -f storageclass.yaml
Checking
For the check, as in the previous example, let's create a PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc
And check that the PVC/PV have appeared:
kubectl get pvc
kubectl get pv
If you want to look at the files and directories in CephFS, you can mount this file system somewhere, for example as shown below.
Let's go to one of the nodes of the Ceph cluster and perform the following actions:
# Mount point
mkdir -p /mnt/cephfs
# Create a file with the admin key
ceph auth get-key client.admin >/etc/ceph/secret.key
# Add an entry to /etc/fstab
# !! Change the IP address to the address of our node
echo "172.18.8.6:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev 0 2" >> /etc/fstab
mount /mnt/cephfs
Of course, mounting the file system on a Ceph node like this is only suitable for training purposes, which is what we do on our courses.
Finally, let's check how volume resizing works in the case of CephFS. Let's go back to Kubernetes and edit our PVC manifest, increasing the size there to, say, 7Gi.
Apply the edited file:
kubectl apply -f pvc.yaml
Let's look at the mounted directory to see how the quota has changed:
getfattr -n ceph.quota.max_bytes <directory-with-data>
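The quota is reported in bytes; for the 7Gi we requested, the expected number is easy to check (Kubernetes Gi is the binary gibibyte, 2^30 bytes):

```shell
# 7Gi expressed in bytes, as ceph.quota.max_bytes should report it.
quota=$((7 * 1024 * 1024 * 1024))
echo "$quota"   # prints: 7516192768
```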
For this command to work, you may need to install the attr package on your system.
The eyes are afraid, but the hands do the work
All these commands and long YAML manifests may look intimidating, but in practice our Slurm students get the hang of them quite quickly.
In this article we have not gone deep into the details — there is official documentation for that. If you are interested in the specifics of setting up Ceph storage with a Kubernetes cluster, these links will help:
The Slurm course
And if you are interested in data storage, then sign up.
The author of this article: Alexander Shvalov, a practicing engineer
Source: www.habr.com