The Container Storage Interface (CSI) is a unified interface between Kubernetes and storage systems. We have already covered it briefly.
This article provides concrete examples, albeit somewhat simplified for readability. We will not cover installing and configuring the Ceph and Kubernetes clusters themselves.
Wondering how it all works?
So, you have a Kubernetes cluster at hand, deployed, for example, with Kubespray, and a Ceph cluster running alongside it.
If you have all that, let's go!
First, let's go to one of the Ceph cluster nodes and check that everything is in order:
ceph health
ceph -s
Next, we'll quickly create a pool for RBD disks:
ceph osd pool create kube 32  # 32 is the pg_num (number of placement groups)
ceph osd pool application enable kube rbd
Let's move on to the Kubernetes cluster. There, first of all, we will install the Ceph CSI driver for RBD. We will install it, as you would expect, via Helm.
We add the repository with the chart and get the set of variables for the ceph-csi-rbd chart:
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm inspect values ceph-csi/ceph-csi-rbd > cephrbd.yml
Now you need to fill in the cephrbd.yml file. To do that, find out the cluster ID and the IP addresses of the monitors in Ceph:
ceph fsid  # this gives us the clusterID
ceph mon dump  # and this shows the monitor IP addresses
We put the obtained values into the cephrbd.yml file. While we are at it, we enable the creation of PSPs (Pod Security Policies). The options in the nodeplugin and provisioner sections are already in the file; they can be set as shown below:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"
      - "v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0"
      - "v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0"
nodeplugin:
  podSecurityPolicy:
    enabled: true
provisioner:
  podSecurityPolicy:
    enabled: true
Next, all that remains is to install the chart into the Kubernetes cluster.
helm upgrade -i ceph-csi-rbd ceph-csi/ceph-csi-rbd -f cephrbd.yml -n ceph-csi-rbd --create-namespace
Great, the RBD driver is in place!
Let's create a new StorageClass in Kubernetes. This again requires a bit of work on the Ceph side.
We create a new user in Ceph and grant it the right to write to the kube pool:
ceph auth get-or-create client.rbdkube mon 'profile rbd' osd 'profile rbd pool=kube'
Now let's view the newly created access key:
ceph auth get-key client.rbdkube
The command will output something like this:
AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A==
Let's add this value to a Secret in the Kubernetes cluster, where it is needed as the userKey:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # The key values must match the user name and key
  # as defined in the Ceph cluster. The user ID must have
  # access to the pool specified in the storage class
  userID: rbdkube
  userKey: <user-key>
And we create our secret:
kubectl apply -f secret.yaml
Next, we need a StorageClass manifest that looks something like this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: kube
  imageFeatures: layering
  # These secrets must contain the credentials
  # for accessing your pool.
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
Fill in clusterID, which we already found with the ceph fsid command, and apply this manifest to the Kubernetes cluster:
kubectl apply -f storageclass.yaml
To check how the clusters work together, let's create the following PVC (Persistent Volume Claim):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
Let's take a quick look at how Kubernetes created the requested volume in Ceph:
kubectl get pvc
kubectl get pv
Everything seems to be working! And what does this look like on the Ceph side?
We get the list of volumes in the pool and view the details of our volume:
rbd ls -p kube
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653  # your volume ID will, of course, be different; use the one printed by the previous command
Now let's see how resizing an RBD volume works.
Change the volume size in the pvc.yaml manifest to 2Gi and apply it:
kubectl apply -f pvc.yaml
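For reference, the edited pvc.yaml would then differ from the original manifest only in the storage request (a sketch; everything else stays as before):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi   # was 1Gi; only this field changes
  storageClassName: csi-rbd-sc
```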
Let's wait for the changes to take effect and then check the volume size again.
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653
kubectl get pv
kubectl get pvc
We can see that the PVC size has not changed. To find out why, we can ask Kubernetes for the YAML description of the PVC:
kubectl get pvc rbd-pvc -o yaml
Here is the problem:
message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
type: FileSystemResizePending
That is, the disk has grown, but the file system on it has not.
To grow the file system, you need to mount the volume. In our case, the created PVC/PV is not currently used by any pod.
We can create a test Pod, for example like this:
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
And now let's check the PVC again:
kubectl get pvc
The size has changed; everything is fine.
In the first part, we worked with the RBD block device (short for RADOS Block Device), but that approach will not work if several different microservices need to use the same disk at once. CephFS is much better suited for working with files than disk images are.
Using our example Ceph and Kubernetes clusters, we will configure CSI and the other entities required to work with CephFS.
Let's get the values we need from the new Helm chart:
helm inspect values ceph-csi/ceph-csi-cephfs > cephfs.yml
Again, you need to fill in the cephfs.yml file. As before, the Ceph commands will help:
ceph fsid
ceph mon dump
Fill in the file with values like these:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "172.18.8.5:6789"
      - "172.18.8.6:6789"
      - "172.18.8.7:6789"
nodeplugin:
  httpMetrics:
    enabled: true
    containerPort: 8091
  podSecurityPolicy:
    enabled: true
provisioner:
  replicaCount: 1
  podSecurityPolicy:
    enabled: true
Note that the monitor addresses here are given in the simple form address:port. To mount CephFS on a node, these addresses are passed to the kernel module, which does not yet know how to speak the v2 monitor protocol.
We change the httpMetrics port (Prometheus will scrape metrics there) so that it does not conflict with nginx-proxy, which is installed by Kubespray. You may not need this.
Install the Helm chart into the Kubernetes cluster:
helm upgrade -i ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -f cephfs.yml -n ceph-csi-cephfs --create-namespace
Let's go over to the Ceph storage cluster and create a separate user there. The documentation states that the CephFS provisioner needs cluster administrator access rights, but we will create a separate user, fs, with more limited rights:
ceph auth get-or-create client.fs mon 'allow r' mgr 'allow rw' mds 'allow rws' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'
And let's look at its access key; we will need it later:
ceph auth get-key client.fs
Let's create a separate Secret and StorageClass.
Nothing new here; we already saw this in the RBD example:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Required for dynamically provisioned volumes
  adminID: fs
  adminKey: <output of the previous command>
Apply the manifest:
kubectl apply -f secret.yaml
And now, a separate StorageClass:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  # Name of the CephFS file system in which the volume will be created
  fsName: cephfs
  # (optional) Ceph pool in which the volume data will be stored
  # pool: cephfs_data
  # (optional) Comma-separated mount options for ceph-fuse,
  # for example:
  # fuseMountOptions: debug
  # (optional) Comma-separated kernel mount options for CephFS.
  # See man mount.ceph for the list of these options. For example:
  # kernelMountOptions: readdir_max_bytes=1048576,norbytes
  # The secrets must contain the credentials of the Ceph admin
  # and/or user.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
  # (optional) The driver can use either ceph-fuse (fuse)
  # or the ceph kernel client (kernel).
  # If not set, the default volume mounter will be used,
  # determined by probing for ceph-fuse and mount.ceph
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
Fill in clusterID here and apply the manifest in Kubernetes:
kubectl apply -f storageclass.yaml
Verification
To check things, as in the previous example, let's create a PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc
And check that the PVC/PV appeared:
kubectl get pvc
kubectl get pv
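To actually use the volume, you can mount the PVC into a pod just as in the RBD example. Since the access mode is ReadWriteMany, several pods could mount the same claim at once. A minimal sketch (the pod and container names here are arbitrary, not from the original setup):

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-cephfs-demo-pod   # arbitrary example name
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: csi-cephfs-pvc
        readOnly: false
```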
If you want to look at the files and directories in CephFS, you can mount this file system somewhere, for example as shown below.
Let's go to one of the Ceph cluster nodes and do the following:
# Mount point
mkdir -p /mnt/cephfs
# Create a file with the administrator key
ceph auth get-key client.admin >/etc/ceph/secret.key
# Add an entry to /etc/fstab
# !! Change the IP address to the address of your node
echo "172.18.8.6:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev 0 2" >> /etc/fstab
mount /mnt/cephfs
Of course, mounting the FS on a Ceph node like this is suitable only for training purposes, which is what we do on our course.
Finally, let's check how volume resizing works in the case of CephFS. Let's go back to Kubernetes and edit our PVC manifest, increasing the size there to, say, 7Gi.
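The edited PVC manifest would then read as follows (a sketch; only the storage request changes from the PVC created earlier):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 7Gi   # was 5Gi
  storageClassName: csi-cephfs-sc
```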
Apply the edited file:
kubectl apply -f pvc.yaml
Let's look at the mounted directory to see how the quota has changed:
getfattr -n ceph.quota.max_bytes <data-directory>
For this command to work, you may need to install the attr package on your system.
The eyes are afraid, but the hands do the work
All these incantations and long YAML manifests look complicated on the surface, but in practice, Slurm students get the hang of them quite quickly.
In this article we have not gone deep into the weeds; there is official documentation for that. If you are interested in the details of setting up Ceph storage with a Kubernetes cluster, these links will help:
The Slurm course
And if you are even more interested in data storage, then sign up.
Author: Alexander Shvalov, practicing engineer
Source: www.habr.com