Container Storage Interface (CSI) is a unified interface between Kubernetes and storage systems. We have already talked about it briefly; today we will take a closer look at using CSI with Ceph.
The article gives real, though slightly simplified, examples for ease of understanding. We do not cover installing and configuring the Ceph and Kubernetes clusters themselves.
Wondering how it works?
So, you have a Kubernetes cluster at hand, deployed, for example, with Kubespray, and a Ceph cluster running nearby.
If you have all this, let's go!
First, go to one of the Ceph cluster nodes and check that everything is in order:
ceph health
ceph -s
Next, let's create a pool for RBD disks:
ceph osd pool create kube 32
ceph osd pool application enable kube rbd
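The 32 here is the pool's number of placement groups. As an aside (this heuristic is ours, not from the article), a common rule of thumb is roughly 100 PGs per OSD divided by the replica count, rounded up to a power of two:

```python
def suggest_pg_count(num_osds: int, replicas: int = 3, pgs_per_osd: int = 100) -> int:
    """Rough PG-count heuristic: num_osds * pgs_per_osd / replicas,
    rounded up to the nearest power of two."""
    raw = num_osds * pgs_per_osd / replicas
    pg = 1
    while pg < raw:
        pg *= 2
    return pg

print(suggest_pg_count(3))  # 3 OSDs, 3 replicas -> 128
```

Pick the actual value to match your cluster size; the official Ceph documentation has more detailed guidance.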
Let's move on to the Kubernetes cluster. There we will first install the Ceph CSI driver for RBD. We will install it, as you would expect, via Helm.
Add the repository with the chart and get the set of variables for the ceph-csi-rbd chart:
helm repo add ceph-csi https://ceph.github.io/csi-charts
helm inspect values ceph-csi/ceph-csi-rbd > cephrbd.yml
Now we need to fill in the cephrbd.yml file. To do this, find out the cluster ID and the IP addresses of the monitors in Ceph:
ceph fsid # this gives us the clusterID
ceph mon dump # and this shows the monitor IP addresses
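For convenience, the two values above can be templated straight into the csiConfig section of cephrbd.yml. A small illustrative sketch (the helper itself is ours, not part of any tool; the sample values mirror the config below):

```python
def render_csi_config(cluster_id: str, monitors: list[str]) -> str:
    """Render the csiConfig fragment for cephrbd.yml from the output
    of `ceph fsid` and the monitor addresses from `ceph mon dump`."""
    lines = ["csiConfig:", f'  - clusterID: "{cluster_id}"', "    monitors:"]
    lines += [f'      - "{m}"' for m in monitors]
    return "\n".join(lines)

print(render_csi_config(
    "bcd0d202-fba8-4352-b25d-75c89258d5ab",
    ["v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"],
))
```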
Enter the obtained values into the cephrbd.yml file. Along the way, enable the creation of PSP (Pod Security Policy) policies. The options for this are already present in the nodeplugin and provisioner sections of the file; they can be set as shown below:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"
      - "v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0"
      - "v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0"

nodeplugin:
  podSecurityPolicy:
    enabled: true

provisioner:
  podSecurityPolicy:
    enabled: true
After that, all that remains is to install the chart into the Kubernetes cluster:
helm upgrade -i ceph-csi-rbd ceph-csi/ceph-csi-rbd -f cephrbd.yml -n ceph-csi-rbd --create-namespace
Great, the RBD driver is up and running!
Let's create a new StorageClass in Kubernetes. This again requires a little work on the Ceph side.
Create a new user in Ceph and grant it write permissions to the kube pool:
ceph auth get-or-create client.rbdkube mon 'profile rbd' osd 'profile rbd pool=kube'
Now let's look at its access key:
ceph auth get-key client.rbdkube
The command will output something like this:
AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A==
Add this value to a Secret in the Kubernetes cluster as the userKey:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # The key names correspond to the user name and its key,
  # as defined in the Ceph cluster. The user ID must have
  # access to the pool specified in the storage class.
  userID: rbdkube
  userKey: <user-key>
And create the Secret:
kubectl apply -f secret.yaml
Next we need a StorageClass manifest, something like this:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: kube
  imageFeatures: layering
  # These secrets must contain the credentials
  # for access to your pool.
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard
Fill in clusterID, which we already obtained with the ceph fsid command, and apply this manifest to the Kubernetes cluster:
kubectl apply -f storageclass.yaml
To check how the two clusters work together, let's create the following PVC (Persistent Volume Claim):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc
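The storage: 1Gi request is a Kubernetes quantity with a binary suffix. As an aside, here is a minimal sketch of converting such values to bytes (the helper is ours and handles only the common binary suffixes):

```python
_BINARY_SUFFIXES = {"Ki": 2**10, "Mi": 2**20, "Gi": 2**30, "Ti": 2**40}

def quantity_to_bytes(q: str) -> int:
    """Convert a Kubernetes quantity like '1Gi' to a byte count."""
    for suffix, factor in _BINARY_SUFFIXES.items():
        if q.endswith(suffix):
            return int(q[: -len(suffix)]) * factor
    return int(q)  # plain byte count, no suffix

print(quantity_to_bytes("1Gi"))  # 1073741824
```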
Right away, let's see how Kubernetes created the requested volume in Ceph:
kubectl get pvc
kubectl get pv
Everything looks great! And what does it look like on the Ceph side?
List the volumes in the pool and get information about our volume:
rbd ls -p kube
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653 # your volume ID will, of course, be the different one printed by the previous command
Now let's see how resizing an RBD volume works.
Change the volume size in the pvc.yaml manifest to 2Gi and apply it:
kubectl apply -f pvc.yaml
Wait for the change to take effect, then check the volume size again:
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653
kubectl get pv
kubectl get pvc
We see that the PVC size has not changed. To find out why, query Kubernetes for the YAML description of the PVC:
kubectl get pvc rbd-pvc -o yaml
Here is the problem:
message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
type: FileSystemResizePending
That is, the disk has grown, but the file system on it has not.
To grow the file system, the volume has to be mounted. In our case, the created PVC/PV is not used anywhere yet.
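This condition can also be checked programmatically from the PVC status. A sketch (the status fragment mirrors what kubectl prints; the helper is our own convenience, not part of ceph-csi):

```python
import json

# A PVC status fragment as returned by, e.g.,
# `kubectl get pvc rbd-pvc -o json` (trimmed to the relevant part).
pvc_status = json.loads("""
{
  "conditions": [
    {
      "type": "FileSystemResizePending",
      "status": "True",
      "message": "Waiting for user to (re-)start a pod to finish file system resize of volume on node."
    }
  ]
}
""")

def resize_pending(status: dict) -> bool:
    """True if the filesystem resize is waiting for the volume to be mounted."""
    return any(
        c.get("type") == "FileSystemResizePending" and c.get("status") == "True"
        for c in status.get("conditions", [])
    )

print(resize_pending(pvc_status))  # True
```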
We can create a test Pod, for example like this:
---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false
And now look at the PVC again:
kubectl get pvc
The size has changed; everything is fine.
In the first part we worked with RBD (which stands for RADOS Block Device), a block device. But RBD cannot be used if several microservices need to work with the same disk simultaneously. For working with files rather than disk images, CephFS is much better suited.
Using our Ceph and Kubernetes clusters as an example, we will configure CSI and everything else needed to work with CephFS.
Get the values from the new Helm chart we need:
helm inspect values ceph-csi/ceph-csi-cephfs > cephfs.yml
Again we need to fill in the cephfs.yml file. As before, the Ceph commands will help:
ceph fsid
ceph mon dump
Fill in the file with values like this:
csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "172.18.8.5:6789"
      - "172.18.8.6:6789"
      - "172.18.8.7:6789"

nodeplugin:
  httpMetrics:
    enabled: true
    containerPort: 8091
  podSecurityPolicy:
    enabled: true

provisioner:
  replicaCount: 1
  podSecurityPolicy:
    enabled: true
Please note that the monitor addresses are specified in the plain address:port form. To mount CephFS on a node, these addresses are passed to the kernel module, which does not yet know how to work with the v2 monitor protocol.
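Deriving the plain form from the combined v1/v2 entries used in the RBD config above can be sketched like this (the helper is just our convenience; the input format matches the earlier csiConfig):

```python
def legacy_mon_addr(msgr_addr: str) -> str:
    """Extract the plain ip:port (v1, port 6789) form from a combined
    msgr address like 'v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0'."""
    for part in msgr_addr.split(","):
        if part.startswith("v1:"):
            return part[len("v1:"):].split("/")[0]
    raise ValueError(f"no v1 address in {msgr_addr!r}")

print(legacy_mon_addr("v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"))  # 172.18.8.5:6789
```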
We change the port for httpMetrics (Prometheus will fetch metrics there) so that it does not conflict with nginx-proxy, which is installed by Kubespray. You may not need to do this.
Install the Helm chart into the Kubernetes cluster:
helm upgrade -i ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -f cephfs.yml -n ceph-csi-cephfs --create-namespace
Now let's go to the Ceph cluster to create a separate user there. The documentation states that the CephFS provisioner requires cluster administrator access rights, but we will create a separate user, fs, with limited rights:
ceph auth get-or-create client.fs mon 'allow r' mgr 'allow rw' mds 'allow rws' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'
And right away let's look at its access key — we will need it later:
ceph auth get-key client.fs
Now let's create a separate Secret and StorageClass.
Nothing new here; we already saw this in the RBD example:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Required for dynamically provisioned volumes
  adminID: fs
  adminKey: <output of the previous command>
Apply the manifest:
kubectl apply -f secret.yaml
And now — a separate StorageClass:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  # Name of the CephFS file system the volume will be created in
  fsName: cephfs
  # (optional) Ceph pool in which the volume data will be stored
  # pool: cephfs_data
  # (optional) Comma-separated mount options for ceph-fuse,
  # for example:
  # fuseMountOptions: debug
  # (optional) Comma-separated kernel CephFS mount options.
  # See man mount.ceph for the list of these options. For example:
  # kernelMountOptions: readdir_max_bytes=1048576,norbytes
  # The secrets must contain the credentials of the Ceph admin and/or user.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs
  # (optional) The driver can use either ceph-fuse (fuse)
  # or the ceph kernel client (kernel).
  # If not specified, the default mounter will be used,
  # determined by probing for ceph-fuse and mount.ceph.
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug
Fill in clusterID here as well and apply the manifest in Kubernetes:
kubectl apply -f storageclass.yaml
Let's check: as in the previous example, create a PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc
Check that the PVC/PV have appeared:
kubectl get pvc
kubectl get pv
If you want to look at the files and directories in CephFS, you can mount this file system somewhere, for example as shown below.
Go to one of the nodes of the Ceph cluster and perform the following actions:
# Mount point
mkdir -p /mnt/cephfs
# Create a file with the administrator key
ceph auth get-key client.admin >/etc/ceph/secret.key
# Add an entry to /etc/fstab
# !! Change the IP address to the address of our node
echo "172.18.8.6:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev 0 2" >> /etc/fstab
mount /mnt/cephfs
Of course, mounting the FS on a Ceph node like this is suitable only for training purposes, which is what we do on our Slurm courses.
And finally, let's check how volume resizing works with CephFS. Back in Kubernetes, edit our PVC manifest and increase the size there, for example, to 7Gi.
Apply the edited file:
kubectl apply -f pvc.yaml
Look at the mounted directory to see how the quota has changed:
getfattr -n ceph.quota.max_bytes <data-directory>
For this command to work, you need to install the attr package on your system.
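The value printed by getfattr is a raw byte count. A quick sketch of converting it back to the Gi units used in the PVC (our own helper, purely illustrative):

```python
def bytes_to_gi(quota_bytes: int) -> float:
    """Convert a ceph.quota.max_bytes value to Kubernetes-style Gi (2**30 bytes)."""
    return quota_bytes / 2**30

# 7Gi requested in the PVC should show up as this quota:
print(bytes_to_gi(7516192768))  # 7.0
```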
The eyes are afraid, but the hands do the work
All these incantations and long YAML manifests look complicated on the surface, but in practice, Slurm students get the hang of them quite quickly.
In this article we did not go deep into the weeds — there is official documentation for that. If you are interested in the details of setting up Ceph storage with a Kubernetes cluster, these links will help:
The Slurm course
And if you are more interested in data storage, sign up for
Author of the article: Alexander Shvalov, practicing architect
Source: www.habr.com