A practical example of connecting Ceph storage to a Kubernetes cluster

The Container Storage Interface (CSI) is a unified interface between Kubernetes and storage systems. We have already touched on it briefly, and today we'll take a closer look at the combination of CSI and Ceph: we'll show how to connect Ceph storage to a Kubernetes cluster.
The article uses real, though slightly simplified, examples for ease of understanding. We will not cover installing and configuring the Ceph and Kubernetes clusters themselves.

Wondering how it works?


So, you have a Kubernetes cluster at hand, deployed, for example, with kubespray. A Ceph cluster is running nearby — you can also install it, for example, with this set of playbooks. Needless to say, for production the network between them should have a bandwidth of at least 10 Gbit/s.

If you have all of this, let's go!

First, let's go to one of the Ceph cluster nodes and check that everything is in order:

ceph health
ceph -s

Next, we'll create a pool for RBD disks:

ceph osd pool create kube 32
ceph osd pool application enable kube rbd
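
Optionally, you can double-check that the pool exists and that the rbd application is enabled on it — an extra sanity check not in the original walkthrough, using plain Ceph CLI commands:

ceph osd pool ls detail | grep kube
ceph osd pool application get kube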

Let's move over to the Kubernetes cluster. There we'll first install the Ceph CSI driver for RBD. As you would expect, we'll install it via Helm.
Add the repository with the chart and get the set of values for the ceph-csi-rbd chart:

helm repo add ceph-csi https://ceph.github.io/csi-charts
helm inspect values ceph-csi/ceph-csi-rbd > cephrbd.yml

Now we need to fill in the cephrbd.yml file. To do this, find out the cluster ID and the IP addresses of the monitors in Ceph:

ceph fsid  # this is how we find out the clusterID
ceph mon dump  # and this shows the monitor IP addresses

Enter the resulting values into the cephrbd.yml file. Along the way, we enable the creation of PSP policies (Pod Security Policies). The options in the nodeplugin and provisioner sections are already in the file and can be adjusted as shown below:

csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "v2:172.18.8.5:3300/0,v1:172.18.8.5:6789/0"
      - "v2:172.18.8.6:3300/0,v1:172.18.8.6:6789/0"
      - "v2:172.18.8.7:3300/0,v1:172.18.8.7:6789/0"

nodeplugin:
  podSecurityPolicy:
    enabled: true

provisioner:
  podSecurityPolicy:
    enabled: true

Now all that remains is to install the chart into the Kubernetes cluster:

helm upgrade -i ceph-csi-rbd ceph-csi/ceph-csi-rbd -f cephrbd.yml -n ceph-csi-rbd --create-namespace
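
Before moving on, it's worth checking that the driver pods actually came up — an optional verification using the namespace chosen above:

kubectl -n ceph-csi-rbd get pods
# the provisioner pods and the nodeplugin DaemonSet pods should be Running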

Great, the RBD driver works!
Now let's create a new StorageClass in Kubernetes. This again requires a bit of tinkering on the Ceph side.

Create a new user in Ceph and give it the rights to write to the kube pool:

ceph auth get-or-create client.rbdkube mon 'profile rbd' osd 'profile rbd pool=kube'

Now let's look at its access key:

ceph auth get-key client.rbdkube

The command will output something like this:

AQCO9NJbhYipKRAAMqZsnqqS/T8OYQX20xIa9A==

Add this value to a Secret in the Kubernetes cluster, where it is needed as userKey:

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: ceph-csi-rbd
stringData:
  # The key values correspond to the user name and its key, as specified
  # in the Ceph cluster. The user ID must have access to the pool
  # specified in the storage class
  userID: rbdkube
  userKey: <user-key>

And create our secret:

kubectl apply -f secret.yaml

Next we need a StorageClass manifest, something like this:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd-sc
provisioner: rbd.csi.ceph.com
parameters:
  clusterID: <cluster-id>
  pool: kube

  imageFeatures: layering

  # These secrets must contain the credentials for access
  # to your pool.
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/controller-expand-secret-name: csi-rbd-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-rbd
  csi.storage.k8s.io/node-stage-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-rbd

  csi.storage.k8s.io/fstype: ext4

reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - discard

Fill in clusterID, which we already learned with the ceph fsid command, and then apply this manifest to the Kubernetes cluster:

kubectl apply -f storageclass.yaml

To check how the clusters work together, let's create the following PVC (Persistent Volume Claim):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd-sc

Let's immediately see how Kubernetes created the requested volume in Ceph:

kubectl get pvc
kubectl get pv

Everything looks great! And what does it look like on the Ceph side?
We list the volumes in the pool and get information about our volume:

rbd ls -p kube
rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653  # your volume ID will of course differ — use the one returned by the previous command
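
Instead of copying the volume ID by hand, you can pull it from the PV object — ceph-csi records the image name in the CSI volume attributes (a sketch; the exact attribute set may vary between driver versions):

PV=$(kubectl get pvc rbd-pvc -o jsonpath='{.spec.volumeName}')
kubectl get pv "$PV" -o jsonpath='{.spec.csi.volumeAttributes}'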

Now let's see how resizing an RBD volume works.
Change the volume size in the pvc.yaml manifest to 2Gi and apply it:

kubectl apply -f pvc.yaml

Let's wait for the changes to take effect and look at the volume size again:

rbd -p kube info csi-vol-eb3d257d-8c6c-11ea-bff5-6235e7640653

kubectl get pv
kubectl get pvc

We see that the PVC size has not changed. To find out why, we can ask Kubernetes for a YAML description of the PVC:

kubectl get pvc rbd-pvc -o yaml

Here is the problem:

message: Waiting for user to (re-)start a pod to finish file system resize of volume on node.
type: FileSystemResizePending
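
If you only want this condition rather than the whole YAML description, a jsonpath query over the standard status.conditions field works just as well (a minimal sketch):

kubectl get pvc rbd-pvc -o jsonpath='{.status.conditions[*].type}'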

That is, the disk has grown, but the file system on it has not.
To grow the file system, the volume has to be mounted. In our case, the created PVC/PV is not used anywhere.

We can create a test Pod, for example like this:

---
apiVersion: v1
kind: Pod
metadata:
  name: csi-rbd-demo-pod
spec:
  containers:
    - name: web-server
      image: nginx:1.17.6
      volumeMounts:
        - name: mypvc
          mountPath: /data
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: rbd-pvc
        readOnly: false

And now let's look at the PVC again:

kubectl get pvc

The size has changed, and everything is fine.
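
As a final sanity check, you can also look at the file system size from inside the test pod, using the pod name and mount path from the manifest above:

kubectl exec csi-rbd-demo-pod -- df -h /data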

In the first part we worked with the RBD block device (it stands for Rados Block Device), but that won't work if several microservices need to use the same disk simultaneously. CephFS is much better suited for working with files than with disk images.
Using our Ceph and Kubernetes clusters as an example, we will configure CSI and the other components needed to work with CephFS.
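
Before that, it's worth confirming that a CephFS file system actually exists in the cluster and noting its name and pools — the StorageClass below refers to the file system as fsName. A quick check on a Ceph node:

ceph fs ls
ceph fs status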

Let's get the values from the new Helm chart we need:

helm inspect values ceph-csi/ceph-csi-cephfs > cephfs.yml

Again, we need to fill in the cephfs.yml file. As before, the Ceph commands will help:

ceph fsid
ceph mon dump

Fill in the file with values like this:

csiConfig:
  - clusterID: "bcd0d202-fba8-4352-b25d-75c89258d5ab"
    monitors:
      - "172.18.8.5:6789"
      - "172.18.8.6:6789"
      - "172.18.8.7:6789"

nodeplugin:
  httpMetrics:
    enabled: true
    containerPort: 8091
  podSecurityPolicy:
    enabled: true

provisioner:
  replicaCount: 1
  podSecurityPolicy:
    enabled: true

Note that the monitor addresses are specified in the simple address:port form. To mount CephFS on a node, these addresses are passed to the kernel module, which does not yet know how to work with the v2 monitor protocol.
We change the port for httpMetrics (Prometheus will go there to collect metrics) so that it doesn't conflict with nginx-proxy, which is installed by Kubespray. You may not need this.
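
If you're not sure which ports are already taken on the worker nodes, it helps to look at what is listening there before settling on a containerPort value (a generic check; 8091 is simply the value chosen above):

ss -tlnp
# the port chosen for httpMetrics (8091 above) should not appear in this list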

Install the Helm chart into the Kubernetes cluster:

helm upgrade -i ceph-csi-cephfs ceph-csi/ceph-csi-cephfs -f cephfs.yml -n ceph-csi-cephfs --create-namespace
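
At this point both CSI drivers should be registered in the cluster; an optional check:

kubectl get csidrivers.storage.k8s.io
# expect rbd.csi.ceph.com and cephfs.csi.ceph.com in the list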

Now let's go to the Ceph storage cluster to create a separate user there. The documentation states that the CephFS provisioner requires cluster administrator access rights, but we will create a separate fs user with limited rights:

ceph auth get-or-create client.fs mon 'allow r' mgr 'allow rw' mds 'allow rws' osd 'allow rw pool=cephfs_data, allow rw pool=cephfs_metadata'

And let's immediately look at its access key — we will need it later:

ceph auth get-key client.fs

Let's create a separate Secret and StorageClass.
Nothing new here; we already saw this in the RBD example:

---
apiVersion: v1
kind: Secret
metadata:
  name: csi-cephfs-secret
  namespace: ceph-csi-cephfs
stringData:
  # Required for dynamically provisioned volumes
  adminID: fs
  adminKey: <output of the previous command>

Apply the manifest:

kubectl apply -f secret.yaml

And now — a separate StorageClass:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-cephfs-sc
provisioner: cephfs.csi.ceph.com
parameters:
  clusterID: <cluster-id>

  # Name of the CephFS file system in which the volume will be created
  fsName: cephfs

  # (optional) Ceph pool in which the volume's data will be stored
  # pool: cephfs_data

  # (optional) Comma-separated mount options for ceph-fuse,
  # for example:
  # fuseMountOptions: debug

  # (optional) Comma-separated kernel mount options for CephFS.
  # See man mount.ceph for the list of these options. For example:
  # kernelMountOptions: readdir_max_bytes=1048576,norbytes

  # The secrets must contain the credentials of the Ceph admin and/or user.
  csi.storage.k8s.io/provisioner-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/controller-expand-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/controller-expand-secret-namespace: ceph-csi-cephfs
  csi.storage.k8s.io/node-stage-secret-name: csi-cephfs-secret
  csi.storage.k8s.io/node-stage-secret-namespace: ceph-csi-cephfs

  # (optional) The driver can use either ceph-fuse (fuse)
  # or the ceph kernel client (kernel).
  # If not specified, the default mounter will be used,
  # determined by probing for ceph-fuse and mount.ceph
  # mounter: kernel
reclaimPolicy: Delete
allowVolumeExpansion: true
mountOptions:
  - debug

Fill in clusterID here and apply the manifest in Kubernetes:

kubectl apply -f storageclass.yaml

Checking

To check, just as in the previous example, let's create a PVC:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-cephfs-sc

Check that the PVC/PV exist:

kubectl get pvc
kubectl get pv

If you want to look at the files and directories in CephFS, you can mount this file system somewhere, for example as shown below.

Go to one of the Ceph cluster nodes and perform the following actions:

# Mount point
mkdir -p /mnt/cephfs

# Create a file with the administrator key
ceph auth get-key client.admin >/etc/ceph/secret.key

# Add an entry to /etc/fstab
# !! Change the IP address to the address of our node
echo "172.18.8.6:6789:/ /mnt/cephfs ceph name=admin,secretfile=/etc/ceph/secret.key,noatime,_netdev    0       2" >> /etc/fstab

mount /mnt/cephfs
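
Once mounted, a quick look confirms that the CSI-provisioned subvolumes are there (treat the volumes directory path as an assumption — it is the default layout ceph-csi uses for subvolumes):

df -h /mnt/cephfs
ls -la /mnt/cephfs/volumes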

Of course, mounting the FS directly on a Ceph node like this is only suitable for training purposes, which is what we do in our Slurm courses. I don't think anyone would do this in production — there is a high risk of accidentally wiping important data.

And finally, let's check how volume resizing works with CephFS. Let's go back to Kubernetes and our PVC manifest and increase the size there, for example, to 7Gi.

Apply the edited file:

kubectl apply -f pvc.yaml

Let's look at the mounted directory to see how the quota has changed:

getfattr -n ceph.quota.max_bytes <directory-with-data>

For this command to work, you may need to install the attr package on your system.
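
To find the data directory to pass to getfattr, you can ask Ceph for the subvolume path directly (assuming a Ceph release that has the fs subvolume commands and the default csi subvolume group used by ceph-csi):

ceph fs subvolume ls cephfs csi
ceph fs subvolume getpath cephfs <subvolume-name> csi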

The eyes are afraid, but the hands do the work

All these incantations and long YAML manifests look complicated on the surface, but in practice, Slurm students get the hang of them quite quickly.
In this article we didn't go deep into the weeds — there is official documentation for that. If you are interested in the details of setting up Ceph storage with a Kubernetes cluster, these links will help:

General principles of working with volumes in Kubernetes
RBD documentation
Integrating RBD and Kubernetes from the Ceph perspective
Integrating RBD and Kubernetes from the CSI perspective
General CephFS documentation
Integrating CephFS and Kubernetes from the CSI perspective

In the Slurm Kubernetes Base course you can go a little further and deploy a real application in Kubernetes that uses CephFS as file storage. Via GET/POST requests you will be able to upload files to Ceph and retrieve them back.

And if you are more interested in data storage, sign up for the new Ceph course. While beta testing is ongoing, the course can be obtained at a discount and you can influence its content.

Author of the article: Alexander Shvalov, practicing engineer at Southbridge, Certified Kubernetes Administrator, author and developer of Slurm courses.

Source: www.habr.com