CSE: Kubernetes for those in vCloud

Hello everyone!

It so happened that our small team, not exactly recently, and certainly not all of a sudden, grew to the point of moving some (and eventually all) of our products to Kubernetes.

There were many reasons for this, but our story is not about flame wars.

We had little choice of underlying infrastructure: vCloud Director or vCloud Director. We picked the newer one and decided to get started.

After yet another read-through of "The Hard Way", I quickly concluded that a tool for automating at least the basic processes, such as deployment and sizing, was needed yesterday. A deep dive into Google surfaced VMware Container Service Extension (CSE), an open-source product that lets you automate the creation and sizing of k8s clusters for those in vCloud.

Disclaimer: CSE has its limitations, but for our purposes it was a perfect fit. The solution must also be supported by the cloud provider, but since the server side is open source as well, ask your nearest manager to get it enabled :)

To get started, you need an administrator account in the vCloud organization and a previously created routed network for the cluster (during deployment, this network needs Internet access, so don't forget to configure the Firewall/NAT). The addressing doesn't matter. In this example, let's take 10.0.240.0/24.


Since the cluster will need to be managed somehow after creation, it is recommended to have a VPN with routing into the created network. We use a standard SSL VPN configured on our organization's Edge Gateway.

Next, you need to install the CSE client wherever the k8s clusters will be managed from. In my case, this is my work laptop and a couple of well-hidden containers that drive the automation.

The client requires Python version 3.7.3 or higher and the vcd-cli module, so let's install both.

pip3 install vcd-cli

pip3 install container-service-extension

After installation, we check the CSE version and get the following:

# vcd cse version
Error: No such command "cse".

Unexpected, but fixable. As it turned out, CSE needs to be attached to vcd-cli as an extension.
To do this, first log vcd-cli in to our organization:

# vcd login MyCloud.provider.com org-dev admin
Password: 
admin logged in, org: 'org-dev', vdc: 'org-dev_vDC01'

After this, vcd-cli creates the config file ~/.vcd-cli/profiles.yaml.
At the end of it you need to add the following:

extensions:
  - container_service_extension.client.cse
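If you script your workstation setup, the same edit can be applied idempotently; here is a minimal sketch, assuming the default profiles.yaml location (note that if the file already contains an extensions: section, the entry should go under it instead of being appended):

```shell
# Append the CSE client extension to vcd-cli's profile, but only once.
# Assumes the default config path ~/.vcd-cli/profiles.yaml.
# Caveat: if an extensions: section already exists, add the entry under
# it by hand rather than appending a duplicate key.
CFG="${HOME}/.vcd-cli/profiles.yaml"
mkdir -p "$(dirname "$CFG")"
touch "$CFG"
if ! grep -q 'container_service_extension.client.cse' "$CFG"; then
  printf 'extensions:\n  - container_service_extension.client.cse\n' >> "$CFG"
fi
grep 'container_service_extension' "$CFG"
```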

Then we check again:

# vcd cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0

The client installation is complete. Let's try to deploy the first cluster.
CSE has several sets of usage parameters; all of them can be viewed here.

First, let's create keys for passwordless access to the future cluster. This point is important, since by default password login to the nodes is disabled, and if you don't set up keys, you can end up doing a lot of work through the virtual machine consoles, which is not exactly convenient.

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.

Let's try to start creating a cluster:

vcd cse cluster create MyCluster --network k8s_cluster_net --ssh-key ~/.ssh/id_rsa.pub --nodes 3 --enable-nfs

If we get the error Error: Session has expired or user not logged in. Please re-login. — log vcd-cli in to vCloud again as described above and retry.

This time everything is fine and the cluster creation task has started:

cluster operation: Creating cluster vApp 'MyCluster' (38959587-54f4-4a49-8f2e-61c3a3e879e0) from template 'photon-v2_k8-1.12_weave-2.3.0' (revision 1)

It will take about 20 minutes for the task to complete; in the meantime, let's go over the basic launch parameters.

--network — the network we created earlier.
--ssh-key — the keys we created, which will be written to the cluster nodes.
--nodes n — the number of Worker nodes in the cluster. There will always be exactly one master; this is a CSE limitation.
--enable-nfs — create an additional node for NFS shares to back persistent volumes. It's a somewhat clunky option; we'll come back to fine-tuning what it does a little later.

Meanwhile, in vCloud you can monitor the creation of the cluster.

Once the cluster creation task finishes, the cluster is ready to use.

Let's verify that the deployment is correct with the command vcd cse cluster info MyCluster:


Next, we need to get the cluster configuration for use with kubectl:

# vcd cse cluster config MyCluster > ./.kube/config

Using it, you can check the cluster's status, for example with kubectl get nodes.


At this point, the cluster could conditionally be considered working, were it not for the matter of persistent volumes. Since we are in vCloud, we cannot use the vSphere provider. The --enable-nfs option is meant to smooth over this nuisance, but it didn't work all the way; manual configuration is required.

To start, we need to create a separate independent disk in vCloud for our NFS node. This guarantees that our data will not disappear along with the cluster if it is deleted. Then, attach the disk to the NFS node:

# vcd disk create nfs-shares-1 100g --description 'Kubernetes NFS shares'
# vcd vapp attach mycluster nfsd-9604 nfs-shares-1

After that, we go via ssh (you really did create the keys, right?) to the NFS node and finally mount the disk:

root@nfsd-9604:~# parted /dev/sdb
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) unit GB
(parted) mkpart primary 0 100
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      0.00GB  100GB  100GB               primary

(parted) quit
root@nfsd-9604:~# mkfs -t ext4 /dev/sdb1
Creating filesystem with 24413696 4k blocks and 6111232 inodes
Filesystem UUID: 8622c0f5-4044-4ebf-95a5-0372256b34f0
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Create a directory for the data and mount the new partition there:

mkdir /export
echo '/dev/sdb1  /export   ext4  defaults   0 0' >> /etc/fstab
mount -a

Let's create five test directories and export them for the cluster:

>cd /export
>mkdir vol1 vol2 vol3 vol4 vol5
>vi /etc/exports
# Add this to the end of the file
/export/vol1 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol2 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol3 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol4 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol5 *(rw,sync,no_root_squash,no_subtree_check)
#:wq! ;)
# Next, export the shares
>exportfs -r
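Since the five entries above follow a single pattern, they can also be generated rather than typed by hand. A small sketch; it writes to a scratch file so it is safe to run anywhere, while on the real NFS node the target would be /etc/exports, followed by exportfs -r:

```shell
# Generate the NFS export entries for vol1..vol5.
# EXPORTS_FILE is a temp file here for safety; on the actual NFS node
# you would append to /etc/exports and then run `exportfs -r`.
EXPORTS_FILE="$(mktemp)"
for i in 1 2 3 4 5; do
  echo "/export/vol${i} *(rw,sync,no_root_squash,no_subtree_check)" >> "$EXPORTS_FILE"
done
cat "$EXPORTS_FILE"
```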

After all this magic, we can create PV and PVC in our cluster roughly like this:
PV:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Same IP as the NFS host we ssh'ed to earlier.
    server: 10.150.200.22
    path: "/export/vol1"
EOF

PVC:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
EOF
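To see the claim in action, here is a hypothetical Pod that mounts the PVC created above; the image name and mount path are illustrative, not prescribed by CSE:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test        # illustrative name
spec:
  containers:
  - name: app
    image: nginx        # illustrative image
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc   # the PVC defined above
EOF
```

Anything the Pod writes under the mount path lands in /export/vol1 on the NFS node and survives Pod (and even cluster) deletion.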

This is where the story of one cluster's creation ends and the story of its life cycle begins. As a bonus, here are two more useful CSE commands that sometimes let you save resources (or not):

# Scale the cluster up to 8 worker nodes
>vcd cse cluster resize MyCluster --network k8s_cluster_net --nodes 8

# Remove unneeded nodes from the cluster and delete them
>vcd cse node delete MyCluster node-1a2v node-6685 --yes

Thank you all for your time; if you have any questions, ask in the comments.

source: www.habr.com
