CSE: Kubernetes for those in vCloud

Hi all!

It so happened that our small team, not exactly recently and certainly not suddenly, grew to the point of moving some (and eventually all) of our products to Kubernetes.

There were many reasons for this, but our story is not about that holy war.

As for the infrastructure, there was not much to choose from: vCloud Director or vCloud Director. We picked the newer one and decided to get started.

Once again looking through The Hard Way, I very quickly came to the conclusion that a tool for automating at least the basic processes, such as deployment and sizing, was needed yesterday. A deep dive into Google turned up VMware Container Service Extension (CSE), an open-source product that lets you automate the creation and sizing of k8s clusters for those running in vCloud.

Disclaimer: CSE has its limitations, but for our purposes it was a perfect fit. The solution also has to be supported by the cloud provider, but since the server side is open source as well, ask your nearest manager to make it available 🙂

To get started, you need an administrator account in the vCloud organization and a pre-created routed network for the cluster (the deployment process needs Internet access from this network, so don't forget to configure Firewall/NAT). The addressing doesn't matter; in this example we take 10.0.240.0/24.


Since the cluster will need to be managed somehow after it is created, it is recommended to have a VPN with routing into the created network. We use a standard SSL VPN configured on our organization's Edge Gateway.

Next, you need to install the CSE client wherever the k8s clusters will be managed from. In my case, that is my work laptop and a couple of well-hidden containers that drive our automation.

The client requires Python 3.7.3 or higher and the vcd-cli module, so let's install both.

pip3 install vcd-cli

pip3 install container-service-extension
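If you want to keep the client isolated from the system Python, the same two packages install just as well into a virtual environment (a small sketch; the environment path is arbitrary):

python3 -m venv ~/cse-client
source ~/cse-client/bin/activate
pip install vcd-cli container-service-extension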

After installation, we check the CSE version and get the following:

# vcd cse version
Error: No such command "cse".

Unexpected, but fixable. As it turns out, CSE needs to be hooked into vcd-cli as an extension.
To do this, first log in with vcd-cli to our organization:

# vcd login MyCloud.provider.com org-dev admin
Password: 
admin logged in, org: 'org-dev', vdc: 'org-dev_vDC01'

After that, vcd-cli creates the config file ~/.vcd-cli/profiles.yaml. Add the following at the end of it:

extensions:
  - container_service_extension.client.cse
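If you prefer not to open an editor, the same block can be appended straight from the shell (a small sketch assuming the default profile path mentioned above):

cat >> ~/.vcd-cli/profiles.yaml <<'EOF'
extensions:
  - container_service_extension.client.cse
EOF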

After that, we check again:

# vcd cse version
CSE, Container Service Extension for VMware vCloud Director, version 2.5.0

The client installation phase is complete. Let's try to deploy the first cluster.
CSE has several sets of usage parameters; all of them can be found in the documentation.

First, let's create keys for passwordless access to the future cluster. This point is important: by default, password login on the nodes is disabled, and if you don't set the keys, you can end up doing a lot of work through the virtual machine consoles, which is not exactly convenient.

# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
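The same key pair can also be generated non-interactively, which is handy for the automation containers mentioned above (the empty -N '' simply mirrors the empty passphrase from the dialog):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ''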

Let's kick off cluster creation:

vcd cse cluster create MyCluster --network k8s_cluster_net --ssh-key ~/.ssh/id_rsa.pub --nodes 3 --enable-nfs

If you get the error Error: Session has expired or user not logged in. Please re-login. - log in to vCloud with vcd-cli again as described above and try once more.

This time everything is fine and the cluster creation task has started.

cluster operation: Creating cluster vApp 'MyCluster' (38959587-54f4-4a49-8f2e-61c3a3e879e0) from template 'photon-v2_k8-1.12_weave-2.3.0' (revision 1)

The task takes about 20 minutes to complete; in the meantime, let's go over the main launch parameters.

--network - the network we created earlier.
--ssh-key - the keys we created earlier; they will be written to the cluster nodes.
--nodes n - the number of worker nodes in the cluster. There will always be exactly one master; this is a CSE limitation.
--enable-nfs - create an additional node for NFS shares backing persistent volumes. It is a somewhat clunky option; we will come back to fine-tuning what it does a bit later.
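The cluster is built from whatever template the provider has published as the default (photon-v2_k8-1.12_weave-2.3.0 in the task output above). If your provider publishes several templates, the client can list them so you can see which k8s versions are available; the exact names and revisions depend entirely on your installation:

vcd cse template list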

Meanwhile, in vCloud you can visually observe the cluster being created.

Once the cluster creation task has completed, it is ready to go.

Check the correctness of the deployment with the command vcd cse cluster info MyCluster


Next, we need to get the cluster configuration in order to use kubectl:

# vcd cse cluster config MyCluster > ./.kube/config

Now the status of the cluster can be checked with kubectl itself.

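For example, a couple of standard sanity checks (node names, versions and the pod list will obviously differ in your deployment):

kubectl get nodes -o wide
kubectl get pods --all-namespaces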

At this point the cluster could be considered conditionally working, if not for the persistent volumes story. Since we are in vCloud, the vSphere Provider cannot be used. The --enable-nfs option is meant to smooth over this problem, but it doesn't quite get there: manual tuning is required.

To start with, we need to create a separate Independent Disk in vCloud for our NFS node. This guarantees that our data won't disappear along with the cluster if it gets deleted. Then we attach the disk to the NFS node:

# vcd disk create nfs-shares-1 100g --description 'Kubernetes NFS shares'
# vcd vapp attach mycluster nfsd-9604 nfs-shares-1
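If anything looks off, the independent disks in the VDC can be listed with plain vcd-cli to confirm the disk was actually created:

# vcd disk list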

After that, we go via ssh (you did create the keys, right?) to our NFS node and finally bring the disk into service:

root@nfsd-9604:~# parted /dev/sdb
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on
this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) unit GB
(parted) mkpart primary 0 100
(parted) print
Model: VMware Virtual disk (scsi)
Disk /dev/sdb: 100GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name     Flags
 1      0.00GB  100GB  100GB               primary

(parted) quit
root@nfsd-9604:~# mkfs -t ext4 /dev/sdb1
Creating filesystem with 24413696 4k blocks and 6111232 inodes
Filesystem UUID: 8622c0f5-4044-4ebf-95a5-0372256b34f0
Superblock backups stored on blocks:
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
	4096000, 7962624, 11239424, 20480000, 23887872

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

Create a directory for data and mount a fresh partition there:

mkdir /export
echo '/dev/sdb1  /export   ext4  defaults   0 0' >> /etc/fstab
mount -a
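A quick check that the new partition is mounted where we expect it:

df -h /export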

Let's create five test directories and export them to the cluster:

>cd /export
>mkdir vol1 vol2 vol3 vol4 vol5
>vi /etc/exports
#Add this to the end of the file
/export/vol1 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol2 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol3 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol4 *(rw,sync,no_root_squash,no_subtree_check)
/export/vol5 *(rw,sync,no_root_squash,no_subtree_check)
#:wq! ;)
#Then export the shares
>exportfs -r
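To make sure the directories are really exported, you can ask the NFS server itself (standard nfs-utils tooling, nothing CSE-specific):

#Show the current exports with their options
>exportfs -v
#The same view as a client would see it
>showmount -e localhost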

After all this magic, we can create a PV and a PVC in our cluster like this:
PV:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-vol1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Same IP as the NFS host we ssh'ed to earlier.
    server: 10.150.200.22
    path: "/export/vol1"
EOF

PVC:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""
  resources:
    requests:
      storage: 10Gi
EOF
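To check that the claim binds and the share is actually writable, a throwaway pod that mounts the PVC is enough. This is just a minimal sketch; the pod name, image and mount path are arbitrary:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "echo hello > /data/hello && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: nfs-pvc
EOF

After that, kubectl get pvc nfs-pvc should report the claim as Bound, and the hello file should show up in the corresponding /export/volN directory on the NFS node.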

This is where the story of creating one cluster ends and the story of its life cycle begins. As a bonus, here are two more useful CSE commands that sometimes let you save quite a few resources:

#Scale the cluster up to 8 worker nodes
>vcd cse cluster resize MyCluster --network k8s_cluster_net --nodes 8

#Remove unneeded nodes from the cluster; they will then be deleted
>vcd cse node delete MyCluster node-1a2v node-6685 --yes
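The node names for deletion (node-1a2v and friends) can be taken from the vCloud UI or, if your client version supports it, listed directly:

#List the cluster nodes with their names
>vcd cse node list MyCluster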

Thank you all for your time. If you have any questions, ask in the comments.

Source: habr.com
