Chicken or Egg: Cracking IaC

Which came first, the chicken or the egg? Pretty strange start for an Infrastructure-as-Code article, isn't it?

What is an egg?

Most often, Infrastructure-as-Code (IaC) is a declarative way to describe infrastructure: we define the state we want to reach, from the hardware layer all the way up to the software configuration. IaC is therefore used for:

  1. Resource provisioning: VMs, S3 buckets, VPCs, etc. The main tools here are Terraform and CloudFormation.
  2. Software configuration. The main tools are Ansible, Chef, etc.

All of that code lives in Git repositories. And sooner or later the team lead decides it is time to put them in order. And he refactors. And creates some structure. And he sees that it is good.

It also helps that there are already GitLab and GitHub providers for Terraform (and this is software configuration). With them you can manage the entire project: team members, CI/CD, git flow, and so on.

Where did the egg come from?

Here we are gradually approaching the main question.

First of all, you need to start with a repository that describes the structure of the other repositories, including itself. And of course, in the spirit of GitOps, you need to add CI so that changes are applied automatically.

But what if that Git doesn't exist yet?

  1. How do we store the code in Git?
  2. How do we hook up CI?
  3. What if we also deploy GitLab itself with IaC, and in Kubernetes at that?
  4. And the GitLab Runner in Kubernetes too?
  5. And Kubernetes itself at a cloud provider?

Which comes first: the GitLab into which I will push my code, or the code that describes the GitLab I need?

Chicken with eggs

«Oyakodon» with dinosaur [src]

Let's try to cook this dish using Selectel Managed Kubernetes as the cloud provider.

TL;DR

Can it all be done right away, with a single command?

$ export MY_SELECTEL_TOKEN=<token>
$ curl https://gitlab.com/chicken-or-egg/mks/make/-/snippets/2002106/raw | bash

Ingredients:

  • An account at my.selectel.ru;
  • A token for that account;
  • Kubernetes skills;
  • Helm skills;
  • Terraform skills;
  • The GitLab Helm chart;
  • The GitLab Runner Helm chart.

Recipe:

  1. Get MY_SELECTEL_TOKEN from the my.selectel.ru panel.
  2. Create a Kubernetes cluster, passing the account token to it.
  3. Get the KUBECONFIG of the created cluster.
  4. Install GitLab into Kubernetes.
  5. Get the GitLab token generated for the root user.
  6. Create the project structure in GitLab using that GitLab token.
  7. Push the existing code to GitLab.
  8. ??
  9. Profit!

Step 1. The token can be obtained in the API keys section of the panel.

Step 2. We prepare our Terraform configuration for "baking" a two-node cluster. If you are sure you have enough resources for everything, you can enable auto quotas:

provider "selectel" {
 token = var.my_selectel_token
}

variable "my_selectel_token" {}
variable "username" {}
variable "region" {}


resource "selectel_vpc_project_v2" "my-k8s" {
 name = "my-k8s-cluster"
 theme = {
   color = "269926"
 }
 quotas {
   resource_name = "compute_cores"
   resource_quotas {
     region = var.region
     zone = "${var.region}a"
     value = 16
   }
 }
 quotas {
   resource_name = "network_floatingips"
   resource_quotas {
     region = var.region
     value = 1
   }
 }
 quotas {
   resource_name = "load_balancers"
   resource_quotas {
     region = var.region
     value = 1
   }
 }
 quotas {
   resource_name = "compute_ram"
   resource_quotas {
     region = var.region
     zone = "${var.region}a"
     value = 32768
   }
 }
 quotas {
   resource_name = "volume_gigabytes_fast"
   resource_quotas {
     region = var.region
     zone = "${var.region}a"
     # (20 * 2) + 50 + (8 * 3 + 10)
     value = 130
   }
 }
}

resource "selectel_mks_cluster_v1" "k8s-cluster" {
 name         = "k8s-cluster"
 project_id   = selectel_vpc_project_v2.my-k8s.id
 region       = var.region
 kube_version = "1.17.9"
}

resource "selectel_mks_nodegroup_v1" "nodegroup_1" {
 cluster_id        = selectel_mks_cluster_v1.k8s-cluster.id
 project_id        = selectel_mks_cluster_v1.k8s-cluster.project_id
 region            = selectel_mks_cluster_v1.k8s-cluster.region
 availability_zone = "${var.region}a"
 nodes_count       = 2
 cpus              = 8
 ram_mb            = 16384
 volume_gb         = 15
 volume_type       = "fast.${var.region}a"
 labels            = {
   "project": "my",
 }
}

Adding a user to the project:

resource "random_password" "my-k8s-user-pass" {
 length = 16
 special = true
 override_special = "_%@"
}

resource "selectel_vpc_user_v2" "my-k8s-user" {
 password = random_password.my-k8s-user-pass.result
 name = var.username
 enabled  = true
}

resource "selectel_vpc_keypair_v2" "my-k8s-user-ssh" {
 public_key = file("~/.ssh/id_rsa.pub")
 user_id    = selectel_vpc_user_v2.my-k8s-user.id
 name = var.username
}

resource "selectel_vpc_role_v2" "my-k8s-role" {
 project_id = selectel_vpc_project_v2.my-k8s.id
 user_id    = selectel_vpc_user_v2.my-k8s-user.id
}

Output:

output "project_id" {
 value = selectel_vpc_project_v2.my-k8s.id
}

output "k8s_id" {
 value = selectel_mks_cluster_v1.k8s-cluster.id
}

output "user_name" {
 value = selectel_vpc_user_v2.my-k8s-user.name
}

output "user_pass" {
 value = selectel_vpc_user_v2.my-k8s-user.password
}

Launch:

$ env \
  TF_VAR_region=ru-3 \
  TF_VAR_username=diamon \
  TF_VAR_my_selectel_token=<token> \
  terraform plan -out planfile

$ terraform apply -input=false -auto-approve planfile

Step 3. We get the kubeconfig.

To programmatically download KUBECONFIG, you need to get a token from OpenStack:

openstack token issue -c id -f value > token

With this token, we query the Selectel Managed Kubernetes API. The k8s_id comes from the Terraform output (see the sketch after the request):

curl -XGET -H "x-auth-token: $(cat token)" "https://ru-3.mks.selcloud.ru/v1/clusters/$(cat k8s_id)/kubeconfig" -o kubeConfig.yaml
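The $(cat k8s_id) part assumes the cluster ID has already been dumped to a file. A minimal way to do that from the Terraform outputs defined above (a sketch; the output name must match your config):

$ terraform output -raw k8s_id > k8s_id   # on Terraform older than 0.14, drop -raw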

The kubeconfig can also be downloaded via the panel.
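Once the file is downloaded, a quick sanity check that kubectl can reach the cluster:

$ export KUBECONFIG=$PWD/kubeConfig.yaml
$ kubectl get nodes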

Step 4. Once the cluster is baked and we have access to it, we can add YAML on top to taste.

I prefer to add:

  • namespaces;
  • storage classes;
  • pod security policies, and more.

The storage class for Selectel can be taken from the official repository.

Since I initially chose a cluster in the ru-3a zone, I also need the StorageClass for that zone.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
 name: fast.ru-3a
 annotations:
   storageclass.kubernetes.io/is-default-class: "true"
provisioner: cinder.csi.openstack.org
parameters:
 type: fast.ru-3a
 availability: ru-3a
allowVolumeExpansion: true
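Assuming the manifest above is saved as storageclass-fast-ru-3a.yaml (the filename is arbitrary), apply it with:

$ kubectl apply -f storageclass-fast-ru-3a.yaml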

Step 5. Install a load balancer.

We will use nginx-ingress, which is standard in many setups. There are already plenty of guides on installing it, so we won't dwell on it.

$ helm repo add nginx-stable https://helm.nginx.com/stable
$ helm upgrade nginx-ingress nginx-stable/nginx-ingress -n ingress --install -f ../internal/K8S-cluster/ingress/values.yml

We wait about 3-4 minutes for it to receive an external IP.

Get an external IP:
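One way to watch for the IP and capture it into the EXTERNAL_IP variable used in the next step (a sketch; the service name depends on the chart and release name, here assumed to be nginx-ingress-nginx-ingress):

$ kubectl get svc -n ingress -w
$ export EXTERNAL_IP=$(kubectl get svc -n ingress nginx-ingress-nginx-ingress \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')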

Step 6. Install GitLab.

$ helm repo add gitlab https://charts.gitlab.io
$ helm upgrade gitlab gitlab/gitlab -n gitlab  --install -f gitlab/values.yml --set "global.hosts.domain=gitlab.$EXTERNAL_IP.nip.io"

Again, we wait for all the pods to come up.

kubectl get po -n gitlab
NAME                                      	READY   STATUS  	RESTARTS   AGE
gitlab-gitaly-0                           	0/1 	Pending 	0      	0s
gitlab-gitlab-exporter-88f6cc8c4-fl52d    	0/1 	Pending 	0      	0s
gitlab-gitlab-runner-6b6867c5cf-hd9dp     	0/1 	Pending 	0      	0s
gitlab-gitlab-shell-55cb6ccdb-h5g8x       	0/1 	Init:0/2	0      	0s
gitlab-migrations.1-2cg6n                 	0/1 	Pending 	0      	0s
gitlab-minio-6dd7d96ddb-zd9j6             	0/1 	Pending 	0      	0s
gitlab-minio-create-buckets.1-bncdp       	0/1 	Pending 	0      	0s
gitlab-postgresql-0                       	0/2 	Pending 	0      	0s
gitlab-prometheus-server-6cfb57f575-v8k6j 	0/2 	Pending 	0      	0s
gitlab-redis-master-0                     	0/2 	Pending 	0      	0s
gitlab-registry-6bd77b4b8c-pb9v9          	0/1 	Pending 	0      	0s
gitlab-registry-6bd77b4b8c-zgb6r          	0/1 	Init:0/2	0      	0s
gitlab-shared-secrets.1-pc7-5jgq4         	0/1 	Completed   0      	20s
gitlab-sidekiq-all-in-1-v1-54dbcf7f5f-qbq67   0/1 	Pending 	0      	0s
gitlab-task-runner-6fd6857db7-9x567       	0/1 	Pending 	0      	0s
gitlab-webservice-d9d4fcff8-hp8wl         	0/2 	Pending 	0      	0s
Waiting gitlab
./wait_gitlab.sh ../internal/gitlab/gitlab/.pods
waiting for pod...
waiting for pod...
waiting for pod...
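The wait_gitlab.sh script itself is not shown in the article; a hypothetical sketch of the idea (the real script takes a pod list file as an argument, this one simply polls the whole namespace):

#!/usr/bin/env bash
# Poll until no pod in the gitlab namespace is in a non-Running, non-Completed state.
until [ -z "$(kubectl get po -n gitlab --no-headers | grep -vE 'Running|Completed')" ]; do
  echo "waiting for pod..."
  sleep 10
done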

The pods are up:

Step 7. We get a GitLab token.

First, find out the root password:

kubectl get secret -n gitlab gitlab-gitlab-initial-root-password -o jsonpath='{.data.password}' | base64 --decode
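The next command expects the password in the GITLAB_PASSWORD variable, so it is convenient to stash it there right away (a sketch):

$ export GITLAB_PASSWORD=$(kubectl get secret -n gitlab gitlab-gitlab-initial-root-password -o jsonpath='{.data.password}' | base64 --decode)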

Now let's log in and get a token:

python3 get_gitlab_token.py root $GITLAB_PASSWORD http://gitlab.gitlab.$EXTERNAL_IP.nip.io
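The get_gitlab_token.py script is not included in the article; a hypothetical equivalent using GitLab's OAuth resource owner password flow could look like this (assuming that flow is enabled and the resulting access token is accepted by the GitLab Terraform provider):

$ export GITLAB_TOKEN=$(curl -s --request POST \
    "http://gitlab.gitlab.$EXTERNAL_IP.nip.io/oauth/token" \
    --data "grant_type=password&username=root&password=${GITLAB_PASSWORD}" \
    | python3 -c 'import sys, json; print(json.load(sys.stdin)["access_token"])')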

Step 8. We arrange the Git repositories into the right hierarchy using the GitLab provider.

cd ../internal/gitlab/hierarchy && terraform apply -input=false -auto-approve planfile

Unfortunately, the Terraform GitLab provider has a floating bug. If you hit it, you will have to delete the conflicting projects by hand so that tf.state can be fixed, and then re-run `$ make all`.
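One possible cleanup when that happens (a sketch, not the article's exact procedure) is to drop the conflicting resources from the Terraform state before re-applying:

$ terraform state list | grep gitlab_project
$ terraform state rm 'gitlab_project.conflicting_project'   # hypothetical resource address
$ make all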

Step 9. We push the local repositories to the server.

$ make push

[master (root-commit) b61d977]  Initial commit
 3 files changed, 46 insertions(+)
 create mode 100644 .gitignore
 create mode 100644 values.yml
Enumerating objects: 5, done.
Counting objects: 100% (5/5), done.
Delta compression using up to 8 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (5/5), 770 bytes | 770.00 KiB/s, done.
Total 5 (delta 0), reused 0 (delta 0)

Finished!


Conclusion

We have reached the point where everything can be managed declaratively from a local machine. Now I want to move all these tasks into CI and just press buttons. To do that, we need to move our local Terraform state into CI as well. How to do that is covered in the next part.

Subscribe to our blog so you don't miss new articles!

Source: habr.com
