Good day
We run several cloud clusters, each with a large number of virtual machines, and all of it is hosted at Hetzner. Each cluster has one master machine; a snapshot is taken from it and automatically distributed to all virtual machines within the cluster.
This scheme does not let us use gitlab-runners normally: when many identically registered runners appear, a lot of problems come up. That pushed us to find a workaround and write this article / manual.
This is probably not best practice, but this solution seemed the most convenient and simple to us.
The tutorial follows below.
Required packages on the master machine:
- python
- git
- ssh key file
The general idea behind automating git pull on all virtual machines is that you need one machine with Ansible installed. From that machine, Ansible sends the git pull command and restarts the service that was updated. For this purpose we created a separate virtual machine outside the clusters and installed on it:
- python
- ansible
- gitlab-runner
On the organizational side: you need to register the gitlab-runner, run ssh-keygen, add this machine's public ssh key to .ssh/authorized_keys on the master machine, and open port 22 on the master machine for Ansible.
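As a sketch, the one-time setup above might look like this. The key path and host name here are assumptions for illustration, and the empty passphrase is for the demo only:

```shell
# 1. Register the runner (interactive; needs your GitLab URL and registration token):
#      gitlab-runner register
# 2. Generate a key pair for Ansible (demo path; use a protected key in real life):
rm -f /tmp/ansible_demo_key /tmp/ansible_demo_key.pub
ssh-keygen -t ed25519 -N "" -f /tmp/ansible_demo_key -q
# 3. Add the public key to the master machine (hypothetical host name):
#      ssh-copy-id -i /tmp/ansible_demo_key.pub root@master-machine
cat /tmp/ansible_demo_key.pub
```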
Now let's configure Ansible.
Since our goal is to automate everything we can, in the file /etc/ansible/ansible.cfg
we uncomment the line host_key_checking = False
so that Ansible doesn't ask for confirmation when connecting to new machines.
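After the change, the relevant fragment of /etc/ansible/ansible.cfg should look like this (the line ships commented out under the [defaults] section):

```ini
[defaults]
host_key_checking = False
```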
Next, you need to automatically generate an inventory file for ansible, from where it will pick up the ip of the machines on which you need to do git pull.
We generate this file using Hetzner's API, but you can take the list of hosts from AWS, Azure, or your own database (you do have an API somewhere that lists your running machines, right?).
For Ansible, the structure of the inventory file is very important, it should look like this:
[group]
ip-address
ip-address

[group2]
ip-address
ip-address
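For example, with two clusters the generated file might look like this (the addresses are made up, taken from the documentation range):

```
[group]
203.0.113.10
203.0.113.11

[group2]
203.0.113.20
203.0.113.21
```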
To generate such a file, let's write a simple script (let's call it vm_list):
#!/bin/bash
echo "[group]" > /etc/ansible/cloud_ip
"your CLI request that prints the IPs of the running machines in the cluster" >> /etc/ansible/cloud_ip
echo " " >> /etc/ansible/cloud_ip
# note the append (>>): a single > here would wipe the first group
echo "[group2]" >> /etc/ansible/cloud_ip
"your CLI request that prints the IPs of the running machines in the other cluster" >> /etc/ansible/cloud_ip
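As a runnable sketch of the same idea: here the provider call is stubbed out with a function that prints fixed addresses, and the output goes to /tmp instead of /etc/ansible/cloud_ip. Replace list_cluster_ips with your real call (e.g. the hcloud CLI or a request to your provider's API):

```shell
#!/bin/bash
# Stub standing in for "your CLI request": prints the IPs of one cluster.
# In real use, replace the bodies with your provider's CLI or API call.
list_cluster_ips() {
  case "$1" in
    group)  printf '203.0.113.10\n203.0.113.11\n' ;;
    group2) printf '203.0.113.20\n' ;;
  esac
}

out=/tmp/cloud_ip_demo   # the real script writes to /etc/ansible/cloud_ip
{
  echo "[group]"
  list_cluster_ips group
  echo ""
  echo "[group2]"
  list_cluster_ips group2
} > "$out"

cat "$out"
```

Grouping the commands in { ... } lets one redirection write the whole file, which avoids the easy mistake of truncating it with a stray > in the middle.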
It's time to check that Ansible works and gets along with our IP-address fetcher:
/etc/ansible/./vm_list && ansible -i /etc/ansible/cloud_ip -m shell -a 'hostname' group
The output should get the hostnames of the machines on which the command was executed.
A couple of words about the syntax:
- /etc/ansible/./vm_list - generates the list of machines
- -i - the absolute path to the inventory file
- -m - tells ansible to use the shell module
- -a - the argument; any command can be entered here
- group - the name of your cluster. If you need to run it on all clusters, change group to all
Go ahead - let's try to do a git pull on our virtual machines:
/etc/ansible/./vm_list && ansible -i /etc/ansible/cloud_ip -m shell -a 'cd /path/to/project && git pull' group
If the output shows "Already up to date" or the changes being pulled from the repository, then everything works.
Now for what this is all about.
Let's make our script run automatically on every commit to the master branch in gitlab.
First, let's tidy the script up and put it into an executable file (let's call it exec_pull):
#!/bin/bash
/etc/ansible/./vm_list && ansible -i /etc/ansible/cloud_ip -m shell -a "$@"
Now go to gitlab and create a .gitlab-ci.yml file in the project, with the following inside:
variables:
  GIT_STRATEGY: none
  VM_GROUP: group

stages:
  - pull
  - restart

run_exec_pull:
  stage: pull
  script:
    - /etc/ansible/exec_pull 'cd /path/to/project/'$CI_PROJECT_NAME' && git pull' $VM_GROUP
  only:
    - master

run_service_restart:
  stage: restart
  script:
    - /etc/ansible/exec_pull 'your_app_stop && your_app_start' $VM_GROUP
  only:
    - master
Everything is ready. Now:
- make a commit
- enjoy that everything works
When copying the .yml to other projects, you only need to change the name of the service to restart and the name of the cluster on which the ansible commands are executed.
Source: habr.com