Creating a CI/CD pipeline and automating work with Docker

I wrote my first websites in the late 90s. Back then it was very easy to get them up and running. There was an Apache server on some shared hosting, you could reach it over FTP by typing something like ftp://ftp.example.com into the browser's address bar, entering a username and password, and uploading your files to the server. Those were different times; everything was simpler than it is now.


Things have changed a lot over the past two decades. Sites have become more complex and must be built before being released to production. A single server has become many servers running behind load balancers, and the use of version control systems has become commonplace.

My personal project had a particular requirement: I knew I needed the ability to deploy the site to production by performing just one action — pushing code to the master branch on GitHub. I also knew that I didn't want to manage a huge Kubernetes cluster, use Docker Swarm, or maintain a fleet of servers with pods, agents, and all sorts of other complexities just to run my small web application. To make work as easy as possible, I needed to get acquainted with CI/CD.

If you have a small project (in our case, a Node.js project) and would like to learn how to automate its deployment while making sure that what is stored in the repository exactly matches what runs in production, I think you might find this article interesting.

Prerequisites

This article assumes basic knowledge of the command line and Bash scripting. You will also need Travis CI and Docker Hub accounts.

Goals

I wouldn't call this article a tutorial in the strict sense. It is more a record of what I learned, describing the process that works for me for testing and deploying code to production in one automated pass.

Here's what my workflow ended up looking like.

For code pushed to any branch of the repository other than master, the following actions are performed:

  • A project build starts on Travis CI.
  • All unit, integration, and end-to-end tests are run.

Only for code that ends up in master, the following is done:

  • All of the above, plus...
  • A Docker image is built from the current code, settings, and environment.
  • The image is pushed to Docker Hub.
  • A connection is made to the production server.
  • The image is pulled from Docker Hub onto the server.
  • The current container is stopped and a new one is started from the new image.

If you know absolutely nothing about Docker, images and containers, don't worry. I'll tell you all about it.

What is CI/CD?

The abbreviation CI/CD stands for "continuous integration / continuous deployment".

▍Continuous Integration

Continuous integration is the process by which developers frequently commit to a project's main source code repository (usually the master branch), while code quality is ensured through automated testing.

▍Continuous Deployment

Continuous deployment is the frequent, automated deployment of code to production. The second half of the CI/CD abbreviation is sometimes expanded as "continuous delivery". This is essentially the same as continuous deployment, except that continuous delivery implies that changes must be manually approved before the deployment process starts.

Getting started

The application on which I learned all of this is called TakeNote. It's a web project I'm working on for taking notes. At first I tried to make it a JAMstack project — just a front-end application without a server — to take advantage of the standard hosting and deployment options that Netlify offers. As the application grew in complexity, I needed to create a back-end for it, which meant I would have to form my own strategy for automated integration and automated deployment of the project.

In my case, the application is an Express server running in a Node.js environment, serving a single-page React application and supporting a secure server-side API. This architecture follows the strategy described in this full-stack authentication guide.

I consulted a friend who is an automation expert and asked him what I would need to do to make it all work the way I wanted. He gave me the idea for the automated workflow outlined in the Goals section of this article. Setting these goals meant I needed to figure out how to use Docker.

Docker

Docker is a tool that, thanks to containerization technology, makes it easy to distribute applications, as well as deploy and run them in the same environment even when the Docker platform itself runs on different systems. First, I needed to get the Docker command line tools (CLI). The Docker installation guide isn't very clear, but from it you can learn that the first step of the installation is to download Docker Desktop (for Mac or Windows).

Docker Hub is roughly what GitHub is for git repositories, or the npm registry for JavaScript packages: an online repository for Docker images. This is what Docker Desktop connects to.

So, in order to get started with Docker, you need to do two things:

  • Install Docker Desktop.
  • Create a Docker Hub account.

After that, you can verify that the Docker CLI is working by checking the Docker version:

docker -v

Next, log in to Docker Hub, entering your username and password when asked:

docker login

In order to use Docker, you must understand the concepts of images and containers.

▍Images

An image is a kind of blueprint containing instructions for building a container. This is an immutable snapshot of the file system and application settings. Developers can easily share images.

# List information about all images
docker images

This command outputs a table with the following header:

REPOSITORY     TAG     IMAGE ID     CREATED     SIZE
---

Below, we'll look at more command examples in the same format: first the command with a comment, then an example of what it might output.

▍Containers

A container is an executable package that contains everything needed to run an application. With this approach, an application always runs the same way regardless of the infrastructure, in an isolated and identical environment. The point is that instances of the same image can be launched in different environments and behave identically.

# List all containers
docker ps -a
CONTAINER ID     IMAGE     COMMAND     CREATED     STATUS     PORTS     NAMES
---

▍Tags

A tag identifies a specific version of an image.
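
For example, here is a minimal sketch of how tags behave, using a hypothetical image named myapp (the docker tag command is covered in the reference below):

# One image can carry several tags; tag the latest build with a version
docker tag myapp:latest myapp:1.0.0

# If no tag is given, Docker assumes :latest
docker run myapp   # equivalent to docker run myapp:latest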

▍Quick reference for Docker commands

Here is an overview of some commonly used Docker commands.

Command                Context             Action
---
docker build           Image               Build an image from a Dockerfile
docker tag             Image               Tag an image
docker images          Image               List images
docker run             Container           Run a container based on an image
docker push            Image               Push an image to a registry
docker pull            Image               Pull an image from a registry
docker ps              Container           List containers
docker system prune    Image/Container     Remove unused containers and images

▍Dockerfile

I know how to run the production application locally. I have a webpack config that builds the finished React app, and a command that starts a Node.js server on port 5000. It looks like this:

npm i         # install dependencies
npm run build # build the React app
npm run start # start the Node server

It should be noted that I don't have a sample application for this article, but any simple Node application will do for experimenting.
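
For orientation, here is a sketch of what the scripts section of package.json might look like in such a project (the webpack config names and the server entry point are my assumptions, not taken from the actual project):

{
  "scripts": {
    "build": "webpack --config webpack.prod.js",
    "start": "node server.js",
    "dev": "webpack-dev-server --config webpack.dev.js",
    "test": "jest"
  }
}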

In order to use a container, you need to give Docker instructions. This is done through a file called Dockerfile, located in the root directory of the project. At first, this file seems rather cryptic.

But what it contains is just a description, in special commands, of something like setting up a working environment. Here are some of those commands:

  • FROM — starts the file; specifies the base image from which the container is built.
  • COPY — copies files from a local source into the container.
  • WORKDIR — sets the working directory for the commands that follow.
  • RUN — runs commands.
  • EXPOSE — declares a port.
  • ENTRYPOINT — specifies the command to be executed.

A Dockerfile might look something like this:

# Load the base image
FROM node:12-alpine

# Copy files from the current directory into the app/ directory
COPY . app/

# Use app/ as the working directory
WORKDIR app/

# Install dependencies (npm ci is similar to npm i, but meant for automated builds)
RUN npm ci --only-production

# Build the client React app for production
RUN npm run build

# Listen on the specified port
EXPOSE 5000

# Start the Node server
ENTRYPOINT npm run start

Depending on the base image you choose, you may need to install additional dependencies. The fact is that some base images (like Node Alpine Linux) are designed to be as compact as possible. As a result, they may not include some of the programs you expect.
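
For example, if npm ci has to compile native modules with node-gyp, an Alpine-based image may need a build toolchain. A hedged sketch of the extra Dockerfile line (only needed if your dependencies actually require it):

# Add build tools that the compact Alpine image lacks
RUN apk add --no-cache python3 make g++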

▍Building, tagging and running a container

Once we have a Dockerfile, building and running the container locally is fairly simple. Before pushing an image to Docker Hub, it should be tested locally.

▍Building

First you need to build the image, specifying a name and, optionally, a tag (if no tag is specified, the system automatically assigns the latest tag).

# Build the image
docker build -t <image>:<tag> .

After running this command, you can watch Docker build the image.

Sending build context to Docker daemon   2.88MB
Step 1/9 : FROM node:12-alpine
 ---> ...the build steps run...
Successfully built 123456789123
Successfully tagged <image>:<tag>

Building can take a couple of minutes, depending on how many dependencies you have. After the build completes, you can run the docker images command and take a look at your new image.

REPOSITORY          TAG               IMAGE ID            CREATED              SIZE
<image>             latest            123456789123        About a minute ago   x.xxGB

▍Launch

The image has been created, which means you can run a container based on it. Since I want to access the application running in the container at localhost:5000, I set 5000 on the left side of the 5000:5000 pair in the following command; the right side is the container port.

# Run using local port 5000 and container port 5000
docker run -p 5000:5000 <image>:<tag>

Now that the container is created and running, you can use the command docker ps to look at information about this container (or you can use the command docker ps -a, which displays information about all containers, not just running ones).

CONTAINER ID        IMAGE               COMMAND                  CREATED              STATUS                      PORTS                    NAMES
987654321234        <image>             "/bin/sh -c 'npm run…"   6 seconds ago        Up 6 seconds                0.0.0.0:5000->5000/tcp   stoic_darwin

If you now go to localhost:5000, you will see the page of the running application, looking exactly as it does in the production environment.
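
When you are done experimenting, you can inspect and stop the container by its ID (the ID here is the one from the docker ps listing above):

# View the container's log output
docker logs 987654321234

# Stop the running container
docker stop 987654321234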

▍Tagging and publishing

In order to use one of the created images on the production server, we need to be able to download it from Docker Hub. That means first creating a repository for the project on Docker Hub. After that, we'll have a place to push the image. The image needs to be renamed so that its name starts with our Docker Hub username, followed by the repository name; any tag can go at the end. That is, images are named according to the scheme <username>/<repository>:<tag>.

Now you can build the image with a new name assigned to it and run the command docker push to push it to the Docker Hub repository.

docker build -t <username>/<repository>:<tag> .
docker tag <username>/<repository>:<tag> <username>/<repository>:latest
docker push <username>/<repository>:<tag>

# In practice, it might look like this:
docker build -t user/app:v1.0.0 .
docker tag user/app:v1.0.0 user/app:latest
docker push user/app:v1.0.0

If everything goes well, the image will be available on Docker Hub and can be easily uploaded to the server or shared with other developers.

Next Steps

By now we've verified that the application runs locally as a Docker container, and we've pushed the image to Docker Hub. All this means we've already made good progress toward our goal. Now we need to solve two more problems:

  • Setting up a CI tool for testing and deploying code.
  • Setting up the production server so that it can download and run our code.

In our case, we use Travis CI as the CI/CD solution and DigitalOcean as the server.

It should be noted that you can use a different combination of services here. For example, instead of Travis CI you could use CircleCI or GitHub Actions, and instead of DigitalOcean — AWS or Linode.

We decided to work with Travis CI, and I already have something set up in this service. Therefore, now I will briefly talk about how to prepare it for work.

Travis CI

Travis CI is a tool for testing and deploying code. I don't want to go into the details of setting up Travis CI, as each project is unique and it won't do much good. But I will cover the basics to get you started if you decide to use Travis CI. Whatever you choose - Travis CI, CircleCI, Jenkins, or something else, similar configuration methods will apply everywhere.

To get started with Travis CI, go to the Travis CI website and create an account. Then integrate Travis CI with your GitHub account. When setting up, you'll need to specify the repository you want to automate and enable access to it. (I use GitHub, but I'm sure Travis CI can also integrate with Bitbucket, GitLab, and similar services.)

Every time Travis CI is started, a server is started that executes the commands specified in the configuration file, including deploying the appropriate branches of the repository.

▍Job lifecycle

The Travis CI configuration file, called .travis.yml and stored in the project root directory, supports the concept of job lifecycle events. Here they are, listed in the order in which they occur:

  • apt addons
  • cache components
  • before_install
  • install
  • before_script
  • script
  • before_cache
  • after_success or after_failure
  • before_deploy
  • deploy
  • after_deploy
  • after_script

▍Testing

In the config file, I configure the virtual Travis CI server: I chose Node 12 as the language and told the system to install the dependencies needed to use Docker.

Everything listed in .travis.yml will be executed for all pull requests and all branches of the repository, unless otherwise specified. This is useful because it means we can test all code that goes into the repository and know whether it's ready to be merged into master and whether it will break the project's build. In this global configuration, I install everything locally, run the webpack dev server in the background (a peculiarity of my workflow), and run the tests.

If you want your repository to display code coverage badges, here is a quick tutorial on using Jest, Travis CI, and Coveralls to collect and display this information.
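
If you go that route, the configuration might boil down to something like the following (a sketch assuming Jest with coverage enabled and the coveralls npm package installed; package names may differ in your setup):

script:
  # Run the tests and collect coverage into coverage/lcov.info
  - npm run test -- --coverage

after_success:
  # Send the lcov report to Coveralls
  - cat ./coverage/lcov.info | ./node_modules/.bin/coveralls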

So here is the content of the file .travis.yml:

# Set the language
language: node_js

# Set the Node.js version
node_js:
  - '12'

services:
  # Use the Docker command line
  - docker

install:
  # Install dependencies for the tests
  - npm ci

before_script:
  # Start the server and client for the tests
  - npm run dev &

script:
  # Run the tests
  - npm run test

This is where the actions that are performed for all branches of the repository and for pull requests end.

▍Deployment

Assuming all the automated tests completed successfully, we can optionally deploy the code to the production server. Since we only want to do this for code in the master branch, we give the system the appropriate instruction in the deployment settings. Before you try to use the code we'll look at next in your own project, a warning: you must have an actual script that gets called for deployment.

deploy:
  # Build the Docker container and push it to Docker Hub
  provider: script
  script: bash deploy.sh
  on:
    branch: master

The deployment script does two things:

  • Build, tag, and push the image to Docker Hub using the CI tool (in our case, Travis CI).
  • Pull the image on the server, stop the old container, and start a new one (in our case, the server runs on DigitalOcean).

First, you need to set up an automated process for building, tagging, and pushing the image to Docker Hub. This is very similar to what we already did manually, except that here we need a strategy for assigning unique tags to images and for automating the login. I struggled with some details of the deployment script, such as the tagging strategy, logging in, encoding SSH keys, and establishing an SSH connection. But fortunately, my boyfriend is very good with bash, as with many other things, and he helped me write this script.

So, the first part of the script pushes the image to Docker Hub. Doing this is quite simple. The tagging scheme I used combines the git hash and the git tag, if one exists. This ensures the tag is unique and makes it easy to identify the build it is based on. DOCKER_USERNAME and DOCKER_PASSWORD are user-defined environment variables that can be set through the Travis CI interface; Travis CI handles sensitive data automatically so it doesn't fall into the wrong hands.
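
For example, here is roughly what that versioning command produces (the tag and hash below are hypothetical):

# Git hash plus tag, exactly as used in the script
git describe --always --abbrev --tags --long
v1.0.0-3-gabc1234

If the repository has no tags yet, --always falls back to just the abbreviated commit hash, e.g. abc1234.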

Here is the first part of the script deploy.sh.

#!/bin/sh
set -e # Stop the script if an error occurs

IMAGE="<username>/<repository>"                             # The Docker image
GIT_VERSION=$(git describe --always --abbrev --tags --long) # Git hash and tags

# Build and tag the image
docker build -t ${IMAGE}:${GIT_VERSION} .
docker tag ${IMAGE}:${GIT_VERSION} ${IMAGE}:latest

# Log in to Docker Hub and push the image
echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
docker push ${IMAGE}:${GIT_VERSION}

What the second part of the script looks like depends entirely on which host you use and how you connect to it. In my case, since I use DigitalOcean, the server is reached with doctl commands. With AWS, the aws utility would be used, and so on.

Setting up the server wasn't particularly difficult: I set up a droplet from a base image. It should be noted that the system I chose requires a one-time manual installation of Docker and a one-time manual start of Docker. I used Ubuntu 18.04, so if you're also on Ubuntu you can just follow this simple guide.

I'm not covering service-specific commands here, since that aspect can vary greatly from case to case. I'll just give the general plan of action performed after connecting via SSH to the server where the project will be deployed:

  • Find the currently running container and stop it.
  • Start a new container in the background.
  • Bind the server's port 80 to the container — this lets you reach the site at an address like example.com, without specifying a port, rather than at example.com:5000.
  • Finally, remove all old containers and images.

Here is the continuation of the script.

# Find the ID of the running container
CONTAINER_ID=$(docker ps | grep takenote | cut -d" " -f1)

# Stop the old container, start a new one, clean up the system
docker stop ${CONTAINER_ID}
docker run --restart unless-stopped -d -p 80:5000 ${IMAGE}:${GIT_VERSION}
docker system prune -a -f
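
As an aside, an alternative to grepping the docker ps output is to start the container under a fixed name and stop it by that name on the next deploy. A sketch (the name takenote mirrors the grep above):

# Stop and remove the previous container by name (ignore errors if none exists)
docker stop takenote || true
docker rm takenote || true

# Start the new container under the same fixed name
docker run --restart unless-stopped -d -p 80:5000 --name takenote ${IMAGE}:${GIT_VERSION}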

Some things to watch out for

It's possible that when connecting to the server via SSH from Travis CI, you'll see a warning that prevents the installation from continuing, because the system waits for a user response.

The authenticity of host '<hostname> (<IP address>)' can't be established.
RSA key fingerprint is <key fingerprint>.
Are you sure you want to continue connecting (yes/no)?

I learned that a string key can be base64-encoded to store it in a form that is convenient and reliable to work with. At the install stage, you can decode the public key and write it to the known_hosts file to get rid of the error above.

echo <public key> | base64 # outputs <the public key encoded in base64>

In practice, this command might look like this:

echo "123.45.67.89 ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAklOUpkDHrfHY17SbrmTIpNLTGK9Tjom/BWDSU
GPl+nafzlHDTYW7hdI4yZ5ew18JH4JW9jbhUFrviQzM7xlELEVf4h9lFX5QVkbPppSwg0cda3
Pbv7kOdJ/MTyBlWXFCR+HAo3FXRitBqxiX1nKhXpHAZsMciLq8V6RjsNAQwdsdMFvSlVK/7XA
t3FaoJoAsncM1Q9x5+3V0Ww68/eIFmb1zuUFljQJKprrX88XypNDvjYNby6vw/Pb0rwert/En
mZ+AW4OZPnTPI89ZPmVMLuayrD2cE86Z/il8b+gw3r3+1nKatmIkjn2so1d01QraTlMqVSsbx
NrRFi9wrf+M7Q== you@example.com" | base64

And here is what it outputs — a base64-encoded string:

MTIzLjQ1LjY3Ljg5IHNzaC1yc2EgQUFBQUIzTnphQzF5YzJFQUFBQUJJd0FBQVFFQWtsT1Vwa0RIcmZIWTE3U2JybVRJcE5MVEdLOVRqb20vQldEU1UKR1BsK25hZnpsSERUWVc3aGRJNHlaNWV3MThKSDRKVzlqYmhVRnJ2aVF6TTd4bEVMRVZmNGg5bEZYNVFWa2JQcHBTd2cwY2RhMwpQYnY3a09kSi9NVHlCbFdYRkNSK0hBbzNGWFJpdEJxeGlYMW5LaFhwSEFac01jaUxxOFY2UmpzTkFRd2RzZE1GdlNsVksvN1hBCnQzRmFvSm9Bc25jTTFROXg1KzNWMFd3NjgvZUlGbWIxenVVRmxqUUpLcHJyWDg4WHlwTkR2allOYnk2dncvUGIwcndlcnQvRW4KbVorQVc0T1pQblRQSTg5WlBtVk1MdWF5ckQyY0U4NlovaWw4YitndzNyMysxbkthdG1Ja2puMnNvMWQwMVFyYVRsTXFWU3NieApOclJGaTl3cmYrTTdRPT0geW91QGV4YW1wbGUuY29tCg==

And here is the command mentioned above, used at the install stage:

install:
  - echo <base64-encoded public key> | base64 -d >> $HOME/.ssh/known_hosts

The same approach can be used with a private key when establishing the connection, since you may well need one to access the server. When working with the key, you just need to make sure it is stored securely in a Travis CI environment variable and never printed anywhere.
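
In practice this amounts to decoding the key during the install stage; a sketch, assuming the base64-encoded private key is stored in an environment variable named SSH_PRIVATE_KEY (the variable name is mine, not a Travis CI convention):

install:
  # Decode the private key and restrict its permissions, otherwise ssh refuses to use it
  - echo "${SSH_PRIVATE_KEY}" | base64 -d > $HOME/.ssh/id_rsa
  - chmod 600 $HOME/.ssh/id_rsa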

Another thing to note is that you may need to run the entire deployment script as a single line, for example with doctl. This may require some extra effort.

doctl compute ssh <droplet> --ssh-command "all the commands go here && here"

TLS/SSL and load balancing

After I did all of the above, the last remaining problem was that the server didn't have SSL. Since I'm using a Node.js server, getting an Nginx reverse proxy and Let's Encrypt to work would require quite a bit of tinkering.

I really didn't want to do all that SSL configuration by hand, so I simply created a load balancer and pointed DNS at it. On DigitalOcean, for example, creating an auto-renewing certificate on the load balancer is a simple, free, and fast procedure. This approach has the added benefit of making it very easy to set up SSL across multiple servers behind the load balancer if needed. The servers themselves don't have to "think" about SSL at all and can use port 80 as usual. Configuring SSL on a load balancer is much easier and more convenient than the alternative methods.

Now you can close all ports on the server that accept incoming connections, except port 80, used to communicate with the load balancer, and port 22 for SSH. Any attempt to reach the server directly on any port other than these two will fail.
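
On Ubuntu, for example, this can be done with the standard ufw firewall (a one-time manual step on the server; DigitalOcean's cloud firewalls are an alternative):

# Deny all incoming traffic by default, then allow only SSH and HTTP
sudo ufw default deny incoming
sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw enable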

Results

After doing everything described in this article, neither the Docker platform nor the concept of automated CI/CD pipelines scares me anymore. I was able to set up a continuous integration pipeline in which code is tested before it goes to production and then automatically deployed to the server. All this is still relatively new to me, and I'm sure there are ways to make my automated workflow better and more efficient. If you have ideas about that, let me know. I hope this article helps you in your endeavors; I'd like to believe that reading it taught you as much as I learned while figuring out everything described here.

P.S. Our marketplace has a Docker image that installs in one click, so you can test how containers work on a VPS. All new customers get 3 days of free testing.

Dear readers! Do you use CI/CD technologies in your projects?


Source: habr.com
