Our implementation of Continuous Deployment to the customer's platform

We at True Engineering have set up a process for continuous delivery of updates to the customer's servers and want to share this experience.

To begin with, we developed an online system for the customer and deployed it in our own Kubernetes cluster. Now our high-load solution has moved to the customer's platform, for which we have set up a fully automatic Continuous Deployment process. Thanks to this, we have accelerated time-to-market: the delivery of changes to the production environment.

In this article, we will cover all the stages of the Continuous Deployment (CD) process, that is, the delivery of updates to the customer's platform:

  1. how the process starts,
  2. synchronization with the customer's Git repository,
  3. building the backend and the frontend,
  4. automatic deployment of the application to a test environment,
  5. automatic deployment to Prod.

Along the way, we will share the details of our configuration.

1. Start CD

Continuous Deployment starts with the developer pushing changes to the release branch of our Git repository.

Our application is based on a microservice architecture, and all of its components are stored in a single repository. Because of this, all microservices are built and deployed even if only one of them has changed.

We organized work through one repository for several reasons:

  • Ease of development: the application is actively developed, so you can work with all of the code at once.
  • A single CI/CD pipeline, which ensures that the application, as a single system, passes all tests and is delivered to the customer's production environment.
  • No confusion with versions: we do not have to maintain a map of microservice versions or describe the configuration of each microservice in separate Helm scripts.

2. Synchronization with the Git repository of the customer's source code

The changes we make are automatically synchronized with the customer's Git repository. On their side, the application build is configured to start whenever the branch is updated, followed by deployment to production. Both processes run in the customer's environment from their Git repository.

We cannot work with the customer's repository directly, because we need our own environments for development and testing. We use our own Git repository for these purposes; it is synchronized with the customer's repository. As soon as a developer pushes changes to the appropriate branch of our repository, GitLab immediately sends these changes to the customer.
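
In our setup this is handled by GitLab's built-in repository mirroring. For illustration only, a roughly equivalent approach as a dedicated CI job might look like the sketch below; CUSTOMER_REPO_URL and CUSTOMER_PUSH_TOKEN are hypothetical CI/CD variables, not our actual names.

sync-to-customer:
  stage: sync
  script:
    # add the customer's repository as a remote and push the current release branch to it
    - git remote add customer "https://oauth2:${CUSTOMER_PUSH_TOKEN}@${CUSTOMER_REPO_URL}"
    - git push customer "HEAD:refs/heads/${CI_COMMIT_REF_NAME}"
  only:
    - /^release.*$/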

After that, the build begins. It consists of several stages: building the backend and the frontend, testing, and delivery to production.

3. Building backend and frontend

Building the backend and the frontend are two parallel jobs that run in GitLab Runner. The build configuration itself is stored in the same repository.

A tutorial on writing a YAML build script is available in the GitLab documentation.

GitLab Runner takes the code from the required repository, builds the Java application with the build command, and pushes the result to the Docker registry. Here we build the backend and the frontend, producing Docker images that we push to the registry on the customer's side. To manage Docker images, we use a Gradle plugin.
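
As a rough sketch, the two parallel jobs in .gitlab-ci.yml could look like this; the Gradle task, image names and the DOCKER_REGISTRY / APP_VERSION variables are illustrative, not our exact configuration:

build-backend:
  stage: build
  script:
    # build the Java application and push its Docker image via the Gradle Docker plugin
    - ./gradlew build dockerPushImage

build-frontend:
  stage: build
  script:
    # build the frontend image and push it to the registry on the customer's side
    - docker build -t "${DOCKER_REGISTRY}/myapp-frontend:${APP_VERSION}" frontend/
    - docker push "${DOCKER_REGISTRY}/myapp-frontend:${APP_VERSION}"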

We keep the versions of our images in sync with the version of the release that will be pushed to the Docker registry. For smooth operation, we have made several adjustments:

1. Containers are not rebuilt between the test environment and production. We parameterized the application so that the same container works with all of its settings, environment variables and services, without rebuilding, both in the test environment and in production.

2. To update the application through Helm, you must specify its version. In our case, building the backend, building the frontend and updating the application are three different jobs, so it is important to use the same application version everywhere. For this we use data from the Git history, since the K8S cluster configuration and the application live in the same Git repository.

We get the application version from the output of the command:
git describe --tags --abbrev=7
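
For example, the version can be computed once in an early job and handed to the build and deploy jobs, so that all three use the same value. A sketch using a dotenv artifact (job and variable names are illustrative):

version:
  stage: prepare
  variables:
    GIT_DEPTH: "0"          # fetch the full history so that git describe can see the tags
  script:
    - echo "APP_VERSION=$(git describe --tags --abbrev=7)" > version.env
  artifacts:
    reports:
      dotenv: version.env    # APP_VERSION becomes available to later jobs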

4. Automatic deployment of all changes in the test environment (UAT)

The next step in the build script is to automatically update the K8S cluster. It runs only if the entire application has been built and all artifacts have been published to the Docker Registry. After that, the test environment is updated.

The cluster upgrade is triggered with a Helm update. If something does not go according to plan, Helm automatically rolls back all of its changes on its own; its work does not need to be supervised.

We ship the K8S cluster configuration together with the build, so the next step is to update it: configMaps, deployments, services, secrets and any other K8S configuration that we have changed.
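
In a CI job this step boils down to a single helm upgrade. A sketch, assuming a hypothetical release and chart name; the --atomic flag provides the automatic rollback behaviour described above:

deploy-uat:
  stage: deploy-uat
  script:
    # update the release in the UAT namespace with the UAT parameters and the shared version
    - >
      helm upgrade --install myapp ./helm/myapp
      --namespace uat
      --set global.env=uat
      --set app.version=${APP_VERSION}
      --atomic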

After that, Helm runs a rolling update of the application itself in the test environment, before the application is deployed to production. This is done so that users can manually test the business features that we have delivered to the test environment.

5. Automatic deployment of all changes to Prod

To deploy an update to the production environment, all that remains is to click one button in GitLab - and the containers are immediately delivered to the production environment.
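
In GitLab terms that button is simply a manual job. A minimal sketch, reusing the same hypothetical chart and version variable as above:

deploy-prod:
  stage: deploy-prod
  when: manual              # the "one button" in the GitLab pipeline
  script:
    # the same release and artifacts, only with production parameters
    - >
      helm upgrade --install myapp ./helm/myapp
      --namespace prod
      --set global.env=prod
      --set app.version=${APP_VERSION}
      --atomic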

The same application can work without rebuilding in different environments - test and production. We use the same artifacts without changing anything in the application, and we set the parameters from the outside.

Application settings are parameterized flexibly, depending on the environment in which the application will run. We moved all environment-specific settings outside: everything is parameterized through the K8S configuration and Helm parameters. When Helm deploys a build to the test environment, the test settings are applied to it, and in production the production settings are applied.

The hardest part was parameterizing all of the services and variables that depend on the environment, translating them into environment variables, and describing the environment parameters for Helm.

The application reads its settings from environment variables. Their values are set in the containers through a K8S configMap, which is templated with Go templates. For example, an environment variable holding a domain name can be set like this:

APP_EXTERNAL_DOMAIN: {{ (pluck .Values.global.env .Values.app.properties.app_external_domain | first) }}

.Values.global.env – this variable stores the name of the environment (prod, stage, UAT).
.Values.app.properties.app_external_domain – in this variable we set the desired domain in the values.yaml file.

When updating the application, Helm generates a configmap.yaml file from the templates and fills APP_EXTERNAL_DOMAIN with the value appropriate for the environment in which the update is running. The variable is then set in the container and read by the application, so each application environment ends up with its own value.
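
For illustration, the per-environment values that the pluck expression selects from could be laid out in values.yaml roughly like this (the domain names are invented):

global:
  env: uat                        # overridden per environment, e.g. --set global.env=prod

app:
  properties:
    app_external_domain:
      uat: app-uat.example.com
      stage: app-stage.example.com
      prod: app.example.com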

Relatively recently, Spring Cloud added support for K8S, including working with configMaps: Spring Cloud Kubernetes. Since the project is still developing actively and changing dramatically, we cannot use it in production yet. But we actively monitor its state and use it in DEV configurations. As soon as it stabilizes, we will switch to it from environment variables.

Summary

So, Continuous Deployment is set up and working. All updates occur at the touch of a button. Delivery of changes to the production environment is automatic. And, importantly, updates do not stop the system.

Plans for the future: automatic database migration

We have thought about how to upgrade the database and how to roll such changes back. After all, two different versions of the application run at the same time: the old one keeps working while the new one comes up, and we switch the old one off only once we are sure that the new version works. A database migration must therefore allow both versions of the application to work.

Therefore, we cannot simply rename a column or change other data in place. But we can create a new column, copy the data from the old column into it, and write triggers that, whenever data is updated, copy the update into the other column as well. After the new version of the application has been deployed successfully and the post-launch support period has passed, we can delete the old column and the trigger that is no longer needed.

If the new version of the application does not work correctly, we can roll back to the previous version, including the previous version of the database. In short, these changes will let us work with several versions of the application at the same time.

We plan to automate database migration through a K8S job by embedding it into the CD process. And we will certainly share that experience on Habré.
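
One possible shape of such a job, purely as a sketch (the migration image, its tag and the configMap name are invented; we have not settled on the exact tooling yet):

apiVersion: batch/v1
kind: Job
metadata:
  name: db-migration
spec:
  backoffLimit: 0                  # do not retry a failed migration automatically
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/myapp/db-migration:1.0.0
          envFrom:
            - configMapRef:
                name: app-config   # reuse the same environment configuration as the application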

Source: habr.com
