Trying new tools for building and automating deployment in Kubernetes

Hello! A lot of cool automation tools have been released lately, both for building Docker images and for deploying to Kubernetes. So I decided to play around with Gitlab, properly study its capabilities and, of course, set up a pipeline.

This project was inspired by kubernetes.io, which is generated automatically from source code; for each pull request sent, a robot automatically generates a preview version of the site with your changes and provides a link for viewing.

I tried to build a similar process from scratch, built entirely on Gitlab CI and the free tools I am used to using for deploying applications to Kubernetes. Today I will finally tell you more about them.

The article will cover tools such as:
Hugo, qbec, kaniko, git-crypt and GitLab CI with the creation of dynamic environments.

Contents

  1. Getting to know Hugo
  2. Preparing the Dockerfile
  3. Getting to know kaniko
  4. Introduction to qbec
  5. Trying Gitlab-runner with Kubernetes-executor
  6. Deploy Helm charts with qbec
  7. Introduction to git-crypt
  8. Create a toolbox image
  9. Our first pipeline and assembly of images by tags
  10. Deploy automation
  11. Artifacts and assembly when pushing to master
  12. Dynamic environments
  13. Review Apps

1. Getting to know Hugo

As an example of our project, we will try to create a documentation publishing site built on Hugo. Hugo is a static content generator.

For those who are not familiar with static generators, a little background. Unlike regular site engines with a database and some PHP, which generate pages on the fly when a user requests them, static generators work a little differently. They take source code - usually a set of files in Markdown markup and theme templates - and compile them into a completely finished site.

That is, the output is a directory structure and a set of generated html files that you can simply upload to any cheap hosting and get a working site.
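For example, for a freshly generated site the output might look roughly like this (the exact contents depend on your theme and content):

public/
├── categories/
├── css/
├── index.html
├── index.xml
├── sitemap.xml
└── tags/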

You can install Hugo locally and try it out:

Initializing the new site:

hugo new site docs.example.org

And at the same time the git repository:

cd docs.example.org
git init

So far our site is pristine, so for something to appear on it we first need to connect a theme. A theme is just a set of templates and rules by which our site is generated.

As a theme we will use Learn, which, in my opinion, is the best suited for a site with documentation.

Note in particular that we do not need to keep the theme files in our project's repository; instead, we can simply connect the theme using a git submodule:

git submodule add https://github.com/matcornic/hugo-theme-learn themes/learn

Thus, our repository will contain only the files directly related to our project, while the connected theme remains a link to a specific repository and a commit in it. That is, it can always be pulled from the original source, with no fear of incompatible changes.
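After this command, git records the reference in a .gitmodules file, roughly like this (the exact pinned commit is stored in the git index, not in this file):

[submodule "themes/learn"]
  path = themes/learn
  url = https://github.com/matcornic/hugo-theme-learn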

Let's edit the config, config.toml:

baseURL = "http://docs.example.org/"
languageCode = "en-us"
title = "My Docs Site"
theme = "learn"

Already at this stage, you can run:

hugo server

Then, at http://localhost:1313/, check our newly created site; all changes made in the directory automatically refresh the open page in the browser - very convenient!

Let's try to create a title page in content/_index.md:

# My docs site

## Welcome to the docs!

You will be very smart :-)

Screenshot of the newly created page


To generate a site, just run:

hugo

The contents of the public/ directory will be your site.
By the way, let's immediately add it to .gitignore:

echo /public > .gitignore

Don't forget to commit our changes:

git add .
git commit -m "New site created"

2. Preparing the Dockerfile

It's time to define the structure of our repository. Usually I use something like:

.
├── deploy
│   ├── app1
│   └── app2
└── dockerfiles
    ├── image1
    └── image2

  • dockerfiles/ - contains directories with Dockerfiles and everything needed to build our docker images.
  • deploy/ - contains directories for deploying our applications to Kubernetes.

Thus, we will create our first Dockerfile at dockerfiles/website/Dockerfile:

FROM alpine:3.11 as builder
ARG HUGO_VERSION=0.62.0
RUN wget -O- https://github.com/gohugoio/hugo/releases/download/v${HUGO_VERSION}/hugo_${HUGO_VERSION}_Linux-64bit.tar.gz | tar -xz -C /usr/local/bin
ADD . /src
RUN hugo -s /src

FROM alpine:3.11
RUN apk add --no-cache darkhttpd
COPY --from=builder /src/public /var/www
ENTRYPOINT [ "/usr/bin/darkhttpd" ]
CMD [ "/var/www" ]

As you can see, the Dockerfile contains two FROM statements; this feature is called multi-stage build and allows you to exclude everything unnecessary from the final docker image.
Thus, the final image will contain only darkhttpd (a lightweight HTTP server) and public/ - the content of our statically generated site.
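If you want a quick local smoke test of this image before automating anything (assuming you have a docker daemon available; running as root inside the container, darkhttpd should listen on port 80 by default):

docker build -t website -f dockerfiles/website/Dockerfile .
docker run --rm -p 8080:80 website
# then open http://localhost:8080/ in the browser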

Don't forget to commit our changes:

git add dockerfiles/website
git commit -m "Add Dockerfile for website"

3. Getting to know kaniko

As the builder of docker images I decided to use kaniko, since it does not require a docker daemon: the build can be carried out on any machine, and the cache can be stored directly in the registry, which removes the need for full-fledged persistent storage.

To build an image, just run a container with the kaniko executor and pass it the current build context; this can also be done locally, via docker:

docker run -ti --rm \
  -v $PWD:/workspace \
  -v ~/.docker/config.json:/kaniko/.docker/config.json:ro \
  gcr.io/kaniko-project/executor:v0.15.0 \
  --cache \
  --dockerfile=dockerfiles/website/Dockerfile \
  --destination=registry.gitlab.com/kvaps/docs.example.org/website:v0.0.1

Where registry.gitlab.com/kvaps/docs.example.org/website is the name of your docker image; after the build it will automatically be pushed to the docker registry.

The --cache parameter caches layers in the docker registry; for this example they will be stored in registry.gitlab.com/kvaps/docs.example.org/website/cache, but you can specify a different path with the --cache-repo parameter.
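For example, a hypothetical dedicated cache repository could be specified by adding the flag:

--cache-repo=registry.gitlab.com/kvaps/docs.example.org/cache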

Screenshot of docker-registry


4. Introduction to qbec

Qbec is a deployment tool that allows you to declaratively describe your application manifests and deploy them to Kubernetes. Using Jsonnet as the main syntax makes it very easy to describe the differences for multiple environments, and almost completely eliminates code repetition.

This is especially useful when you need to deploy an application to several clusters with different parameters and want to describe them declaratively in Git.

Qbec also lets you render Helm charts, passing them the necessary parameters and then operating on them just like regular manifests, including applying various mutations to them - which, in turn, eliminates the need for ChartMuseum. That is, you can store and render charts directly from git, where they belong.

As I said before, we will store all deployments in the deploy/ directory:

mkdir deploy
cd deploy

Let's initialize our first application:

qbec init website
cd website

Now the structure of our application looks like this:

.
├── components
├── environments
│   ├── base.libsonnet
│   └── default.libsonnet
├── params.libsonnet
└── qbec.yaml

Let's look at the file qbec.yaml:

apiVersion: qbec.io/v1alpha1
kind: App
metadata:
  name: website
spec:
  environments:
    default:
      defaultNamespace: docs
      server: https://kubernetes.example.org:8443
  vars: {}

Here we are primarily interested in spec.environments: qbec has already created the default environment for us, taking the server address and namespace from our current kubeconfig.
Now when deploying to the default environment, qbec will always deploy only to the specified Kubernetes cluster and the specified namespace, i.e. you no longer have to switch between contexts and namespaces to perform a deployment.
If necessary, you can always update the settings in this file.

All your environments are described in qbec.yaml, while the params.libsonnet file says where to take the parameters for them.

Next we see two directories:

  • components/ - all manifests for our application will be stored here; they can be described both in jsonnet and as regular yaml files
  • environments/ - here we will describe all the variables (parameters) for our environments.

By default we have two files:

  • environments/base.libsonnet - it will contain common parameters for all environments
  • environments/default.libsonnet - contains parameters redefined for the environment default

Let's open environments/base.libsonnet and add parameters for our first component there:

{
  components: {
    website: {
      name: 'example-docs',
      image: 'registry.gitlab.com/kvaps/docs.example.org/website:v0.0.1',
      replicas: 1,
      containerPort: 80,
      servicePort: 80,
      nodeSelector: {},
      tolerations: [],
      ingressClass: 'nginx',
      domain: 'docs.example.org',
    },
  },
}

Let's also create our first component components/website.jsonnet:

local env = {
  name: std.extVar('qbec.io/env'),
  namespace: std.extVar('qbec.io/defaultNs'),
};
local p = import '../params.libsonnet';
local params = p.components.website;

[
  {
    apiVersion: 'apps/v1',
    kind: 'Deployment',
    metadata: {
      labels: { app: params.name },
      name: params.name,
    },
    spec: {
      replicas: params.replicas,
      selector: {
        matchLabels: {
          app: params.name,
        },
      },
      template: {
        metadata: {
          labels: { app: params.name },
        },
        spec: {
          containers: [
            {
              name: 'darkhttpd',
              image: params.image,
              ports: [
                {
                  containerPort: params.containerPort,
                },
              ],
            },
          ],
          nodeSelector: params.nodeSelector,
          tolerations: params.tolerations,
          imagePullSecrets: [{ name: 'regsecret' }],
        },
      },
    },
  },
  {
    apiVersion: 'v1',
    kind: 'Service',
    metadata: {
      labels: { app: params.name },
      name: params.name,
    },
    spec: {
      selector: {
        app: params.name,
      },
      ports: [
        {
          port: params.servicePort,
          targetPort: params.containerPort,
        },
      ],
    },
  },
  {
    apiVersion: 'extensions/v1beta1',
    kind: 'Ingress',
    metadata: {
      annotations: {
        'kubernetes.io/ingress.class': params.ingressClass,
      },
      labels: { app: params.name },
      name: params.name,
    },
    spec: {
      rules: [
        {
          host: params.domain,
          http: {
            paths: [
              {
                backend: {
                  serviceName: params.name,
                  servicePort: params.servicePort,
                },
              },
            ],
          },
        },
      ],
    },
  },
]

In this file we have described three Kubernetes entities at once: a Deployment, a Service and an Ingress. If desired, we could move them into separate components, but at this stage one component is enough for us.

The jsonnet syntax is very similar to regular json; in fact, regular json is already valid jsonnet, so at first it may be easier to use online services like yaml2json to convert your usual yaml to json. Alternatively, if your components do not contain any variables, they can be described as regular yaml files.
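For illustration, here is the same tiny manifest in yaml:

apiVersion: v1
kind: Namespace
metadata:
  name: docs

and in jsonnet (field names may be unquoted, strings may use single quotes, and trailing commas are allowed):

{
  apiVersion: 'v1',
  kind: 'Namespace',
  metadata: { name: 'docs' },
}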

When working with jsonnet, I strongly advise you to install a plugin for your editor.

For example, the vim plugin vim-jsonnet turns on syntax highlighting and automatically runs jsonnet fmt on every save (it requires jsonnet to be installed).
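If you happen to manage your plugins with vim-plug, the setup is a one-liner in your .vimrc (a sketch, assuming vim-plug):

Plug 'google/vim-jsonnet'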

Everything is ready, now we can start the deployment:

To see what we got, let's run:

qbec show default

At the output, you will see the rendered yaml manifests that will be applied to the default cluster.

Great, now apply:

qbec apply default

The output always shows what will be done in your cluster; qbec asks you to accept the changes, and by typing y you can confirm your intentions.

Done, our app is now deployed!

If changes are made, you can always run:

qbec diff default

to see how these changes will affect the current deployment.

Don't forget to commit our changes:

cd ../..
git add deploy/website
git commit -m "Add deploy for website"

5. Trying Gitlab-runner with Kubernetes-executor

Until recently I used only the regular gitlab-runner on a pre-prepared machine (an LXC container) with a shell or docker executor. Initially, we had several such runners defined globally in our gitlab. They built docker images for all projects.

But as practice has shown, this setup is not ideal, both in terms of practicality and security. It is much better, and ideologically more correct, to have separate runners deployed for each project, or even for each environment.

Fortunately, this is not a problem at all, since now we will deploy gitlab-runner directly as part of our project right in Kubernetes.

Gitlab provides a ready-made helm chart for deploying gitlab-runner to Kubernetes. So all you need is the registration token for our project from Settings -> CI/CD -> Runners, and to pass it to helm:

helm repo add gitlab https://charts.gitlab.io

helm install gitlab-runner \
  --set gitlabUrl=https://gitlab.com \
  --set runnerRegistrationToken=yga8y-jdCusVDn_t4Wxc \
  --set rbac.create=true \
  gitlab/gitlab-runner

Where:

  • https://gitlab.com is the address of your Gitlab server.
  • yga8y-jdCusVDn_t4Wxc - registration token for your project.
  • rbac.create=true - grants the runner the privileges it needs to be able to create pods for performing our jobs with the kubernetes-executor.

If everything is done correctly, you should see the registered runner in the Runners section of your project settings.

Screenshot of the added runner


Is it that simple? - yes, it's that simple! No more hassle with manually registering runners, from now on runners will be created and destroyed automatically.

6. Deploy Helm charts with qbec

Since we have decided to consider gitlab-runner part of our project, it's time to describe it in our Git repository.

We could describe it as one more component of the website application, but in the future we plan to deploy different copies of website very often, unlike gitlab-runner, which will be deployed only once per Kubernetes cluster. So let's initialize a separate application for it:

cd deploy
qbec init gitlab-runner
cd gitlab-runner

This time we will not describe Kubernetes entities manually, but take a ready-made Helm chart. One of the benefits of qbec is the ability to render Helm charts directly from a Git repository.

Let's connect it using a git submodule:

git submodule add https://gitlab.com/gitlab-org/charts/gitlab-runner vendor/gitlab-runner

The vendor/gitlab-runner directory now contains the repository with the chart for gitlab-runner.

Other repositories can be connected in a similar way, for example, the entire repository with the official charts: https://github.com/helm/charts
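For instance, connecting it could look like this (vendor/charts is just a suggested path):

git submodule add https://github.com/helm/charts vendor/charts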

Let's describe the component components/gitlab-runner.jsonnet:

local env = {
  name: std.extVar('qbec.io/env'),
  namespace: std.extVar('qbec.io/defaultNs'),
};
local p = import '../params.libsonnet';
local params = p.components.gitlabRunner;

std.native('expandHelmTemplate')(
  '../vendor/gitlab-runner',
  params.values,
  {
    nameTemplate: params.name,
    namespace: env.namespace,
    thisFile: std.thisFile,
    verbose: true,
  }
)

As the first argument to expandHelmTemplate we pass the path to the chart, then params.values, which we take from the environment parameters, and then comes an object with:

  • nameTemplate - the release name
  • namespace - the namespace passed to helm
  • thisFile - a required parameter that passes the path to the current file
  • verbose - shows the helm template command with all its arguments when rendering the chart

Now let's describe the parameters for our component in environments/base.libsonnet:

local secrets = import '../secrets/base.libsonnet';

{
  components: {
    gitlabRunner: {
      name: 'gitlab-runner',
      values: {
        gitlabUrl: 'https://gitlab.com/',
        rbac: {
          create: true,
        },
        runnerRegistrationToken: secrets.runnerRegistrationToken,
      },
    },
  },
}

Note that we fetch runnerRegistrationToken from an external file, secrets/base.libsonnet. Let's create it:

{
  runnerRegistrationToken: 'yga8y-jdCusVDn_t4Wxc',
}

Let's check if everything works:

qbec show default

If everything is in order, we can remove the release we deployed earlier via Helm:

helm uninstall gitlab-runner

and deploy it again, this time through qbec:

qbec apply default

7. Introduction to git-crypt

git-crypt is a tool that allows you to set up transparent encryption for your repository.

At the moment, our directory structure for gitlab-runner looks like this:

.
├── components
│   └── gitlab-runner.jsonnet
├── environments
│   ├── base.libsonnet
│   └── default.libsonnet
├── params.libsonnet
├── qbec.yaml
├── secrets
│   └── base.libsonnet
└── vendor
    └── gitlab-runner (submodule)

But storing secrets in Git in their raw form is not safe, so we need to encrypt them properly.

Admittedly, for the sake of a single variable it doesn't always make sense: you could pass secrets to qbec through the environment variables of your CI system. But it's worth noting that more complex projects can contain many more secrets, and passing them all through environment variables would be extremely difficult.

Besides, in that case I would not have been able to tell you about such a wonderful tool as git-crypt.

git-crypt is also convenient in that it lets you keep the entire history of your secrets, as well as compare, merge and resolve conflicts just as we are used to doing with Git.

The first thing to do after installing git-crypt is to generate keys for our repository:

git crypt init

If you have a PGP key, then you can immediately add yourself as a collaborator for this project:

git-crypt add-gpg-user [email protected]

This way you can always decrypt this repository using your private key.

If you do not have a PGP key and don't plan to get one, you can go the other way and export the project key:

git crypt export-key /path/to/keyfile

Thus, anyone who possesses an exported keyfile will be able to decrypt your repository.
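For example, after cloning the repository, decrypting it is a single command:

git crypt unlock /path/to/keyfile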

It's time to set up our first secret.
Let me remind you that we are still in the deploy/gitlab-runner/ directory, where we have a secrets/ directory; let's encrypt all the files in it. To do this, we will create a file secrets/.gitattributes with content like this:

* filter=git-crypt diff=git-crypt
.gitattributes !filter !diff

As can be seen from the content, all files matching the mask * will be run through git-crypt, with the exception of .gitattributes itself.

We can check this by running:

git crypt status -e

The output is a list of all files in the repository for which encryption is enabled.

That's it, now we can safely commit our changes:

cd ../..
git add .
git commit -m "Add deploy for gitlab-runner"

To lock the repository, it is enough to run:

git crypt lock

and all encrypted files will immediately turn into binary blobs that are impossible to read.
To decrypt the repository, run:

git crypt unlock

8. Create a toolbox image

A toolbox image is an image with all the tools that we will use to deploy our project. It will be used by the gitlab runner to perform typical deployment tasks.

Everything is simple here, we create a new dockerfiles/toolbox/Dockerfile with content like this:

FROM alpine:3.11

RUN apk add --no-cache git git-crypt

RUN QBEC_VER=0.10.3 \
 && wget -O- https://github.com/splunk/qbec/releases/download/v${QBEC_VER}/qbec-linux-amd64.tar.gz \
      | tar -C /tmp -xzf - \
 && mv /tmp/qbec /tmp/jsonnet-qbec /usr/local/bin/

RUN KUBECTL_VER=1.17.0 \
 && wget -O /usr/local/bin/kubectl \
      https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VER}/bin/linux/amd64/kubectl \
 && chmod +x /usr/local/bin/kubectl

RUN HELM_VER=3.0.2 \
 && wget -O- https://get.helm.sh/helm-v${HELM_VER}-linux-amd64.tar.gz \
      | tar -C /tmp -zxf - \
 && mv /tmp/linux-amd64/helm /usr/local/bin/helm

As you can see, this image installs all the utilities that we used to deploy our application. Strictly speaking, we don't need kubectl here, but you might want to play around with it when setting up the pipeline.

Also, in order to be able to communicate with Kubernetes and deploy to it, we need to set up a role for the pods generated by gitlab-runner.

To do this, go to the gitlab-runner directory:

cd deploy/gitlab-runner

and add a new component components/rbac.jsonnet:

local env = {
  name: std.extVar('qbec.io/env'),
  namespace: std.extVar('qbec.io/defaultNs'),
};
local p = import '../params.libsonnet';
local params = p.components.rbac;

[
  {
    apiVersion: 'v1',
    kind: 'ServiceAccount',
    metadata: {
      labels: {
        app: params.name,
      },
      name: params.name,
    },
  },
  {
    apiVersion: 'rbac.authorization.k8s.io/v1',
    kind: 'Role',
    metadata: {
      labels: {
        app: params.name,
      },
      name: params.name,
    },
    rules: [
      {
        apiGroups: [
          '*',
        ],
        resources: [
          '*',
        ],
        verbs: [
          '*',
        ],
      },
    ],
  },
  {
    apiVersion: 'rbac.authorization.k8s.io/v1',
    kind: 'RoleBinding',
    metadata: {
      labels: {
        app: params.name,
      },
      name: params.name,
    },
    roleRef: {
      apiGroup: 'rbac.authorization.k8s.io',
      kind: 'Role',
      name: params.name,
    },
    subjects: [
      {
        kind: 'ServiceAccount',
        name: params.name,
        namespace: env.namespace,
      },
    ],
  },
]

We also describe the new parameters in environments/base.libsonnet, which now looks like this:

local secrets = import '../secrets/base.libsonnet';

{
  components: {
    gitlabRunner: {
      name: 'gitlab-runner',
      values: {
        gitlabUrl: 'https://gitlab.com/',
        rbac: {
          create: true,
        },
        runnerRegistrationToken: secrets.runnerRegistrationToken,
        runners: {
          serviceAccountName: $.components.rbac.name,
          image: 'registry.gitlab.com/kvaps/docs.example.org/toolbox:v0.0.1',
        },
      },
    },
    rbac: {
      name: 'gitlab-runner-deploy',
    },
  },
}

Note that $.components.rbac.name refers to the name of the rbac component.
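In jsonnet, $ refers to the root object of the current file, so one parameter can reference another. A minimal illustration:

{
  a: { name: 'foo' },
  b: { ref: $.a.name },  // evaluates to 'foo'
}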

Let's check what has changed:

qbec diff default

and apply our changes to Kubernetes:

qbec apply default

Also, don't forget to commit our changes to git:

cd ../..
git add dockerfiles/toolbox
git commit -m "Add Dockerfile for toolbox"
git add deploy/gitlab-runner
git commit -m "Configure gitlab-runner to use toolbox"

9. Our first pipeline and assembly of images by tags

At the root of the project we will create .gitlab-ci.yml with content like this:

.build_docker_image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.15.0
    entrypoint: [""]
  before_script:
    - echo "{"auths":{"$CI_REGISTRY":{"username":"$CI_REGISTRY_USER","password":"$CI_REGISTRY_PASSWORD"}}}" > /kaniko/.docker/config.json

build_toolbox:
  extends: .build_docker_image
  script:
    - /kaniko/executor --cache --context $CI_PROJECT_DIR/dockerfiles/toolbox --dockerfile $CI_PROJECT_DIR/dockerfiles/toolbox/Dockerfile --destination $CI_REGISTRY_IMAGE/toolbox:$CI_COMMIT_TAG
  only:
    refs:
      - tags

build_website:
  extends: .build_docker_image
  variables:
    GIT_SUBMODULE_STRATEGY: normal
  script:
    - /kaniko/executor --cache --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/dockerfiles/website/Dockerfile --destination $CI_REGISTRY_IMAGE/website:$CI_COMMIT_TAG
  only:
    refs:
      - tags

Please note that we use GIT_SUBMODULE_STRATEGY: normal for those jobs where submodules need to be explicitly initialized before execution.

Don't forget to commit our changes:

git add .gitlab-ci.yml
git commit -m "Automate docker build"

I think we can safely call this version v0.0.1 and add a tag:

git tag v0.0.1

We will add a tag whenever we need to release a new version. Tags on Docker images will correspond to Git tags. Each push with a new tag will trigger an image build with that tag.

Run git push --tags and look at our first pipeline:

Screenshot of the first pipeline


It's worth pointing out that tag-based builds are fine for building docker images, but not for deploying the application to Kubernetes. Since new tags can also be assigned to old commits, triggering the pipeline for them would lead to deploying the old version.

To solve this problem, the building of docker images is usually tied to tags, while the deployment of the application goes to the master branch, in which the versions of the built images are hardcoded. This way you can initiate a rollback with a simple revert in the master branch.
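A hypothetical rollback would then boil down to plain git:

git revert <bad-commit-sha>
git push origin master

The revert commit lands in master, the pipeline runs again and redeploys the previous, known-good image version.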

10. Deploy automation

In order for Gitlab-runner to decrypt our secrets, we need to export the repository key and add it to our CI environment variables:

git crypt export-key /tmp/docs-repo.key
base64 -w0 /tmp/docs-repo.key; echo

We will save the resulting string in Gitlab; to do this, go to our project settings:
Settings -> CI/CD -> Variables

And create a new variable:

Type: File
Key: GITCRYPT_KEY
Value: <your string>
Protected: true (while training, you can use false)
Masked: true
Scope: All environments

Screenshot of the added variable


Now let's update our .gitlab-ci.yml, adding the following to it:

.deploy_qbec_app:
  stage: deploy
  only:
    refs:
      - master

deploy_gitlab_runner:
  extends: .deploy_qbec_app
  variables:
    GIT_SUBMODULE_STRATEGY: normal
  before_script:
    - base64 -d "$GITCRYPT_KEY" | git-crypt unlock -
  script:
    - qbec apply default --root deploy/gitlab-runner --force:k8s-context __incluster__ --wait --yes

deploy_website:
  extends: .deploy_qbec_app
  script:
    - qbec apply default --root deploy/website --force:k8s-context __incluster__ --wait --yes

Here we have enabled some new options for qbec:

  • --root some/app - allows you to define the directory of a specific application
  • --force:k8s-context __incluster__ - a magic variable which says that the deployment should happen in the same cluster in which gitlab-runner is running. This is necessary because otherwise qbec will try to find a suitable Kubernetes server in your kubeconfig
  • --wait - forces qbec to wait until the resources it creates reach the Ready state and only then exit with a successful exit code.
  • --yes - simply disables the interactive prompt Are you sure? during deployment.

Don't forget to commit our changes:

git add .gitlab-ci.yml
git commit -m "Automate deploy"

And after git push we will see how our applications were deployed:

Screenshot of the second pipeline


11. Artifacts and assembly when pushing to master

Usually the above steps are enough to build and deliver almost any microservice, but we don't want to add a tag every time we need to update the site. Therefore, we will take a more dynamic route and set up deployment by digest in the master branch.

The idea is simple: now the image of our website will be rebuilt every time there is a push to master, and then automatically deployed to Kubernetes.

Let's update these two jobs in our .gitlab-ci.yml:

build_website:
  extends: .build_docker_image
  variables:
    GIT_SUBMODULE_STRATEGY: normal
  script:
    - mkdir -p $CI_PROJECT_DIR/artifacts
    - /kaniko/executor --cache --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/dockerfiles/website/Dockerfile --destination $CI_REGISTRY_IMAGE/website:$CI_COMMIT_REF_NAME --digest-file $CI_PROJECT_DIR/artifacts/website.digest
  artifacts:
    paths:
      - artifacts/
  only:
    refs:
      - master
      - tags

deploy_website:
  extends: .deploy_qbec_app
  script:
    - DIGEST="$(cat artifacts/website.digest)"
    - qbec apply default --root deploy/website --force:k8s-context __incluster__ --wait --yes --vm:ext-str digest="$DIGEST"

Please note that we have added the master branch to refs for the build_website job, and we now use $CI_COMMIT_REF_NAME instead of $CI_COMMIT_TAG; that is, we are no longer tied to tags in Git and will now push an image named after the commit branch that triggered the pipeline. Note that this will also keep working with tags, which lets us save snapshots of the site with a specific version in the docker-registry.

While the name of the docker tag for a new version of the site may stay unchanged, we still have to describe the changes for Kubernetes, otherwise it simply will not redeploy the application from the new image, since it won't notice any changes in the deployment manifest.

The qbec option --vm:ext-str digest="$DIGEST" lets us pass an external variable to jsonnet. We want our application to be redeployed in the cluster with each release, and we can no longer use the tag name, which may now stay unchanged: we need to bind to a specific version of the image and trigger the deployment when it changes.

Here Kaniko's ability to save the image digest to a file (the --digest-file option) will help us.
We will then pass this file as an artifact and read it at deploy time.
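The digest file itself contains a single content-addressable hash, something like this (hypothetical value):

sha256:5891b5b522d5df086d0ff0b110fbd9d21bb4fc7163af34d08286a2e846f6be03

Joined to the image name via @, it pins the deployment to exactly this build of the image.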

Let's update the parameters in deploy/website/environments/base.libsonnet, which will now look like this:

{
  components: {
    website: {
      name: 'example-docs',
      image: 'registry.gitlab.com/kvaps/docs.example.org/website@' + std.extVar('digest'),
      replicas: 1,
      containerPort: 80,
      servicePort: 80,
      nodeSelector: {},
      tolerations: [],
      ingressClass: 'nginx',
      domain: 'docs.example.org',
    },
  },
}

Done: now any commit to master triggers the build of the docker image for website, and then its deployment to Kubernetes.

Don't forget to commit our changes:

git add .
git commit -m "Configure dynamic build"

Let's check: after git push we should see something like this:

Pipeline screenshot for master


In principle, we do not need to redeploy gitlab-runner on every push unless, of course, something has changed in its configuration. Let's fix this in .gitlab-ci.yml:

deploy_gitlab_runner:
  extends: .deploy_qbec_app
  variables:
    GIT_SUBMODULE_STRATEGY: normal
  before_script:
    - base64 -d "$GITCRYPT_KEY" | git-crypt unlock -
  script:
    - qbec apply default --root deploy/gitlab-runner --force:k8s-context __incluster__ --wait --yes
  only:
    changes:
      - deploy/gitlab-runner/**/*

changes will keep track of changes in deploy/gitlab-runner/ and trigger our job only when there are any.

Don't forget to commit our changes:

git add .gitlab-ci.yml
git commit -m "Reduce gitlab-runner deploy"

git push, that's better:

Screenshot of the updated pipeline


12. Dynamic environments

It's time to diversify our pipeline with dynamic environments.

First, let's update the build_website job in our .gitlab-ci.yml, removing the only: block from it, which will make Gitlab trigger the job on any commit to any branch:

build_website:
  extends: .build_docker_image
  variables:
    GIT_SUBMODULE_STRATEGY: normal
  script:
    - mkdir -p $CI_PROJECT_DIR/artifacts
    - /kaniko/executor --cache --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/dockerfiles/website/Dockerfile --destination $CI_REGISTRY_IMAGE/website:$CI_COMMIT_REF_NAME --digest-file $CI_PROJECT_DIR/artifacts/website.digest
  artifacts:
    paths:
      - artifacts/

Then update the deploy_website job, adding an environment block to it:

deploy_website:
  extends: .deploy_qbec_app
  environment:
    name: prod
    url: https://docs.example.org
  script:
    - DIGEST="$(cat artifacts/website.digest)"
    - qbec apply default --root deploy/website --force:k8s-context __incluster__ --wait --yes --vm:ext-str digest="$DIGEST"

This will allow Gitlab to associate the job with the prod environment and display the correct link to it.

Now let's add two more jobs:

deploy_website:
  extends: .deploy_qbec_app
  environment:
    name: prod
    url: https://docs.example.org
  script:
    - DIGEST="$(cat artifacts/website.digest)"
    - qbec apply default --root deploy/website --force:k8s-context __incluster__ --wait --yes --vm:ext-str digest="$DIGEST"

deploy_review:
  extends: .deploy_qbec_app
  environment:
    name: review/$CI_COMMIT_REF_NAME
    url: http://$CI_ENVIRONMENT_SLUG.docs.example.org
    on_stop: stop_review
  script:
    - DIGEST="$(cat artifacts/website.digest)"
    - qbec apply review --root deploy/website --force:k8s-context __incluster__ --wait --yes --vm:ext-str digest="$DIGEST" --vm:ext-str subdomain="$CI_ENVIRONMENT_SLUG" --app-tag "$CI_ENVIRONMENT_SLUG"
  only:
    refs:
    - branches
  except:
    refs:
      - master

stop_review:
  extends: .deploy_qbec_app
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  stage: deploy
  before_script:
    - git clone "$CI_REPOSITORY_URL" master
    - cd master
  script:
    - qbec delete review --root deploy/website --force:k8s-context __incluster__ --yes --vm:ext-str digest="$DIGEST" --vm:ext-str subdomain="$CI_ENVIRONMENT_SLUG" --app-tag "$CI_ENVIRONMENT_SLUG"
  variables:
    GIT_STRATEGY: none
  only:
    refs:
    - branches
  except:
    refs:
      - master
  when: manual

They will be triggered by push to any branches except master and will deploy a preview version of the site.

Here we see a new qbec option: --app-tag. It allows you to tag deployed versions of the application and work only within that tag: when creating and destroying resources in Kubernetes, qbec will operate only on objects with that tag.
Thus, we don't need to create a separate environment for each review; we can simply reuse the same one.

Here we also use qbec apply review instead of qbec apply default - this is exactly the moment when we describe the differences between our environments (review and default):

Let's add the review environment in deploy/website/qbec.yaml:

spec:
  environments:
    review:
      defaultNamespace: docs
      server: https://kubernetes.example.org:8443

Then we declare it in deploy/website/params.libsonnet:

local env = std.extVar('qbec.io/env');
local paramsMap = {
  _: import './environments/base.libsonnet',
  default: import './environments/default.libsonnet',
  review: import './environments/review.libsonnet',
};

if std.objectHas(paramsMap, env) then paramsMap[env] else error 'environment ' + env + ' not defined in ' + std.thisFile

And write custom parameters for it in deploy/website/environments/review.libsonnet:

// this file has the param overrides for the review environment
local base = import './base.libsonnet';
local slug = std.extVar('qbec.io/tag');
local subdomain = std.extVar('subdomain');

base {
  components+: {
    website+: {
      name: 'example-docs-' + slug,
      domain: subdomain + '.docs.example.org',
    },
  },
}

Let's also take a closer look at the stop_review job. It is triggered when a branch is deleted, and GIT_STRATEGY: none is used so that gitlab does not try to check the branch out; instead we clone the master branch and delete the review environment through it.
It's a little confusing, but I have not yet found a more beautiful way.
An alternative option would be to deploy each review into a separate namespace, which can always be torn down in its entirety.

Don't forget to commit our changes:

git add .
git commit -m "Enable automatic review"

git push, git checkout -b test, git push origin test, check:

Screenshot of created environments in Gitlab


Is everything working? Great, now delete our test branch: git checkout master, git push origin :test, and check that the jobs for deleting the environment finished without errors.

Here I want to point out right away that any developer in a project can create branches, and can also change the .gitlab-ci.yml file and access secret variables.
Therefore, it is strongly recommended to allow their use only for protected branches, for example master, or to create a separate set of variables for each environment.

13. Review Apps

Review Apps is a gitlab feature that lets you add a button to each file in the repository for quickly viewing it in a deployed environment.

For these buttons to appear, you need to create a file .gitlab/route-map.yml and describe all the path transformations in it; in our case it will be very simple:

# Indices
- source: /content/(.+?)_index\.(md|html)/
  public: '\1'

# Pages
- source: /content/(.+?)\.(md|html)/
  public: '\1/'

For example, content/basics/_index.md will map to basics/, and content/basics/installation.md to basics/installation/.

Don't forget to commit our changes:

git add .gitlab/
git commit -m "Enable review apps"

git push, and check:

Screenshot of the Review App Button


The job is done!

Project sources:

Thank you for your attention, I hope you liked it!

Source: habr.com
