Support for monorepo and multirepo in werf and what does the Docker Registry have to do with it


The topic of mono-repositories has been discussed more than once and, as a rule, provokes very active debate. In creating werf as an open source tool designed to improve the process of building application code from Git into Docker images (and then delivering them to Kubernetes), we try not to take sides on which choice is best. For us, the priority is to provide everything needed by supporters of either approach (as long as this does not contradict common sense, of course).

werf's recent monorepo support is a good example of this. But first, let's figure out how this support relates to using werf in general, and what the Docker Registry has to do with it…

Problems

Let's imagine the following situation. A company has many development teams working on independent projects. Most applications run on Kubernetes and are therefore containerized. To store container images, you need a registry. The company uses Docker Hub with a single account, COMPANY. Like most source code storage systems, Docker Hub does not allow a nested repository hierarchy such as COMPANY/PROJECT/IMAGE. Given that limitation… how can you store non-monolithic applications in the registry without creating a separate account for each project?


Perhaps the described situation is familiar to some readers firsthand, but let's consider the issue of organizing application storage in general, i.e. without reference to the example above or to Docker Hub specifically.

Possible solutions

If the application is monolithic and ships as a single image, there are no questions: we simply save the images to the project's container registry.

When an application consists of multiple components (microservices), a definite approach is required. Using the example of a typical web application consisting of two images, frontend and backend, the possible options are:

  1. Store images in separate nested repositories:

    (image: the frontend and backend images stored as separate nested repositories, PROJECT/frontend and PROJECT/backend)

  2. Store everything in one repository and encode the image name in the tag, for example, as follows:

    (image: a single PROJECT repository with tags like frontend-TAG and backend-TAG)

NB: Actually, there is another option: saving to different top-level repositories, PROJECT-frontend and PROJECT-backend, but we will not consider it because of the complexity of support, organization, and distribution of rights between users.
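To make the two schemes concrete, the naming patterns can be sketched as follows (a minimal illustration; the repository and tag values are hypothetical, and the exact templates are defined by werf itself):

```python
def image_reference(repo, image, tag, mode="multirepo"):
    """Build a full image reference for the given storage mode.

    multirepo: the image name becomes a nested repository (REPO/IMAGE:TAG).
    monorepo:  everything lives in one repository, and the image name
               is folded into the tag (REPO:IMAGE-TAG).
    """
    if mode == "multirepo":
        return f"{repo}/{image}:{tag}"
    if mode == "monorepo":
        return f"{repo}:{image}-{tag}"
    raise ValueError(f"unknown mode: {mode}")

print(image_reference("COMPANY/PROJECT", "frontend", "v1.2.3", "multirepo"))
# COMPANY/PROJECT/frontend:v1.2.3
print(image_reference("COMPANY/PROJECT", "frontend", "v1.2.3", "monorepo"))
# COMPANY/PROJECT:frontend-v1.2.3
```

The first form needs a registry that allows nesting; the second works even on Docker Hub, at the cost of folding the image name into the tag.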

werf support

Initially, werf limited itself to nested repositories; fortunately, most registries support this feature. Starting with version v1.0.4-alpha.3, werf also supports registries in which nesting is not available, and Docker Hub is one of them. From that point on, the user has a choice of how to store the application images.

The implementation is available via the --images-repo-mode=multirepo|monorepo option (the default is multirepo, i.e. storage in nested repositories). It defines the patterns by which images are stored in the registry. It is enough to select the desired mode when using the basic commands, and everything else remains unchanged.

Since most werf options can be set via environment variables, in CI/CD systems the storage mode is usually easy to set globally for the entire project. For example, in the case of GitLab, just add an environment variable in the project settings (Settings -> CI/CD -> Variables): WERF_IMAGES_REPO_MODE: multirepo|monorepo.
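The interaction between the option and the environment variable follows the usual CLI precedence pattern: an explicit option wins, then the environment variable, then the default. This is a sketch of that general pattern, not werf's actual implementation (the function name is hypothetical):

```python
import os

def resolve_images_repo_mode(cli_value=None):
    """Resolve the storage mode: an explicit CLI option wins,
    then the WERF_IMAGES_REPO_MODE environment variable,
    then the default of "multirepo"."""
    if cli_value is not None:
        return cli_value
    return os.environ.get("WERF_IMAGES_REPO_MODE", "multirepo")

# Simulate the variable being set globally in CI (e.g. GitLab project settings).
os.environ["WERF_IMAGES_REPO_MODE"] = "monorepo"
print(resolve_images_repo_mode())             # monorepo (picked up from the environment)
print(resolve_images_repo_mode("multirepo"))  # multirepo (explicit option wins)
```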

If we talk about publishing images and rolling out applications (you can read about these processes in detail in the relevant documentation articles: Publish process and Deploy process), then the mode solely determines the template by which you can work with the image.

The devil is in the details

The difference, and the main difficulty, in adding a new storage method lies in the registry cleanup process (for the purge features supported by werf, see the cleaning process documentation).

When cleaning, werf takes into account the images used in Kubernetes clusters, as well as policies configured by the user. Policies are based on the division of tags into strategies. Currently supported strategies:

  1. 3 strategies tied to the Git primitives: tag, branch, and commit;
  2. 1 strategy for arbitrary custom tags.

We save information about the tag strategy in the labels of the final image when publishing. The value itself is the so-called meta tag; it is required to apply some of the policies. For example, when a branch or tag is deleted from the Git repository, it is logical to delete the related unused images from the registry, and this case is covered by some of our policies.
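The branch policy just mentioned can be pictured like this (an illustrative model only, assuming each published image carries its strategy and meta tag in labels; the data structures and label names are hypothetical, not werf's internals):

```python
def images_to_clean(published_images, live_branches):
    """Return tags of images published by the git-branch strategy whose
    meta tag (a branch name) no longer exists in the Git repository,
    and which can therefore be purged from the registry."""
    return [
        tag
        for tag, labels in published_images.items()
        if labels["tag-strategy"] == "git-branch"
        and labels["meta-tag"] not in live_branches
    ]

registry = {
    "myapp:feature-login-abc123": {"tag-strategy": "git-branch", "meta-tag": "feature-login"},
    "myapp:main-def456": {"tag-strategy": "git-branch", "meta-tag": "main"},
    "myapp:v1.0.0": {"tag-strategy": "git-tag", "meta-tag": "v1.0.0"},
}
# The feature-login branch was deleted from Git; only "main" survives.
print(images_to_clean(registry, {"main"}))
# ['myapp:feature-login-abc123']
```

Images published by other strategies (here, the git-tag one) are governed by their own policies and are untouched by this check.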

When everything is saved in one repository (monorepo), the image tag can carry the image name in addition to the meta tag: PROJECT:frontend-META-TAG. To separate them, we did not introduce any special separator, but simply added the necessary value to a label of the final image when publishing.
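Since there is no dedicated separator, a monorepo tag cannot be split by string parsing alone: both the image name and the meta tag may themselves contain hyphens. The label is what makes the split unambiguous. A hypothetical sketch of that recovery (the function and its inputs are illustrative, not werf code):

```python
def split_monorepo_tag(full_tag, meta_tag_label):
    """Recover the image name from a monorepo-style tag of the form
    IMAGE-META-TAG, given the meta tag read from the image's label.

    Without the label this would be ambiguous, because '-' can occur
    in both the image name and the meta tag.
    """
    suffix = "-" + meta_tag_label
    if not full_tag.endswith(suffix):
        raise ValueError(f"tag {full_tag!r} does not end with {suffix!r}")
    return full_tag[: -len(suffix)]

# A branch name with hyphens would defeat naive splitting on '-':
print(split_monorepo_tag("frontend-feature-x-abc123", "feature-x-abc123"))
# frontend
```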

NB: If you are interested in seeing everything described here in the werf source code, a good starting point is PR 1684.

In this article we will not dwell further on the problems and the justification of our approach: tagging strategies, storing data in labels, and the publishing process as a whole are all described in detail in a recent talk by Dmitry Stolyarov: "werf is our tool for CI/CD in Kubernetes".

Summarizing

The lack of support for unnested registries was not a blocker for us or for the werf users known to us: you can always run a separate image registry (or switch to, say, Container Registry in Google Cloud)… However, removing this restriction seemed logical in order to make the tool more convenient for the wider DevOps community. The main difficulty in implementing it was reworking the container registry cleanup mechanism. Now that everything is ready, it's nice to know that things have become easier for somebody, while we (as the main developers of the project) will not have any noticeable difficulties in supporting this feature further.

Stay with us, and very soon we will tell you about other innovations in werf!


Source: habr.com
