Introduction to Helm 3

Translator's note: May 16, 2019 marked a significant milestone in the development of Helm, the package manager for Kubernetes. On that day, the first alpha release of the project's next major version, 3.0, was presented. Its release will bring significant and long-awaited changes to Helm, for which many in the Kubernetes community have high hopes. We are among them, since we actively use Helm for deploying applications: we have integrated it into our CI/CD tooling and contribute upstream when we can. This translation combines seven posts from the official Helm blog dedicated to the first alpha release of Helm 3; they cover the history of the project and the main features of Helm 3. Their author is Matt "bacongobbler" Fisher, a Microsoft employee and one of the key maintainers of Helm.

On October 15, 2015, the project now known as Helm was born. Just one year after its founding, the Helm community joined Kubernetes while actively working on Helm 2. In June 2018, Helm joined the CNCF as an incubating project. Fast forward to the present: the first alpha release of the new Helm 3 is on the way (translator's note: this release took place in mid-May 2019).

In this article, I'll go over how it all started, how we got to where we are today, introduce some of the unique features available in the first alpha release of Helm 3, and explain how we plan to move forward.

Summary:

  • the history of the creation of Helm;
  • tender farewell to Tiller;
  • chart repositories;
  • release management;
  • changes in chart dependencies;
  • library charts;
  • what's next?

History of Helm

Birth

Helm 1 started out as an Open Source project created by Deis, a small startup that was acquired by Microsoft in the spring of 2017. Our other Open Source project, also named Deis, had a tool called deisctl that was used (among other things) to install and operate the Deis platform in a Fleet cluster. At the time, Fleet was one of the first container orchestration platforms.

In mid-2015, we decided to change course and migrated Deis (by then renamed Deis Workflow) from Fleet to Kubernetes. One of the first things to be redesigned was the installation tool, deisctl, which we had used to install and manage Deis Workflow in a Fleet cluster.

Helm 1 was created in the image and likeness of well-known package managers such as Homebrew, apt and yum. Its main goal was to simplify tasks such as packaging and installing applications in Kubernetes. Helm was officially presented in 2015 at the KubeCon conference in San Francisco.

Our first attempt at Helm worked, but it had serious limitations. It took a set of Kubernetes manifests, flavored with generators in the form of introductory YAML blocks (front matter)*, and uploaded the results to Kubernetes.

* Translator's note: starting with the first version of Helm, YAML was chosen for describing Kubernetes resources, while Jinja templates and Python scripts were supported for writing configurations. We wrote more about this, and about the design of the first version of Helm in general, in the chapter "A Brief History of Helm" of this material.

For example, to replace a field in a YAML file, you would add the following construct to your manifest:

#helm:generate sed -i -e s|ubuntu-debootstrap|fluffy-bunny| my/pod.yaml

Aren't you glad templating engines exist today?

For many reasons, this early Kubernetes installer required a hard-coded list of manifest files and executed only a small, fixed sequence of events. It was so difficult to use that the Deis Workflow R&D team struggled when they tried to move their product to this platform; however, the seeds of the idea had already been sown. Our first attempt was a great learning opportunity: we realized we were truly passionate about building pragmatic tools that solve day-to-day problems for our users.

Based on the experience of past mistakes, we started developing Helm 2.

Making Helm 2

At the end of 2015, we were contacted by the Google team. They were working on a similar tool for Kubernetes: Deployment Manager for Kubernetes, a port of an existing tool used for the Google Cloud Platform. "Would you be willing," they asked, "to spend a few days discussing similarities and differences?"

In January 2016, the Helm and Deployment Manager teams met in Seattle to exchange ideas. The talks culminated in an ambitious plan to merge both projects into Helm 2. The folks from SkipBox (now part of Bitnami, translator's note) joined Deis and Google, and we started working on Helm 2.

We wanted to keep the ease of use of Helm, but add the following:

  • chart templates for customization;
  • intracluster management for teams;
  • top-notch chart repository;
  • stable package format with signing capability;
  • a strong commitment to semantic versioning and maintaining backward compatibility between versions.

To achieve these goals, a second component was added to the Helm ecosystem. This in-cluster component was called Tiller and was responsible for installing and managing Helm charts.

Since the release of Helm 2 in 2016, Kubernetes has seen several major innovations. Role-based access control (RBAC) was introduced, eventually replacing attribute-based access control (ABAC). New resource types appeared (Deployments were still in beta at the time). Custom Resource Definitions (originally called Third Party Resources, or TPRs) were invented. And, most importantly, a body of best practices emerged.

Amid all these changes, Helm has continued to serve Kubernetes users faithfully. After three years and many new additions, it was clear that it was time to make significant changes to the code base so that Helm could continue to meet the growing needs of an evolving ecosystem.

Tender farewell to Tiller

During the development of Helm 2, we introduced Tiller as part of our integration with Google's Deployment Manager. Tiller played an important role for teams working within a common cluster: it allowed different specialists operating the infrastructure to interact with the same set of releases.

Since role-based access control (RBAC) was enabled by default in Kubernetes 1.6, working with Tiller in production became more difficult. Due to the sheer number of possible security policies, our position was to offer a permissive configuration by default. This allowed beginners to experiment with Helm and Kubernetes without having to dive into security settings first. Unfortunately, this permissive configuration could give the user too wide a range of permissions that they didn't need. DevOps and SRE engineers had to learn additional operational steps when installing Tiller in a multi-tenant cluster.

By studying how the community uses Helm in specific situations, we realized that Tiller's release management system doesn't need an in-cluster component to maintain state or act as a central hub for release information. Instead, we could simply fetch information from the Kubernetes API server, render the chart on the client side, and store a record of the installation in Kubernetes.

Tiller's main purpose could be fulfilled without Tiller, so one of our first decisions regarding Helm 3 was to abandon Tiller entirely.

With Tiller gone, Helm's security model is radically simplified. Helm 3 supports all of the modern security, identity, and authorization features of today's Kubernetes. Helm's permissions are determined by the kubeconfig file, so cluster administrators can restrict user rights at whatever granularity they see fit. Releases are still stored inside the cluster, and the rest of Helm's functionality is preserved.
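Since Helm 3 acts with the caller's own credentials, scoping what a user can release comes down to ordinary Kubernetes RBAC. A minimal sketch (the user name, context name, and chart path below are hypothetical):

```shell
# Allow user "dev" to manage resources only in the "staging" namespace.
# Helm inherits exactly these rights, because it talks to the API server
# with the caller's own kubeconfig credentials.
kubectl create rolebinding dev-edit \
  --clusterrole=edit \
  --user=dev \
  --namespace=staging

# Any Helm operation is then authorized by the API server itself:
helm install my-app ./my-chart --namespace staging --kube-context dev-context
```

If the user attempts an operation outside the "staging" namespace, the API server rejects it directly; no Helm-specific policy layer is involved.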

Chart repositories

At a high level, a chart repository is a place where charts can be stored and shared. The Helm client packages charts and pushes them to a repository. Simply put, a chart repository is a primitive HTTP server that serves an index.yaml file and some packaged charts.
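As a sketch of how simple this is (the chart name and URL below are hypothetical), a working repository can be assembled with stock Helm commands and any static file server:

```shell
# Package a chart into a versioned .tgz archive:
helm package ./my-chart                           # produces my-chart-0.1.0.tgz

# Generate (or regenerate) index.yaml for every archive in the directory:
helm repo index . --url https://charts.example.com

# Serve the directory over HTTP; any static server will do:
python3 -m http.server 8080
```

That simplicity is exactly the point the next list makes: the format is easy to run, but it was never designed with production security or multi-tenancy in mind.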

While the chart repository API meets the most basic storage requirements, it also has a few disadvantages:

  • Chart repositories are not compatible with most of the security implementations needed in a production environment. Having a standard API for authentication and authorization is essential in production scenarios.
  • Helm's provenance tools, used for signing and verifying the integrity and origin of a chart, are an optional part of the chart publishing process.
  • In multi-user scenarios, the same chart can be uploaded by another user, doubling the storage needed for the same content. Smarter repositories have been designed to address this issue; however, they are not part of the formal specification.
  • The use of a single index file for searching, storing metadata, and retrieving charts has made it difficult to develop secure multi-user implementations.

The Docker Distribution project (also known as Docker Registry v2) is the successor to Docker Registry. It is, in essence, a set of tools for packaging, shipping, storing, and distributing Docker images. Many large cloud services offer Distribution-based products. Thanks to this increased attention, the Distribution project has benefited from years of refinements, security best practices, and battle testing, which have made it one of the most successful unsung heroes of the Open Source world.

But did you know that the Distribution project was designed to distribute any form of content, not just container images?

Thanks to the efforts of the Open Container Initiative (OCI), Helm charts can be placed on any Distribution instance. For now, this process is experimental. Login support and other features required for full-fledged Helm 3 support are still a work in progress, but we're excited to learn from the discoveries made by the OCI and Distribution teams over the years. And through their mentorship and guidance, we're learning what it's like to operate a highly available service at scale.
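In the alpha, this OCI support must be switched on explicitly. A sketch of the experimental, alpha-era commands (the registry host and chart reference are hypothetical, and the syntax may change before the final release):

```shell
# Opt in to the experimental OCI support:
export HELM_EXPERIMENTAL_OCI=1

# Authenticate against an OCI-compatible (Distribution-based) registry:
helm registry login registry.example.com

# Save a chart into the local registry cache under a reference, then push it:
helm chart save ./my-chart registry.example.com/charts/my-chart:0.1.0
helm chart push registry.example.com/charts/my-chart:0.1.0
```

Because the registry speaks the Distribution API, the chart then benefits from the same authentication, authorization, and deduplicated storage that container images already enjoy.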

A more detailed description of some of the upcoming changes to Helm chart repositories is available here.

Release Management

In Helm 3, application state is tracked within the cluster by a couple of objects:

  • release object - represents an application instance;
  • release version secret - represents the desired state of the application at a particular point in time (for example, the release of a new version).

Calling helm install creates a release object and a release version secret. Calling helm upgrade requires an existing release object (which it may modify) and creates a new release version secret containing the new values and the rendered manifest.

The release object contains information about the release, where a release is a specific installation of a named chart with specific values. This object describes the top-level metadata about the release. It persists throughout the application's life cycle and owns all release version secrets, as well as all objects created directly by the Helm chart.

The release version secret associates a release with a series of revisions (installation, updates, rollbacks, deletions).

In Helm 2, revisions were strictly sequential. Calling helm install created v1, a subsequent upgrade created v2, and so on. The release and release version secret were collapsed into a single entity known as a revision. Revisions were stored in the same namespace as Tiller, which meant that each release name was "global" across namespaces; as a result, a name could only be used for a single instance.

In Helm 3, each release is associated with one or more release version secrets. The release object always describes the current release as deployed to Kubernetes. Each release version secret describes a single version of that release. An upgrade, for example, creates a new release version secret and then updates the release object to point to that new version. In the case of a rollback, a previous release version secret is used to return the release to an earlier state.

With Tiller gone, Helm 3 stores release data in the same namespace as the release itself. This change makes it possible to install charts with the same release name in different namespaces, and the data survives cluster upgrades and restarts because it is stored in etcd. For example, you can install WordPress in the "foo" namespace and then in the "bar" namespace, and both releases can be named "wordpress".
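A sketch of what this looks like in practice (the stable/wordpress chart reference is assumed; the secret labels and naming follow Helm 3's storage scheme):

```shell
# Two independent releases with the same name, in different namespaces:
helm install wordpress stable/wordpress --namespace foo
helm install wordpress stable/wordpress --namespace bar

# Each namespace keeps its own release history as Secrets,
# named sh.helm.release.v1.<release>.v<revision>:
kubectl get secrets --namespace foo -l owner=helm
kubectl get secrets --namespace bar -l owner=helm
```

Since these are ordinary Secrets, standard RBAC also governs who may read or roll back a given release's history.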

Chart dependency changes

Charts packaged (with helm package) for Helm 2 can be installed with Helm 3; however, the chart development workflow has been completely overhauled, so some changes are needed to continue developing charts with Helm 3. In particular, the chart dependency management system has changed.

The chart dependency management system has moved from requirements.yaml and requirements.lock to Chart.yaml and Chart.lock. This means that charts that relied on the helm dependency command need some reconfiguration to work with Helm 3.

Let's look at an example. Let's add a dependency to the chart in Helm 2 and see what changes when moving to Helm 3.

In Helm 2, requirements.yaml looked like this:

dependencies:
- name: mariadb
  version: 5.x.x
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: mariadb.enabled
  tags:
    - database

In Helm 3, the same dependency is expressed in your Chart.yaml:

dependencies:
- name: mariadb
  version: 5.x.x
  repository: https://kubernetes-charts.storage.googleapis.com/
  condition: mariadb.enabled
  tags:
    - database

Charts are still downloaded and placed in the charts/ directory, so subcharts that live in charts/ will continue to work without changes.
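After moving the dependencies block into Chart.yaml, the familiar dependency commands keep working. A sketch for a chart located in ./my-chart (the path is hypothetical):

```shell
# Resolve versions, write Chart.lock, and download mariadb into my-chart/charts/:
helm dependency update ./my-chart

# Show each declared dependency and whether it is present in charts/:
helm dependency list ./my-chart
```

The lock file pins the exact resolved versions, so teammates and CI runs fetch the same dependency versions from the declared repositories.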

Introducing Library Charts

Helm 3 introduces a class of charts called library charts. A library chart is used by other charts but does not produce any release artifacts of its own. Library chart templates can only declare define elements; any other content is simply ignored. This lets users reuse and share snippets of code across many charts, avoiding duplication and keeping things DRY.

Library charts are declared in the dependencies section of the Chart.yaml file. Installing and managing them is no different from other charts.

dependencies:
  - name: mylib
    version: 1.x.x
    repository: quay.io

We look forward to the use cases that this component will open up for chart developers, as well as the best practices that can emerge from library charts.

What's next?

Helm 3.0.0-alpha.1 is the foundation on which we will build the new version of Helm. In this article, I've described some of the interesting features of Helm 3. Many of them are still in the early stages of development, and that's okay: the point of an alpha release is to test ideas, gather feedback from early adopters, and validate our assumptions.

As soon as the alpha version is released (recall that this has already happened, translator's note), we will start accepting patches for Helm 3 from the community. We want to build a solid foundation that allows new functionality to be developed and adopted, and that makes users feel involved in the process by opening tickets and submitting fixes.

In this article, I've tried to highlight some of the major improvements coming in Helm 3, but this list is by no means exhaustive. The full plan for Helm 3 includes features such as improved upgrade strategies, deeper integration with OCI registries, and the use of JSON schemas to validate chart values. We also plan to clean up the codebase and update the parts of it that have been neglected for the past three years.

If you feel like we've missed something, we'd love to hear your thoughts!

Join the discussion in our Slack channels:

  • #helm-users for questions and simple communication with the community;
  • #helm-dev to discuss pull requests, code, and bugs.

You can also join our weekly public developer calls on Thursdays at 19:30 MSK. The meetings are devoted to discussing the tasks that the core developers and the community are working on, as well as the discussion topics for the week. Anyone can join and take part. The link is available in the #helm-dev Slack channel.

Source: habr.com
