Kubernetes tips & tricks: about local development and Telepresence

We are increasingly being asked about developing microservices in Kubernetes. Developers, especially those working in interpreted languages, want to quickly fix the code in their favorite IDE and see the result without waiting for a build and deployment, simply by pressing F5. With a monolithic application it was enough to spin up the database and the web server locally (in Docker, VirtualBox, ...) and then enjoy the development right away. With monoliths being sawed into microservices and the arrival of Kubernetes, with services now depending on each other, everything got a little harder. The more of these microservices there are, the more problems. To enjoy development again, you have to spin up more than one or two Docker containers, sometimes more than a dozen... and all of them also have to be kept up to date, which can take a lot of time.

At different times we tried different solutions to the problem. And I'll start with the accumulated workarounds, or simply "crutches".

1. Crutches

Most IDEs can edit code directly on a server via FTP/SFTP. This path is very obvious, and we decided to use it right away. Its essence is as follows:

  1. In the pod of the development environments (dev/review), an additional container with SSH access is launched, and the public SSH key of the developer who will commit/deploy the application is forwarded into it.
  2. At the init stage (in the prepare-app container), the code is moved to an emptyDir volume so that it can be accessed both from the application containers and from the SSH server.

For a better understanding of the technical implementation of this scheme, here are fragments of the YAML configurations involved in Kubernetes.

Configurations

1.1. values.yaml

ssh_pub_key:
  vasya.pupkin: <ssh public key in base64> 

Here vasya.pupkin is the value of the variable ${GITLAB_USER_LOGIN}.
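For reference, the base64-encoded value can be produced from the developer's public key, for example like this on Linux (assuming the key lives at the default ~/.ssh/id_rsa.pub path):

base64 -w0 ~/.ssh/id_rsa.pub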

1.2. deployment.yaml

...
{{ if eq .Values.global.debug "yes" }}
      volumes:
      - name: ssh-pub-key
        secret:
          defaultMode: 0600
          secretName: {{ .Chart.Name }}-ssh-pub-key
      - name: app-data
        emptyDir: {}
      initContainers:
      - name: prepare-app
{{ tuple "backend" . | include "werf_container_image" | indent 8 }}
        volumeMounts:
        - name: app-data
          mountPath: /app-data
        command: ["bash", "-c", "cp -ar /app/* /app-data/" ]
{{ end }}
      containers:
{{ if eq .Values.global.debug "yes" }}
      - name: ssh
        image: corbinu/ssh-server
        volumeMounts:
        - name: ssh-pub-key
          readOnly: true
          mountPath: /root/.ssh/authorized_keys
          subPath: authorized_keys
        - name: app-data
          mountPath: /app
        ports:
        - name: ssh
          containerPort: 22
          protocol: TCP
{{ end }}
      - name: backend
        volumeMounts:
{{ if eq .Values.global.debug "yes" }}
        - name: app-data
          mountPath: /app
{{ end }}
        command: ["/usr/sbin/php-fpm7.2", "--fpm-config", "/etc/php/7.2/php-fpm.conf", "-F"]
...

1.3. secret.yaml

{{ if eq .Values.global.debug "yes" }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Chart.Name }}-ssh-pub-key
type: Opaque
data:
  authorized_keys: "{{ first (pluck .Values.global.username .Values.ssh_pub_key) }}"
{{ end }}

Final touch

After that, all that remains is to pass the required variables in .gitlab-ci.yml:

dev:
  stage: deploy
  script:
   - type multiwerf && source <(multiwerf use 1.0 beta)
   - type werf && source <(werf ci-env gitlab --tagging-strategy tag-or-branch --verbose)
   - werf deploy
     --namespace ${CI_PROJECT_NAME}-stage
     --set "global.env=stage"
     --set "global.git_rev=${CI_COMMIT_SHA}"
     --set "global.debug=yes"
     --set "global.username=${GITLAB_USER_LOGIN}"
 tags:
   - build

Voila: the developer who launched the deployment can connect to the pod by the service name (we have already written about how to grant access to the cluster securely) from their desktop via SFTP and edit the code without waiting for it to be delivered to the cluster.
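If direct access to the cluster network is not available, a hypothetical way to reach the SSH container is port forwarding via kubectl (assuming the Deployment is named backend and the namespace matches the one used in the deploy job above):

kubectl -n ${CI_PROJECT_NAME}-stage port-forward deployment/backend 2222:22
sftp -P 2222 root@localhost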

This is a completely working solution; however, from an implementation point of view it has obvious disadvantages:

  • the Helm chart has to be modified, which makes it harder to read;
  • only the person who deployed the service can use it;
  • you have to remember to synchronize the edited code with the local working directory afterwards and commit it to Git.

2. Telepresence

The Telepresence project has been known for a long time, but, as they say, we never got around to seriously trying it in practice. However, demand has done its job, and now we are happy to share an experience that may be useful to readers of our blog, especially since there have been no other materials about Telepresence on Habr yet.

In short, it turned out not to be all that scary. We placed all the actions required from the developer in the NOTES.txt text file of the Helm chart. Thus, after deploying the service to Kubernetes, the developer sees instructions for launching the local dev environment in the GitLab job log:

!!! Developing the service locally, as part of Kubernetes !!!

* Environment setup
* * You must have access to the cluster via VPN
* * kubectl must be installed on the local machine ( https://kubernetes.io/docs/tasks/tools/install-kubectl/ )
* * Get the config file for kubectl (copy it to ~/.kube/config)
* * telepresence must be installed on the local machine ( https://www.telepresence.io/reference/install )
* * Docker must be installed
* * You need reporter-level access or higher to the repository https://gitlab.site.com/group/app
* * You need to log in to the registry with your GitLab login/password (done once):

#########################################################################
docker login registry.site.com
#########################################################################

* Starting the environment

#########################################################################
telepresence --namespace {{ .Values.global.env }} --swap-deployment {{ .Chart.Name  }}:backend --mount=/tmp/app --docker-run -v `pwd`:/app -v /tmp/app/var/run/secrets:/var/run/secrets -ti registry.site.com/group/app/backend:v8
#########################################################################

We will not dwell on the steps described in this manual... except for the last one. What happens when Telepresence is launched?

Working with Telepresence

At launch (with the last command given in the instructions above), we specify:

  • the namespace in which the microservice is running;
  • the names of the Deployment and the container we want to get inside.

The rest of the arguments are optional. If our service interacts with the Kubernetes API and a ServiceAccount has been created for it, we need to mount the certificates/tokens on our desktop. To do this, use the --mount=true (or --mount=/dst_path) option, which will mount the root (/) of the Kubernetes container onto our desktop. After that, we can (depending on the OS and how the application is launched) use the "keys" from the cluster.
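As an illustration, with --mount=/tmp/app (as in the NOTES.txt command above) the ServiceAccount token ends up under the mount point and can be used from the desktop; the API server address below is a placeholder:

TOKEN=$(cat /tmp/app/var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" https://<kube-apiserver>/api/v1/namespaces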

First, let's look at the most universal way to run an application: in a Docker container. To do this, we use the --docker-run option and mount the directory with the code into the container: -v `pwd`:/app

Please note that this implies launching Telepresence from the project directory. The application code will be mounted at /app inside the container.

Next, -v /tmp/app/var/run/secrets:/var/run/secrets mounts the directory with the certificate/token into the container.

This option is finally followed by the image in which the application will run. NB: when building the image, be sure to specify CMD or ENTRYPOINT!
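A minimal sketch of what is meant (an illustrative Dockerfile, not the actual project image):

FROM php:7.2-fpm
WORKDIR /app
COPY . /app
# without CMD/ENTRYPOINT the swapped container would have nothing to run
CMD ["php-fpm", "-F"]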

What actually happens next?

  • In Kubernetes, the number of replicas of the specified Deployment will be scaled down to 0. Instead, a new Deployment will be started, with the backend container replaced.
  • 2 containers will be launched on the desktop: the first one with Telepresence (it will proxy requests to/from Kubernetes), the second one with the application being developed.
  • If you exec into the container with the application, all the ENV variables passed by Helm during deployment will be available, and all services will be reachable as well (see the check below). It remains only to edit the code in your favorite IDE and enjoy the result.
  • At the end of the work, it is enough to simply close the terminal in which Telepresence is running (terminate the session with Ctrl+C): the Docker containers on the desktop will stop, and everything in Kubernetes will return to its initial state. All that remains is to commit, create the MR and hand it over for review/merge/... (depending on your workflow).
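A quick way to verify this (a hypothetical check, assuming the application runs in a local Docker container as described above):

docker ps                                            # find the container with the application
docker exec -it <container_id> sh -c 'env | sort'    # the ENV variables passed by Helm are visible here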

If we don't want to run the application in a Docker container (for example, we develop in Go rather than PHP and build it locally anyway), launching Telepresence is even simpler:

telepresence --namespace {{ .Values.global.env }} --swap-deployment {{ .Chart.Name  }}:backend --mount=true

If the application accesses the Kubernetes API, you will need to mount the directory with the keys (https://www.telepresence.io/howto/volumes). For Linux, there is the proot utility:

proot -b $TELEPRESENCE_ROOT/var/run/secrets/:/var/run/secrets bash

After launching Telepresence without the --docker-run option, all environment variables will be available in the current terminal, so the application must be launched in that same terminal.
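For a Go service this might look as follows (a sketch; the actual build and run commands depend on the project):

# in the same terminal where Telepresence was started, cluster ENV variables are already set:
env | grep KUBERNETES            # sanity check
go build -o app . && ./app       # build and run the service locally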

NB: When using PHP, for example, remember to disable OPcache, APC and other accelerators for development; otherwise editing the code will not lead to the desired result.
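For instance, this could be a development-only override for PHP (a sketch; the exact directives depend on the extensions in use):

; conf.d/zz-dev.ini
opcache.enable=0
opcache.enable_cli=0
apc.enabled=0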

Results

Local development with Kubernetes is a problem whose importance grows in proportion to the spread of this platform. Receiving relevant requests from developers (from our clients), we began to solve them with the first means available, which, however, did not prove themselves in the long run. Fortunately, this has become obvious not only now and not only to us, so more suitable tools have already appeared in the world, and Telepresence is the most famous of them (by the way, there is also Skaffold from Google). Our experience of using it is not yet that extensive, but it already gives us reason to recommend it to our colleagues in the trade. Give it a try!

PS

Others from the K8s tips & tricks series:

Source: habr.com
