Running Camunda BPM on Kubernetes


Are you using Kubernetes? Ready to move your Camunda BPM instances off virtual machines, or maybe you just want to try running them on Kubernetes? Let's take a look at some common configurations and the individual pieces that can be tailored to your specific needs.

This post assumes you have used Kubernetes before. If not, why not take a look at the guide and start your first cluster?

Authors

  • Alastair Firth, Senior Site Reliability Engineer on the Camunda Cloud team;
  • Lars Lange, DevOps engineer at Camunda.

In short:

git clone https://github.com/camunda-cloud/camunda-examples.git
cd camunda-examples/camunda-bpm-demo
make skaffold

Okay, it probably didn't work because you don't have skaffold and kustomize installed. Well then, read on!

What is Camunda BPM

Camunda BPM is an open source business process management and decision automation platform that connects business users and software developers. It's perfect for coordinating and bringing together people, (micro)services or even bots! Read more about the different use cases at the link.

Why use Kubernetes

Kubernetes has become the de facto standard for running modern applications on Linux. By using system calls instead of hardware emulation, and by using the kernel's memory management and task switching capabilities, boot and startup times are kept to a minimum. The biggest benefit, however, may come from the standard API that Kubernetes provides for setting up the infrastructure that all applications need: storage, networking, and monitoring. It turned six years old in June 2020 and is perhaps the second largest open source project (after Linux). It has recently been actively stabilizing its functionality after the rapid iteration of the past few years, as it has become critical for production workloads around the world.

Camunda BPM Engine can easily connect to other applications running on the same cluster, and Kubernetes provides excellent scalability, allowing you to increase infrastructure costs only when you really need it (and easily reduce them as needed).

The quality of monitoring is also greatly improved with tools such as Prometheus, Grafana, Loki, Fluentd, and Elasticsearch that allow you to centrally view all cluster workloads. Today we'll look at how to embed the Prometheus exporter into the Java Virtual Machine (JVM).

Goals

Let's look at a few areas where we can customize the Camunda BPM Docker image (github) so that it interacts well with Kubernetes.

  1. Logs and metrics;
  2. Database connections;
  3. Authentication;
  4. Session management.

We will look at several ways to achieve these goals and visually show the whole process.

Note: Are you using the Enterprise edition? Look here and update the image links as needed.

Workflow development

In this demo, we will use Skaffold to build Docker images with Google Cloud Build. It has good support for various tools (such as Kustomize and Helm), CI and build tools, and infrastructure providers. The skaffold.yaml.tmpl file includes settings for Google Cloud Build and GKE, which provides a very easy way to run production-grade infrastructure.

make skaffold will upload the Dockerfile context to Cloud Build, build the image and store it in GCR, and then apply the manifests to your cluster. That is all make skaffold does, but Skaffold has many more options.
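For orientation, here is a minimal sketch of what such a Skaffold config can look like; the image name and project ID are placeholders, and the real skaffold.yaml.tmpl in the repository may differ:

-- skaffold.yaml (sketch)
apiVersion: skaffold/v1
kind: Config
build:
  artifacts:
    - image: gcr.io/my-project/camunda-bpm   # placeholder image name
      context: images/camunda-bpm
  googleCloudBuild:
    projectId: my-project                    # placeholder GCP project ID
deploy:
  kustomize: {}                              # apply the kustomization in the current directory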

To template the Kubernetes YAML, we use kustomize to manage overlays without forking the entire manifest, which lets you pick up upstream improvements with git pull --rebase. It is built into kubectl now, and it works pretty well for things like this.
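For example, once the templates have been rendered, you can build and apply the overlay with kubectl alone:

# Render the manifests without applying them (kustomize is built into kubectl 1.14+)
kubectl kustomize .

# Build and apply them in one step
kubectl apply -k .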

We also use envsubst to populate the hostname and GCP project ID in the *.yaml.tmpl files. You can see how it works in the Makefile, or just keep going.
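If you want to do that step by hand, it boils down to something like this; the variable names here are illustrative, and the Makefile is the authoritative source:

# Variable names are illustrative; check the Makefile for the ones actually used.
export PROJECT_ID=my-gcp-project
export HOSTNAME=camunda.example.com
envsubst < kustomization.yaml.tmpl > kustomization.yaml
envsubst < ingress-patch.yaml.tmpl > ingress-patch.yaml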

Prerequisites

Workflow with Manifests

If you don't want to use kustomize or skaffold, you can refer to the manifests in generated-manifest.yaml and adapt them to the workflow of your choice.

Logs and Metrics

Prometheus has become the standard for collecting metrics in Kubernetes. It occupies the same niche as AWS CloudWatch Metrics, CloudWatch Alerts, Stackdriver Metrics, StatsD, Datadog, Nagios, vSphere Metrics and others. It is open source and has a powerful query language. We'll leave visualization to Grafana, which ships with a large number of dashboards out of the box. The two integrate with each other and are relatively easy to install with prometheus-operator.

By default, Prometheus uses a pull model, scraping <service>/metrics, and adding sidecar containers for this is common. Unfortunately, JMX metrics are best captured inside the JVM itself, so sidecar containers are not as effective. Let's attach the open source jmx_exporter from Prometheus to the JVM by adding it to the container image; it will expose /metrics on a separate port.

Add the Prometheus jmx_exporter to the container

-- images/camunda-bpm/Dockerfile
FROM camunda/camunda-bpm-platform:tomcat-7.11.0

## Add prometheus exporter
RUN wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/0.11.0/jmx_prometheus_javaagent-0.11.0.jar -P lib/
#9404 is the reserved prometheus-jmx port
ENV CATALINA_OPTS -javaagent:lib/jmx_prometheus_javaagent-0.11.0.jar=9404:/etc/config/prometheus-jmx.yaml

Well, that was easy. The exporter will monitor Tomcat and expose its metrics in Prometheus format at <svc>:9404/metrics.

Exporter setup

The attentive reader may wonder where prometheus-jmx.yaml comes from. Many different things can run inside the JVM, and Tomcat is just one of them, so the exporter needs some extra configuration. Standard configurations for Tomcat, Wildfly, Kafka and so on are available here. We will add the Tomcat config as a ConfigMap in Kubernetes and then mount it as a volume.

First, we add the exporter configuration file to our platform/config/ directory:

platform/config
└── prometheus-jmx.yaml
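If you just need a starting point, a minimal catch-all configuration like the one below exports everything the JVM publishes over JMX. This is a sketch rather than the exact file in the repository, and the upstream per-server examples are more selective:

-- platform/config/prometheus-jmx.yaml (sketch)
# Export every MBean with default naming; tighten the rules for production.
lowercaseOutputName: true
lowercaseOutputLabelNames: true
rules:
  - pattern: ".*"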

We then add a configMapGenerator in kustomization.yaml.tmpl:

-- platform/kustomization.yaml.tmpl
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
[...]
configMapGenerator:
  - name: config
    files:
      - config/prometheus-jmx.yaml

This will add every entry in files[] as a key in the ConfigMap. ConfigMapGenerators are great because they hash the configuration data and trigger a pod restart when it changes. They also reduce the amount of configuration in the Deployment, since you can mount a whole "folder" of configuration files with a single VolumeMount.

Finally, we need to mount the ConfigMap as a volume to the pod:

-- platform/deployment.yaml
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    spec:
      [...]
      volumes:
        - name: config
          configMap:
            name: config
            defaultMode: 0744
      containers:
        - name: camunda-bpm
          volumeMounts:
            - mountPath: /etc/config/
              name: config
      [...]

Wonderful. If your Prometheus is not already configured to scrape everything, you may need to tell it to scrape these pods. Prometheus Operator users can start from service-monitor.yaml. Have a look at service-monitor.yaml, the operator design and the ServiceMonitorSpec before you begin.
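As a rough sketch, a ServiceMonitor for this setup could look like the following; the label selector and port name are assumptions that must match your Service and your Prometheus configuration, so treat the repository's service-monitor.yaml as the reference:

-- service-monitor.yaml (sketch)
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: camunda-bpm
  namespace: camunda-bpm-demo
spec:
  selector:
    matchLabels:
      app: camunda-bpm        # must match the labels on the Service
  endpoints:
    - port: metrics           # the named Service port that maps to 9404
      interval: 30s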

Extending this pattern to other use cases

All files we add to the configMapGenerator will be available in the new /etc/config directory. You can extend this pattern to mount any other configuration files you need. You can even mount a new startup script. Use subPath to mount individual files, as in the sketch below. For updating XML files, consider using xmlstarlet instead of sed; it is already included in the image.
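For example, mounting a single file out of the same ConfigMap instead of the whole /etc/config directory looks roughly like this; the file name and target path are illustrative:

-- fragment of platform/deployment.yaml (sketch)
          volumeMounts:
            - name: config
              mountPath: /camunda/conf/my-extra-config.xml   # hypothetical target path
              subPath: my-extra-config.xml                   # key/file name inside the ConfigMap
              readOnly: true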

Logging

Great news! Application logs are already written to stdout, so they are available with, for example, kubectl logs. Fluentd (installed by default in GKE) will forward your logs to Elasticsearch, Loki, or your enterprise logging platform. If you want your logs as JSON, you can follow the pattern above to configure logback.

Database

By default, the image ships with an H2 database. That does not suit us, so we will use Google Cloud SQL with the Cloud SQL Proxy instead; this will also be needed later on. This is an easy and reliable option if you don't have strong preferences about your database setup. AWS RDS provides a similar service.
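One way to run the proxy is as a sidecar container in the same pod; below is a sketch with a placeholder instance connection name. The DB_URL further down points at a separate postgres-proxy service instead, which works just as well:

-- fragment of the containers list (sketch)
        - name: cloud-sql-proxy
          image: gcr.io/cloudsql-docker/gce-proxy:1.17
          command:
            - /cloud_sql_proxy
            - -instances=my-project:europe-west1:camunda-db=tcp:5432   # placeholder instance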

Whatever database you choose (unless it's H2), you'll need to set the appropriate environment variables in platform/deployment.yaml. It looks something like this:

-- platform/deployment.yaml
apiVersion: apps/v1
kind: Deployment
[...]
spec:
  template:
    spec:
      [...]
      containers:
        - name: camunda-bpm
          env:
            - name: DB_DRIVER
              value: org.postgresql.Driver
            - name: DB_URL
              value: jdbc:postgresql://postgres-proxy.db:5432/process-engine
            - name: DB_USERNAME
              valueFrom:
                secretKeyRef:
                  name: cambpm-db-credentials
                  key: db_username
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: cambpm-db-credentials
                  key: db_password
      [...]

Note: You can use Kustomize to deploy to different environments using an overlay; see the example.

Note the use of valueFrom: secretKeyRef. Please use this Kubernetes feature even during development to keep your secrets safe.

You probably already have a preferred system for managing Kubernetes secrets. If not, here are some options: encrypt them with your cloud provider's KMS and inject them into Kubernetes as Secrets via your CD pipeline; Mozilla SOPS works very well for this in combination with Kustomize secrets. There are other tools that serve a similar purpose, such as dotGPG, HashiCorp Vault, and Kustomize secret value plugins.
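For development, the credentials referenced by the Deployment above can be created directly; in a real pipeline the Secret would come out of SOPS, KMS or Vault instead. The username and password here are placeholders:

kubectl create secret generic cambpm-db-credentials \
  --namespace camunda-bpm-demo \
  --from-literal=db_username=process-engine \
  --from-literal=db_password='change-me'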

Ingress

Unless you plan to use local port forwarding, you will need a working Ingress Controller. If you are not using ingress-nginx (Helm chart), you probably already know which annotations you need to set in ingress-patch.yaml.tmpl or platform/ingress.yaml. If you are using ingress-nginx and it already has an nginx ingress class with a load balancer pointing to it and an external or wildcard DNS entry, you're done. Otherwise, set up an Ingress Controller and DNS, or skip these steps and connect directly to the pod.
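For reference, a minimal ingress-nginx Ingress for this service could look roughly like this; the repository's platform/ingress.yaml plus ingress-patch.yaml.tmpl is the real source, so treat this only as a sketch:

-- ingress (sketch)
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: camunda-bpm
  namespace: camunda-bpm-demo
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: you.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: camunda-bpm
              servicePort: 8080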

TLS

If you are using cert-manager or kube-lego with Let's Encrypt, certificates for the new hostname will be obtained automatically. Otherwise, open ingress-patch.yaml.tmpl and adapt it to your needs.

Launch!

If you followed everything above, the command make skaffold HOSTNAME=<you.example.com> should launch a working instance at <hostname>/camunda.

If you haven't set up a public URL, you can forward the service to localhost with kubectl port-forward -n camunda-bpm-demo svc/camunda-bpm 8080:8080 and open localhost:8080/camunda.

Wait a few minutes for Tomcat to be fully ready; cert-manager also takes some time to verify the domain name. After that, you can follow the logs with a tool such as kubetail, or just with kubectl:

kubectl logs -n camunda-bpm-demo $(kubectl get pods -o=name -n camunda-bpm-demo) -f

Next Steps

Authentication

This is more of a Camunda BPM concern than a Kubernetes one, but it is important to note that authentication is disabled by default in the REST API. You can enable basic authentication or use another method such as JWT. You can use ConfigMaps and volumes to load the XML, or use xmlstarlet (see above) to edit the existing files in the image, and either use wget or fetch the files with an init container and a shared volume.
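A sketch of the init container approach: fetch a prepared file at startup and share it with the main container through an emptyDir volume. The URL and target path are hypothetical:

-- fragment of the pod spec (sketch)
      initContainers:
        - name: fetch-config
          image: busybox:1.32
          # Download a pre-built web.xml from a location you control (hypothetical URL)
          command: ["wget", "-O", "/shared/web.xml", "https://example.com/camunda/web.xml"]
          volumeMounts:
            - name: shared-config
              mountPath: /shared
      containers:
        - name: camunda-bpm
          volumeMounts:
            - name: shared-config
              mountPath: /camunda/webapps/engine-rest/WEB-INF/web.xml   # hypothetical target path
              subPath: web.xml
      volumes:
        - name: shared-config
          emptyDir: {}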

Session Management

Like many other applications, Camunda BPM keeps sessions in the JVM, so if you want to run multiple replicas, you can either enable sticky sessions (for example in ingress-nginx), which live only until the replica disappears, or set the Max-Age attribute on the cookies. As a more robust solution, you can deploy a Session Manager in Tomcat. Lars has a separate post on this topic, but it boils down to something like:

wget http://repo1.maven.org/maven2/de/javakaffee/msm/memcached-session-manager/2.3.2/memcached-session-manager-2.3.2.jar -P lib/ &&
wget http://repo1.maven.org/maven2/de/javakaffee/msm/memcached-session-manager-tc9/2.3.2/memcached-session-manager-tc9-2.3.2.jar -P lib/ &&
sed -i '/^<\/Context>/i\
<Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"\
memcachedNodes="redis://redis-proxy.db:22121"\
sticky="false"\
sessionBackupAsync="false"\
storageKeyPrefix="context"\
lockingMode="auto"\
/>' conf/context.xml

Note: you can use xmlstarlet instead of sed.

We used twemproxy in front of Google Cloud Memorystore, with memcached-session-manager (which supports Redis) to drive it.

Scaling

If you have sessions figured out, the first (and often the last) limit to scaling Camunda BPM is the database connection. Some tuning is already available "out of the box". We also disable initialSize in the settings.xml file. Add a Horizontal Pod Autoscaler (HPA) and you can easily scale the number of pods automatically.
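A sketch of such an HPA, scaling the Deployment on CPU utilization; the Deployment name, thresholds and replica counts are illustrative:

-- hpa (sketch)
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: camunda-bpm
  namespace: camunda-bpm-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: camunda-bpm        # must match the real Deployment name
  minReplicas: 1
  maxReplicas: 4
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80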

Requests and limits

In platform/deployment.yaml you'll see that we have hard-coded the resources field. This works well with the HPA, but it may need some additional tuning. A kustomize patch is a good fit for this; see ingress-patch.yaml.tmpl and ./kustomization.yaml.tmpl.
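A hypothetical strategic-merge patch that overrides the hard-coded resources could look like this; reference it from kustomization.yaml under patchesStrategicMerge, and adjust the names and numbers to your Deployment:

-- resources-patch.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: camunda-bpm          # must match the real Deployment name
spec:
  template:
    spec:
      containers:
        - name: camunda-bpm
          resources:
            requests:
              cpu: "1"
              memory: 1Gi
            limits:
              cpu: "2"
              memory: 2Gi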

Conclusion

In this post we installed Camunda BPM on Kubernetes with Prometheus metrics, logging, an H2 database, TLS and Ingress. We added jar files and configuration files using ConfigMaps and the Dockerfile. We talked about passing data to the application via volumes and via environment variables sourced from Secrets. In addition, we gave an overview of setting up Camunda for multiple replicas and an authenticated API.

References

github.com/camunda-cloud/camunda-examples/camunda-bpm-kubernetes

├── generated-manifest.yaml       <- manifest for use without kustomize
├── images
│   └── camunda-bpm
│       └── Dockerfile            <- overlay Docker image
├── ingress-patch.yaml.tmpl       <- site-specific ingress configuration
├── kustomization.yaml.tmpl       <- main Kustomization
├── Makefile                      <- make targets
├── namespace.yaml
├── platform
│   ├── config
│   │   └── prometheus-jmx.yaml   <- prometheus exporter config file
│   ├── deployment.yaml           <- main deployment
│   ├── ingress.yaml
│   ├── kustomization.yaml        <- "base" kustomization
│   ├── service-monitor.yaml      <- example prometheus-operator config
│   └── service.yaml
└── skaffold.yaml.tmpl            <- skaffold directives

05.08.2020, translation of the article by Alastair Firth and Lars Lange

Source: habr.com
