Monitoring postgres inside OpenShift

Good day, residents of Habr!

Today I want to tell you how we set out to monitor postgres and a couple of other things inside an OpenShift cluster, and how we did it.

What we had to start with:

  • OpenShift
  • Helm
  • Prometheus


For the Java application everything was quite simple and transparent; to be more precise:

1) Add the Micrometer Prometheus registry to build.gradle:

 implementation "io.micrometer:micrometer-registry-prometheus"
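
For the /actuator/prometheus endpoint to actually show up, the application also needs Spring Boot Actuator (spring-boot-starter-actuator) with the prometheus endpoint exposed. A minimal sketch, assuming a Spring Boot application — everything except the standard property names is illustrative:

 # application.yml — expose the prometheus endpoint via Actuator
 management:
   endpoints:
     web:
       exposure:
         include: prometheus,health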

2) Run Prometheus with a config along these lines:

  - job_name: 'job-name'
    metrics_path: '/actuator/prometheus'
    scrape_interval: 5s
    kubernetes_sd_configs:
    - role: pod
      namespaces:
        names:
          - 'name'
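
With role: pod, Prometheus will try to scrape every pod in the namespace. If that turns out to be too broad, a relabel_configs block can be appended to the same job so that only pods which opt in via the usual prometheus.io/scrape annotation are kept — a sketch using the standard kubernetes_sd meta labels:

    relabel_configs:
    # keep only pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: "true"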

3) Adding display in Grafana
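
On the Grafana side the only wiring needed is a Prometheus datasource. A minimal provisioning sketch — the URL and file path are assumptions, substitute whatever your Prometheus service is actually called:

 # e.g. /etc/grafana/provisioning/datasources/prometheus.yaml
 apiVersion: 1
 datasources:
   - name: Prometheus
     type: prometheus
     access: proxy
     url: http://prometheus:9090
     isDefault: true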

Everything was quite simple and prosaic until the time came to monitor the databases that live right next to us in the same namespace (yes, this is bad practice and nobody should do it, but it happens).

How does this work?

In addition to the postgres pod and Prometheus itself, we need one more entity: an exporter.

An exporter, roughly speaking, is an agent that collects metrics from an application or even from a whole server. For postgres the exporter is written in Go; it works by running SQL queries against the database and exposing the results for Prometheus to scrape. It also lets you extend the collected metrics with your own queries.
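
To make the "run SQL, expose the result" idea concrete: custom metrics are described in a YAML file that the exporter loads via its extend-query-path option (the PG_EXPORTER_EXTEND_QUERY_PATH environment variable in the common Go postgres exporter). A sketch with made-up names:

 # queries.yaml — each top-level key becomes a metric prefix
 pg_database:
   query: "SELECT datname, pg_database_size(datname) AS size_bytes FROM pg_database"
   metrics:
     - datname:
         usage: "LABEL"
         description: "Name of the database"
     - size_bytes:
         usage: "GAUGE"
         description: "Database size in bytes"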

Let's deploy it like this (an example deployment.yaml, purely illustrative):


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-exporter
  labels:
    app: {{ .Values.name }}
    monitoring: prometheus
spec:
  replicas: 1
  revisionHistoryLimit: 5
  selector:
    matchLabels:
      app: postgres-exporter
  template:
    metadata:
      labels:
        app: postgres-exporter
        monitoring: prometheus
    spec:
      containers:
      - env:
        - name: DATA_SOURCE_URI
          value: postgresdb:5432/pstgr?sslmode=disable
        - name: DATA_SOURCE_USER
          value: postgres
        # in real life the password should come from a Secret, not a plain-text value
        - name: DATA_SOURCE_PASS
          value: postgres
        resources:
          limits:
            cpu: 100m
            memory: 50Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          tcpSocket:
            port: metrics
          initialDelaySeconds: 30
          periodSeconds: 30
        readinessProbe:
          tcpSocket:
            port: metrics
          initialDelaySeconds: 10
          periodSeconds: 30
        image: exporter
        name: postgres-exporter
        ports:
        - containerPort: 9187
          name: metrics

It also needs a Service and an ImageStream.
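
A minimal Service sketch, so that both Prometheus and the DNS discovery shown below can find the exporter on port 9187:

---
apiVersion: v1
kind: Service
metadata:
  name: postgres-exporter
  labels:
    app: postgres-exporter
spec:
  selector:
    app: postgres-exporter
  ports:
  - name: metrics
    port: 9187
    targetPort: metrics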

Once everything is deployed, we really want everyone to see each other.

Add the following piece to the Prometheus config:

  - job_name: 'postgres_exporter'
    metrics_path: '/metrics'
    scrape_interval: 5s
    dns_sd_configs:
    - names:
      - 'postgres-exporter'
      type: 'A'
      port: 9187

And that's it, it all worked; all that was left was to add this goodness to Grafana and enjoy the result.
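
A few example queries for the first Grafana panels, built on metrics the exporter exposes out of the box (panel layout and thresholds are up to you):

 # can the exporter reach the database at all
 pg_up

 # active connections per database
 pg_stat_database_numbackends

 # commit rate over the last five minutes
 rate(pg_stat_database_xact_commit[5m])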

Besides adding your own queries, you can also tweak the Prometheus configuration to collect more targeted metrics.
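
For example, if the exporter turns out to be too chatty, unneeded series can be dropped right in the scrape job with metric_relabel_configs — a sketch appended to the postgres_exporter job above, the regex is just an illustration:

    metric_relabel_configs:
    # drop the Go runtime internals we never chart anyway
    - source_labels: [__name__]
      action: drop
      regex: 'go_.*'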

We did the same thing for:

  • Kafka
  • Elasticsearch
  • Mongo

P.S. All the names, ports and other details here are made up and don't mean anything.

Useful links:
List of various exporters
