Monitoring and logging external services to a Kubernetes cluster

Hello everyone.

I could not find a general guide to logging and collecting metrics from third-party services for systems deployed in Kubernetes, so I am sharing my solution. This article assumes that you already have a working Prometheus and other services. As an example of a data source for an external stateful service, we will use a DBMS: PostgreSQL running in a Docker container. Our company uses the Helm package manager, and the examples below rely on it. For the whole solution, we prepare our own umbrella chart that includes subcharts of all the services used.
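For illustration, the dependency list of such an umbrella chart could look like the sketch below; the chart name and the version constraints are placeholders, while the repositories are the public Grafana and Prometheus community ones.

Chart.yaml

apiVersion: v2
name: external-services-monitoring   # placeholder name
version: 0.1.0
dependencies:
  # Pin the versions you have actually tested; these are placeholders
  - name: loki
    version: "~2.0.0"
    repository: "https://grafana.github.io/helm-charts"
  - name: prometheus
    version: "~15.0.0"
    repository: "https://prometheus-community.github.io/helm-charts"
  - name: grafana
    version: "~6.0.0"
    repository: "https://grafana.github.io/helm-charts"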

Logging

Many companies collect, centralize, and view logs with the Elasticsearch + Logstash + Kibana stack, ELK for short. In our case there is no need to index the log content, so I used the more lightweight Loki. It is available as a Helm chart; we added it as a subchart and changed the values for the ingress and the persistent volume to suit our system.

values.yaml

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx
  hosts:
    - host: kube-loki.example.domain
      paths:
        - /
  tls: []

...

persistence:
  type: pvc
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 100Gi
  finalizers:
    - kubernetes.io/pvc-protection
  existingClaim: "pv-loki"
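
Note that existingClaim is set, so the pv-loki claim must exist before the chart is installed. A minimal sketch of such a claim (the storage class is cluster-specific and omitted here):

pv-loki.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-loki
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi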

To ship logs to the Loki instance, we use the Loki Docker logging driver.

This plugin must be installed on every Docker host from which you want to collect logs. There are several ways to tell the Docker daemon to use the plugin; I select the driver in the Docker Compose YAML, which is part of an Ansible playbook.
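The installation itself can be a playbook task as well. A minimal sketch (the plugin name and flags come from the Grafana documentation; idempotency handling is omitted):

    # Installs the Loki logging driver plugin on the Docker host
    - name: Install Loki Docker logging plugin
      command: docker plugin install grafana/loki-docker-driver:latest --alias loki --grant-all-permissions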

postgres.yaml

    - name: Run containers
      docker_compose:
        project_name: main-postgres
        definition:
          version: '3.7'
          services:
            p:
              image: "{{ postgres_version }}"
              container_name: postgresql
              restart: always
              volumes:
                - "{{ postgres_dir }}/data:/var/lib/postgresql/data"
                - "{{ postgres_dir }}/postgres_init_scripts:/docker-entrypoint-initdb.d"
              environment:
                # Values starting with {{ must be quoted in Ansible YAML
                POSTGRES_PASSWORD: "{{ postgres_pass }}"
                POSTGRES_USER: "{{ postgres_user }}"
              ports:
                - "{{ postgres_ip }}:{{ postgres_port }}:5432"
              logging:
                driver: "loki"
                options:
                  loki-url: "{{ loki_url }}"
                  loki-batch-size: "{{ loki_batch_size }}"
                  loki-retries: "{{ loki_retries }}"
...

where loki_url is, for example, http://kube-loki.example.domain/loki/api/v1/push — the scheme is required, and /loki/api/v1/push is the push endpoint of Loki's HTTP API. Once the containers are running, the logs can be queried in Grafana's Explore view, e.g. with the stream selector {container_name="postgresql"}; container_name is one of the labels the Loki driver attaches automatically.

Metrics

Metrics are collected from PostgreSQL using postgres_exporter for Prometheus. Below is a continuation of the same Ansible playbook.

postgres.yaml

...
            pexp:
              image: "wrouesnel/postgres_exporter"
              container_name: pexporter
              restart: unless-stopped
              environment:
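                # "p" in the connection string below is the Compose service name of the PostgreSQL container defined above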
                DATA_SOURCE_NAME: "postgresql://{{ postgres_user }}:{{ postgres_pass }}@p:5432/postgres?sslmode=disable"
              ports:
                - "{{ postgres_ip }}:{{ postgres_exporter_port }}:9187"
              logging:
                driver: "json-file"
                options:
                  max-size: "5m"
...
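
To quickly check that the exporter responds, a verification task can be added to the same playbook; this is a sketch, not part of the original playbook:

    - name: Check that postgres_exporter serves metrics
      uri:
        url: "http://{{ postgres_ip }}:{{ postgres_exporter_port }}/metrics"
        status_code: 200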

To address the external stateful services by name inside the cluster, we describe them with a Service without a selector plus a manually created Endpoints object; the Endpoints name must match the Service name so the two are linked.

postgres-service.yaml

apiVersion: v1
kind: Endpoints
metadata:
  name: postgres-exporter
subsets:
  - addresses:
      - ip: {{ .Values.service.postgres.ip }}
    ports:
      - port: {{ .Values.service.postgres.port }}
        protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-exporter
  labels:
    chart:  "{{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}"
spec:
  ports:
    - protocol: TCP
      port: {{ .Values.service.postgres.port }}
      targetPort: {{ .Values.service.postgres.port }}

Prometheus is pointed at postgres_exporter by editing the values of the Prometheus subchart (applicationnamespace below stands for the namespace into which the Service above is deployed).

values.yaml

scrape_configs:
...
  - job_name: postgres-exporter
    static_configs:
      - targets:
          - postgres-exporter.applicationnamespace.svc.cluster.local:9187
        labels:
          alias: postgres
...

To visualize the collected data, install a suitable dashboard in Grafana and configure the data source. This, too, can be done through the values of the Grafana subchart.
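
A sketch of such Grafana subchart values, following the structure documented in the Grafana Helm chart; the Prometheus URL is an assumption about your release name, and 9628 is the community "PostgreSQL Database" dashboard for postgres_exporter on grafana.com:

values.yaml

datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
      # URL depends on the Prometheus release/service name in your cluster
      - name: Prometheus
        type: prometheus
        url: http://prometheus-server
        access: proxy
        isDefault: true

dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
      - name: default
        orgId: 1
        folder: ''
        type: file
        disableDeletion: false
        editable: true
        options:
          path: /var/lib/grafana/dashboards/default

dashboards:
  default:
    postgresql:
      # Community PostgreSQL dashboard for postgres_exporter
      gnetId: 9628
      revision: 7   # placeholder revision
      datasource: Prometheus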

What it looks like

[Screenshot: the resulting Grafana dashboard]

I hope this short article has helped you grasp the main ideas behind this solution and will save you time when setting up monitoring and logging of external services with Loki and Prometheus in a Kubernetes cluster.

Source: habr.com
