Sending Nginx json logs with Vector to Clickhouse and Elasticsearch

Vector is designed for collecting, transforming, and sending log data, metrics, and events.

→ GitHub

Being written in Rust, it offers high performance and low RAM consumption compared to its analogues. In addition, a lot of attention is paid to features related to correctness, in particular the ability to save unsent events to a buffer on disk and to rotate files.

Architecturally, Vector is an event router that receives messages from one or more sources, optionally applies transforms to those messages, and sends them to one or more sinks.

Vector is a replacement for filebeat and logstash; it can act in both roles (receive and send logs). More details are on their website.

Where Logstash builds its chain as input → filter → output, in Vector it is sources → transforms → sinks.

Examples can be found in the documentation.
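To make the sources → transforms → sinks model concrete, here is a minimal sketch you can run by hand. The component names in, parse, and out are purely illustrative, and the syntax follows the 0.9-era field names used throughout this article:

# minimal illustration: read JSON from stdin, parse it, print it back out
cat > /tmp/vector-minimal.toml <<'EOF'
[sources.in]
  type = "stdin"

[transforms.parse]
  inputs = ["in"]
  type   = "json_parser"

[sinks.out]
  inputs   = ["parse"]
  type     = "console"
  encoding = "json"
EOF

echo '{"msg": "hello"}' | vector --config /tmp/vector-minimal.toml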

This how-to is a reworked version of the instructions by Vyacheslav Rakhinsky. The original instructions include geoip processing. When testing geoip from an internal network, vector returned an error:

Aug 05 06:25:31.889 DEBUG transform{name=nginx_parse_rename_fields type=rename_fields}: vector::transforms::rename_fields: Field did not exist field="geoip.country_name" rate_limit_secs=30

If anyone needs geoip processing, refer to the original instructions by Vyacheslav Rakhinsky.

We will set up the chain Nginx (access logs) → Vector (Client | Filebeat) → Vector (Server | Logstash) → separately into Clickhouse and separately into Elasticsearch. We will install 4 servers, although you could get by with 3.

The overall scheme looks something like this:

[Diagram: Nginx access logs → Vector (client) → Vector (server) → Clickhouse and Elasticsearch]

Disable SELinux on all servers

sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
reboot
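The sed edit only takes effect after the reboot; if you also want the change to apply to the running system right away, you can additionally switch SELinux to permissive mode for the current session:

setenforce 0   # takes effect immediately, lasts until reboot
getenforce     # should print Permissive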

Installing an HTTP server emulator and utilities on all servers

As the HTTP server emulator we will use nodejs-stub-server by Maxim Ignatenko.

There is no rpm for nodejs-stub-server, so an RPM was created for it. The RPM is built using Fedora Copr.

Add the antonpatsev/nodejs-stub-server repository:

yum -y install yum-plugin-copr epel-release
yes | yum copr enable antonpatsev/nodejs-stub-server

Install nodejs-stub-server, Apache benchmark, and the screen terminal multiplexer on all servers:

yum -y install stub_http_server screen mc httpd-tools

In the file /var/lib/stub_http_server/stub_http_server.js, I adjusted the response time so that there would be more logs:

var max_sleep = 10;

Start stub_http_server:

systemctl start stub_http_server
systemctl enable stub_http_server
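A quick sanity check that the stub answers (port 8080 is assumed here because the nginx vhosts below proxy to it):

systemctl status stub_http_server --no-pager
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/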

Installing Clickhouse on the 3rd server

ClickHouse uses the SSE 4.2 instruction set, so unless specified otherwise, support for it in the processor becomes an additional system requirement. Here is the command to check whether the current processor supports SSE 4.2:

grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"

First, connect the official repository:

sudo yum install -y yum-utils
sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64

To install the packages, run the following command:

sudo yum install -y clickhouse-server clickhouse-client

Allow clickhouse-server to listen on the network interface in the file /etc/clickhouse-server/config.xml:

<listen_host>0.0.0.0</listen_host>

Change the logging level from trace to debug:

debug

Standard compression settings:

min_compress_block_size  65536
max_compress_block_size  1048576

To activate Zstd compression, it was advised not to touch the config but to use DDL.

I could not find in Google how to apply zstd compression via DDL, so I left it as is.

Colleagues who use zstd compression in Clickhouse, please share the commands.
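For what it is worth, newer ClickHouse releases do accept per-column codecs directly in DDL; the sketch below is an assumption to verify against your server version, applied to the vector.logs table created in the next section:

# enable zstd compression for a single heavy column
clickhouse-client -h 172.26.10.109 -q "ALTER TABLE vector.logs MODIFY COLUMN request_full String CODEC(ZSTD(1))"

# the same codec can be declared inline in CREATE TABLE:
#   `request_full` String CODEC(ZSTD(1)),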

To start the server as a daemon, run:

service clickhouse-server start
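A quick way to verify that the server is alive is its HTTP interface on port 8123, the same port the Vector sink will use later:

curl -s http://localhost:8123/
# the server answers: Ok.
clickhouse-client -q "SELECT version()"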

Now let's move on to setting up Clickhouse.

Connect to Clickhouse:

clickhouse-client -h 172.26.10.109 -m

172.26.10.109 is the IP of the server where Clickhouse is installed.

Let's create a vector database:

CREATE DATABASE vector;

Let's check that the database exists:

show databases;

Create the vector.logs table:

/* This is the table where the logs are stored as is */

CREATE TABLE vector.logs
(
    `node_name` String,
    `timestamp` DateTime,
    `server_name` String,
    `user_id` String,
    `request_full` String,
    `request_user_agent` String,
    `request_http_host` String,
    `request_uri` String,
    `request_scheme` String,
    `request_method` String,
    `request_length` UInt64,
    `request_time` Float32,
    `request_referrer` String,
    `response_status` UInt16,
    `response_body_bytes_sent` UInt64,
    `response_content_type` String,
    `remote_addr` IPv4,
    `remote_port` UInt32,
    `remote_user` String,
    `upstream_addr` IPv4,
    `upstream_port` UInt32,
    `upstream_bytes_received` UInt64,
    `upstream_bytes_sent` UInt64,
    `upstream_cache_status` String,
    `upstream_connect_time` Float32,
    `upstream_header_time` Float32,
    `upstream_response_length` UInt64,
    `upstream_response_time` Float32,
    `upstream_status` UInt16,
    `upstream_content_type` String,
    INDEX idx_http_host request_http_host TYPE set(0) GRANULARITY 1
)
ENGINE = MergeTree()
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY timestamp
TTL timestamp + toIntervalMonth(1)
SETTINGS index_granularity = 8192;

Let's check that the table has been created. Launch clickhouse-client and run a query.

Go to the vector database:

use vector;

Ok.

0 rows in set. Elapsed: 0.001 sec.

Look at the tables:

show tables;

┌─name────────────────┐
│ logs                │
└─────────────────────┘

Installing Elasticsearch on the 4th server, to send the same data to Elasticsearch for comparison with Clickhouse

Add the public rpm key:

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create 2 repo files:

/etc/yum.repos.d/elasticsearch.repo

[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md

/etc/yum.repos.d/kibana.repo

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install elasticsearch and kibana:

yum install -y kibana elasticsearch

Since elasticsearch will run as a single instance, add the following to /etc/elasticsearch/elasticsearch.yml:

discovery.type: single-node

So that vector can send data to elasticsearch from another server, change network.host:

network.host: 0.0.0.0

To connect to kibana, change the server.host parameter in the file /etc/kibana/kibana.yml:

server.host: "0.0.0.0"

Enable autostart and start elasticsearch:

systemctl enable elasticsearch
systemctl start elasticsearch

And kibana:

systemctl enable kibana
systemctl start kibana

Configure elasticsearch for single-node mode: 1 shard, 0 replicas. Most likely you will have a cluster of many servers, in which case you do not need to do this.

For future indexes, update the default template:

curl -X PUT http://localhost:9200/_template/default -H 'Content-Type: application/json' -d '{"index_patterns": ["*"],"order": -1,"settings": {"number_of_shards": "1","number_of_replicas": "0"}}' 
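You can verify that the template was stored:

curl -s http://localhost:9200/_template/default?pretty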

Installing Vector as a replacement for Logstash on the 2nd server

yum install -y https://packages.timber.io/vector/0.9.X/vector-x86_64.rpm mc httpd-tools screen

Let's configure Vector as a replacement for Logstash. Edit the file /etc/vector/vector.toml:

# /etc/vector/vector.toml

data_dir = "/var/lib/vector"

[sources.nginx_input_vector]
  # General
  type                          = "vector"
  address                       = "0.0.0.0:9876"
  shutdown_timeout_secs         = 30

[transforms.nginx_parse_json]
  inputs                        = [ "nginx_input_vector" ]
  type                          = "json_parser"

[transforms.nginx_parse_add_defaults]
  inputs                        = [ "nginx_parse_json" ]
  type                          = "lua"
  version                       = "2"

  hooks.process = """
  function (event, emit)

    -- returns the part of s before the first delimiter, e.g. "1.2.3.4:443" -> "1.2.3.4"
    function split_first(s, delimiter)
      result = {};
      for match in (s..delimiter):gmatch("(.-)"..delimiter) do
          table.insert(result, match);
      end
      return result[1];
    end

    -- returns the part of s after the last delimiter, e.g. "-, -, 123" -> "123"
    function split_last(s, delimiter)
      result = {};
      for match in (s..delimiter):gmatch("(.-)"..delimiter) do
          table.insert(result, match);
      end
      return result[#result];
    end

    -- nginx may log several comma-separated upstream values when a request
    -- passed through more than one upstream; keep only the last one
    -- (and for the address also strip the port)
    event.log.upstream_addr             = split_first(split_last(event.log.upstream_addr, ', '), ':')
    event.log.upstream_bytes_received   = split_last(event.log.upstream_bytes_received, ', ')
    event.log.upstream_bytes_sent       = split_last(event.log.upstream_bytes_sent, ', ')
    event.log.upstream_connect_time     = split_last(event.log.upstream_connect_time, ', ')
    event.log.upstream_header_time      = split_last(event.log.upstream_header_time, ', ')
    event.log.upstream_response_length  = split_last(event.log.upstream_response_length, ', ')
    event.log.upstream_response_time    = split_last(event.log.upstream_response_time, ', ')
    event.log.upstream_status           = split_last(event.log.upstream_status, ', ')

    if event.log.upstream_addr == "" then
        event.log.upstream_addr = "127.0.0.1"
    end

    if (event.log.upstream_bytes_received == "-" or event.log.upstream_bytes_received == "") then
        event.log.upstream_bytes_received = "0"
    end

    if (event.log.upstream_bytes_sent == "-" or event.log.upstream_bytes_sent == "") then
        event.log.upstream_bytes_sent = "0"
    end

    if event.log.upstream_cache_status == "" then
        event.log.upstream_cache_status = "DISABLED"
    end

    if (event.log.upstream_connect_time == "-" or event.log.upstream_connect_time == "") then
        event.log.upstream_connect_time = "0"
    end

    if (event.log.upstream_header_time == "-" or event.log.upstream_header_time == "") then
        event.log.upstream_header_time = "0"
    end

    if (event.log.upstream_response_length == "-" or event.log.upstream_response_length == "") then
        event.log.upstream_response_length = "0"
    end

    if (event.log.upstream_response_time == "-" or event.log.upstream_response_time == "") then
        event.log.upstream_response_time = "0"
    end

    if (event.log.upstream_status == "-" or event.log.upstream_status == "") then
        event.log.upstream_status = "0"
    end

    emit(event)

  end
  """

[transforms.nginx_parse_remove_fields]
    inputs                              = [ "nginx_parse_add_defaults" ]
    type                                = "remove_fields"
    fields                              = ["data", "file", "host", "source_type"]

[transforms.nginx_parse_coercer]

    type                                = "coercer"
    inputs                              = ["nginx_parse_remove_fields"]

    types.request_length = "int"
    types.request_time = "float"

    types.response_status = "int"
    types.response_body_bytes_sent = "int"

    types.remote_port = "int"

    types.upstream_bytes_received = "int"
    types.upstream_bytes_sent = "int"
    types.upstream_connect_time = "float"
    types.upstream_header_time = "float"
    types.upstream_response_length = "int"
    types.upstream_response_time = "float"
    types.upstream_status = "int"

    types.timestamp = "timestamp"

[sinks.nginx_output_clickhouse]
    inputs   = ["nginx_parse_coercer"]
    type     = "clickhouse"

    database = "vector"
    healthcheck = true
    host = "http://172.26.10.109:8123" #  Адрес Clickhouse
    table = "logs"

    encoding.timestamp_format = "unix"

    buffer.type = "disk"
    buffer.max_size = 104900000
    buffer.when_full = "block"

    request.in_flight_limit = 20

[sinks.elasticsearch]
    type = "elasticsearch"
    inputs   = ["nginx_parse_coercer"]
    compression = "none"
    healthcheck = true
    # 172.26.10.116 - the server where elasticsearch is installed
    host = "http://172.26.10.116:9200"
    index = "vector-%Y-%m-%d"

You can adjust the transforms.nginx_parse_add_defaults section to your needs.

Vyacheslav Rakhinsky uses these configs for a small CDN, where there can be several values in the upstream_* fields.

For example:

"upstream_addr": "128.66.0.10:443, 128.66.0.11:443, 128.66.0.12:443"
"upstream_bytes_received": "-, -, 123"
"upstream_status": "502, 502, 200"

If this is not your situation, this section can be simplified.

Let's create the service settings for systemd in /etc/systemd/system/vector.service

# /etc/systemd/system/vector.service

[Unit]
Description=Vector
After=network-online.target
Requires=network-online.target

[Service]
User=vector
Group=vector
ExecStart=/usr/bin/vector
ExecReload=/bin/kill -HUP $MAINPID
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=vector

[Install]
WantedBy=multi-user.target

After creating the tables, you can start Vector:

systemctl enable vector
systemctl start vector

Vector logs can be viewed like this:

journalctl -f -u vector

The logs should contain entries like this:

INFO vector::topology::builder: Healthcheck: Passed.
INFO vector::topology::builder: Healthcheck: Passed.
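You can also confirm that the vector source from the config above is actually listening on port 9876:

ss -tlnp | grep 9876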

Setting up the client (web server) - the 1st server

On the server with nginx, you need to disable ipv6, since the logs table in clickhouse uses an upstream_addr field of type IPv4 (I do not use ipv6 inside the network). If ipv6 is not disabled, there will be errors:

DB::Exception: Invalid IPv4 value.: (while read the value of key upstream_addr)

Perhaps readers will add ipv6 support.

Create the file /etc/sysctl.d/98-disable-ipv6.conf:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Apply the settings:

sysctl --system
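And verify that ipv6 is really off:

sysctl net.ipv6.conf.all.disable_ipv6
# net.ipv6.conf.all.disable_ipv6 = 1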

Let's install nginx.

Add the nginx repository file /etc/yum.repos.d/nginx.repo:

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

Install the nginx package:

yum install -y nginx

First, we need to configure the log format in Nginx in the file /etc/nginx/nginx.conf:

user  nginx;
# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; #some last versions calculate it automatically

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how much clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    multi_accept on;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    log_format vector escape=json
        '{'
            '"node_name":"nginx-vector",'
            '"timestamp":"$time_iso8601",'
            '"server_name":"$server_name",'
            '"request_full": "$request",'
            '"request_user_agent":"$http_user_agent",'
            '"request_http_host":"$http_host",'
            '"request_uri":"$request_uri",'
            '"request_scheme": "$scheme",'
            '"request_method":"$request_method",'
            '"request_length":"$request_length",'
            '"request_time": "$request_time",'
            '"request_referrer":"$http_referer",'
            '"response_status": "$status",'
            '"response_body_bytes_sent":"$body_bytes_sent",'
            '"response_content_type":"$sent_http_content_type",'
            '"remote_addr": "$remote_addr",'
            '"remote_port": "$remote_port",'
            '"remote_user": "$remote_user",'
            '"upstream_addr": "$upstream_addr",'
            '"upstream_bytes_received": "$upstream_bytes_received",'
            '"upstream_bytes_sent": "$upstream_bytes_sent",'
            '"upstream_cache_status":"$upstream_cache_status",'
            '"upstream_connect_time":"$upstream_connect_time",'
            '"upstream_header_time":"$upstream_header_time",'
            '"upstream_response_length":"$upstream_response_length",'
            '"upstream_response_time":"$upstream_response_time",'
            '"upstream_status": "$upstream_status",'
            '"upstream_content_type":"$upstream_http_content_type"'
        '}';

    access_log  /var/log/nginx/access.log  main;
    access_log  /var/log/nginx/access.json.log vector;      # new log in json format

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

So as not to break the existing configuration, Nginx lets you have several access_log directives:

access_log  /var/log/nginx/access.log  main;            # standard log
access_log  /var/log/nginx/access.json.log vector;      # new log in json format

Don't forget to add a rule to logrotate for the new log (if the log file name does not end in .log).
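Since access.json.log does match the default /var/log/nginx/*.log pattern, the stock nginx logrotate rule already covers it; if you pick a name that does not, a separate rule could look roughly like this sketch (retention values here are assumptions, adjust to taste):

cat > /etc/logrotate.d/nginx-json <<'EOF'
/var/log/nginx/access.json.log {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}
EOF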

Remove default.conf from /etc/nginx/conf.d/:

rm -f /etc/nginx/conf.d/default.conf

Add the virtual host /etc/nginx/conf.d/vhost1.conf:

server {
    listen 80;
    server_name vhost1;
    location / {
        proxy_pass http://172.26.10.106:8080;
    }
}

Add the virtual host /etc/nginx/conf.d/vhost2.conf:

server {
    listen 80;
    server_name vhost2;
    location / {
        proxy_pass http://172.26.10.108:8080;
    }
}

Add the virtual host /etc/nginx/conf.d/vhost3.conf:

server {
    listen 80;
    server_name vhost3;
    location / {
        proxy_pass http://172.26.10.109:8080;
    }
}

Add the virtual host /etc/nginx/conf.d/vhost4.conf:

server {
    listen 80;
    server_name vhost4;
    location / {
        proxy_pass http://172.26.10.116:8080;
    }
}

Add the virtual hosts (172.26.10.106 is the IP of the server where nginx is installed) to the /etc/hosts file on all servers:

172.26.10.106 vhost1
172.26.10.106 vhost2
172.26.10.106 vhost3
172.26.10.106 vhost4

And once everything is ready:

nginx -t 
systemctl restart nginx
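A quick smoke test that requests flow through nginx and land in the new json log:

curl -s -o /dev/null -w "%{http_code}\n" http://vhost1/
tail -n 1 /var/log/nginx/access.json.log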

Now let's install Vector itself:

yum install -y https://packages.timber.io/vector/0.9.X/vector-x86_64.rpm

Create the settings file for systemd /etc/systemd/system/vector.service:

[Unit]
Description=Vector
After=network-online.target
Requires=network-online.target

[Service]
User=vector
Group=vector
ExecStart=/usr/bin/vector
ExecReload=/bin/kill -HUP $MAINPID
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=vector

[Install]
WantedBy=multi-user.target

And configure the Filebeat replacement in the /etc/vector/vector.toml config. The IP address 172.26.10.108 is the address of the log server (Vector-Server):

data_dir = "/var/lib/vector"

[sources.nginx_file]
  type                          = "file"
  include                       = [ "/var/log/nginx/access.json.log" ]
  start_at_beginning            = false
  fingerprinting.strategy       = "device_and_inode"

[sinks.nginx_output_vector]
  type                          = "vector"
  inputs                        = [ "nginx_file" ]

  address                       = "172.26.10.108:9876"

Don't forget to add the vector user to the appropriate group so that it can read the log files. For example, nginx on centos creates logs with adm group rights:

usermod -a -G adm vector
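You can check that the vector user can now actually read the log (su -s overrides the user's nologin shell and starts a fresh session, so the new group membership is picked up):

id vector
su -s /bin/sh vector -c 'head -n 1 /var/log/nginx/access.json.log'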

Start the vector service:

systemctl enable vector
systemctl start vector

Vector logs can be viewed like this:

journalctl -f -u vector

There should be an entry like this in the logs:

INFO vector::topology::builder: Healthcheck: Passed.

Load testing

We test using Apache benchmark.

The httpd-tools package was already installed on all servers.

We start testing with Apache benchmark from 4 different servers inside screen: first launch the screen terminal multiplexer, then start the benchmark. You can find out how to work with screen in this article.

From the 1st server:

while true; do ab -H "User-Agent: 1server" -c 100 -n 10 -t 10 http://vhost1/; sleep 1; done

From the 2nd server:

while true; do ab -H "User-Agent: 2server" -c 100 -n 10 -t 10 http://vhost2/; sleep 1; done

From the 3rd server:

while true; do ab -H "User-Agent: 3server" -c 100 -n 10 -t 10 http://vhost3/; sleep 1; done

From the 4th server:

while true; do ab -H "User-Agent: 4server" -c 100 -n 10 -t 10 http://vhost4/; sleep 1; done

Let's check the data in Clickhouse.

Connect to Clickhouse:

clickhouse-client -h 172.26.10.109 -m

Run an SQL query:

SELECT * FROM vector.logs;

┌─node_name────┬───────────timestamp─┬─server_name─┬─user_id─┬─request_full───┬─request_user_agent─┬─request_http_host─┬─request_uri─┬─request_scheme─┬─request_method─┬─request_length─┬─request_time─┬─request_referrer─┬─response_status─┬─response_body_bytes_sent─┬─response_content_type─┬───remote_addr─┬─remote_port─┬─remote_user─┬─upstream_addr─┬─upstream_port─┬─upstream_bytes_received─┬─upstream_bytes_sent─┬─upstream_cache_status─┬─upstream_connect_time─┬─upstream_header_time─┬─upstream_response_length─┬─upstream_response_time─┬─upstream_status─┬─upstream_content_type─┐
│ nginx-vector │ 2020-08-07 04:32:42 │ vhost1      │         │ GET / HTTP/1.0 │ 1server            │ vhost1            │ /           │ http           │ GET            │             66 │        0.028 │                  │             404 │                       27 │                       │ 172.26.10.106 │       45886 │             │ 172.26.10.106 │             0 │                     109 │                  97 │ DISABLED              │                     0 │                0.025 │                       27 │                  0.029 │             404 │                       │
└──────────────┴─────────────────────┴─────────────┴─────────┴────────────────┴────────────────────┴───────────────────┴─────────────┴────────────────┴────────────────┴────────────────┴──────────────┴──────────────────┴─────────────────┴──────────────────────────┴───────────────────────┴───────────────┴─────────────┴─────────────┴───────────────┴───────────────┴─────────────────────────┴─────────────────────┴───────────────────────┴───────────────────────┴──────────────────────┴──────────────────────────┴────────────────────────┴─────────────────┴───────────────────────┘

Find out the sizes of the tables in Clickhouse:

select concat(database, '.', table)                         as table,
       formatReadableSize(sum(bytes))                       as size,
       sum(rows)                                            as rows,
       max(modification_time)                               as latest_modification,
       sum(bytes)                                           as bytes_size,
       any(engine)                                          as engine,
       formatReadableSize(sum(primary_key_bytes_in_memory)) as primary_keys_size
from system.parts
where active
group by database, table
order by bytes_size desc;

Let's see how much space the logs took up in Clickhouse.

The size of the logs table is 857.19 MB.

The size of the same data in the Elasticsearch index is 4.5 GB.

Even without specifying any compression parameters in vector, Clickhouse takes 4500/857.19 ≈ 5.25 times less space than Elasticsearch.

In vector, field compression is applied by default.

Telegram chat on Clickhouse
Telegram chat on Elasticsearch
Telegram chat on "Collection and analysis of system messages"

Source: www.habr.com
