Sending Nginx JSON logs with Vector to Clickhouse and Elasticsearch

Vector is designed for collecting, transforming, and sending log data, metrics, and events.

→ GitHub

It is written in Rust and, compared to its counterparts, is characterized by high performance and low RAM consumption. In addition, a lot of attention is paid to correctness-related features, in particular saving unsent events to an on-disk buffer and file rotation.

Architecturally, Vector is an event router that receives messages from one or more sources, optionally applies transforms to those messages, and sends them to one or more sinks.

Vector is a replacement for filebeat and logstash and can work in both roles (receiving and forwarding logs); more details about them on the site.

Where Logstash builds its chain as input → filter → output, Vector uses sources → transforms → sinks.

Examples can be found in the documentation.
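
As an illustration of that chain, a minimal configuration might look like this (a sketch only; the component names stdin, json_parser and console exist in Vector, but option names can differ slightly between versions, and the demo_* names are made up):

# read JSON lines from stdin, parse them, print them back out as JSON
[sources.demo_in]
  type = "stdin"

[transforms.demo_parsed]
  type   = "json_parser"
  inputs = ["demo_in"]

[sinks.demo_out]
  type     = "console"
  inputs   = ["demo_parsed"]
  encoding = "json"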

This guide is a reworked version of the instructions by Vyacheslav Rakhinsky. The original instructions include geoip processing. When geoip was tested from an internal network, Vector raised an error.

Aug 05 06:25:31.889 DEBUG transform{name=nginx_parse_rename_fields type=rename_fields}: vector::transforms::rename_fields: Field did not exist field="geoip.country_name" rate_limit_secs=30

If anyone needs geoip processing, refer to the original instructions by Vyacheslav Rakhinsky.

We will configure the Nginx (access logs) → Vector (Client | Filebeat) → Vector (Server | Logstash) chain separately for Clickhouse and separately for Elasticsearch. We will set up 4 servers, although you could get by with 3.

The scheme looks roughly like this:

[Diagram: Nginx (access logs) → Vector (client) → Vector (server) → Clickhouse / Elasticsearch]

Disable SELinux on all servers

sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
reboot

Install an HTTP server emulator and utilities on all servers

HTTP szerver emulåtorként fogjuk hasznålni nodejs-stub-server -tól Maxim Ignatenko

Nodejs-stub-server has no rpm package. Here is how to build one; the rpm is built with Fedora Copr.

Add the antonpatsev/nodejs-stub-server repository

yum -y install yum-plugin-copr epel-release
yes | yum copr enable antonpatsev/nodejs-stub-server

Install nodejs-stub-server, Apache benchmark, and the screen terminal multiplexer on all servers

yum -y install stub_http_server screen mc httpd-tools

I adjusted the stub_http_server response time in /var/lib/stub_http_server/stub_http_server.js so that there would be more logs.

var max_sleep = 10;

Start stub_http_server.

systemctl start stub_http_server
systemctl enable stub_http_server

Clickhouse telepítés a 3-as szerveren

ClickHouse uses the SSE 4.2 instruction set, so unless stated otherwise, support for it in the processor becomes an additional system requirement. Here is the command to check whether the current processor supports SSE 4.2:

grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"

First, connect the official repository:

sudo yum install -y yum-utils
sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64

To install the packages, run the following commands:

sudo yum install -y clickhouse-server clickhouse-client

Engedélyezze a clickhouse-server szåmåra a hålózati kårtya meghallgatåsåt az /etc/clickhouse-server/config.xml fåjlban

<listen_host>0.0.0.0</listen_host>

Change the logging level from trace to debug

debug
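
For reference, this level lives in the logger section of /etc/clickhouse-server/config.xml; a minimal sketch of the relevant fragment (element names as in the stock config):

<!-- /etc/clickhouse-server/config.xml -->
<logger>
    <level>debug</level>
    <log>/var/log/clickhouse-server/clickhouse-server.log</log>
    <errorlog>/var/log/clickhouse-server/clickhouse-server.err.log</errorlog>
</logger>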

Standard tömörítési beållítåsok:

min_compress_block_size  65536
max_compress_block_size  1048576

To enable Zstd compression, the advice was not to touch the config but to use DDL instead.

I could not find how to use zstd compression via DDL on Google, so I left it as it was.

Colleagues who use zstd compression in Clickhouse, please share the instructions.
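
For what it is worth, ClickHouse does support per-column compression codecs straight from DDL; a sketch I have not benchmarked in this setup, applied to a couple of the string-heavy columns:

ALTER TABLE vector.logs
    MODIFY COLUMN `request_full` String CODEC(ZSTD(1)),
    MODIFY COLUMN `request_user_agent` String CODEC(ZSTD(1));

The same CODEC(ZSTD(1)) clause can also be put on a column directly in CREATE TABLE.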

To start the server as a daemon, run:

service clickhouse-server start

Now let's move on to configuring Clickhouse

Open the Clickhouse client

clickhouse-client -h 172.26.10.109 -m

172.26.10.109 is the IP address of the server where Clickhouse is installed.

Let's create the vector database

CREATE DATABASE vector;

Check that the database exists.

show databases;

Create the vector.logs table.

/* This is the table where the logs are stored as-is */

CREATE TABLE vector.logs
(
    `node_name` String,
    `timestamp` DateTime,
    `server_name` String,
    `user_id` String,
    `request_full` String,
    `request_user_agent` String,
    `request_http_host` String,
    `request_uri` String,
    `request_scheme` String,
    `request_method` String,
    `request_length` UInt64,
    `request_time` Float32,
    `request_referrer` String,
    `response_status` UInt16,
    `response_body_bytes_sent` UInt64,
    `response_content_type` String,
    `remote_addr` IPv4,
    `remote_port` UInt32,
    `remote_user` String,
    `upstream_addr` IPv4,
    `upstream_port` UInt32,
    `upstream_bytes_received` UInt64,
    `upstream_bytes_sent` UInt64,
    `upstream_cache_status` String,
    `upstream_connect_time` Float32,
    `upstream_header_time` Float32,
    `upstream_response_length` UInt64,
    `upstream_response_time` Float32,
    `upstream_status` UInt16,
    `upstream_content_type` String,
    INDEX idx_http_host request_http_host TYPE set(0) GRANULARITY 1
)
ENGINE = MergeTree()
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY timestamp
TTL timestamp + toIntervalMonth(1)
SETTINGS index_granularity = 8192;

Let's check that the tables have been created. Start clickhouse-client and run a query.

Switch to the vector database.

use vector;

Ok.

0 rows in set. Elapsed: 0.001 sec.

Let's look at the tables.

show tables;

┌─name────────────────┐
│ logs                │
└─────────────────────┘

Az elasticsearch telepĂ­tĂ©se a 4. szerverre, hogy ugyanazokat az adatokat elkĂŒldje az Elasticsearch-nek a Clickhouse-szal valĂł összehasonlĂ­tĂĄshoz

Add the public rpm key

rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch

Create two repos:

/etc/yum.repos.d/elasticsearch.repo

[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md

/etc/yum.repos.d/kibana.repo

[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

Install elasticsearch and kibana

yum install -y kibana elasticsearch

Mivel 1 pĂ©ldĂĄnyban lesz, a következƑket kell hozzĂĄadnia az /etc/elasticsearch/elasticsearch.yml fĂĄjlhoz:

discovery.type: single-node

So that Vector can send data to elasticsearch from another server, change network.host.

network.host: 0.0.0.0

To be able to connect to kibana, change the server.host parameter in /etc/kibana/kibana.yml

server.host: "0.0.0.0"

RĂ©gi Ă©s tartalmazza az elaszticsearch-t az automatikus indĂ­tĂĄsban

systemctl enable elasticsearch
systemctl start elasticsearch

and the same for kibana

systemctl enable kibana
systemctl start kibana

Configuring Elasticsearch for single-node mode: 1 shard, 0 replicas. You will most likely have a cluster with a large number of servers, in which case you do not need to do this.

For future indices, update the default template:

curl -X PUT http://localhost:9200/_template/default -H 'Content-Type: application/json' -d '{"index_patterns": ["*"],"order": -1,"settings": {"number_of_shards": "1","number_of_replicas": "0"}}' 

Installing Vector as a replacement for Logstash on server 2

yum install -y https://packages.timber.io/vector/0.9.X/vector-x86_64.rpm mc httpd-tools screen

Let's configure Vector as a replacement for Logstash. Edit the /etc/vector/vector.toml file

# /etc/vector/vector.toml

data_dir = "/var/lib/vector"

[sources.nginx_input_vector]
  # General
  type                          = "vector"
  address                       = "0.0.0.0:9876"
  shutdown_timeout_secs         = 30

[transforms.nginx_parse_json]
  inputs                        = [ "nginx_input_vector" ]
  type                          = "json_parser"

[transforms.nginx_parse_add_defaults]
  inputs                        = [ "nginx_parse_json" ]
  type                          = "lua"
  version                       = "2"

  hooks.process = """
  function (event, emit)

    function split_first(s, delimiter)
      result = {};
      for match in (s..delimiter):gmatch("(.-)"..delimiter) do
          table.insert(result, match);
      end
      return result[1];
    end

    function split_last(s, delimiter)
      result = {};
      for match in (s..delimiter):gmatch("(.-)"..delimiter) do
          table.insert(result, match);
      end
      return result[#result];
    end

    event.log.upstream_addr             = split_first(split_last(event.log.upstream_addr, ', '), ':')
    event.log.upstream_bytes_received   = split_last(event.log.upstream_bytes_received, ', ')
    event.log.upstream_bytes_sent       = split_last(event.log.upstream_bytes_sent, ', ')
    event.log.upstream_connect_time     = split_last(event.log.upstream_connect_time, ', ')
    event.log.upstream_header_time      = split_last(event.log.upstream_header_time, ', ')
    event.log.upstream_response_length  = split_last(event.log.upstream_response_length, ', ')
    event.log.upstream_response_time    = split_last(event.log.upstream_response_time, ', ')
    event.log.upstream_status           = split_last(event.log.upstream_status, ', ')

    if event.log.upstream_addr == "" then
        event.log.upstream_addr = "127.0.0.1"
    end

    if (event.log.upstream_bytes_received == "-" or event.log.upstream_bytes_received == "") then
        event.log.upstream_bytes_received = "0"
    end

    if (event.log.upstream_bytes_sent == "-" or event.log.upstream_bytes_sent == "") then
        event.log.upstream_bytes_sent = "0"
    end

    if event.log.upstream_cache_status == "" then
        event.log.upstream_cache_status = "DISABLED"
    end

    if (event.log.upstream_connect_time == "-" or event.log.upstream_connect_time == "") then
        event.log.upstream_connect_time = "0"
    end

    if (event.log.upstream_header_time == "-" or event.log.upstream_header_time == "") then
        event.log.upstream_header_time = "0"
    end

    if (event.log.upstream_response_length == "-" or event.log.upstream_response_length == "") then
        event.log.upstream_response_length = "0"
    end

    if (event.log.upstream_response_time == "-" or event.log.upstream_response_time == "") then
        event.log.upstream_response_time = "0"
    end

    if (event.log.upstream_status == "-" or event.log.upstream_status == "") then
        event.log.upstream_status = "0"
    end

    emit(event)

  end
  """

[transforms.nginx_parse_remove_fields]
    inputs                              = [ "nginx_parse_add_defaults" ]
    type                                = "remove_fields"
    fields                              = ["data", "file", "host", "source_type"]

[transforms.nginx_parse_coercer]

    type                                = "coercer"
    inputs                              = ["nginx_parse_remove_fields"]

    types.request_length = "int"
    types.request_time = "float"

    types.response_status = "int"
    types.response_body_bytes_sent = "int"

    types.remote_port = "int"

    types.upstream_bytes_received = "int"
    types.upstream_bytes_sent = "int"
    types.upstream_connect_time = "float"
    types.upstream_header_time = "float"
    types.upstream_response_length = "int"
    types.upstream_response_time = "float"
    types.upstream_status = "int"

    types.timestamp = "timestamp"

[sinks.nginx_output_clickhouse]
    inputs   = ["nginx_parse_coercer"]
    type     = "clickhouse"

    database = "vector"
    healthcheck = true
    host = "http://172.26.10.109:8123" # Clickhouse address
    table = "logs"

    encoding.timestamp_format = "unix"

    buffer.type = "disk"
    buffer.max_size = 104900000
    buffer.when_full = "block"

    request.in_flight_limit = 20

[sinks.elasticsearch]
    type = "elasticsearch"
    inputs   = ["nginx_parse_coercer"]
    compression = "none"
    healthcheck = true
    # 172.26.10.116 - the server where elasticsearch is installed
    host = "http://172.26.10.116:9200" 
    index = "vector-%Y-%m-%d"

You may want to adjust the transforms.nginx_parse_add_defaults section.

Since Vyacheslav Rakhinsky uses these configs for a small CDN, there can be several values in the upstream_* fields

PĂ©ldĂĄul:

"upstream_addr": "128.66.0.10:443, 128.66.0.11:443, 128.66.0.12:443"
"upstream_bytes_received": "-, -, 123"
"upstream_status": "502, 502, 200"

If that is not your case, this section can be simplified; one possible simplification is sketched below.
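
For example, if the upstream_* fields always contain a single value, the Lua hook could be reduced to just filling in the defaults (a sketch, not tested against this exact setup; the field names are the ones used in the config above):

[transforms.nginx_parse_add_defaults]
  inputs  = [ "nginx_parse_json" ]
  type    = "lua"
  version = "2"

  hooks.process = """
  function (event, emit)
    -- default values for missing, empty, or "-" upstream fields
    local defaults = {
      upstream_addr            = "127.0.0.1",
      upstream_bytes_received  = "0",
      upstream_bytes_sent      = "0",
      upstream_cache_status    = "DISABLED",
      upstream_connect_time    = "0",
      upstream_header_time     = "0",
      upstream_response_length = "0",
      upstream_response_time   = "0",
      upstream_status          = "0"
    }
    for field, default in pairs(defaults) do
      if event.log[field] == nil or event.log[field] == "" or event.log[field] == "-" then
        event.log[field] = default
      end
    end
    emit(event)
  end
  """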

Create the systemd service configuration /etc/systemd/system/vector.service

# /etc/systemd/system/vector.service

[Unit]
Description=Vector
After=network-online.target
Requires=network-online.target

[Service]
User=vector
Group=vector
ExecStart=/usr/bin/vector
ExecReload=/bin/kill -HUP $MAINPID
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=vector

[Install]
WantedBy=multi-user.target

After creating the tables, you can start Vector

systemctl enable vector
systemctl start vector

You can view the Vector logs like this:

journalctl -f -u vector

Ilyen bejegyzĂ©seknek kell lenniĂŒk a naplĂłkban

INFO vector::topology::builder: Healthcheck: Passed.
INFO vector::topology::builder: Healthcheck: Passed.

On the client (web server), server 1

On the server with nginx you need to disable ipv6, because the logs table in clickhouse uses the IPv4 type for the upstream_addr field, since I do not use ipv6 inside the network. If ipv6 is not disabled, there will be errors:

DB::Exception: Invalid IPv4 value.: (while read the value of key upstream_addr)

Perhaps readers will add ipv6 support.

Create the /etc/sysctl.d/98-disable-ipv6.conf file

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1

Apply the settings

sysctl --system

Install nginx.

Add the nginx repository file /etc/yum.repos.d/nginx.repo

[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true

Install the nginx package

yum install -y nginx

First, we need to set up the log format in Nginx in the /etc/nginx/nginx.conf file

user  nginx;
# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; #some last versions calculate it automatically

# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

# provides the configuration file context in which the directives that affect connection processing are specified.
events {
    # determines how much clients will be served per worker
    # max clients = worker_connections * worker_processes
    # max clients is also limited by the number of socket connections available on the system (~64k)
    worker_connections 4000;

    # optimized to serve many clients with each thread, essential for linux -- for testing environment
    use epoll;

    # accept as many connections as possible, may flood worker connections if set too low -- for testing environment
    multi_accept on;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

log_format vector escape=json
    '{'
        '"node_name":"nginx-vector",'
        '"timestamp":"$time_iso8601",'
        '"server_name":"$server_name",'
        '"request_full": "$request",'
        '"request_user_agent":"$http_user_agent",'
        '"request_http_host":"$http_host",'
        '"request_uri":"$request_uri",'
        '"request_scheme": "$scheme",'
        '"request_method":"$request_method",'
        '"request_length":"$request_length",'
        '"request_time": "$request_time",'
        '"request_referrer":"$http_referer",'
        '"response_status": "$status",'
        '"response_body_bytes_sent":"$body_bytes_sent",'
        '"response_content_type":"$sent_http_content_type",'
        '"remote_addr": "$remote_addr",'
        '"remote_port": "$remote_port",'
        '"remote_user": "$remote_user",'
        '"upstream_addr": "$upstream_addr",'
        '"upstream_bytes_received": "$upstream_bytes_received",'
        '"upstream_bytes_sent": "$upstream_bytes_sent",'
        '"upstream_cache_status":"$upstream_cache_status",'
        '"upstream_connect_time":"$upstream_connect_time",'
        '"upstream_header_time":"$upstream_header_time",'
        '"upstream_response_length":"$upstream_response_length",'
        '"upstream_response_time":"$upstream_response_time",'
        '"upstream_status": "$upstream_status",'
        '"upstream_content_type":"$upstream_http_content_type"'
    '}';

    access_log  /var/log/nginx/access.log  main;
    access_log  /var/log/nginx/access.json.log vector;      # New log in json format

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

Annak Ă©rdekĂ©ben, hogy ne törje meg a jelenlegi konfigurĂĄciĂłt, az Nginx lehetƑvĂ© teszi több access_log direktĂ­va hasznĂĄlatĂĄt

access_log  /var/log/nginx/access.log  main;            # Standard log
access_log  /var/log/nginx/access.json.log vector;      # New log in json format

Do not forget to add a logrotate rule for the new log (if the log file name does not end in .log); a sketch is given below.
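
In this setup the file name ends in .log, so the stock /var/log/nginx/*.log rule shipped with the nginx package already picks it up. If you choose a name it does not match, a separate rule would be needed; a hypothetical sketch for a file called /var/log/nginx/access.json:

# /etc/logrotate.d/nginx-json (hypothetical)
/var/log/nginx/access.json {
    daily
    rotate 7
    missingok
    notifempty
    compress
    delaycompress
    sharedscripts
    postrotate
        # tell nginx to reopen its log files
        [ -f /var/run/nginx.pid ] && kill -USR1 $(cat /var/run/nginx.pid)
    endscript
}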

Remove default.conf from /etc/nginx/conf.d/

rm -f /etc/nginx/conf.d/default.conf

Add the virtual host /etc/nginx/conf.d/vhost1.conf

server {
    listen 80;
    server_name vhost1;
    location / {
        proxy_pass http://172.26.10.106:8080;
    }
}

Add the virtual host /etc/nginx/conf.d/vhost2.conf

server {
    listen 80;
    server_name vhost2;
    location / {
        proxy_pass http://172.26.10.108:8080;
    }
}

Add the virtual host /etc/nginx/conf.d/vhost3.conf

server {
    listen 80;
    server_name vhost3;
    location / {
        proxy_pass http://172.26.10.109:8080;
    }
}

Add the virtual host /etc/nginx/conf.d/vhost4.conf

server {
    listen 80;
    server_name vhost4;
    location / {
        proxy_pass http://172.26.10.116:8080;
    }
}

Adjon hozzå virtuålis gazdagépeket (az nginx telepített kiszolgåló 172.26.10.106 ip-je) az összes kiszolgålóhoz az /etc/hosts fåjlba:

172.26.10.106 vhost1
172.26.10.106 vhost2
172.26.10.106 vhost3
172.26.10.106 vhost4

And once everything is ready:

nginx -t 
systemctl restart nginx

Now install Vector itself

yum install -y https://packages.timber.io/vector/0.9.X/vector-x86_64.rpm

Create a configuration file for systemd, /etc/systemd/system/vector.service

[Unit]
Description=Vector
After=network-online.target
Requires=network-online.target

[Service]
User=vector
Group=vector
ExecStart=/usr/bin/vector
ExecReload=/bin/kill -HUP $MAINPID
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=vector

[Install]
WantedBy=multi-user.target

And configure the Filebeat replacement in the /etc/vector/vector.toml config. The IP address 172.26.10.108 is the address of the log server (Vector-Server)

data_dir = "/var/lib/vector"

[sources.nginx_file]
  type                          = "file"
  include                       = [ "/var/log/nginx/access.json.log" ]
  start_at_beginning            = false
  fingerprinting.strategy       = "device_and_inode"

[sinks.nginx_output_vector]
  type                          = "vector"
  inputs                        = [ "nginx_file" ]

  address                       = "172.26.10.108:9876"

Ne felejtse el hozzåadni a vektor felhasznålót a kívånt csoporthoz, hogy naplófåjlokat olvashasson. Példåul az nginx in centos naplókat hoz létre adm csoportjogokkal.

usermod -a -G adm vector

Start the vector service

systemctl enable vector
systemctl start vector

You can view the Vector logs like this:

journalctl -f -u vector

There should be an entry like this in the logs

INFO vector::topology::builder: Healthcheck: Passed.

Load testing

A tesztelĂ©st Apache benchmark segĂ­tsĂ©gĂ©vel vĂ©gezzĂŒk.

The httpd-tools package was installed on all servers

We start testing with Apache benchmark from 4 different servers inside screen. First we start the screen terminal multiplexer, and then we start testing with Apache benchmark. How to work with screen is described in this article; a minimal example of starting a named session is shown below.
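
For reference, one way to run the loops below in a named screen session (the session name ab-test is arbitrary):

screen -S ab-test    # start a named session and run the ab loop inside it
# detach with Ctrl+A, then D; reattach later with:
screen -r ab-test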

From server 1

while true; do ab -H "User-Agent: 1server" -c 100 -n 10 -t 10 http://vhost1/; sleep 1; done

From server 2

while true; do ab -H "User-Agent: 2server" -c 100 -n 10 -t 10 http://vhost2/; sleep 1; done

From server 3

while true; do ab -H "User-Agent: 3server" -c 100 -n 10 -t 10 http://vhost3/; sleep 1; done

From server 4

while true; do ab -H "User-Agent: 4server" -c 100 -n 10 -t 10 http://vhost4/; sleep 1; done

Let's check the data in Clickhouse

Open the Clickhouse client

clickhouse-client -h 172.26.10.109 -m

Run an SQL query

SELECT * FROM vector.logs;

┌─node_name────┬───────────timestamp─┬─server_name─┬─user_id─┬─request_full───┬─request_user_agent─┬─request_http_host─┬─request_uri─┬─request_scheme─┬─request_method─┬─request_length─┬─request_time─┬─request_referrer─┬─response_status─┬─response_body_bytes_sent─┬─response_content_type─┬───remote_addr─┬─remote_port─┬─remote_user─┬─upstream_addr─┬─upstream_port─┬─upstream_bytes_received─┬─upstream_bytes_sent─┬─upstream_cache_status─┬─upstream_connect_time─┬─upstream_header_time─┬─upstream_response_length─┬─upstream_response_time─┬─upstream_status─┬─upstream_content_type─┐
│ nginx-vector │ 2020-08-07 04:32:42 │ vhost1      │         │ GET / HTTP/1.0 │ 1server            │ vhost1            │ /           │ http           │ GET            │             66 │        0.028 │                  │             404 │                       27 │                       │ 172.26.10.106 │       45886 │             │ 172.26.10.106 │             0 │                     109 │                  97 │ DISABLED              │                     0 │                0.025 │                       27 │                  0.029 │             404 │                       │
└──────────────┮─────────────────────┮─────────────┮─────────┮────────────────┮────────────────────┮───────────────────┮─────────────┮────────────────┮────────────────┮────────────────┮──────────────┮──────────────────┮─────────────────┮──────────────────────────┮───────────────────────┮───────────────┮─────────────┮─────────────┮───────────────┮───────────────┮─────────────────────────┮─────────────────────┮───────────────────────┮───────────────────────┮──────────────────────┮──────────────────────────┮────────────────────────┮─────────────────┮───────────────────────

Find out the size of the tables in Clickhouse

select concat(database, '.', table)                         as table,
       formatReadableSize(sum(bytes))                       as size,
       sum(rows)                                            as rows,
       max(modification_time)                               as latest_modification,
       sum(bytes)                                           as bytes_size,
       any(engine)                                          as engine,
       formatReadableSize(sum(primary_key_bytes_in_memory)) as primary_keys_size
from system.parts
where active
group by database, table
order by bytes_size desc;

Let's see how much space the logs took up in Clickhouse.

The size of the logs table is 857.19 MB.

The size of the same data in the Elasticsearch index is 4.5 GB.

Without specifying any extra parameters in Vector, the data in Clickhouse takes 4500 / 857.19 ≈ 5.25 times less space than in Elasticsearch.

A vektorban alapĂ©rtelmezĂ©s szerint a tömörĂ­tĂ©si mezƑt hasznĂĄljĂĄk.

Telegram chat on Clickhouse
Telegram chat on Elasticsearch
Telegram chat on "Collecting and analyzing system messages"

Source: will.com
