Vector is written in Rust, so it is distinguished by high performance and low RAM usage compared to its analogues. In addition, a lot of attention is paid to correctness-related features, in particular the ability to save unsent events to an on-disk buffer and to handle file rotation.
Architecturally, Vector is an event router: it receives messages from one or more sources, optionally applies transformations to those messages, and sends them on to one or more sinks.
Vector is a replacement for filebeat and logstash: it can act in both roles (receiving and sending logs); see their website for details.
Where Logstash builds its chain as input → filter → output, in Vector it is sources → transforms → sinks. Examples can be found in the documentation.
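That sources → transforms → sinks chain can be illustrated with a minimal config (a sketch only: the component names my_logs/my_parser/my_console are made up here, and the exact options vary between Vector versions):

```toml
# Minimal illustrative pipeline: tail a file, parse each line as JSON,
# and print the resulting events to stdout.
[sources.my_logs]
type = "file"
include = ["/var/log/app.log"]

[transforms.my_parser]
type = "json_parser"
inputs = ["my_logs"]

[sinks.my_console]
type = "console"
inputs = ["my_parser"]
encoding = "json"
```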
This guide is a reworked version of an earlier guide. The original guide includes geoip processing, but when geoip was tested from an internal network, vector reported an error:
Aug 05 06:25:31.889 DEBUG transform{name=nginx_parse_rename_fields type=rename_fields}: vector::transforms::rename_fields: Field did not exist field=Β«geoip.country_nameΒ» rate_limit_secs=30
So geoip processing has been dropped from this setup; if you need geoip, refer to the original guide.
The setup: Nginx (access logs) → Vector (client, in the Filebeat role) → Vector (server, in the Logstash role) → then separately into Clickhouse and separately into Elasticsearch. We will install 4 servers, although you could get by with 3.
The scheme: Nginx → Vector (client) → Vector (server) → Clickhouse / Elasticsearch.
Selinux is disabled in our setup. Disable it on all servers like this:
sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
reboot
Installing an HTTP server emulator + utilities on all servers
As the HTTP server emulator we will use nodejs-stub-server, which does not ship an rpm of its own (the rpm is built via Fedora Copr).
Add the antonpatsev/nodejs-stub-server repository:
yum -y install yum-plugin-copr epel-release
yes | yum copr enable antonpatsev/nodejs-stub-server
Install nodejs-stub-server, Apache benchmark, and the screen terminal multiplexer on all servers:
yum -y install stub_http_server screen mc httpd-tools
To get more logs, I adjusted the stub_http_server response time in the file /var/lib/stub_http_server/stub_http_server.js:
var max_sleep = 10;
Start stub_http_server:
systemctl start stub_http_server
systemctl enable stub_http_server
Installing Clickhouse on server 3
ClickHouse uses the SSE 4.2 instruction set, so unless specified otherwise, support for it in the processor you use becomes an additional system requirement. Here is the command to check whether the current processor supports SSE 4.2:
grep -q sse4_2 /proc/cpuinfo && echo "SSE 4.2 supported" || echo "SSE 4.2 not supported"
First you need to connect the official repository:
sudo yum install -y yum-utils
sudo rpm --import https://repo.clickhouse.tech/CLICKHOUSE-KEY.GPG
sudo yum-config-manager --add-repo https://repo.clickhouse.tech/rpm/stable/x86_64
To install the packages, run the following commands:
sudo yum install -y clickhouse-server clickhouse-client
Allow clickhouse-server to listen on the network interface in the file /etc/clickhouse-server/config.xml:
<listen_host>0.0.0.0</listen_host>
Change the logging level from trace to debug:
debug
Standard compression settings:
min_compress_block_size 65536
max_compress_block_size 1048576
To enable Zstd compression, the advice was not to touch the config but rather to use DDL. I could not find in Google how to apply zstd compression via DDL, so I left it as is. Colleagues who know how to apply zstd compression in Clickhouse via DDL, please share.
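For what it is worth, recent ClickHouse releases do accept a per-column compression codec directly in DDL; a sketch (the table and column names here are made up, and the exact syntax depends on your ClickHouse version):

```sql
-- Hypothetical example: store selected columns with ZSTD level 1
-- instead of the server-wide default compression.
CREATE TABLE vector.logs_zstd
(
    `timestamp` DateTime CODEC(Delta, ZSTD(1)),
    `request_uri` String CODEC(ZSTD(1))
)
ENGINE = MergeTree()
ORDER BY timestamp;
```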
To start the server as a daemon, run:
service clickhouse-server start
Now let's move on to setting up Clickhouse.
Connect to Clickhouse:
clickhouse-client -h 172.26.10.109 -m
172.26.10.109 — the IP of the server where Clickhouse is installed.
Create the database:
CREATE DATABASE vector;
Check that the database was created:
show databases;
Create the vector.logs table.
/* This is the table where the logs are stored as they are */
CREATE TABLE vector.logs
(
`node_name` String,
`timestamp` DateTime,
`server_name` String,
`user_id` String,
`request_full` String,
`request_user_agent` String,
`request_http_host` String,
`request_uri` String,
`request_scheme` String,
`request_method` String,
`request_length` UInt64,
`request_time` Float32,
`request_referrer` String,
`response_status` UInt16,
`response_body_bytes_sent` UInt64,
`response_content_type` String,
`remote_addr` IPv4,
`remote_port` UInt32,
`remote_user` String,
`upstream_addr` IPv4,
`upstream_port` UInt32,
`upstream_bytes_received` UInt64,
`upstream_bytes_sent` UInt64,
`upstream_cache_status` String,
`upstream_connect_time` Float32,
`upstream_header_time` Float32,
`upstream_response_length` UInt64,
`upstream_response_time` Float32,
`upstream_status` UInt16,
`upstream_content_type` String,
INDEX idx_http_host request_http_host TYPE set(0) GRANULARITY 1
)
ENGINE = MergeTree()
PARTITION BY toYYYYMMDD(timestamp)
ORDER BY timestamp
TTL timestamp + toIntervalMonth(1)
SETTINGS index_granularity = 8192;
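Once data starts flowing, this table supports the usual analytical queries; for example, a breakdown of requests by HTTP status (an illustrative query; the column names follow the DDL above):

```sql
-- Requests per HTTP status over the last hour
SELECT
    response_status,
    count() AS requests
FROM vector.logs
WHERE timestamp > now() - INTERVAL 1 HOUR
GROUP BY response_status
ORDER BY requests DESC;
```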
Check that the table was created. Start clickhouse-client and run a query. Switch to the vector database:
use vector;
Ok.
0 rows in set. Elapsed: 0.001 sec.
List the tables:
show tables;
ββnameβββββββββββββββββ
β logs β
βββββββββββββββββββββββ
Installing elasticsearch on server 4, to send the same data to Elasticsearch for comparison with Clickhouse
Add the public rpm key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create 2 repos:
/etc/yum.repos.d/elasticsearch.repo
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
/etc/yum.repos.d/kibana.repo
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Install elasticsearch and kibana:
yum install -y kibana elasticsearch
Since it will run as a single instance, you need to add the following to the file /etc/elasticsearch/elasticsearch.yml:
discovery.type: single-node
So that vector can send data to elasticsearch from another server, change network.host:
network.host: 0.0.0.0
To connect to kibana, change the server.host parameter in the file /etc/kibana/kibana.yml:
server.host: "0.0.0.0"
Enable elasticsearch autostart and start it:
systemctl enable elasticsearch
systemctl start elasticsearch
And kibana:
systemctl enable kibana
systemctl start kibana
Next, configure Elasticsearch to create indices with 1 shard and 0 replicas; otherwise, on a single node, the indices will be yellow instead of green. If you use an Elasticsearch cluster in the future, this template will need to be changed.
curl -X PUT http://localhost:9200/_template/default -H 'Content-Type: application/json' -d '{"index_patterns": ["*"],"order": -1,"settings": {"number_of_shards": "1","number_of_replicas": "0"}}'
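For readability, here is the same template body that the curl above PUTs, pretty-printed:

```json
{
  "index_patterns": ["*"],
  "order": -1,
  "settings": {
    "number_of_shards": "1",
    "number_of_replicas": "0"
  }
}
```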
Installing Vector as a replacement for Logstash on server 2
yum install -y https://packages.timber.io/vector/0.9.X/vector-x86_64.rpm mc httpd-tools screen
Configure Vector as a replacement for Logstash by editing the file /etc/vector/vector.toml:
# /etc/vector/vector.toml
data_dir = "/var/lib/vector"
[sources.nginx_input_vector]
# General
type = "vector"
address = "0.0.0.0:9876"
shutdown_timeout_secs = 30
[transforms.nginx_parse_json]
inputs = [ "nginx_input_vector" ]
type = "json_parser"
[transforms.nginx_parse_add_defaults]
inputs = [ "nginx_parse_json" ]
type = "lua"
version = "2"
hooks.process = """
function (event, emit)
function split_first(s, delimiter)
result = {};
for match in (s..delimiter):gmatch("(.-)"..delimiter) do
table.insert(result, match);
end
return result[1];
end
function split_last(s, delimiter)
result = {};
for match in (s..delimiter):gmatch("(.-)"..delimiter) do
table.insert(result, match);
end
return result[#result];
end
event.log.upstream_addr = split_first(split_last(event.log.upstream_addr, ', '), ':')
event.log.upstream_bytes_received = split_last(event.log.upstream_bytes_received, ', ')
event.log.upstream_bytes_sent = split_last(event.log.upstream_bytes_sent, ', ')
event.log.upstream_connect_time = split_last(event.log.upstream_connect_time, ', ')
event.log.upstream_header_time = split_last(event.log.upstream_header_time, ', ')
event.log.upstream_response_length = split_last(event.log.upstream_response_length, ', ')
event.log.upstream_response_time = split_last(event.log.upstream_response_time, ', ')
event.log.upstream_status = split_last(event.log.upstream_status, ', ')
if event.log.upstream_addr == "" then
event.log.upstream_addr = "127.0.0.1"
end
if (event.log.upstream_bytes_received == "-" or event.log.upstream_bytes_received == "") then
event.log.upstream_bytes_received = "0"
end
if (event.log.upstream_bytes_sent == "-" or event.log.upstream_bytes_sent == "") then
event.log.upstream_bytes_sent = "0"
end
if event.log.upstream_cache_status == "" then
event.log.upstream_cache_status = "DISABLED"
end
if (event.log.upstream_connect_time == "-" or event.log.upstream_connect_time == "") then
event.log.upstream_connect_time = "0"
end
if (event.log.upstream_header_time == "-" or event.log.upstream_header_time == "") then
event.log.upstream_header_time = "0"
end
if (event.log.upstream_response_length == "-" or event.log.upstream_response_length == "") then
event.log.upstream_response_length = "0"
end
if (event.log.upstream_response_time == "-" or event.log.upstream_response_time == "") then
event.log.upstream_response_time = "0"
end
if (event.log.upstream_status == "-" or event.log.upstream_status == "") then
event.log.upstream_status = "0"
end
emit(event)
end
"""
[transforms.nginx_parse_remove_fields]
inputs = [ "nginx_parse_add_defaults" ]
type = "remove_fields"
fields = ["data", "file", "host", "source_type"]
[transforms.nginx_parse_coercer]
type = "coercer"
inputs = ["nginx_parse_remove_fields"]
types.request_length = "int"
types.request_time = "float"
types.response_status = "int"
types.response_body_bytes_sent = "int"
types.remote_port = "int"
types.upstream_bytes_received = "int"
types.upstream_bytes_sent = "int"
types.upstream_connect_time = "float"
types.upstream_header_time = "float"
types.upstream_response_length = "int"
types.upstream_response_time = "float"
types.upstream_status = "int"
types.timestamp = "timestamp"
[sinks.nginx_output_clickhouse]
inputs = ["nginx_parse_coercer"]
type = "clickhouse"
database = "vector"
healthcheck = true
host = "http://172.26.10.109:8123" # Clickhouse address
table = "logs"
encoding.timestamp_format = "unix"
buffer.type = "disk"
buffer.max_size = 104900000
buffer.when_full = "block"
request.in_flight_limit = 20
[sinks.elasticsearch]
type = "elasticsearch"
inputs = ["nginx_parse_coercer"]
compression = "none"
healthcheck = true
# 172.26.10.116 - the server where elasticsearch is installed
host = "http://172.26.10.116:9200"
index = "vector-%Y-%m-%d"
You can improve the transforms.nginx_parse_add_defaults section. In the original setup these configs serve a small CDN, so upstream_* can contain several values, for example:
"upstream_addr": "128.66.0.10:443, 128.66.0.11:443, 128.66.0.12:443"
"upstream_bytes_received": "-, -, 123"
"upstream_status": "502, 502, 200"
If your situation is similar, this is the part of the config to adjust.
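The split_first/split_last helpers in the Lua transform above pick apart exactly such comma-separated values; their behavior can be sketched in Python like this (an illustration only, not part of the setup):

```python
def split_last(s, delimiter):
    """Return the last element after splitting on delimiter (mirrors the Lua helper)."""
    return s.split(delimiter)[-1]

def split_first(s, delimiter):
    """Return the first element after splitting on delimiter (mirrors the Lua helper)."""
    return s.split(delimiter)[0]

# Values copied from the example above: several upstreams were tried,
# and only the last one answered with 200.
upstream_addr = "128.66.0.10:443, 128.66.0.11:443, 128.66.0.12:443"
upstream_status = "502, 502, 200"

# Keep the last upstream that was tried, with the port stripped:
print(split_first(split_last(upstream_addr, ", "), ":"))  # 128.66.0.12
# Keep the final status:
print(split_last(upstream_status, ", "))  # 200
```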
Create a systemd service configuration /etc/systemd/system/vector.service:
# /etc/systemd/system/vector.service
[Unit]
Description=Vector
After=network-online.target
Requires=network-online.target
[Service]
User=vector
Group=vector
ExecStart=/usr/bin/vector
ExecReload=/bin/kill -HUP $MAINPID
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=vector
[Install]
WantedBy=multi-user.target
After the tables have been created, you can start Vector:
systemctl enable vector
systemctl start vector
The logs can be viewed like this:
journalctl -f -u vector
There should be something like this in the logs:
INFO vector::topology::builder: Healthcheck: Passed.
INFO vector::topology::builder: Healthcheck: Passed.
On the client (web server) — server 1
On the server with nginx you need to disable ipv6, since the logs table in clickhouse uses the IPv4 type for the upstream_addr field and ipv6 is not used inside the network. If you do not disable it, there will be errors:
DB::Exception: Invalid IPv4 value.: (while read the value of key upstream_addr)
Perhaps readers have a solution for ipv6.
Create the file /etc/sysctl.d/98-disable-ipv6.conf:
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
Apply the settings:
sysctl --system
Installing nginx.
Add the nginx repository file /etc/yum.repos.d/nginx.repo:
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key
module_hotfixes=true
Install the nginx package:
yum install -y nginx
First of all, we need to configure the log format in Nginx, in the file /etc/nginx/nginx.conf:
user nginx;
# you must set worker processes based on your CPU cores, nginx does not benefit from setting more than that
worker_processes auto; #some last versions calculate it automatically
# number of file descriptors used for nginx
# the limit for the maximum FDs on the server is usually set by the OS.
# if you don't set FD's then OS settings will be used which is by default 2000
worker_rlimit_nofile 100000;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
# provides the configuration file context in which the directives that affect connection processing are specified.
events {
# determines how much clients will be served per worker
# max clients = worker_connections * worker_processes
# max clients is also limited by the number of socket connections available on the system (~64k)
worker_connections 4000;
# optimized to serve many clients with each thread, essential for linux -- for testing environment
use epoll;
# accept as many connections as possible, may flood worker connections if set too low -- for testing environment
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
log_format vector escape=json
'{'
'"node_name":"nginx-vector",'
'"timestamp":"$time_iso8601",'
'"server_name":"$server_name",'
'"request_full": "$request",'
'"request_user_agent":"$http_user_agent",'
'"request_http_host":"$http_host",'
'"request_uri":"$request_uri",'
'"request_scheme": "$scheme",'
'"request_method":"$request_method",'
'"request_length":"$request_length",'
'"request_time": "$request_time",'
'"request_referrer":"$http_referer",'
'"response_status": "$status",'
'"response_body_bytes_sent":"$body_bytes_sent",'
'"response_content_type":"$sent_http_content_type",'
'"remote_addr": "$remote_addr",'
'"remote_port": "$remote_port",'
'"remote_user": "$remote_user",'
'"upstream_addr": "$upstream_addr",'
'"upstream_bytes_received": "$upstream_bytes_received",'
'"upstream_bytes_sent": "$upstream_bytes_sent",'
'"upstream_cache_status":"$upstream_cache_status",'
'"upstream_connect_time":"$upstream_connect_time",'
'"upstream_header_time":"$upstream_header_time",'
'"upstream_response_length":"$upstream_response_length",'
'"upstream_response_time":"$upstream_response_time",'
'"upstream_status": "$upstream_status",'
'"upstream_content_type":"$upstream_http_content_type"'
'}';
access_log /var/log/nginx/access.log main;
access_log /var/log/nginx/access.json.log vector; # New log in json format
sendfile on;
#tcp_nopush on;
keepalive_timeout 65;
#gzip on;
include /etc/nginx/conf.d/*.conf;
}
So as not to break the current setup, Nginx lets you have several access_log directives:
access_log /var/log/nginx/access.log main; # Standard log
access_log /var/log/nginx/access.json.log vector; # New log in json format
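Because escape=json is used in the log_format above, each line of access.json.log is valid JSON and can be parsed directly; a quick check (the log line here is a made-up example in that format, not real output):

```python
import json

# A made-up line in the shape the `vector` log_format produces:
line = ('{"node_name":"nginx-vector","timestamp":"2020-08-07T04:32:42+00:00",'
        '"server_name":"vhost1","request_full": "GET / HTTP/1.0",'
        '"response_status": "404","remote_addr": "172.26.10.106"}')

event = json.loads(line)
print(event["server_name"], event["response_status"])  # vhost1 404
```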
Do not forget to add a logrotate rule for the new logs (if the log file does not end with .log).
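On CentOS the nginx package typically ships a logrotate rule matching /var/log/nginx/*.log, so access.json.log is already covered; if you choose a name without the .log suffix, a minimal rule could look like this (a sketch; the path, schedule, and rotation count are assumptions):

```
/var/log/nginx/access.json {
    daily
    rotate 14
    missingok
    notifempty
    compress
    sharedscripts
    postrotate
        /bin/kill -USR1 `cat /run/nginx.pid 2>/dev/null` 2>/dev/null || true
    endscript
}
```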
Remove default.conf from /etc/nginx/conf.d/:
rm -f /etc/nginx/conf.d/default.conf
Add the virtual host /etc/nginx/conf.d/vhost1.conf:
server {
listen 80;
server_name vhost1;
location / {
proxy_pass http://172.26.10.106:8080;
}
}
Add the virtual host /etc/nginx/conf.d/vhost2.conf:
server {
listen 80;
server_name vhost2;
location / {
proxy_pass http://172.26.10.108:8080;
}
}
Add the virtual host /etc/nginx/conf.d/vhost3.conf:
server {
listen 80;
server_name vhost3;
location / {
proxy_pass http://172.26.10.109:8080;
}
}
Add the virtual host /etc/nginx/conf.d/vhost4.conf:
server {
listen 80;
server_name vhost4;
location / {
proxy_pass http://172.26.10.116:8080;
}
}
Add the virtual hosts (172.26.10.106 is the IP of the server where nginx is installed) to the /etc/hosts file:
172.26.10.106 vhost1
172.26.10.106 vhost2
172.26.10.106 vhost3
172.26.10.106 vhost4
Then check that everything is OK:
nginx -t
systemctl restart nginx
Now install Vector itself:
yum install -y https://packages.timber.io/vector/0.9.X/vector-x86_64.rpm
Create the systemd config file /etc/systemd/system/vector.service:
[Unit]
Description=Vector
After=network-online.target
Requires=network-online.target
[Service]
User=vector
Group=vector
ExecStart=/usr/bin/vector
ExecReload=/bin/kill -HUP $MAINPID
Restart=no
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=vector
[Install]
WantedBy=multi-user.target
And configure the Filebeat replacement in the /etc/vector/vector.toml config. The IP address 172.26.10.108 is the IP address of the log server (Vector-Server):
data_dir = "/var/lib/vector"
[sources.nginx_file]
type = "file"
include = [ "/var/log/nginx/access.json.log" ]
start_at_beginning = false
fingerprinting.strategy = "device_and_inode"
[sinks.nginx_output_vector]
type = "vector"
inputs = [ "nginx_file" ]
address = "172.26.10.108:9876"
Do not forget to add the vector user to the appropriate group so that it can read the log files; for example, nginx on centos creates logs with adm group permissions.
usermod -a -G adm vector
Enable vector autostart and start it:
systemctl enable vector
systemctl start vector
The logs can be viewed like this:
journalctl -f -u vector
There should be something like this in the logs:
INFO vector::topology::builder: Healthcheck: Passed.
Load testing
Testing is carried out using Apache benchmark. The httpd-tools package was installed on all servers earlier.
Start the test using Apache benchmark from 4 different servers. First, launch the screen terminal multiplexer on each server, then run the test with Apache benchmark inside it. How to work with screen is covered in a separate article.
From the first server:
while true; do ab -H "User-Agent: 1server" -c 100 -n 10 -t 10 http://vhost1/; sleep 1; done
From the second server:
while true; do ab -H "User-Agent: 2server" -c 100 -n 10 -t 10 http://vhost2/; sleep 1; done
From the third server:
while true; do ab -H "User-Agent: 3server" -c 100 -n 10 -t 10 http://vhost3/; sleep 1; done
From the fourth server:
while true; do ab -H "User-Agent: 4server" -c 100 -n 10 -t 10 http://vhost4/; sleep 1; done
Let's check the data in Clickhouse. Connect to Clickhouse:
clickhouse-client -h 172.26.10.109 -m
Run an SQL query:
SELECT * FROM vector.logs;
ββnode_nameβββββ¬βββββββββββtimestampββ¬βserver_nameββ¬βuser_idββ¬βrequest_fullββββ¬βrequest_user_agentββ¬βrequest_http_hostββ¬βrequest_uriββ¬βrequest_schemeββ¬βrequest_methodββ¬βrequest_lengthββ¬βrequest_timeββ¬βrequest_referrerββ¬βresponse_statusββ¬βresponse_body_bytes_sentββ¬βresponse_content_typeββ¬βββremote_addrββ¬βremote_portββ¬βremote_userββ¬βupstream_addrββ¬βupstream_portββ¬βupstream_bytes_receivedββ¬βupstream_bytes_sentββ¬βupstream_cache_statusββ¬βupstream_connect_timeββ¬βupstream_header_timeββ¬βupstream_response_lengthββ¬βupstream_response_timeββ¬βupstream_statusββ¬βupstream_content_typeββ
β nginx-vector β 2020-08-07 04:32:42 β vhost1 β β GET / HTTP/1.0 β 1server β vhost1 β / β http β GET β 66 β 0.028 β β 404 β 27 β β 172.26.10.106 β 45886 β β 172.26.10.106 β 0 β 109 β 97 β DISABLED β 0 β 0.025 β 27 β 0.029 β 404 β β
ββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββ΄ββββββββββ΄βββββββββββββββββ΄βββββββββββββββββββββ΄ββββββββββββββββββββ΄ββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββββ΄βββββββββββββββ΄βββββββββββββββββββ΄ββββββββββββββββββ΄βββββββββββββββββββββββββββ΄ββββββββββββββββββββββββ΄ββββββββββββββββ΄ββββββββββββββ΄ββββββββββββββ΄ββββββββββββββββ΄ββββββββββββββββ΄ββββββββββββββββββββββββββ΄ββββββββββββββββββββββ΄ββββββββββββββββββββββββ΄ββββββββββββββββββββββββ΄βββββββββββββββββββββββ΄βββββββββββββββββββββββββββ΄βββββββββββββββββββββββββ΄ββββββββββββββββββ΄βββββββββββββββββββββββ
Find out the size of the tables in Clickhouse:
select concat(database, '.', table) as table,
formatReadableSize(sum(bytes)) as size,
sum(rows) as rows,
max(modification_time) as latest_modification,
sum(bytes) as bytes_size,
any(engine) as engine,
formatReadableSize(sum(primary_key_bytes_in_memory)) as primary_keys_size
from system.parts
where active
group by database, table
order by bytes_size desc;
Let's see how much space the logs take in Clickhouse. The logs table size is 857.19 MB, while the same data in the Elasticsearch index takes 4.5 GB. Without explicit compression parameters, the data in Clickhouse takes 4500/857.19 = 5.24 times less space than in Elasticsearch. In Vector, the compression field is used by default.
Source: www.habr.com