Two years ago, I ran a benchmark of this database using a single machine.
ClickHouse is made up of 170K lines of C++ code, excluding third-party libraries, and is one of the smaller distributed database codebases. By comparison, SQLite doesn't support distribution and consists of 235K lines of C code. As of this writing, 207 engineers have contributed to ClickHouse, and the rate of commits has been accelerating recently.
In March 2017, ClickHouse began maintaining a change log as an easy way to keep track of development.
In this post, I'm going to take a look at the performance of a ClickHouse cluster on AWS EC2 using 36-core CPUs and NVMe storage.
UPDATE: A week after I published this post, I re-ran the benchmark with an improved configuration and achieved much better results. This post has been updated to reflect those changes.
Launching an AWS EC2 Cluster
I'll be using three c5d.9xlarge EC2 instances for this post. Each of them has 36 virtual CPUs, 72 GB of RAM, 900 GB of NVMe SSD storage and supports 10 Gigabit networking. They cost $1.962/hour each in the eu-west-1 region when launched on-demand. I'll be using Ubuntu Server 16.04 LTS as the operating system.
The firewall is configured so that each machine can communicate with the others without restriction, and only my IPv4 address is whitelisted for SSH access into the cluster.
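As a rough sketch of those rules (the security group ID and the IPv4 address below are placeholders rather than my actual values), the AWS CLI commands could look like this:

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.10/32
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 0-65535 \
    --source-group sg-0123456789abcdef0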
Preparing the NVMe Drives
For ClickHouse to work, I'll create an EXT4-formatted file system on the NVMe drive on each of the servers.
$ sudo mkfs -t ext4 /dev/nvme1n1
$ sudo mkdir /ch
$ sudo mount /dev/nvme1n1 /ch
Once everything is set up, you can see the mount point and the 783 GB of space available on each system.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 87.9M 1 loop /snap/core/5742
loop1 7:1 0 16.5M 1 loop /snap/amazon-ssm-agent/784
nvme0n1 259:1 0 8G 0 disk
└─nvme0n1p1 259:2 0 8G 0 part /
nvme1n1 259:0 0 838.2G 0 disk /ch
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 35G 0 35G 0% /dev
tmpfs 6.9G 8.8M 6.9G 1% /run
/dev/nvme0n1p1 7.7G 967M 6.8G 13% /
tmpfs 35G 0 35G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 35G 0 35G 0% /sys/fs/cgroup
/dev/loop0 88M 88M 0 100% /snap/core/5742
/dev/loop1 17M 17M 0 100% /snap/amazon-ssm-agent/784
tmpfs 6.9G 0 6.9G 0% /run/user/1000
/dev/nvme1n1 825G 73M 783G 1% /ch
The dataset I'll be using in this benchmark is a data dump I produced of 1.1 billion taxi trips taken in New York City over six years. An earlier post on my blog details how I put this dataset together. To fetch it, I'll first install the AWS CLI.
$ sudo apt update
$ sudo apt install awscli
$ aws configure
I'll set the client's concurrent request limit to 100 so that the files download faster than they would with the default settings.
$ aws configure set \
    default.s3.max_concurrent_requests \
    100
I'll download the taxi ride dataset from AWS S3 and store it on the NVMe drive on the first server. This dataset is ~104 GB of GZIP-compressed CSV.
$ sudo mkdir -p /ch/csv
$ sudo chown -R ubuntu /ch/csv
$ aws s3 sync s3://<bucket>/csv /ch/csv
Installing ClickHouse
I'll install the OpenJDK distribution of Java 8, as it's needed to run Apache ZooKeeper, which a distributed ClickHouse installation requires, on all three machines.
$ sudo apt update
$ sudo apt install \
    openjdk-8-jre \
    openjdk-8-jdk-headless
Then I'll set the JAVA_HOME environment variable.
$ sudo vi /etc/profile
export JAVA_HOME=/usr
$ source /etc/profile
Then I'll use Ubuntu's package manager to install ClickHouse 18.16.1, glances and ZooKeeper on all three machines.
$ sudo apt-key adv \
    --keyserver hkp://keyserver.ubuntu.com:80 \
    --recv E0C56BD4
$ echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" |
sudo tee /etc/apt/sources.list.d/clickhouse.list
$ sudo apt-get update
$ sudo apt install \
    clickhouse-client \
    clickhouse-server \
    glances \
    zookeeperd
I'll create a data directory for ClickHouse as well as some configuration overrides on all three servers.
$ sudo mkdir /ch/clickhouse
$ sudo chown -R clickhouse /ch/clickhouse
$ sudo mkdir -p /etc/clickhouse-server/conf.d
$ sudo vi /etc/clickhouse-server/conf.d/taxis.conf
These are the configuration overrides I'll be using.
<?xml version="1.0"?>
<yandex>
    <listen_host>0.0.0.0</listen_host>
    <path>/ch/clickhouse/</path>

    <remote_servers>
        <perftest_3shards>
            <shard>
                <replica>
                    <host>172.30.2.192</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.162</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.36</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards>
    </remote_servers>

    <zookeeper-servers>
        <node>
            <host>172.30.2.192</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.162</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.36</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>

    <macros>
        <shard>03</shard>
        <replica>01</replica>
    </macros>
</yandex>
Then I'll run ZooKeeper and the ClickHouse server on all three machines.
$ sudo /etc/init.d/zookeeper start
$ sudo service clickhouse-server start
Loading Data Into ClickHouse
On the first server, I'll create a trips table that will store the taxi trips dataset using the Log engine.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips (
trip_id UInt32,
vendor_id String,
pickup_datetime DateTime,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = Log;
Then I'll extract and load each of the CSV files into the trips table. The following completed in 55 minutes and 10 seconds. After this operation, the data directory was 134 GB in size.
$ time (for FILENAME in /ch/csv/trips_x*.csv.gz; do
            echo $FILENAME
            gunzip -c $FILENAME | \
                clickhouse-client \
                    --host=0.0.0.0 \
                    --query="INSERT INTO trips FORMAT CSV"
        done)
The import rate was 155 MB of uncompressed CSV content per second. I suspect this was due to a bottleneck in GZIP decompression. It might have been faster to decompress all of the gzipped files in parallel using xargs and then load the decompressed data.
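As a rough sketch of that alternative (not something I ran for this benchmark), the decompression step could look like the following, assuming GNU xargs; the 16 parallel jobs are an arbitrary choice.

$ ls /ch/csv/trips_x*.csv.gz | \
    xargs -n 1 -P 16 gunzip

Below is what glances was reporting during the CSV import process.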
$ sudo glances
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 0:11:42
CPU 8.2% nice: 0.0% LOAD 36-core MEM 9.8% active: 5.20G SWAP 0.0%
user: 6.0% irq: 0.0% 1 min: 2.24 total: 68.7G inactive: 61.0G total: 0
system: 0.9% iowait: 1.3% 5 min: 1.83 used: 6.71G buffers: 66.4M used: 0
idle: 91.8% steal: 0.0% 15 min: 1.01 free: 62.0G cached: 61.6G free: 0
NETWORK Rx/s Tx/s TASKS 370 (507 thr), 2 run, 368 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 136b 2Kb
lo 343Mb 343Mb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
100.4 1.5 1.65G 1.06G 9909 ubuntu 0 S 1:01.33 0 0 clickhouse-client --host=0.0.0.0 --query=INSERT INTO trips FORMAT CSV
DISK I/O R/s W/s 85.1 0.0 4.65M 708K 9908 ubuntu 0 R 0:50.60 32M 0 gzip -d -c /ch/csv/trips_xac.csv.gz
loop0 0 0 54.9 5.1 8.14G 3.49G 8091 clickhous 0 S 1:44.23 0 45M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
loop1 0 0 4.5 0.0 0 0 319 root 0 S 0:07.50 1K 0 kworker/u72:2
nvme0n1 0 3K 2.3 0.0 91.1M 28.9M 9912 root 0 R 0:01.56 0 0 /usr/bin/python3 /usr/bin/glances
nvme0n1p1 0 3K 0.3 0.0 0 0 960 root -20 S 0:00.10 0 0 kworker/28:1H
nvme1n1 32.1M 495M 0.3 0.0 0 0 1058 root -20 S 0:00.90 0 0 kworker/23:1H
I'll free up space on the NVMe drive by deleting the original CSV files before continuing.
$ sudo rm -fr /ch/csv
Converting the Data Into Columnar Form
ClickHouse's Log engine stores data in a row-oriented format. To query the data faster, I'll convert it to a columnar format using the MergeTree engine.
$ clickhouse-client --host=0.0.0.0
The following completed in 34 minutes and 50 seconds. After this operation, the data directory was 237 GB in size.
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
trip_id,
CAST(vendor_id AS Enum8('1' = 1,
'2' = 2,
'CMT' = 3,
'VTS' = 4,
'DDS' = 5,
'B02512' = 10,
'B02598' = 11,
'B02617' = 12,
'B02682' = 13,
'B02764' = 14)) AS vendor_id,
toDate(pickup_datetime) AS pickup_date,
ifNull(pickup_datetime, toDateTime(0)) AS pickup_datetime,
toDate(dropoff_datetime) AS dropoff_date,
ifNull(dropoff_datetime, toDateTime(0)) AS dropoff_datetime,
assumeNotNull(store_and_fwd_flag) AS store_and_fwd_flag,
assumeNotNull(rate_code_id) AS rate_code_id,
assumeNotNull(pickup_longitude) AS pickup_longitude,
assumeNotNull(pickup_latitude) AS pickup_latitude,
assumeNotNull(dropoff_longitude) AS dropoff_longitude,
assumeNotNull(dropoff_latitude) AS dropoff_latitude,
assumeNotNull(passenger_count) AS passenger_count,
assumeNotNull(trip_distance) AS trip_distance,
assumeNotNull(fare_amount) AS fare_amount,
assumeNotNull(extra) AS extra,
assumeNotNull(mta_tax) AS mta_tax,
assumeNotNull(tip_amount) AS tip_amount,
assumeNotNull(tolls_amount) AS tolls_amount,
assumeNotNull(ehail_fee) AS ehail_fee,
assumeNotNull(improvement_surcharge) AS improvement_surcharge,
assumeNotNull(total_amount) AS total_amount,
assumeNotNull(payment_type) AS payment_type_,
assumeNotNull(trip_type) AS trip_type,
pickup AS pickup,
dropoff AS dropoff,
CAST(assumeNotNull(cab_type)
AS Enum8('yellow' = 1, 'green' = 2))
AS cab_type,
precipitation AS precipitation,
snow_depth AS snow_depth,
snowfall AS snowfall,
max_temperature AS max_temperature,
min_temperature AS min_temperature,
average_wind_speed AS average_wind_speed,
pickup_nyct2010_gid AS pickup_nyct2010_gid,
pickup_ctlabel AS pickup_ctlabel,
pickup_borocode AS pickup_borocode,
pickup_boroname AS pickup_boroname,
pickup_ct2010 AS pickup_ct2010,
pickup_boroct2010 AS pickup_boroct2010,
pickup_cdeligibil AS pickup_cdeligibil,
pickup_ntacode AS pickup_ntacode,
pickup_ntaname AS pickup_ntaname,
pickup_puma AS pickup_puma,
dropoff_nyct2010_gid AS dropoff_nyct2010_gid,
dropoff_ctlabel AS dropoff_ctlabel,
dropoff_borocode AS dropoff_borocode,
dropoff_boroname AS dropoff_boroname,
dropoff_ct2010 AS dropoff_ct2010,
dropoff_boroct2010 AS dropoff_boroct2010,
dropoff_cdeligibil AS dropoff_cdeligibil,
dropoff_ntacode AS dropoff_ntacode,
dropoff_ntaname AS dropoff_ntaname,
dropoff_puma AS dropoff_puma
FROM trips;
This is what glances looked like during the operation:
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 1:06:09
CPU 10.3% nice: 0.0% LOAD 36-core MEM 16.1% active: 13.3G SWAP 0.0%
user: 7.9% irq: 0.0% 1 min: 1.87 total: 68.7G inactive: 52.8G total: 0
system: 1.6% iowait: 0.8% 5 min: 1.76 used: 11.1G buffers: 71.8M used: 0
idle: 89.7% steal: 0.0% 15 min: 1.95 free: 57.6G cached: 57.2G free: 0
NETWORK Rx/s Tx/s TASKS 367 (523 thr), 1 run, 366 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 1Kb 8Kb
lo 2Kb 2Kb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
241.9 12.8 20.7G 8.78G 8091 clickhous 0 S 30:36.73 34M 125M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
DISK I/O R/s W/s 2.6 0.0 90.4M 28.3M 9948 root 0 R 1:18.53 0 0 /usr/bin/python3 /usr/bin/glances
loop0 0 0 1.3 0.0 0 0 203 root 0 S 0:09.82 0 0 kswapd0
loop1 0 0 0.3 0.1 315M 61.3M 15701 ubuntu 0 S 0:00.40 0 0 clickhouse-client --host=0.0.0.0
nvme0n1 0 3K 0.3 0.0 0 0 7 root 0 S 0:00.83 0 0 rcu_sched
nvme0n1p1 0 3K 0.0 0.0 0 0 142 root 0 S 0:00.22 0 0 migration/27
nvme1n1 25.8M 330M 0.0 0.0 59.7M 1.79M 2764 ubuntu 0 S 0:00.00 0 0 (sd-pam)
During the above conversion, a number of columns were converted and recalculated. I found that some of those functions no longer worked as expected against this dataset. To work around this, I removed the offending functions and loaded the data without converting it to more granular types.
Distributing the Data Across the Cluster
I'll distribute the data across all three nodes in the cluster. To start, I'll create the table below on all three machines.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips_mergetree_third (
trip_id UInt32,
vendor_id String,
pickup_date Date,
pickup_datetime DateTime,
dropoff_date Date,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192);
Then I'll make sure that the first server can see all three nodes in the cluster.
SELECT *
FROM system.clusters
WHERE cluster = 'perftest_3shards'
FORMAT Vertical;
Row 1:
──────
cluster: perftest_3shards
shard_num: 1
shard_weight: 1
replica_num: 1
host_name: 172.30.2.192
host_address: 172.30.2.192
port: 9000
is_local: 1
user: default
default_database:
Row 2:
──────
cluster: perftest_3shards
shard_num: 2
shard_weight: 1
replica_num: 1
host_name: 172.30.2.162
host_address: 172.30.2.162
port: 9000
is_local: 0
user: default
default_database:
Row 3:
──────
cluster: perftest_3shards
shard_num: 3
shard_weight: 1
replica_num: 1
host_name: 172.30.2.36
host_address: 172.30.2.36
port: 9000
is_local: 0
user: default
default_database:
Then I'll define a new table on the first server that's based on the trips_mergetree_third schema and uses the Distributed engine. Rows inserted into it will be spread across the shards using rand() as the sharding key.
CREATE TABLE trips_mergetree_x3
AS trips_mergetree_third
ENGINE = Distributed(perftest_3shards,
default,
trips_mergetree_third,
rand());
Then I'll copy the data from the MergeTree-based table out to all three servers. The following completed in 34 minutes and 44 seconds.
INSERT INTO trips_mergetree_x3
SELECT * FROM trips_mergetree;
After the above operation, I gave ClickHouse 15 minutes to come back down from its maximum storage high-water mark. The data directories ended up being 264 GB, 34 GB and 33 GB in size respectively on each of the three servers.
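For reference, those directory sizes can be checked on each server with du, assuming the /ch/clickhouse data path configured earlier:

$ sudo du -sh /ch/clickhouse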
Benchmarking the ClickHouse Cluster
What follows are the fastest times I saw after running each query multiple times on the trips_mergetree_x3 table.
$ clickhouse-client --host=0.0.0.0
The following completed in 2.449 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree_x3
GROUP BY cab_type;
The following completed in 0.691 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree_x3
GROUP BY passenger_count;
The following completed in 0 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year;
The following completed in 0.983 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
For comparison's sake, I ran the same queries against a MergeTree-based table that lives solely on the first server.
Benchmarking a Single ClickHouse Node
What follows are the fastest times I saw after running each query multiple times on the trips_mergetree table.
The following completed in 0.241 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree
GROUP BY cab_type;
The following completed in 0.826 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree
GROUP BY passenger_count;
The following completed in 1.209 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year;
The following completed in 1.781 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
Thoughts on the Results
This is the first time that a free, CPU-based database has managed to outperform a GPU-based database in my benchmarks. That GPU-based database has since gone through two revisions, but the performance ClickHouse delivered on a single node is nonetheless very impressive.
At the same time, when executing Query 1 on the Distributed engine, the overhead is an order of magnitude higher. I'm hoping I missed something in my research for this post, because it would be nice to see query times drop as I add more nodes to the cluster. It is great, though, that when executing the other queries, performance increased by around 2x.
It would be nice to see ClickHouse evolve towards being able to separate storage and compute so that each can scale independently. The HDFS support that was added last year could be a step towards this. On the compute side, if a single query can be sped up by adding more nodes to the cluster, then the future of this software is very bright.
Thank you for taking the time to read this post. I offer consulting, architecture and hands-on development services to clients in North America and Europe. If you'd like to discuss how my suggestions can help your business, please contact me.
Source: www.habr.com