Two years ago I spent
ClickHouse is made up of 170 thousand lines of C++ code, excluding third-party libraries, and is one of the smaller distributed database codebases. By comparison, SQLite doesn't support distribution and consists of 235 thousand lines of C code. As of this writing, 207 engineers have contributed to ClickHouse, and the intensity of commits has been increasing recently.
In March 2017, ClickHouse began
In this article I'll look at the performance of a ClickHouse cluster on AWS EC2 using 36-core processors and NVMe storage.
UPDATE: A week after originally publishing this post, I re-ran the benchmark with an improved configuration and achieved much better results. This post has been updated to reflect those changes.
Launching an AWS EC2 cluster
I'll be using three c5d.9xlarge EC2 instances for this post. Each of them has 36 virtual CPUs, 72 GB of RAM, 900 GB of NVMe SSD storage and supports 10 Gigabit networking. They cost $1.962 / hour each in the eu-west-1 region when running on demand. I'll be using Ubuntu Server 16.04 LTS as the operating system.
The firewall is configured so that each machine can communicate with the others without restrictions, and only my IPv4 address is whitelisted for SSH access into the cluster.
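For reference, equivalent Security Group rules can be expressed with the AWS CLI along the lines below. This is only a sketch: the group ID and the whitelisted IPv4 address are placeholders, not values from my actual setup.

$ # Allow SSH only from my IPv4 address (placeholder shown).
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 22 --cidr 203.0.113.10/32
$ # Allow unrestricted traffic between members of the group itself.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 --source-group sg-0123456789abcdef0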
Getting the NVMe drive ready
For ClickHouse to work, I'll create an EXT4-format file system on the NVMe drive on each of the servers.
$ sudo mkfs -t ext4 /dev/nvme1n1
$ sudo mkdir /ch
$ sudo mount /dev/nvme1n1 /ch
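Note that this mount won't survive a reboot. If the machines are ever restarted, an /etc/fstab entry along these lines would make it persistent (my addition, not part of the original walkthrough; nofail avoids blocking boot if the device is absent):

$ echo '/dev/nvme1n1 /ch ext4 defaults,nofail 0 2' |
    sudo tee -a /etc/fstab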
Once everything is configured, you can see the mount point and the 783 GB of space available on each system.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 87.9M 1 loop /snap/core/5742
loop1 7:1 0 16.5M 1 loop /snap/amazon-ssm-agent/784
nvme0n1 259:1 0 8G 0 disk
└─nvme0n1p1 259:2 0 8G 0 part /
nvme1n1 259:0 0 838.2G 0 disk /ch
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 35G 0 35G 0% /dev
tmpfs 6.9G 8.8M 6.9G 1% /run
/dev/nvme0n1p1 7.7G 967M 6.8G 13% /
tmpfs 35G 0 35G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 35G 0 35G 0% /sys/fs/cgroup
/dev/loop0 88M 88M 0 100% /snap/core/5742
/dev/loop1 17M 17M 0 100% /snap/amazon-ssm-agent/784
tmpfs 6.9G 0 6.9G 0% /run/user/1000
/dev/nvme1n1 825G 73M 783G 1% /ch
The dataset I'll be using in this test is a data dump I generated from 1.1 billion taxi rides taken in New York City over six years. On the blog
To fetch the dump from AWS S3, I'll first install and configure the AWS CLI on the first server.
$ sudo apt update
$ sudo apt install awscli
$ aws configure
I'll set the client's concurrent request limit to 100 so that files download faster than they would with the default settings.
$ aws configure set default.s3.max_concurrent_requests 100
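The value can be read back with aws configure get to confirm it took effect:

$ aws configure get default.s3.max_concurrent_requests
100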
I'll download the taxi rides dataset from AWS S3 and store it on the NVMe drive on the first server. This dataset is ~104 GB of GZIP-compressed CSV.
$ sudo mkdir -p /ch/csv
$ sudo chown -R ubuntu /ch/csv
$ aws s3 sync s3://<bucket>/csv /ch/csv
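A quick sanity check that the full ~104 GB landed on the drive (the exact file count depends on how the dump was chunked):

$ du -hs /ch/csv
$ ls /ch/csv | wc -l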
Installing ClickHouse
I'll install the OpenJDK distribution for Java 8, as it's needed to run Apache ZooKeeper, which in turn is required for a distributed ClickHouse installation across all three machines.
$ sudo apt update
$ sudo apt install \
    openjdk-8-jre \
    openjdk-8-jdk-headless
Then I'll set the JAVA_HOME environment variable.
$ sudo vi /etc/profile
export JAVA_HOME=/usr
$ source /etc/profile
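It's worth confirming the JDK resolves before ZooKeeper is installed:

$ java -version
$ echo $JAVA_HOME
/usr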
I'll then use Ubuntu's package management system to install ClickHouse 18.16.1, Glances and ZooKeeper on all three machines.
$ sudo apt-key adv \
    --keyserver hkp://keyserver.ubuntu.com:80 \
    --recv E0C56BD4
$ echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" |
sudo tee /etc/apt/sources.list.d/clickhouse.list
$ sudo apt-get update
$ sudo apt install \
    clickhouse-client \
    clickhouse-server \
    glances \
    zookeeperd
I'll create a directory for ClickHouse as well as some configuration overrides on all three servers.
$ sudo mkdir /ch/clickhouse
$ sudo chown -R clickhouse /ch/clickhouse
$ sudo mkdir -p /etc/clickhouse-server/conf.d
$ sudo vi /etc/clickhouse-server/conf.d/taxis.conf
These are the configuration overrides I'll be using.
<?xml version="1.0"?>
<yandex>
    <listen_host>0.0.0.0</listen_host>
    <path>/ch/clickhouse/</path>

    <remote_servers>
        <perftest_3shards>
            <shard>
                <replica>
                    <host>172.30.2.192</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.162</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.36</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards>
    </remote_servers>

    <zookeeper-servers>
        <node>
            <host>172.30.2.192</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.162</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.36</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>

    <macros>
        <shard>03</shard>
        <replica>01</replica>
    </macros>
</yandex>
I'll then run ZooKeeper and the ClickHouse server on all three machines.
$ sudo /etc/init.d/zookeeper start
$ sudo service clickhouse-server start
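Before creating any tables, it's worth checking that ZooKeeper is answering on each machine; its four-letter ruok command should come back with imok:

$ echo ruok | nc localhost 2181
imok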
Loading data into ClickHouse
On the first server I'll create a trips table that will store the taxi trips dataset using the Log engine.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips (
trip_id UInt32,
vendor_id String,
pickup_datetime DateTime,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = Log;
I'll then extract and load each of the CSV files into the trips table. The following completed in 55 minutes and 10 seconds. After this operation, the size of the data directory was 134 GB.
$ time (for FILENAME in /ch/csv/trips_x*.csv.gz; do
            echo $FILENAME
            gunzip -c $FILENAME |
                clickhouse-client \
                    --host=0.0.0.0 \
                    --query="INSERT INTO trips FORMAT CSV"
        done)
The import rate worked out to 155 MB of uncompressed CSV content per second. I suspect this was due to a bottleneck in GZIP decompression. It might have been faster to decompress all the gzipped files in parallel using xargs and then load the decompressed data; a sketch of that idea follows.
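Here's a rough sketch of that alternative; I haven't benchmarked it, the parallelism level is arbitrary, and the decompressed CSVs would need a few hundred extra GB of free space on the drive.

$ # Decompress up to 16 files at a time, removing the .gz originals.
$ ls /ch/csv/trips_x*.csv.gz | xargs -n 1 -P 16 gunzip
$ # Then stream the plain CSVs into ClickHouse one after another.
$ time (for FILENAME in /ch/csv/trips_x*.csv; do
            clickhouse-client \
                --host=0.0.0.0 \
                --query="INSERT INTO trips FORMAT CSV" < $FILENAME
        done)

Below is what glances reported during the CSV import process.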
$ sudo glances
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 0:11:42
CPU 8.2% nice: 0.0% LOAD 36-core MEM 9.8% active: 5.20G SWAP 0.0%
user: 6.0% irq: 0.0% 1 min: 2.24 total: 68.7G inactive: 61.0G total: 0
system: 0.9% iowait: 1.3% 5 min: 1.83 used: 6.71G buffers: 66.4M used: 0
idle: 91.8% steal: 0.0% 15 min: 1.01 free: 62.0G cached: 61.6G free: 0
NETWORK Rx/s Tx/s TASKS 370 (507 thr), 2 run, 368 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 136b 2Kb
lo 343Mb 343Mb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
100.4 1.5 1.65G 1.06G 9909 ubuntu 0 S 1:01.33 0 0 clickhouse-client --host=0.0.0.0 --query=INSERT INTO trips FORMAT CSV
DISK I/O R/s W/s 85.1 0.0 4.65M 708K 9908 ubuntu 0 R 0:50.60 32M 0 gzip -d -c /ch/csv/trips_xac.csv.gz
loop0 0 0 54.9 5.1 8.14G 3.49G 8091 clickhous 0 S 1:44.23 0 45M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
loop1 0 0 4.5 0.0 0 0 319 root 0 S 0:07.50 1K 0 kworker/u72:2
nvme0n1 0 3K 2.3 0.0 91.1M 28.9M 9912 root 0 R 0:01.56 0 0 /usr/bin/python3 /usr/bin/glances
nvme0n1p1 0 3K 0.3 0.0 0 0 960 root -20 S 0:00.10 0 0 kworker/28:1H
nvme1n1 32.1M 495M 0.3 0.0 0 0 1058 root -20 S 0:00.90 0 0 kworker/23:1H
I'll free up space on the NVMe drive by deleting the original CSV files before continuing.
$ sudo rm -fr /ch/csv
Converting the data to columnar form
ClickHouse's Log engine stores data in a row-oriented format. To query the data faster, I'll convert it to a columnar format using the MergeTree engine.
$ clickhouse-client --host=0.0.0.0
The following completed in 34 minutes and 50 seconds. After this operation, the size of the data directory was 237 GB.
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
trip_id,
CAST(vendor_id AS Enum8('1' = 1,
'2' = 2,
'CMT' = 3,
'VTS' = 4,
'DDS' = 5,
'B02512' = 10,
'B02598' = 11,
'B02617' = 12,
'B02682' = 13,
'B02764' = 14)) AS vendor_id,
toDate(pickup_datetime) AS pickup_date,
ifNull(pickup_datetime, toDateTime(0)) AS pickup_datetime,
toDate(dropoff_datetime) AS dropoff_date,
ifNull(dropoff_datetime, toDateTime(0)) AS dropoff_datetime,
assumeNotNull(store_and_fwd_flag) AS store_and_fwd_flag,
assumeNotNull(rate_code_id) AS rate_code_id,
assumeNotNull(pickup_longitude) AS pickup_longitude,
assumeNotNull(pickup_latitude) AS pickup_latitude,
assumeNotNull(dropoff_longitude) AS dropoff_longitude,
assumeNotNull(dropoff_latitude) AS dropoff_latitude,
assumeNotNull(passenger_count) AS passenger_count,
assumeNotNull(trip_distance) AS trip_distance,
assumeNotNull(fare_amount) AS fare_amount,
assumeNotNull(extra) AS extra,
assumeNotNull(mta_tax) AS mta_tax,
assumeNotNull(tip_amount) AS tip_amount,
assumeNotNull(tolls_amount) AS tolls_amount,
assumeNotNull(ehail_fee) AS ehail_fee,
assumeNotNull(improvement_surcharge) AS improvement_surcharge,
assumeNotNull(total_amount) AS total_amount,
assumeNotNull(payment_type) AS payment_type_,
assumeNotNull(trip_type) AS trip_type,
pickup AS pickup,
dropoff AS dropoff,
CAST(assumeNotNull(cab_type)
AS Enum8('yellow' = 1, 'green' = 2))
AS cab_type,
precipitation AS precipitation,
snow_depth AS snow_depth,
snowfall AS snowfall,
max_temperature AS max_temperature,
min_temperature AS min_temperature,
average_wind_speed AS average_wind_speed,
pickup_nyct2010_gid AS pickup_nyct2010_gid,
pickup_ctlabel AS pickup_ctlabel,
pickup_borocode AS pickup_borocode,
pickup_boroname AS pickup_boroname,
pickup_ct2010 AS pickup_ct2010,
pickup_boroct2010 AS pickup_boroct2010,
pickup_cdeligibil AS pickup_cdeligibil,
pickup_ntacode AS pickup_ntacode,
pickup_ntaname AS pickup_ntaname,
pickup_puma AS pickup_puma,
dropoff_nyct2010_gid AS dropoff_nyct2010_gid,
dropoff_ctlabel AS dropoff_ctlabel,
dropoff_borocode AS dropoff_borocode,
dropoff_boroname AS dropoff_boroname,
dropoff_ct2010 AS dropoff_ct2010,
dropoff_boroct2010 AS dropoff_boroct2010,
dropoff_cdeligibil AS dropoff_cdeligibil,
dropoff_ntacode AS dropoff_ntacode,
dropoff_ntaname AS dropoff_ntaname,
dropoff_puma AS dropoff_puma
FROM trips;
This is what the glances output looked like during the operation:
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 1:06:09
CPU 10.3% nice: 0.0% LOAD 36-core MEM 16.1% active: 13.3G SWAP 0.0%
user: 7.9% irq: 0.0% 1 min: 1.87 total: 68.7G inactive: 52.8G total: 0
system: 1.6% iowait: 0.8% 5 min: 1.76 used: 11.1G buffers: 71.8M used: 0
idle: 89.7% steal: 0.0% 15 min: 1.95 free: 57.6G cached: 57.2G free: 0
NETWORK Rx/s Tx/s TASKS 367 (523 thr), 1 run, 366 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 1Kb 8Kb
lo 2Kb 2Kb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
241.9 12.8 20.7G 8.78G 8091 clickhous 0 S 30:36.73 34M 125M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
DISK I/O R/s W/s 2.6 0.0 90.4M 28.3M 9948 root 0 R 1:18.53 0 0 /usr/bin/python3 /usr/bin/glances
loop0 0 0 1.3 0.0 0 0 203 root 0 S 0:09.82 0 0 kswapd0
loop1 0 0 0.3 0.1 315M 61.3M 15701 ubuntu 0 S 0:00.40 0 0 clickhouse-client --host=0.0.0.0
nvme0n1 0 3K 0.3 0.0 0 0 7 root 0 S 0:00.83 0 0 rcu_sched
nvme0n1p1 0 3K 0.0 0.0 0 0 142 root 0 S 0:00.22 0 0 migration/27
nvme1n1 25.8M 330M 0.0 0.0 59.7M 1.79M 2764 ubuntu 0 S 0:00.00 0 0 (sd-pam)
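For what it's worth, the on-disk size of each table can also be checked from within ClickHouse via system.parts. This is only a sketch: it covers MergeTree-family tables (the Log-engine trips table won't show up here), and it assumes the bytes column name used by ClickHouse releases of this era.

SELECT table,
       formatReadableSize(sum(bytes)) AS size_on_disk
FROM system.parts
WHERE active
GROUP BY table;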
In the previous benchmark, several columns were cast and recomputed. I found that some of those functions no longer behaved as expected on this dataset, so to work around this I dropped the offending functions and loaded the data without converting to more granular types.
Distributing the data across the cluster
I'll distribute the data across all three cluster nodes. To begin, I'll create the table below on all three machines.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips_mergetree_third (
trip_id UInt32,
vendor_id String,
pickup_date Date,
pickup_datetime DateTime,
dropoff_date Date,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192);
Then I'll make sure the first server can see all three nodes in the cluster.
SELECT *
FROM system.clusters
WHERE cluster = 'perftest_3shards'
FORMAT Vertical;
Row 1:
──────
cluster: perftest_3shards
shard_num: 1
shard_weight: 1
replica_num: 1
host_name: 172.30.2.192
host_address: 172.30.2.192
port: 9000
is_local: 1
user: default
default_database:
Row 2:
──────
cluster: perftest_3shards
shard_num: 2
shard_weight: 1
replica_num: 1
host_name: 172.30.2.162
host_address: 172.30.2.162
port: 9000
is_local: 0
user: default
default_database:
Row 3:
──────
cluster: perftest_3shards
shard_num: 3
shard_weight: 1
replica_num: 1
host_name: 172.30.2.36
host_address: 172.30.2.36
port: 9000
is_local: 0
user: default
default_database:
Then I'll define a new table on the first server that's based on the trips_mergetree_third schema and uses the Distributed engine.
CREATE TABLE trips_mergetree_x3
AS trips_mergetree_third
ENGINE = Distributed(perftest_3shards,
default,
trips_mergetree_third,
rand());
I'll then copy the data out of the MergeTree-based table and onto all three servers. The following completed in 34 minutes and 44 seconds.
INSERT INTO trips_mergetree_x3
SELECT * FROM trips_mergetree;
After the above operation, I gave ClickHouse 15 minutes to come down off its peak storage usage. The data directories ended up being 264 GB, 34 GB and 33 GB respectively on each of the three servers.
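As a sanity check that every row made it across the shards, a count through the Distributed table should report the full ~1.1 billion trips, while the same count against trips_mergetree_third on any single node returns only that node's share:

SELECT count(*) FROM trips_mergetree_x3;

SELECT count(*) FROM trips_mergetree_third;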
Evaluating ClickHouse cluster performance
What I'll look at next are the fastest times I've seen from running each query multiple times against the trips_mergetree_x3 table.
$ clickhouse-client --host=0.0.0.0
The following completed in 2.449 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree_x3
GROUP BY cab_type;
The following completed in 0.691 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree_x3
GROUP BY passenger_count;
The following completed in 0 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year;
The following completed in 0.983 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
For comparison, I'll run the same queries against a MergeTree-based table that lives solely on the first server.
Evaluating single-node ClickHouse performance
What I'll look at next are the fastest times I've seen from running each query multiple times against the trips_mergetree table.
The following completed in 0.241 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree
GROUP BY cab_type;
The following completed in 0.826 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree
GROUP BY passenger_count;
The following completed in 1.209 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year;
The following completed in 1.781 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
Reflections on the results
This is the first time a free, CPU-based database has managed to outperform a GPU-based database in my benchmarks. That GPU-based database has since gone through two revisions, but the performance ClickHouse delivered on a single node is nonetheless very impressive.
At the same time, when running Query 1 on the distributed engine, the overhead is an order of magnitude higher. I'd like to hope I've missed something in my research for this post, because it would be nice to see query times drop as more nodes are added to the cluster. Still, it's great that running the other queries saw performance increase by around 2x.
It would be nice to see ClickHouse evolve towards being able to separate storage and compute so they can scale independently. The HDFS support added last year could be a step towards this. On the compute side, if a single query can be sped up by adding more nodes to the cluster, then the future of this software is very bright.
Thank you for taking the time to read this post. I offer consulting, architecture and hands-on development services to clients in North America and Europe. If you'd like to discuss how my suggestions can help your business, please contact me via
Source: www.habr.com