Two years ago I spent ...
ClickHouse is made up of 170 thousand lines of C++ code when excluding third-party libraries, which makes it one of the smaller distributed database codebases. By comparison, SQLite does not support distribution and consists of 235 thousand lines of C code. As of this writing, 207 engineers have contributed to ClickHouse, and the intensity of commits has been picking up recently.
In March 2017, ClickHouse began maintaining ...
In this post I'll look at the performance of a ClickHouse cluster on AWS EC2 using 36-core CPUs and NVMe storage.
UPDATE: A week after originally publishing this post, I re-ran the benchmark with an improved configuration and achieved much better results. This post has been updated to reflect those changes.
Launching an AWS EC2 Cluster
I'll be using three c5d.9xlarge EC2 instances for this post. Each of them contains 36 vCPUs, 72 GB of RAM, 900 GB of NVMe SSD storage and supports 10 Gigabit networking. They cost $1.962 / hour each in the eu-west-1 region when launched on demand. I'll be using Ubuntu Server 16.04 LTS as the operating system.
The firewall is configured so that each machine can communicate with the others without restriction, and only my IPv4 address is whitelisted for SSH access to the cluster.
Getting the NVMe Storage Ready
For ClickHouse to work, I'll create an EXT4-formatted file system on the NVMe drive on each of the servers.
$ sudo mkfs -t ext4 /dev/nvme1n1
$ sudo mkdir /ch
$ sudo mount /dev/nvme1n1 /ch
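Note that this mount will not survive a reboot on its own. If the machines are expected to be restarted, one option is an /etc/fstab entry along the following lines; this is just a sketch and assumes the device keeps the /dev/nvme1n1 name across reboots.
$ echo '/dev/nvme1n1 /ch ext4 defaults,nofail 0 2' | \
    sudo tee -a /etc/fstab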
Once everything is set up, you can see the mount point and the 783 GB of space available on each system.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 87.9M 1 loop /snap/core/5742
loop1 7:1 0 16.5M 1 loop /snap/amazon-ssm-agent/784
nvme0n1 259:1 0 8G 0 disk
└─nvme0n1p1 259:2 0 8G 0 part /
nvme1n1 259:0 0 838.2G 0 disk /ch
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 35G 0 35G 0% /dev
tmpfs 6.9G 8.8M 6.9G 1% /run
/dev/nvme0n1p1 7.7G 967M 6.8G 13% /
tmpfs 35G 0 35G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 35G 0 35G 0% /sys/fs/cgroup
/dev/loop0 88M 88M 0 100% /snap/core/5742
/dev/loop1 17M 17M 0 100% /snap/amazon-ssm-agent/784
tmpfs 6.9G 0 6.9G 0% /run/user/1000
/dev/nvme1n1 825G 73M 783G 1% /ch
The dataset I'll be using in this benchmark is a data dump I generated from the 1.1 billion taxi rides taken in New York City over six years. The blog ...
$ sudo apt update
$ sudo apt install awscli
$ aws configure
I'll set the client's concurrent requests limit to 100 so that the files download faster than they would with the default settings.
$ aws configure set default.s3.max_concurrent_requests 100
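The setting can be read back to confirm it took effect.
$ aws configure get default.s3.max_concurrent_requests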
I'll download the taxi ride dataset from AWS S3 and store it on the NVMe drive on the first server. This dataset is ~104 GB in GZIP-compressed CSV form.
$ sudo mkdir -p /ch/csv
$ sudo chown -R ubuntu /ch/csv
$ aws s3 sync s3://<bucket>/csv /ch/csv
Installing ClickHouse
I'll install the OpenJDK distribution of Java 8, as it's needed to run Apache ZooKeeper, which in turn is required for a distributed ClickHouse installation across all three machines.
$ sudo apt update
$ sudo apt install \
    openjdk-8-jre \
    openjdk-8-jdk-headless
Then I'll set the JAVA_HOME environment variable.
$ sudo vi /etc/profile
export JAVA_HOME=/usr
$ source /etc/profile
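As a quick sanity check, the variable and the JDK can be verified; the exact version string will vary by OpenJDK build.
$ echo $JAVA_HOME
$ java -version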
I'll then use Ubuntu's package manager to install ClickHouse 18.16.1, glances and ZooKeeper on all three machines.
$ sudo apt-key adv \
    --keyserver hkp://keyserver.ubuntu.com:80 \
    --recv E0C56BD4
$ echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" |
sudo tee /etc/apt/sources.list.d/clickhouse.list
$ sudo apt-get update
$ sudo apt install \
    clickhouse-client \
    clickhouse-server \
    glances \
    zookeeperd
I'll create a directory for ClickHouse's data and apply some configuration overrides on all three servers.
$ sudo mkdir /ch/clickhouse
$ sudo chown -R clickhouse /ch/clickhouse
$ sudo mkdir -p /etc/clickhouse-server/conf.d
$ sudo vi /etc/clickhouse-server/conf.d/taxis.conf
These are the configuration overrides I'll be using.
<?xml version="1.0"?>
<yandex>
    <listen_host>0.0.0.0</listen_host>
    <path>/ch/clickhouse/</path>
    <remote_servers>
        <perftest_3shards>
            <shard>
                <replica>
                    <host>172.30.2.192</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.162</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.36</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards>
    </remote_servers>
    <zookeeper-servers>
        <node>
            <host>172.30.2.192</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.162</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.36</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>
    <macros>
        <shard>03</shard>
        <replica>01</replica>
    </macros>
</yandex>
I'll then launch ZooKeeper and the ClickHouse server on all three machines.
$ sudo /etc/init.d/zookeeper start
$ sudo service clickhouse-server start
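Before loading any data, it's worth a quick check on each machine that both services are responding. Something along these lines should do; ZooKeeper replies imok to the ruok four-letter-word command.
$ echo ruok | nc localhost 2181
$ clickhouse-client --host=0.0.0.0 --query="SELECT version()"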
Loading Data into ClickHouse
On the first server I'll create a trips table that will hold the taxi rides dataset using the Log engine.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips (
trip_id UInt32,
vendor_id String,
pickup_datetime DateTime,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = Log;
I'll then extract and load each of the CSV files into the trips table. The following completed in 55 minutes and 10 seconds. After this operation, the data directory was 134 GB in size.
$ time (for FILENAME in /ch/csv/trips_x*.csv.gz; do
      echo $FILENAME
      gunzip -c $FILENAME |
          clickhouse-client \
              --host=0.0.0.0 \
              --query="INSERT INTO trips FORMAT CSV"
  done)
That works out to an import rate of 155 MB of uncompressed CSV content per second. I suspect the bottleneck was GZIP decompression; it might have been faster to decompress all the gzipped files in parallel using xargs and then load the already-decompressed data (a sketch of that approach follows the glances output below). Here is what glances was reporting during the CSV import process.
$ sudo glances
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 0:11:42
CPU 8.2% nice: 0.0% LOAD 36-core MEM 9.8% active: 5.20G SWAP 0.0%
user: 6.0% irq: 0.0% 1 min: 2.24 total: 68.7G inactive: 61.0G total: 0
system: 0.9% iowait: 1.3% 5 min: 1.83 used: 6.71G buffers: 66.4M used: 0
idle: 91.8% steal: 0.0% 15 min: 1.01 free: 62.0G cached: 61.6G free: 0
NETWORK Rx/s Tx/s TASKS 370 (507 thr), 2 run, 368 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 136b 2Kb
lo 343Mb 343Mb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
100.4 1.5 1.65G 1.06G 9909 ubuntu 0 S 1:01.33 0 0 clickhouse-client --host=0.0.0.0 --query=INSERT INTO trips FORMAT CSV
DISK I/O R/s W/s 85.1 0.0 4.65M 708K 9908 ubuntu 0 R 0:50.60 32M 0 gzip -d -c /ch/csv/trips_xac.csv.gz
loop0 0 0 54.9 5.1 8.14G 3.49G 8091 clickhous 0 S 1:44.23 0 45M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
loop1 0 0 4.5 0.0 0 0 319 root 0 S 0:07.50 1K 0 kworker/u72:2
nvme0n1 0 3K 2.3 0.0 91.1M 28.9M 9912 root 0 R 0:01.56 0 0 /usr/bin/python3 /usr/bin/glances
nvme0n1p1 0 3K 0.3 0.0 0 0 960 root -20 S 0:00.10 0 0 kworker/28:1H
nvme1n1 32.1M 495M 0.3 0.0 0 0 1058 root -20 S 0:00.90 0 0 kworker/23:1H
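For what it's worth, a minimal sketch of the parallel-decompression alternative mentioned above could look like the following. It assumes there is enough free space on the NVMe drive for the uncompressed CSVs, which, going by the 155 MB/s over roughly 55 minutes figure, would be on the order of 500 GB.
$ cd /ch/csv
$ ls trips_x*.csv.gz | xargs -n 1 -P 36 gunzip  # up to 36 decompressions in parallel
$ time (for FILENAME in /ch/csv/trips_x*.csv; do
      clickhouse-client \
          --host=0.0.0.0 \
          --query="INSERT INTO trips FORMAT CSV" < $FILENAME
  done)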
I'll free up space on the NVMe drive by deleting the original CSV files before continuing.
$ sudo rm -fr /ch/csv
Converting to Columnar Form
ClickHouse's Log engine stores data in a row-oriented format. In order to query the data faster, I'll convert it to a column-oriented format using the MergeTree engine.
$ clickhouse-client --host=0.0.0.0
The following completed in 34 minutes and 50 seconds. After this operation, the data directory was 237 GB in size.
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
trip_id,
CAST(vendor_id AS Enum8('1' = 1,
'2' = 2,
'CMT' = 3,
'VTS' = 4,
'DDS' = 5,
'B02512' = 10,
'B02598' = 11,
'B02617' = 12,
'B02682' = 13,
'B02764' = 14)) AS vendor_id,
toDate(pickup_datetime) AS pickup_date,
ifNull(pickup_datetime, toDateTime(0)) AS pickup_datetime,
toDate(dropoff_datetime) AS dropoff_date,
ifNull(dropoff_datetime, toDateTime(0)) AS dropoff_datetime,
assumeNotNull(store_and_fwd_flag) AS store_and_fwd_flag,
assumeNotNull(rate_code_id) AS rate_code_id,
assumeNotNull(pickup_longitude) AS pickup_longitude,
assumeNotNull(pickup_latitude) AS pickup_latitude,
assumeNotNull(dropoff_longitude) AS dropoff_longitude,
assumeNotNull(dropoff_latitude) AS dropoff_latitude,
assumeNotNull(passenger_count) AS passenger_count,
assumeNotNull(trip_distance) AS trip_distance,
assumeNotNull(fare_amount) AS fare_amount,
assumeNotNull(extra) AS extra,
assumeNotNull(mta_tax) AS mta_tax,
assumeNotNull(tip_amount) AS tip_amount,
assumeNotNull(tolls_amount) AS tolls_amount,
assumeNotNull(ehail_fee) AS ehail_fee,
assumeNotNull(improvement_surcharge) AS improvement_surcharge,
assumeNotNull(total_amount) AS total_amount,
assumeNotNull(payment_type) AS payment_type_,
assumeNotNull(trip_type) AS trip_type,
pickup AS pickup,
dropoff AS dropoff,
CAST(assumeNotNull(cab_type)
AS Enum8('yellow' = 1, 'green' = 2))
AS cab_type,
precipitation AS precipitation,
snow_depth AS snow_depth,
snowfall AS snowfall,
max_temperature AS max_temperature,
min_temperature AS min_temperature,
average_wind_speed AS average_wind_speed,
pickup_nyct2010_gid AS pickup_nyct2010_gid,
pickup_ctlabel AS pickup_ctlabel,
pickup_borocode AS pickup_borocode,
pickup_boroname AS pickup_boroname,
pickup_ct2010 AS pickup_ct2010,
pickup_boroct2010 AS pickup_boroct2010,
pickup_cdeligibil AS pickup_cdeligibil,
pickup_ntacode AS pickup_ntacode,
pickup_ntaname AS pickup_ntaname,
pickup_puma AS pickup_puma,
dropoff_nyct2010_gid AS dropoff_nyct2010_gid,
dropoff_ctlabel AS dropoff_ctlabel,
dropoff_borocode AS dropoff_borocode,
dropoff_boroname AS dropoff_boroname,
dropoff_ct2010 AS dropoff_ct2010,
dropoff_boroct2010 AS dropoff_boroct2010,
dropoff_cdeligibil AS dropoff_cdeligibil,
dropoff_ntacode AS dropoff_ntacode,
dropoff_ntaname AS dropoff_ntaname,
dropoff_puma AS dropoff_puma
FROM trips;
This is what the glances output looked like during the operation:
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 1:06:09
CPU 10.3% nice: 0.0% LOAD 36-core MEM 16.1% active: 13.3G SWAP 0.0%
user: 7.9% irq: 0.0% 1 min: 1.87 total: 68.7G inactive: 52.8G total: 0
system: 1.6% iowait: 0.8% 5 min: 1.76 used: 11.1G buffers: 71.8M used: 0
idle: 89.7% steal: 0.0% 15 min: 1.95 free: 57.6G cached: 57.2G free: 0
NETWORK Rx/s Tx/s TASKS 367 (523 thr), 1 run, 366 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 1Kb 8Kb
lo 2Kb 2Kb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
241.9 12.8 20.7G 8.78G 8091 clickhous 0 S 30:36.73 34M 125M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
DISK I/O R/s W/s 2.6 0.0 90.4M 28.3M 9948 root 0 R 1:18.53 0 0 /usr/bin/python3 /usr/bin/glances
loop0 0 0 1.3 0.0 0 0 203 root 0 S 0:09.82 0 0 kswapd0
loop1 0 0 0.3 0.1 315M 61.3M 15701 ubuntu 0 S 0:00.40 0 0 clickhouse-client --host=0.0.0.0
nvme0n1 0 3K 0.3 0.0 0 0 7 root 0 S 0:00.83 0 0 rcu_sched
nvme0n1p1 0 3K 0.0 0.0 0 0 142 root 0 S 0:00.22 0 0 migration/27
nvme1n1 25.8M 330M 0.0 0.0 59.7M 1.79M 2764 ubuntu 0 S 0:00.00 0 0 (sd-pam)
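As a quick sanity check that the conversion didn't drop any rows, the two tables can be counted and compared; both should report the same ~1.1 billion rows. This check is my own addition rather than part of the original run.
SELECT count(*) FROM trips;
SELECT count(*) FROM trips_mergetree;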
In the last benchmark, several columns were cast and recomputed. I found that some of those functions no longer worked as expected on this dataset, so to get around this I removed the offending functions and loaded the data without converting to more granular data types.
Distributing Data Across the Cluster
I'll be distributing the data across all three nodes of the cluster. To start, I'll create the table below on all three machines.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips_mergetree_third (
trip_id UInt32,
vendor_id String,
pickup_date Date,
pickup_datetime DateTime,
dropoff_date Date,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192);
I'll then make sure the first server can see all three nodes in the cluster.
SELECT *
FROM system.clusters
WHERE cluster = 'perftest_3shards'
FORMAT Vertical;
Row 1:
──────
cluster: perftest_3shards
shard_num: 1
shard_weight: 1
replica_num: 1
host_name: 172.30.2.192
host_address: 172.30.2.192
port: 9000
is_local: 1
user: default
default_database:
Row 2:
──────
cluster: perftest_3shards
shard_num: 2
shard_weight: 1
replica_num: 1
host_name: 172.30.2.162
host_address: 172.30.2.162
port: 9000
is_local: 0
user: default
default_database:
Row 3:
──────
cluster: perftest_3shards
shard_num: 3
shard_weight: 1
replica_num: 1
host_name: 172.30.2.36
host_address: 172.30.2.36
port: 9000
is_local: 0
user: default
default_database:
I'll then define a new table on the first server that is based on the trips_mergetree_third schema and uses the Distributed engine.
CREATE TABLE trips_mergetree_x3
AS trips_mergetree_third
ENGINE = Distributed(perftest_3shards,
default,
trips_mergetree_third,
rand());
I'll then copy the data out of the MergeTree-based table and onto all three servers. The following completed in 34 minutes and 44 seconds.
INSERT INTO trips_mergetree_x3
SELECT * FROM trips_mergetree;
After the above operation, I gave ClickHouse 15 minutes to come down from its storage high-water mark. The data directories ended up at 264 GB, 34 GB and 33 GB respectively on each of the three servers.
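Those directory sizes can be confirmed on each server with a one-liner against the data path configured earlier.
$ sudo du -sh /ch/clickhouse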
Benchmarking the ClickHouse Cluster
The following are the fastest times I saw after running each query on the trips_mergetree_x3 table multiple times.
$ clickhouse-client --host=0.0.0.0
The following completed in 2.449 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree_x3
GROUP BY cab_type;
The following completed in 0.691 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree_x3
GROUP BY passenger_count;
The following completed in 0 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year;
The following completed in 0.983 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
For comparison, I ran the same queries on the MergeTree-based table that sits on the first server alone.
Benchmarking a Single ClickHouse Node
The following are the fastest times I saw after running each query on the trips_mergetree table multiple times.
The following completed in 0.241 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree
GROUP BY cab_type;
The following completed in 0.826 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree
GROUP BY passenger_count;
The following completed in 1.209 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year;
The following completed in 1.781 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
Reflections on the Results
This is the first time a free, CPU-based database has managed to outperform a GPU-based database in my benchmarks. That GPU-based database has since gone through two revisions, but nonetheless, the performance ClickHouse delivered on a single node is very impressive.
At the same time, when executing Query 1 on the Distributed engine, the overhead is an order of magnitude higher. I'm hoping I missed something in my research for this post, because it would be nice to see query times drop as I add more nodes to the cluster. It is great, though, that when executing the other queries performance improved by around 2x.
It would be nice to see ClickHouse evolve towards separating storage and compute so that they can scale independently. The HDFS support added last year could be a step towards that. On the compute side, if a single query can be sped up by adding more nodes to the cluster, then the future of this software looks very bright.
Thank you for taking the time to read this post. I offer consulting, architecture and hands-on development services to clients in North America and Europe. If you'd like to discuss how my suggestions could help your business, please contact me via ...
Source: www.habr.com