Two years ago I used …
ClickHouse's codebase is 170 thousand lines of C++, excluding third-party libraries, and is one of the smaller distributed database codebases. By comparison, SQLite doesn't support distribution and its codebase is 235 thousand lines of C. As of this writing, 207 engineers have contributed to ClickHouse, and the rate of commits has been accelerating of late.
In March 2017, ClickHouse began to …
In this post, I'll look at the performance of a ClickHouse cluster on AWS EC2 using 36-core CPUs and NVMe storage.
UPDATE: A week after this post was originally published, I re-ran the benchmark with an improved configuration and achieved much better results. This post has been updated to reflect those changes.
Launching an AWS EC2 Cluster
I'll be using three c5d.9xlarge EC2 instances for this post. Each of them has 36 vCPUs, 72 GB of RAM, 900 GB of NVMe SSD storage and supports 10 Gigabit networking. They cost $1.962 / hour each in the eu-west-1 region when run on-demand. I'll be using Ubuntu Server 16.04 LTS as the operating system.
The firewall is configured so that each machine can communicate with the others without restriction, and only my IPv4 address is whitelisted for SSH access to the cluster.
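For reference, here is a minimal sketch of those firewall rules expressed with the AWS CLI; the security group ID and IP address below are placeholders, not the values I actually used.

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.10/32

$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol -1 \
    --source-group sg-0123456789abcdef0

The first rule allows SSH from a single IPv4 address; the second allows all traffic between members of the same security group.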
Preparing the NVMe Storage
For ClickHouse to perform well, I'll create an EXT4-format file system on the NVMe drive on each of the servers.
$ sudo mkfs -t ext4 /dev/nvme1n1
$ sudo mkdir /ch
$ sudo mount /dev/nvme1n1 /ch
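To have the mount survive a reboot, an entry can be added to /etc/fstab as well; this step is my addition and wasn't part of the original run.

$ echo '/dev/nvme1n1 /ch ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab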
Once everything has been set up, you can see the mount point and the 783 GB of space available on each system.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 87.9M 1 loop /snap/core/5742
loop1 7:1 0 16.5M 1 loop /snap/amazon-ssm-agent/784
nvme0n1 259:1 0 8G 0 disk
└─nvme0n1p1 259:2 0 8G 0 part /
nvme1n1 259:0 0 838.2G 0 disk /ch
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 35G 0 35G 0% /dev
tmpfs 6.9G 8.8M 6.9G 1% /run
/dev/nvme0n1p1 7.7G 967M 6.8G 13% /
tmpfs 35G 0 35G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 35G 0 35G 0% /sys/fs/cgroup
/dev/loop0 88M 88M 0 100% /snap/core/5742
/dev/loop1 17M 17M 0 100% /snap/amazon-ssm-agent/784
tmpfs 6.9G 0 6.9G 0% /run/user/1000
/dev/nvme1n1 825G 73M 783G 1% /ch
The dataset I'll be using in this benchmark is a data dump I generated from the 1.1 billion taxi trips taken in New York City over six years. How I put this dataset together is covered in an earlier blog post …
$ sudo apt update
$ sudo apt install awscli
$ aws configure
I'll set the client's concurrent requests limit to 100 so that the files download faster than they would with the default settings.
$ aws configure set \
    default.s3.max_concurrent_requests \
    100
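The new limit can be confirmed before kicking off any downloads.

$ aws configure get default.s3.max_concurrent_requests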
I'll download the taxi trips dataset from AWS S3 and store it on the NVMe drive on the first server. The dataset is ~104 GB of GZIP-compressed CSV.
$ sudo mkdir -p /ch/csv
$ sudo chown -R ubuntu /ch/csv
$ aws s3 sync s3://<bucket>/csv /ch/csv
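Once the sync finishes, a quick disk-usage check should report roughly 104 GB under /ch/csv.

$ du -hs /ch/csv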
Installing ClickHouse
I'll install the OpenJDK distribution of Java 8 on all three machines, as it's needed to run Apache ZooKeeper, which in turn is required for a distributed ClickHouse installation.
$ sudo apt update
$ sudo apt install \
    openjdk-8-jre \
    openjdk-8-jdk-headless
Then I set the JAVA_HOME environment variable.
$ sudo vi /etc/profile
export JAVA_HOME=/usr
$ source /etc/profile
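A quick check confirms the JDK is in place and the variable is set.

$ echo $JAVA_HOME
$ java -version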
I'll then use Ubuntu's package management system to install ClickHouse 18.16.1, glances and ZooKeeper on all three machines.
$ sudo apt-key adv \
    --keyserver hkp://keyserver.ubuntu.com:80 \
    --recv E0C56BD4
$ echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" |
sudo tee /etc/apt/sources.list.d/clickhouse.list
$ sudo apt-get update
$ sudo apt install \
    clickhouse-client \
    clickhouse-server \
    glances \
    zookeeperd
I'll create a data directory for ClickHouse and apply a few configuration overrides on all three servers.
$ sudo mkdir /ch/clickhouse
$ sudo chown -R clickhouse /ch/clickhouse
$ sudo mkdir -p /etc/clickhouse-server/conf.d
$ sudo vi /etc/clickhouse-server/conf.d/taxis.conf
These are the configuration overrides I'll be using.
<?xml version="1.0"?>
<yandex>
    <listen_host>0.0.0.0</listen_host>
    <path>/ch/clickhouse/</path>

    <remote_servers>
        <perftest_3shards>
            <shard>
                <replica>
                    <host>172.30.2.192</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.162</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.36</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards>
    </remote_servers>

    <zookeeper-servers>
        <node>
            <host>172.30.2.192</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.162</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.36</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>

    <macros>
        <shard>03</shard>
        <replica>01</replica>
    </macros>
</yandex>
I'll then run ZooKeeper and the ClickHouse server on all three machines.
$ sudo /etc/init.d/zookeeper start
$ sudo service clickhouse-server start
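To confirm both services came up, ZooKeeper's "ruok" four-letter command should answer "imok" (this assumes netcat is installed), and ClickHouse should answer a trivial query.

$ echo ruok | nc localhost 2181
$ clickhouse-client --host=0.0.0.0 --query="SELECT 1"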
Loading Data into ClickHouse
On the first server I'll create the trips table, which will hold the taxi trips dataset, using the Log engine.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips (
trip_id UInt32,
vendor_id String,
pickup_datetime DateTime,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = Log;
I'll then decompress and load each CSV file into the trips table. The following completed in 55 minutes and 10 seconds. After this operation, the data directory was 134 GB in size.
$ time (for FILENAME in /ch/csv/trips_x*.csv.gz; do
echo $FILENAME
gunzip -c $FILENAME |
clickhouse-client \
--host=0.0.0.0 \
--query="INSERT INTO trips FORMAT CSV"
done)
The ingest rate was 155 MB of uncompressed CSV content per second. I suspect this was down to a GZIP decompression bottleneck; it might have been faster to decompress all the gzipped files in parallel using xargs and then load the decompressed data (a sketch of that idea follows the glances output below). Here is what glances reported during the CSV import.
$ sudo glances
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 0:11:42
CPU 8.2% nice: 0.0% LOAD 36-core MEM 9.8% active: 5.20G SWAP 0.0%
user: 6.0% irq: 0.0% 1 min: 2.24 total: 68.7G inactive: 61.0G total: 0
system: 0.9% iowait: 1.3% 5 min: 1.83 used: 6.71G buffers: 66.4M used: 0
idle: 91.8% steal: 0.0% 15 min: 1.01 free: 62.0G cached: 61.6G free: 0
NETWORK Rx/s Tx/s TASKS 370 (507 thr), 2 run, 368 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 136b 2Kb
lo 343Mb 343Mb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
100.4 1.5 1.65G 1.06G 9909 ubuntu 0 S 1:01.33 0 0 clickhouse-client --host=0.0.0.0 --query=INSERT INTO trips FORMAT CSV
DISK I/O R/s W/s 85.1 0.0 4.65M 708K 9908 ubuntu 0 R 0:50.60 32M 0 gzip -d -c /ch/csv/trips_xac.csv.gz
loop0 0 0 54.9 5.1 8.14G 3.49G 8091 clickhous 0 S 1:44.23 0 45M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
loop1 0 0 4.5 0.0 0 0 319 root 0 S 0:07.50 1K 0 kworker/u72:2
nvme0n1 0 3K 2.3 0.0 91.1M 28.9M 9912 root 0 R 0:01.56 0 0 /usr/bin/python3 /usr/bin/glances
nvme0n1p1 0 3K 0.3 0.0 0 0 960 root -20 S 0:00.10 0 0 kworker/28:1H
nvme1n1 32.1M 495M 0.3 0.0 0 0 1058 root -20 S 0:00.90 0 0 kworker/23:1H
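As a sketch of the parallel-decompression idea mentioned above: the post's own numbers (155 MB/s over 55 minutes and 10 seconds) imply roughly 500 GB of uncompressed CSVs, so this assumes the NVMe drive has that much room to spare alongside ClickHouse's data directory. The parallelism level of 16 is a guess; I haven't benchmarked this variant.

$ ls /ch/csv/trips_x*.csv.gz | xargs -n 1 -P 16 gunzip

$ time (for FILENAME in /ch/csv/trips_x*.csv; do
    clickhouse-client \
    --host=0.0.0.0 \
    --query="INSERT INTO trips FORMAT CSV" < $FILENAME
done)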
I'll free up space on the NVMe drive by deleting the original CSV files before continuing.
$ sudo rm -fr /ch/csv
Converting to a Columnar Format
ClickHouse's Log engine stores data in a row-oriented format. In order to query the data faster, I'll convert it to a columnar format using the MergeTree engine.
$ clickhouse-client --host=0.0.0.0
The following completed in 34 minutes and 50 seconds. After this operation, the data directory was 237 GB in size.
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
trip_id,
CAST(vendor_id AS Enum8('1' = 1,
'2' = 2,
'CMT' = 3,
'VTS' = 4,
'DDS' = 5,
'B02512' = 10,
'B02598' = 11,
'B02617' = 12,
'B02682' = 13,
'B02764' = 14)) AS vendor_id,
toDate(pickup_datetime) AS pickup_date,
ifNull(pickup_datetime, toDateTime(0)) AS pickup_datetime,
toDate(dropoff_datetime) AS dropoff_date,
ifNull(dropoff_datetime, toDateTime(0)) AS dropoff_datetime,
assumeNotNull(store_and_fwd_flag) AS store_and_fwd_flag,
assumeNotNull(rate_code_id) AS rate_code_id,
assumeNotNull(pickup_longitude) AS pickup_longitude,
assumeNotNull(pickup_latitude) AS pickup_latitude,
assumeNotNull(dropoff_longitude) AS dropoff_longitude,
assumeNotNull(dropoff_latitude) AS dropoff_latitude,
assumeNotNull(passenger_count) AS passenger_count,
assumeNotNull(trip_distance) AS trip_distance,
assumeNotNull(fare_amount) AS fare_amount,
assumeNotNull(extra) AS extra,
assumeNotNull(mta_tax) AS mta_tax,
assumeNotNull(tip_amount) AS tip_amount,
assumeNotNull(tolls_amount) AS tolls_amount,
assumeNotNull(ehail_fee) AS ehail_fee,
assumeNotNull(improvement_surcharge) AS improvement_surcharge,
assumeNotNull(total_amount) AS total_amount,
assumeNotNull(payment_type) AS payment_type_,
assumeNotNull(trip_type) AS trip_type,
pickup AS pickup,
dropoff AS dropoff,
CAST(assumeNotNull(cab_type)
AS Enum8('yellow' = 1, 'green' = 2))
AS cab_type,
precipitation AS precipitation,
snow_depth AS snow_depth,
snowfall AS snowfall,
max_temperature AS max_temperature,
min_temperature AS min_temperature,
average_wind_speed AS average_wind_speed,
pickup_nyct2010_gid AS pickup_nyct2010_gid,
pickup_ctlabel AS pickup_ctlabel,
pickup_borocode AS pickup_borocode,
pickup_boroname AS pickup_boroname,
pickup_ct2010 AS pickup_ct2010,
pickup_boroct2010 AS pickup_boroct2010,
pickup_cdeligibil AS pickup_cdeligibil,
pickup_ntacode AS pickup_ntacode,
pickup_ntaname AS pickup_ntaname,
pickup_puma AS pickup_puma,
dropoff_nyct2010_gid AS dropoff_nyct2010_gid,
dropoff_ctlabel AS dropoff_ctlabel,
dropoff_borocode AS dropoff_borocode,
dropoff_boroname AS dropoff_boroname,
dropoff_ct2010 AS dropoff_ct2010,
dropoff_boroct2010 AS dropoff_boroct2010,
dropoff_cdeligibil AS dropoff_cdeligibil,
dropoff_ntacode AS dropoff_ntacode,
dropoff_ntaname AS dropoff_ntaname,
dropoff_puma AS dropoff_puma
FROM trips;
Here's what the glances output looked like during the operation:
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 1:06:09
CPU 10.3% nice: 0.0% LOAD 36-core MEM 16.1% active: 13.3G SWAP 0.0%
user: 7.9% irq: 0.0% 1 min: 1.87 total: 68.7G inactive: 52.8G total: 0
system: 1.6% iowait: 0.8% 5 min: 1.76 used: 11.1G buffers: 71.8M used: 0
idle: 89.7% steal: 0.0% 15 min: 1.95 free: 57.6G cached: 57.2G free: 0
NETWORK Rx/s Tx/s TASKS 367 (523 thr), 1 run, 366 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 1Kb 8Kb
lo 2Kb 2Kb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
241.9 12.8 20.7G 8.78G 8091 clickhous 0 S 30:36.73 34M 125M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
DISK I/O R/s W/s 2.6 0.0 90.4M 28.3M 9948 root 0 R 1:18.53 0 0 /usr/bin/python3 /usr/bin/glances
loop0 0 0 1.3 0.0 0 0 203 root 0 S 0:09.82 0 0 kswapd0
loop1 0 0 0.3 0.1 315M 61.3M 15701 ubuntu 0 S 0:00.40 0 0 clickhouse-client --host=0.0.0.0
nvme0n1 0 3K 0.3 0.0 0 0 7 root 0 S 0:00.83 0 0 rcu_sched
nvme0n1p1 0 3K 0.0 0.0 0 0 142 root 0 S 0:00.22 0 0 migration/27
nvme1n1 25.8M 330M 0.0 0.0 59.7M 1.79M 2764 ubuntu 0 S 0:00.00 0 0 (sd-pam)
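Before moving on, a quick sanity check is worthwhile; a count against the new table should report all ~1.1 billion rows that were in the Log-engine table.

SELECT count() FROM trips_mergetree;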
In an earlier run of this benchmark, several columns were cast and re-computed during this step. I found that a number of those functions no longer worked as expected against this dataset, so to get around the problem I removed the offending conversions and loaded the data without casting to more granular types.
Distributing the Data Across the Cluster
I'll distribute the data across all three nodes of the cluster. To start, I'll create the table below on all three machines.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips_mergetree_third (
trip_id UInt32,
vendor_id String,
pickup_date Date,
pickup_datetime DateTime,
dropoff_date Date,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192);
I'll then make sure the first server can see all three nodes in the cluster.
SELECT *
FROM system.clusters
WHERE cluster = 'perftest_3shards'
FORMAT Vertical;
Row 1:
──────
cluster: perftest_3shards
shard_num: 1
shard_weight: 1
replica_num: 1
host_name: 172.30.2.192
host_address: 172.30.2.192
port: 9000
is_local: 1
user: default
default_database:
Row 2:
──────
cluster: perftest_3shards
shard_num: 2
shard_weight: 1
replica_num: 1
host_name: 172.30.2.162
host_address: 172.30.2.162
port: 9000
is_local: 0
user: default
default_database:
Row 3:
──────
cluster: perftest_3shards
shard_num: 3
shard_weight: 1
replica_num: 1
host_name: 172.30.2.36
host_address: 172.30.2.36
port: 9000
is_local: 0
user: default
default_database:
I'll then define a new table on the first server that's based on the trips_mergetree_third schema and uses the Distributed engine.
CREATE TABLE trips_mergetree_x3
AS trips_mergetree_third
ENGINE = Distributed(perftest_3shards,
default,
trips_mergetree_third,
rand());
I'll then copy the data from the MergeTree-based table out to all three servers. The following completed in 34 minutes and 44 seconds.
INSERT INTO trips_mergetree_x3
SELECT * FROM trips_mergetree;
After the above operation, I gave ClickHouse 15 minutes to come back down from its high-water storage mark. The data directories ended up at 264 GB, 34 GB and 33 GB respectively across the three servers.
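Because the Distributed engine fans queries out to every shard and merges the results, counting the rows in trips_mergetree_x3 should report the full ~1.1 billion records even though each server only stores a share of the data.

SELECT count() FROM trips_mergetree_x3;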
Benchmarking the ClickHouse Cluster
The following were the fastest times I saw after running each query against the trips_mergetree_x3 table multiple times.
$ clickhouse-client --host=0.0.0.0
The following completed in 2.449 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree_x3
GROUP BY cab_type;
The following completed in 0.691 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree_x3
GROUP BY passenger_count;
The following completed in 0.… seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year;
The following completed in 0.983 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
For comparison, I ran the same queries against the MergeTree-based table that resides solely on the first server.
Benchmarking a Single Node of ClickHouse
The following were the fastest times I saw after running each query against the trips_mergetree table multiple times.
The following completed in 0.241 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree
GROUP BY cab_type;
The following completed in 0.826 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree
GROUP BY passenger_count;
The following completed in 1.209 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year;
The following completed in 1.781 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
Thoughts on the Results
This is the first time that a free, CPU-based database has out-performed a GPU-based database in my benchmarks. That GPU database has been through two revisions since then, but the performance ClickHouse delivered on a single node is nonetheless very impressive.
At the same time, when executing Query 1 on the Distributed engine, the overheads are an order of magnitude higher. I'm hoping I've missed something in my research for this post, because it would be good to see query times drop as I add more nodes to the cluster. It is nice, however, that when executing the other queries, performance increased by around 2x.
It would be good to see ClickHouse evolve towards separating storage and compute so that they can scale independently. The HDFS support added last year could be a step in this direction. On the compute side, if a single query can be sped up by adding more nodes to the cluster, then the future of this software looks very bright.
Thank you for taking the time to read this post. I offer consulting, architecture and hands-on development services to clients in North America and Europe. If you'd like to discuss how my offerings can help your business, please contact me via …