Two years ago I benchmarked ClickHouse on a single machine.
Excluding third-party libraries, ClickHouse consists of 170 thousand lines of C++ code, making it one of the smaller distributed database codebases. For comparison, SQLite does not support distribution and consists of 235 thousand lines of C code. As of this writing, 207 engineers have contributed to ClickHouse, and the rate of commits has been accelerating of late.
In March 2017, ClickHouse began maintaining a changelog as an easy way to keep track of development.
In this post I'm going to examine the performance of a ClickHouse cluster on AWS EC2, built from machines with 36-core CPUs and NVMe storage.
UPDATE: A week after originally publishing this post, I re-ran the benchmark with an improved configuration and achieved much better results. This post has been updated to reflect those changes.
Launching an AWS EC2 cluster
I'm using three c5d.9xlarge EC2 instances for this post. Each of them has 36 vCPUs, 72 GB of RAM, 900 GB of NVMe SSD storage, and supports 10 Gigabit networking. They cost $1.962 per hour each when running on-demand in the eu-west-1 region. I'll be using Ubuntu Server 16.04 LTS as the operating system.
The firewall is configured so that each machine can communicate with the others without restriction, and only my IPv4 address is whitelisted for SSH access to the cluster.
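For reference, here is a sketch of how such rules could be created with the AWS CLI; the security group ID and the whitelisted IP address below are placeholders.
$ # Allow the cluster members to talk to one another on any TCP port
$ aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 0-65535 \
      --source-group sg-0123456789abcdef0
$ # Whitelist a single IPv4 address for SSH
$ aws ec2 authorize-security-group-ingress \
      --group-id sg-0123456789abcdef0 \
      --protocol tcp \
      --port 22 \
      --cidr 203.0.113.10/32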
Getting the NVMe drive ready
For ClickHouse to work with, I'll create an EXT4-formatted file system on the NVMe drive on each of the servers.
$ sudo mkfs -t ext4 /dev/nvme1n1
$ sudo mkdir /ch
$ sudo mount /dev/nvme1n1 /ch
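Note that this mount will not persist across reboots; were that needed, an /etc/fstab entry along these lines could be added (a sketch, untested, and keep in mind that NVMe device names can change between boots):
$ echo '/dev/nvme1n1 /ch ext4 defaults,nofail 0 2' |
      sudo tee -a /etc/fstab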
Once everything is set up, you can see the mount point and the 783 GB of free space available on each system.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 87.9M 1 loop /snap/core/5742
loop1 7:1 0 16.5M 1 loop /snap/amazon-ssm-agent/784
nvme0n1 259:1 0 8G 0 disk
└─nvme0n1p1 259:2 0 8G 0 part /
nvme1n1 259:0 0 838.2G 0 disk /ch
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 35G 0 35G 0% /dev
tmpfs 6.9G 8.8M 6.9G 1% /run
/dev/nvme0n1p1 7.7G 967M 6.8G 13% /
tmpfs 35G 0 35G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 35G 0 35G 0% /sys/fs/cgroup
/dev/loop0 88M 88M 0 100% /snap/core/5742
/dev/loop1 17M 17M 0 100% /snap/amazon-ssm-agent/784
tmpfs 6.9G 0 6.9G 0% /run/user/1000
/dev/nvme1n1 825G 73M 783G 1% /ch
The dataset I'm using in this test is one I generated from the 1.1 billion taxi trips taken in New York City over six years; how the dataset was put together is covered on my blog. First, I'll install the AWS CLI and enter my credentials.
$ sudo apt update
$ sudo apt install awscli
$ aws configure
I'll set the client's concurrent request limit to 100 so the files download faster than they would with default settings.
$ aws configure set default.s3.max_concurrent_requests 100
I'll download the taxi trips dataset from AWS S3 and store it on the NVMe drive of the first server. The dataset is ~104 GB of GZIP-compressed CSV.
$ sudo mkdir -p /ch/csv
$ sudo chown -R ubuntu /ch/csv
$ aws s3 sync s3://<bucket>/csv /ch/csv
Installing ClickHouse
I'll install the OpenJDK distribution for Java 8, since it's required to run Apache ZooKeeper, which in turn is required for a distributed ClickHouse installation across all three machines.
$ sudo apt update
$ sudo apt install \
      openjdk-8-jre \
      openjdk-8-jdk-headless
Then I set the JAVA_HOME environment variable:
$ sudo vi /etc/profile
export JAVA_HOME=/usr
$ source /etc/profile
Then I'll use Ubuntu's package manager to install ClickHouse 18.16.1, glances, and ZooKeeper on all three machines.
$ sudo apt-key adv \
      --keyserver hkp://keyserver.ubuntu.com:80 \
      --recv E0C56BD4
$ echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" |
sudo tee /etc/apt/sources.list.d/clickhouse.list
$ sudo apt-get update
$ sudo apt install \
      clickhouse-client \
      clickhouse-server \
      glances \
      zookeeperd
I'll create a data directory for ClickHouse, as well as some configuration overrides, on all three servers.
$ sudo mkdir /ch/clickhouse
$ sudo chown -R clickhouse /ch/clickhouse
$ sudo mkdir -p /etc/clickhouse-server/conf.d
$ sudo vi /etc/clickhouse-server/conf.d/taxis.conf
These are the configuration overrides I'll be using.
<?xml version="1.0"?>
<yandex>
    <listen_host>0.0.0.0</listen_host>
    <path>/ch/clickhouse/</path>

    <remote_servers>
        <perftest_3shards>
            <shard>
                <replica>
                    <host>172.30.2.192</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.162</host>
                    <port>9000</port>
                </replica>
            </shard>
            <shard>
                <replica>
                    <host>172.30.2.36</host>
                    <port>9000</port>
                </replica>
            </shard>
        </perftest_3shards>
    </remote_servers>

    <zookeeper-servers>
        <node>
            <host>172.30.2.192</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.162</host>
            <port>2181</port>
        </node>
        <node>
            <host>172.30.2.36</host>
            <port>2181</port>
        </node>
    </zookeeper-servers>

    <macros>
        <shard>03</shard>
        <replica>01</replica>
    </macros>
</yandex>
Then I'll start ZooKeeper and the ClickHouse server on all three machines.
$ sudo /etc/init.d/zookeeper start
$ sudo service clickhouse-server start
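To confirm that ClickHouse came up on each server, any lightweight query will do; for example:
$ clickhouse-client --host=0.0.0.0 --query="SELECT version()"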
Loading data into ClickHouse
On the first server I'll create a table called trips that will hold the taxi trips dataset using the Log engine:
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips (
trip_id UInt32,
vendor_id String,
pickup_datetime DateTime,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = Log;
Then I'll decompress each CSV file and load its contents into the trips table. The following completed in 55 minutes and 10 seconds. After this operation, the size of the data directory was 134 GB.
$ time (for FILENAME in /ch/csv/trips_x*.csv.gz; do
            echo $FILENAME
            gunzip -c $FILENAME |
                clickhouse-client \
                    --host=0.0.0.0 \
                    --query="INSERT INTO trips FORMAT CSV"
        done)
The import rate was 155 MB of uncompressed CSV content per second. I suspect this was due to a bottleneck in GZIP decompression. It might have been faster to decompress all the gzip files in parallel using xargs and then load the decompressed data; a sketch of that approach follows.
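Below is a rough, untested sketch of that parallel approach; the -P 16 worker count is an arbitrary guess, and it assumes the NVMe drive has enough free space to hold the decompressed CSVs alongside the archives.
$ # Decompress up to 16 archives at a time, keeping the originals (-k)
$ find /ch/csv -name 'trips_x*.csv.gz' -print0 |
      xargs -0 -n 1 -P 16 gunzip -k
$ # Then stream the already-decompressed CSVs into ClickHouse
$ for FILENAME in /ch/csv/trips_x*.csv; do
      clickhouse-client \
          --host=0.0.0.0 \
          --query="INSERT INTO trips FORMAT CSV" < $FILENAME
  done
Here is what glances reported during the (single-threaded) CSV import: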
$ sudo glances
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 0:11:42
CPU 8.2% nice: 0.0% LOAD 36-core MEM 9.8% active: 5.20G SWAP 0.0%
user: 6.0% irq: 0.0% 1 min: 2.24 total: 68.7G inactive: 61.0G total: 0
system: 0.9% iowait: 1.3% 5 min: 1.83 used: 6.71G buffers: 66.4M used: 0
idle: 91.8% steal: 0.0% 15 min: 1.01 free: 62.0G cached: 61.6G free: 0
NETWORK Rx/s Tx/s TASKS 370 (507 thr), 2 run, 368 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 136b 2Kb
lo 343Mb 343Mb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
100.4 1.5 1.65G 1.06G 9909 ubuntu 0 S 1:01.33 0 0 clickhouse-client --host=0.0.0.0 --query=INSERT INTO trips FORMAT CSV
DISK I/O R/s W/s 85.1 0.0 4.65M 708K 9908 ubuntu 0 R 0:50.60 32M 0 gzip -d -c /ch/csv/trips_xac.csv.gz
loop0 0 0 54.9 5.1 8.14G 3.49G 8091 clickhous 0 S 1:44.23 0 45M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
loop1 0 0 4.5 0.0 0 0 319 root 0 S 0:07.50 1K 0 kworker/u72:2
nvme0n1 0 3K 2.3 0.0 91.1M 28.9M 9912 root 0 R 0:01.56 0 0 /usr/bin/python3 /usr/bin/glances
nvme0n1p1 0 3K 0.3 0.0 0 0 960 root -20 S 0:00.10 0 0 kworker/28:1H
nvme1n1 32.1M 495M 0.3 0.0 0 0 1058 root -20 S 0:00.90 0 0 kworker/23:1H
Before continuing, I'll delete the original CSV files to free up space on the NVMe drive.
$ sudo rm -fr /ch/csv
Converting to columnar form
ClickHouse's Log engine stores data in a row-oriented format. To query the data faster, I'll convert it to a columnar format using the MergeTree engine.
$ clickhouse-client --host=0.0.0.0
The following completed in 34 minutes and 50 seconds. After this operation, the size of the data directory was 237 GB.
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
trip_id,
CAST(vendor_id AS Enum8('1' = 1,
'2' = 2,
'CMT' = 3,
'VTS' = 4,
'DDS' = 5,
'B02512' = 10,
'B02598' = 11,
'B02617' = 12,
'B02682' = 13,
'B02764' = 14)) AS vendor_id,
toDate(pickup_datetime) AS pickup_date,
ifNull(pickup_datetime, toDateTime(0)) AS pickup_datetime,
toDate(dropoff_datetime) AS dropoff_date,
ifNull(dropoff_datetime, toDateTime(0)) AS dropoff_datetime,
assumeNotNull(store_and_fwd_flag) AS store_and_fwd_flag,
assumeNotNull(rate_code_id) AS rate_code_id,
assumeNotNull(pickup_longitude) AS pickup_longitude,
assumeNotNull(pickup_latitude) AS pickup_latitude,
assumeNotNull(dropoff_longitude) AS dropoff_longitude,
assumeNotNull(dropoff_latitude) AS dropoff_latitude,
assumeNotNull(passenger_count) AS passenger_count,
assumeNotNull(trip_distance) AS trip_distance,
assumeNotNull(fare_amount) AS fare_amount,
assumeNotNull(extra) AS extra,
assumeNotNull(mta_tax) AS mta_tax,
assumeNotNull(tip_amount) AS tip_amount,
assumeNotNull(tolls_amount) AS tolls_amount,
assumeNotNull(ehail_fee) AS ehail_fee,
assumeNotNull(improvement_surcharge) AS improvement_surcharge,
assumeNotNull(total_amount) AS total_amount,
assumeNotNull(payment_type) AS payment_type_,
assumeNotNull(trip_type) AS trip_type,
pickup AS pickup,
dropoff AS dropoff,
CAST(assumeNotNull(cab_type)
AS Enum8('yellow' = 1, 'green' = 2))
AS cab_type,
precipitation AS precipitation,
snow_depth AS snow_depth,
snowfall AS snowfall,
max_temperature AS max_temperature,
min_temperature AS min_temperature,
average_wind_speed AS average_wind_speed,
pickup_nyct2010_gid AS pickup_nyct2010_gid,
pickup_ctlabel AS pickup_ctlabel,
pickup_borocode AS pickup_borocode,
pickup_boroname AS pickup_boroname,
pickup_ct2010 AS pickup_ct2010,
pickup_boroct2010 AS pickup_boroct2010,
pickup_cdeligibil AS pickup_cdeligibil,
pickup_ntacode AS pickup_ntacode,
pickup_ntaname AS pickup_ntaname,
pickup_puma AS pickup_puma,
dropoff_nyct2010_gid AS dropoff_nyct2010_gid,
dropoff_ctlabel AS dropoff_ctlabel,
dropoff_borocode AS dropoff_borocode,
dropoff_boroname AS dropoff_boroname,
dropoff_ct2010 AS dropoff_ct2010,
dropoff_boroct2010 AS dropoff_boroct2010,
dropoff_cdeligibil AS dropoff_cdeligibil,
dropoff_ntacode AS dropoff_ntacode,
dropoff_ntaname AS dropoff_ntaname,
dropoff_puma AS dropoff_puma
FROM trips;
This is what the glances output looked like during the operation:
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 1:06:09
CPU 10.3% nice: 0.0% LOAD 36-core MEM 16.1% active: 13.3G SWAP 0.0%
user: 7.9% irq: 0.0% 1 min: 1.87 total: 68.7G inactive: 52.8G total: 0
system: 1.6% iowait: 0.8% 5 min: 1.76 used: 11.1G buffers: 71.8M used: 0
idle: 89.7% steal: 0.0% 15 min: 1.95 free: 57.6G cached: 57.2G free: 0
NETWORK Rx/s Tx/s TASKS 367 (523 thr), 1 run, 366 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 1Kb 8Kb
lo 2Kb 2Kb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
241.9 12.8 20.7G 8.78G 8091 clickhous 0 S 30:36.73 34M 125M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
DISK I/O R/s W/s 2.6 0.0 90.4M 28.3M 9948 root 0 R 1:18.53 0 0 /usr/bin/python3 /usr/bin/glances
loop0 0 0 1.3 0.0 0 0 203 root 0 S 0:09.82 0 0 kswapd0
loop1 0 0 0.3 0.1 315M 61.3M 15701 ubuntu 0 S 0:00.40 0 0 clickhouse-client --host=0.0.0.0
nvme0n1 0 3K 0.3 0.0 0 0 7 root 0 S 0:00.83 0 0 rcu_sched
nvme0n1p1 0 3K 0.0 0.0 0 0 142 root 0 S 0:00.22 0 0 migration/27
nvme1n1 25.8M 330M 0.0 0.0 59.7M 1.79M 2764 ubuntu 0 S 0:00.00 0 0 (sd-pam)
In the previous benchmark, several columns were cast and recomputed. I found that some of those functions no longer worked as expected on this dataset, so to resolve this I removed the offending functions and loaded the data without converting to more granular types.
Distributing the data across the cluster
I'll distribute the data across all three nodes of the cluster. To start, below I'll create a table on all three machines.
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips_mergetree_third (
trip_id UInt32,
vendor_id String,
pickup_date Date,
pickup_datetime DateTime,
dropoff_date Date,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192);
Then I'll make sure the first server can see all three nodes in the cluster:
SELECT *
FROM system.clusters
WHERE cluster = 'perftest_3shards'
FORMAT Vertical;
Row 1:
──────
cluster: perftest_3shards
shard_num: 1
shard_weight: 1
replica_num: 1
host_name: 172.30.2.192
host_address: 172.30.2.192
port: 9000
is_local: 1
user: default
default_database:
Row 2:
──────
cluster: perftest_3shards
shard_num: 2
shard_weight: 1
replica_num: 1
host_name: 172.30.2.162
host_address: 172.30.2.162
port: 9000
is_local: 0
user: default
default_database:
Row 3:
──────
cluster: perftest_3shards
shard_num: 3
shard_weight: 1
replica_num: 1
host_name: 172.30.2.36
host_address: 172.30.2.36
port: 9000
is_local: 0
user: default
default_database:
Then I'll define a new table on the first server that is based on the trips_mergetree_third schema and uses the Distributed engine:
CREATE TABLE trips_mergetree_x3
AS trips_mergetree_third
ENGINE = Distributed(perftest_3shards,
default,
trips_mergetree_third,
rand());
Then I'll copy the data out of the MergeTree-based table and out to all three servers. The following completed in 34 minutes and 44 seconds.
INSERT INTO trips_mergetree_x3
SELECT * FROM trips_mergetree;
After the above operation, I gave ClickHouse 15 minutes to come back down from its maximum storage level mark. The data directories ended up at 264 GB, 34 GB, and 33 GB on each of the three servers respectively.
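The figures above can be verified on each server by sizing ClickHouse's data directory:
$ sudo du -sh /ch/clickhouse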
Benchmarking the ClickHouse cluster
Next, I ran each query multiple times against the trips_mergetree_x3 table; the times below are the fastest I saw.
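As an aside, a shell loop like the following could automate taking the best of several runs; this is a sketch, relying on the client's --time flag, which prints each query's elapsed seconds to stderr:
$ for i in $(seq 1 5); do
      clickhouse-client --host=0.0.0.0 --time \
          --query="SELECT cab_type, count(*)
                   FROM trips_mergetree_x3
                   GROUP BY cab_type" > /dev/null
  done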
$ clickhouse-client --host=0.0.0.0
The following completed in 2.449 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree_x3
GROUP BY cab_type;
The following completed in 0.691 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree_x3
GROUP BY passenger_count;
The following completed in 0 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year;
The following completed in 0.983 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
For comparison, I ran the same queries on the MergeTree-based table that resides solely on the first server.
Benchmarking a single ClickHouse node
As before, I ran each query multiple times, this time against the trips_mergetree table; the times below are the fastest I saw.
The following completed in 0.241 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree
GROUP BY cab_type;
The following completed in 0.826 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree
GROUP BY passenger_count;
The following completed in 1.209 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year;
The following completed in 1.781 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
Thoughts on the results
This is the first time in my benchmarks that a free, CPU-based database has outperformed a GPU-based database. That GPU-based database has gone through two revisions since then, but nonetheless the performance ClickHouse delivered on a single node is very impressive.
At the same time, executing Query 1 via the Distributed engine carries an order of magnitude more overhead. I hope I've missed something in my research for this post, because it would be good to see query times drop as I add more nodes to the cluster. Still, it's great that executing the other queries was about 2x faster.
It would also be nice to see ClickHouse evolve towards separating storage and compute so that they can scale independently. The HDFS support added last year could be a step towards this. On the compute side, if a single query can be sped up by adding more nodes to the cluster, then the future of this software looks very bright.
Thank you for taking the time to read this post. I offer consulting, architecture, and hands-on development services to clients in North America and Europe. If you'd like to discuss how my offerings could help your business, please get in touch.
Source: www.habr.com