Two years ago I benchmarked this database on a single machine, and it was the fastest free database software I had ever seen.
ClickHouse is made up of 170K lines of C++ code when excluding third-party libraries, making it one of the smaller distributed database codebases. By comparison, SQLite doesn't support distribution and consists of 235K lines of C code. As of this writing, 207 engineers have contributed to ClickHouse, and the intensity of commits has been increasing recently.
In March 2017, ClickHouse began keeping a changelog as an easy way to track development.
In this post, I'm going to look at the clustered performance of ClickHouse on AWS EC2 using 36-core CPUs and NVMe storage.
UPDATE: Eight days after first publishing this post, I re-ran the benchmark with an improved configuration and achieved much better results. This post has been updated to reflect those changes.
Launching an AWS EC2 Cluster
I'll be using three c5d.9xlarge EC2 instances in this post. Each of them has 36 virtual CPUs, 72 GB of RAM, 900 GB of NVMe SSD storage, and supports 10 Gigabit networking. They cost $1.962/hour each in the eu-west-1 region when launched on demand. I'll be using Ubuntu Server 16.04 LTS as the operating system.
The firewall is configured so that the machines can communicate with one another without restriction, and only my own IPv4 address is whitelisted for SSH access to the cluster.
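As a hypothetical illustration (the security group ID and IP address below are placeholders, not values from this setup), rules like the ones described above could be created with the AWS CLI: one rule letting members of the cluster's security group reach each other on all TCP ports, and one whitelisting a single IPv4 address for SSH.
# Allow all TCP traffic between members of the cluster's security group.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 0-65535 \
    --source-group sg-0123456789abcdef0
# Whitelist a single IPv4 address for SSH access.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 22 \
    --cidr 203.0.113.10/32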
Getting the NVMe Drives Ready
For ClickHouse to work, I'll create an EXT4 file system on the NVMe drive of each of the servers.
$ sudo mkfs -t ext4 /dev/nvme1n1
$ sudo mkdir /ch
$ sudo mount /dev/nvme1n1 /ch
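One optional extra that isn't part of the original walkthrough: adding the mount to /etc/fstab so it survives a reboot. Referencing the device by the UUID reported by blkid would be more robust than the raw device name assumed here.
$ echo '/dev/nvme1n1 /ch ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab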
Once everything is configured, you can see the mount point and the 783 GB of space available on each of the systems:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 87.9M 1 loop /snap/core/5742
loop1 7:1 0 16.5M 1 loop /snap/amazon-ssm-agent/784
nvme0n1 259:1 0 8G 0 disk
└─nvme0n1p1 259:2 0 8G 0 part /
nvme1n1 259:0 0 838.2G 0 disk /ch
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 35G 0 35G 0% /dev
tmpfs 6.9G 8.8M 6.9G 1% /run
/dev/nvme0n1p1 7.7G 967M 6.8G 13% /
tmpfs 35G 0 35G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 35G 0 35G 0% /sys/fs/cgroup
/dev/loop0 88M 88M 0 100% /snap/core/5742
/dev/loop1 17M 17M 0 100% /snap/amazon-ssm-agent/784
tmpfs 6.9G 0 6.9G 0% /run/user/1000
/dev/nvme1n1 825G 73M 783G 1% /ch
The dataset I'll be using in this benchmark is a data dump I generated from 1.1 billion taxi trips taken in New York City over six years. The Billion Taxi Rides in Redshift blog post details how I put this dataset together. It's stored on AWS S3, so I'll configure the AWS CLI with my access and secret keys.
$ sudo apt update
$ sudo apt install awscli
$ aws configure
I'll set the CLI's concurrent request limit to 100 so the files download faster than they would with the default settings.
$ aws configure set default.s3.max_concurrent_requests 100
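To confirm the setting took effect, the CLI can read the value back; this quick check isn't from the original post:
$ aws configure get default.s3.max_concurrent_requests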
I'll download the taxi trips dataset from AWS S3 and store it on the NVMe drive of the first server. The dataset is ~104 GB of GZIP-compressed CSV.
$ sudo mkdir -p /ch/csv
$ sudo chown -R ubuntu /ch/csv
$ aws s3 sync s3://<bucket>/csv /ch/csv
Installing ClickHouse
I'll install the OpenJDK distribution of Java 8, as it's needed to run Apache ZooKeeper, which is required for a distributed ClickHouse installation, on all three machines.
$ sudo apt update
$ sudo apt install \
    openjdk-8-jre \
    openjdk-8-jdk-headless
Then I'll set the JAVA_HOME environment variable:
$ sudo vi /etc/profile
export JAVA_HOME=/usr
$ source /etc/profile
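A quick sanity check, not part of the original steps, confirms that Java resolves on each machine:
$ echo $JAVA_HOME
$ java -version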
I'll then use Ubuntu's package system to install ClickHouse 18.16.1, glances, and ZooKeeper on all three machines.
$ sudo apt-key adv \
    --keyserver hkp://keyserver.ubuntu.com:80 \
    --recv E0C56BD4
$ echo "deb http://repo.yandex.ru/clickhouse/deb/stable/ main/" |
sudo tee /etc/apt/sources.list.d/clickhouse.list
$ sudo apt-get update
$ sudo apt install \
    clickhouse-client \
    clickhouse-server \
    glances \
    zookeeperd
I'll create a data directory for ClickHouse as well as some configuration overrides on all three servers.
$ sudo mkdir /ch/clickhouse
$ sudo chown -R clickhouse /ch/clickhouse
$ sudo mkdir -p /etc/clickhouse-server/conf.d
$ sudo vi /etc/clickhouse-server/conf.d/taxis.conf
Below are the configuration overrides I'll be using:
<?xml version="1.0"?>
<yandex>
<listen_host>0.0.0.0</listen_host>
<path>/ch/clickhouse/</path>
<remote_servers>
<perftest_3shards>
<shard>
<replica>
<host>172.30.2.192</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>172.30.2.162</host>
<port>9000</port>
</replica>
</shard>
<shard>
<replica>
<host>172.30.2.36</host>
<port>9000</port>
</replica>
</shard>
</perftest_3shards>
</remote_servers>
<zookeeper-servers>
<node>
<host>172.30.2.192</host>
<port>2181</port>
</node>
<node>
<host>172.30.2.162</host>
<port>2181</port>
</node>
<node>
<host>172.30.2.36</host>
<port>2181</port>
</node>
</zookeeper-servers>
<macros>
<shard>03</shard>
<replica>01</replica>
</macros>
</yandex>
I'll then run ZooKeeper and the ClickHouse server on all three machines.
$ sudo /etc/init.d/zookeeper start
$ sudo service clickhouse-server start
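Before moving on, it's worth verifying that both services are healthy on each node; this check is my addition rather than a step from the original post. ZooKeeper answers the four-letter command "ruok" with "imok" when it's running (assuming netcat is installed), and ClickHouse should answer a trivial query.
$ echo ruok | nc localhost 2181
$ clickhouse-client --query="SELECT 1"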
Loading Data into ClickHouse
On the first server, I'll create a table named trips that will hold the taxi trips dataset using the Log engine:
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips (
trip_id UInt32,
vendor_id String,
pickup_datetime DateTime,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = Log;
I'll then decompress and load each of the CSV files into the trips table. The following completed in 55 minutes and 10 seconds. After this operation, the data directory was 134 GB in size.
$ time (for FILENAME in /ch/csv/trips_x*.csv.gz; do
            echo $FILENAME
            gunzip -c $FILENAME |
                clickhouse-client \
                    --host=0.0.0.0 \
                    --query="INSERT INTO trips FORMAT CSV"
        done)
That works out to an import rate of 155 MB of uncompressed CSV content per second. I suspect the bottleneck was GZIP decompression. It might have been quicker to decompress all the gzipped files in parallel using xargs and then load the already-decompressed data; a sketch of that approach follows.
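This untested sketch shows the parallel variant: xargs fans out one gunzip process per archive (16 at a time here, an arbitrarily chosen concurrency level), after which the plain CSV files are streamed in one by one.
# Decompress all archives in parallel, 16 gunzip processes at a time.
$ ls /ch/csv/trips_x*.csv.gz | xargs -n 1 -P 16 gunzip
# Then stream each decompressed file into ClickHouse as before.
$ for FILENAME in /ch/csv/trips_x*.csv; do
      clickhouse-client \
          --host=0.0.0.0 \
          --query="INSERT INTO trips FORMAT CSV" < $FILENAME
  done
Below is what glances reported during the CSV import process: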
$ sudo glances
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 0:11:42
CPU 8.2% nice: 0.0% LOAD 36-core MEM 9.8% active: 5.20G SWAP 0.0%
user: 6.0% irq: 0.0% 1 min: 2.24 total: 68.7G inactive: 61.0G total: 0
system: 0.9% iowait: 1.3% 5 min: 1.83 used: 6.71G buffers: 66.4M used: 0
idle: 91.8% steal: 0.0% 15 min: 1.01 free: 62.0G cached: 61.6G free: 0
NETWORK Rx/s Tx/s TASKS 370 (507 thr), 2 run, 368 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 136b 2Kb
lo 343Mb 343Mb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
100.4 1.5 1.65G 1.06G 9909 ubuntu 0 S 1:01.33 0 0 clickhouse-client --host=0.0.0.0 --query=INSERT INTO trips FORMAT CSV
DISK I/O R/s W/s 85.1 0.0 4.65M 708K 9908 ubuntu 0 R 0:50.60 32M 0 gzip -d -c /ch/csv/trips_xac.csv.gz
loop0 0 0 54.9 5.1 8.14G 3.49G 8091 clickhous 0 S 1:44.23 0 45M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
loop1 0 0 4.5 0.0 0 0 319 root 0 S 0:07.50 1K 0 kworker/u72:2
nvme0n1 0 3K 2.3 0.0 91.1M 28.9M 9912 root 0 R 0:01.56 0 0 /usr/bin/python3 /usr/bin/glances
nvme0n1p1 0 3K 0.3 0.0 0 0 960 root -20 S 0:00.10 0 0 kworker/28:1H
nvme1n1 32.1M 495M 0.3 0.0 0 0 1058 root -20 S 0:00.90 0 0 kworker/23:1H
I'll free up space on the NVMe drive by deleting the original CSV files before continuing.
$ sudo rm -fr /ch/csv
Converting to a Columnar Format
ClickHouse's Log engine stores data in a row-oriented format. To query the data faster, I'll convert it to a columnar format using the MergeTree engine.
$ clickhouse-client --host=0.0.0.0
The following completed in 34 minutes and 50 seconds. After this operation, the data directory was 237 GB in size.
CREATE TABLE trips_mergetree
ENGINE = MergeTree(pickup_date, pickup_datetime, 8192)
AS SELECT
trip_id,
CAST(vendor_id AS Enum8('1' = 1,
'2' = 2,
'CMT' = 3,
'VTS' = 4,
'DDS' = 5,
'B02512' = 10,
'B02598' = 11,
'B02617' = 12,
'B02682' = 13,
'B02764' = 14)) AS vendor_id,
toDate(pickup_datetime) AS pickup_date,
ifNull(pickup_datetime, toDateTime(0)) AS pickup_datetime,
toDate(dropoff_datetime) AS dropoff_date,
ifNull(dropoff_datetime, toDateTime(0)) AS dropoff_datetime,
assumeNotNull(store_and_fwd_flag) AS store_and_fwd_flag,
assumeNotNull(rate_code_id) AS rate_code_id,
assumeNotNull(pickup_longitude) AS pickup_longitude,
assumeNotNull(pickup_latitude) AS pickup_latitude,
assumeNotNull(dropoff_longitude) AS dropoff_longitude,
assumeNotNull(dropoff_latitude) AS dropoff_latitude,
assumeNotNull(passenger_count) AS passenger_count,
assumeNotNull(trip_distance) AS trip_distance,
assumeNotNull(fare_amount) AS fare_amount,
assumeNotNull(extra) AS extra,
assumeNotNull(mta_tax) AS mta_tax,
assumeNotNull(tip_amount) AS tip_amount,
assumeNotNull(tolls_amount) AS tolls_amount,
assumeNotNull(ehail_fee) AS ehail_fee,
assumeNotNull(improvement_surcharge) AS improvement_surcharge,
assumeNotNull(total_amount) AS total_amount,
assumeNotNull(payment_type) AS payment_type_,
assumeNotNull(trip_type) AS trip_type,
pickup AS pickup,
dropoff AS dropoff,
CAST(assumeNotNull(cab_type)
AS Enum8('yellow' = 1, 'green' = 2))
AS cab_type,
precipitation AS precipitation,
snow_depth AS snow_depth,
snowfall AS snowfall,
max_temperature AS max_temperature,
min_temperature AS min_temperature,
average_wind_speed AS average_wind_speed,
pickup_nyct2010_gid AS pickup_nyct2010_gid,
pickup_ctlabel AS pickup_ctlabel,
pickup_borocode AS pickup_borocode,
pickup_boroname AS pickup_boroname,
pickup_ct2010 AS pickup_ct2010,
pickup_boroct2010 AS pickup_boroct2010,
pickup_cdeligibil AS pickup_cdeligibil,
pickup_ntacode AS pickup_ntacode,
pickup_ntaname AS pickup_ntaname,
pickup_puma AS pickup_puma,
dropoff_nyct2010_gid AS dropoff_nyct2010_gid,
dropoff_ctlabel AS dropoff_ctlabel,
dropoff_borocode AS dropoff_borocode,
dropoff_boroname AS dropoff_boroname,
dropoff_ct2010 AS dropoff_ct2010,
dropoff_boroct2010 AS dropoff_boroct2010,
dropoff_cdeligibil AS dropoff_cdeligibil,
dropoff_ntacode AS dropoff_ntacode,
dropoff_ntaname AS dropoff_ntaname,
dropoff_puma AS dropoff_puma
FROM trips;
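As a quick sanity check, not one of the timed steps from the original run, a count over the new table should come back with roughly 1.1 billion rows:
SELECT count(*) FROM trips_mergetree;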
Here's what glances looked like during the operation:
ip-172-30-2-200 (Ubuntu 16.04 64bit / Linux 4.4.0-1072-aws) Uptime: 1:06:09
CPU 10.3% nice: 0.0% LOAD 36-core MEM 16.1% active: 13.3G SWAP 0.0%
user: 7.9% irq: 0.0% 1 min: 1.87 total: 68.7G inactive: 52.8G total: 0
system: 1.6% iowait: 0.8% 5 min: 1.76 used: 11.1G buffers: 71.8M used: 0
idle: 89.7% steal: 0.0% 15 min: 1.95 free: 57.6G cached: 57.2G free: 0
NETWORK Rx/s Tx/s TASKS 367 (523 thr), 1 run, 366 slp, 0 oth sorted automatically by cpu_percent, flat view
ens5 1Kb 8Kb
lo 2Kb 2Kb CPU% MEM% VIRT RES PID USER NI S TIME+ IOR/s IOW/s Command
241.9 12.8 20.7G 8.78G 8091 clickhous 0 S 30:36.73 34M 125M /usr/bin/clickhouse-server --config=/etc/clickhouse-server/config.xml
DISK I/O R/s W/s 2.6 0.0 90.4M 28.3M 9948 root 0 R 1:18.53 0 0 /usr/bin/python3 /usr/bin/glances
loop0 0 0 1.3 0.0 0 0 203 root 0 S 0:09.82 0 0 kswapd0
loop1 0 0 0.3 0.1 315M 61.3M 15701 ubuntu 0 S 0:00.40 0 0 clickhouse-client --host=0.0.0.0
nvme0n1 0 3K 0.3 0.0 0 0 7 root 0 S 0:00.83 0 0 rcu_sched
nvme0n1p1 0 3K 0.0 0.0 0 0 142 root 0 S 0:00.22 0 0 migration/27
nvme1n1 25.8M 330M 0.0 0.0 59.7M 1.79M 2764 ubuntu 0 S 0:00.00 0 0 (sd-pam)
In the previous benchmark, several columns were cast and recomputed. I found that a number of those functions no longer worked as expected on this dataset. To get around this, I removed the offending functions and loaded the data without converting it to more granular types.
Distributing Data Across the Cluster
I'll distribute the data across all three nodes of the cluster. To begin, I'll create the table below on all three machines (a shortcut for doing so is sketched after the statement).
$ clickhouse-client --host=0.0.0.0
CREATE TABLE trips_mergetree_third (
trip_id UInt32,
vendor_id String,
pickup_date Date,
pickup_datetime DateTime,
dropoff_date Date,
dropoff_datetime Nullable(DateTime),
store_and_fwd_flag Nullable(FixedString(1)),
rate_code_id Nullable(UInt8),
pickup_longitude Nullable(Float64),
pickup_latitude Nullable(Float64),
dropoff_longitude Nullable(Float64),
dropoff_latitude Nullable(Float64),
passenger_count Nullable(UInt8),
trip_distance Nullable(Float64),
fare_amount Nullable(Float32),
extra Nullable(Float32),
mta_tax Nullable(Float32),
tip_amount Nullable(Float32),
tolls_amount Nullable(Float32),
ehail_fee Nullable(Float32),
improvement_surcharge Nullable(Float32),
total_amount Nullable(Float32),
payment_type Nullable(String),
trip_type Nullable(UInt8),
pickup Nullable(String),
dropoff Nullable(String),
cab_type Nullable(String),
precipitation Nullable(Int8),
snow_depth Nullable(Int8),
snowfall Nullable(Int8),
max_temperature Nullable(Int8),
min_temperature Nullable(Int8),
average_wind_speed Nullable(Int8),
pickup_nyct2010_gid Nullable(Int8),
pickup_ctlabel Nullable(String),
pickup_borocode Nullable(Int8),
pickup_boroname Nullable(String),
pickup_ct2010 Nullable(String),
pickup_boroct2010 Nullable(String),
pickup_cdeligibil Nullable(FixedString(1)),
pickup_ntacode Nullable(String),
pickup_ntaname Nullable(String),
pickup_puma Nullable(String),
dropoff_nyct2010_gid Nullable(UInt8),
dropoff_ctlabel Nullable(String),
dropoff_borocode Nullable(UInt8),
dropoff_boroname Nullable(String),
dropoff_ct2010 Nullable(String),
dropoff_boroct2010 Nullable(String),
dropoff_cdeligibil Nullable(String),
dropoff_ntacode Nullable(String),
dropoff_ntaname Nullable(String),
dropoff_puma Nullable(String)
) ENGINE = MergeTree(pickup_date, pickup_datetime, 8192);
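Rather than typing the statement into each machine separately, one hypothetical shortcut is to drive all three nodes from the first server, since clickhouse-server is listening on 0.0.0.0. This assumes the statement above has been saved to a file named create_trips_mergetree_third.sql.
$ for HOST in 172.30.2.192 172.30.2.162 172.30.2.36; do
      clickhouse-client --host=$HOST < create_trips_mergetree_third.sql
  done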
Then I'll make sure the first server can see all three nodes in the cluster:
SELECT *
FROM system.clusters
WHERE cluster = 'perftest_3shards'
FORMAT Vertical;
Row 1:
──────
cluster: perftest_3shards
shard_num: 1
shard_weight: 1
replica_num: 1
host_name: 172.30.2.192
host_address: 172.30.2.192
port: 9000
is_local: 1
user: default
default_database:
Row 2:
──────
cluster: perftest_3shards
shard_num: 2
shard_weight: 1
replica_num: 1
host_name: 172.30.2.162
host_address: 172.30.2.162
port: 9000
is_local: 0
user: default
default_database:
Row 3:
──────
cluster: perftest_3shards
shard_num: 3
shard_weight: 1
replica_num: 1
host_name: 172.30.2.36
host_address: 172.30.2.36
port: 9000
is_local: 0
user: default
default_database:
Then I'll define a new table on the first server that is based on the trips_mergetree_third schema and uses the Distributed engine:
CREATE TABLE trips_mergetree_x3
AS trips_mergetree_third
ENGINE = Distributed(perftest_3shards,
default,
trips_mergetree_third,
rand());
I'll then copy the data out of the MergeTree-based table and onto all three servers. The following completed in 34 minutes and 44 seconds.
INSERT INTO trips_mergetree_x3
SELECT * FROM trips_mergetree;
After the above operation, I gave ClickHouse 15 minutes to move away from the high-water storage mark. The data directories ended up at 15 GB, 264 GB, and 34 GB respectively on each of the three servers.
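Out of curiosity, and as an extra verification step I'm adding rather than one from the original run, the spread of rows across the shards can be inspected with ClickHouse's built-in hostName() function, which each shard evaluates locally:
SELECT hostName(), count(*)
FROM trips_mergetree_x3
GROUP BY hostName();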
Benchmarking the ClickHouse Cluster
Next, I looked at the fastest times I saw when running each query multiple times against the trips_mergetree_x3 table.
$ clickhouse-client --host=0.0.0.0
The following completed in 2.449 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree_x3
GROUP BY cab_type;
The following completed in 0.691 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree_x3
GROUP BY passenger_count;
The following completed in 0 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year;
The following completed in 0.983 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree_x3
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
For comparison, I ran the same queries against the MergeTree-based table, which resides solely on the first server.
Benchmarking a Single ClickHouse Node
The following were the fastest times I saw when running each query multiple times against the trips_mergetree table.
The following completed in 0.241 seconds.
SELECT cab_type, count(*)
FROM trips_mergetree
GROUP BY cab_type;
The following completed in 0.826 seconds.
SELECT passenger_count,
avg(total_amount)
FROM trips_mergetree
GROUP BY passenger_count;
The following completed in 1.209 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year;
The following completed in 1.781 seconds.
SELECT passenger_count,
toYear(pickup_date) AS year,
round(trip_distance) AS distance,
count(*)
FROM trips_mergetree
GROUP BY passenger_count,
year,
distance
ORDER BY year,
count(*) DESC;
Thoughts on the Results
This is the first time a free, CPU-based database has managed to outperform a GPU-based database in my benchmarks. That GPU-based database has gone through two revisions since then, but nonetheless the performance ClickHouse delivered on a single node is very impressive.
At the same time, when executing Query 1 against the Distributed engine, the overhead costs were an order of magnitude higher. I hope I missed something in my research for this post, because it would be nice to see query times come down as more nodes are added to the cluster. Still, it's great that when executing the other queries, performance improved by around 2x.
It would be nice to see ClickHouse evolve toward separating storage and compute so that they can scale independently. The HDFS support added last year could be a step in that direction. On the compute side, if a single query can be sped up by adding more nodes to the cluster, then the future of this software looks very bright.
Thank you for taking the time to read this post. I offer consulting, architecture, and hands-on development services to clients in North America and Europe. If you'd like to discuss how my suggestions can help your business, please contact me.
Source: www.habr.com