Installing LeoFS: distributed fault-tolerant object storage compatible with S3 and NFS clients
From Opennet: LeoFS is a distributed fault-tolerant object storage system compatible with clients that use the Amazon S3 API and a REST API; it also supports an NFS server mode. It is optimized for storing both small and very large objects, has a built-in caching mechanism, and supports storage replication between data centers. The project aims to achieve 99.9999999% reliability through redundant replication and the elimination of single points of failure. The project code is written in Erlang.
LeoFS consists of three components:
LeoFS Storage — handles add, retrieve, and delete operations on objects and metadata, and is responsible for replication, recovery, and queueing of client requests.
LeoFS Gateway — serves HTTP requests and relays responses to clients over the REST API or S3 API, and caches the most frequently requested data in memory and on disk.
LeoFS Manager — monitors the LeoFS Gateway and LeoFS Storage nodes, tracks node state, and verifies checksums. It guarantees data integrity and high storage availability.
In this post we will install LeoFS using ansible-playbook and then test S3 and NFS access.
If you try to install LeoFS using the official playbooks, you will run into various errors: 1, 2. In this post I will describe what needs to be done to avoid them.
netcat must be installed on the host from which you run ansible-playbook.
Inventory example
Example inventory (hosts.sample in the repository):
# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"
#[builder]
#172.26.9.177
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.176
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.178
[leo_storage]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]
[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage
Preparing the servers
Disabling SELinux. I hope the community will eventually create SELinux policies for LeoFS.
- name: Install libselinux as prerequisite for SELinux Ansible module
  yum:
    name: "{{item}}"
    state: latest
  with_items:
    - libselinux-python
    - libsemanage-python

- name: Disable SELinux at next reboot
  selinux:
    state: disabled

- name: Set SELinux in permissive mode until the machine is rebooted
  command: setenforce 0
  ignore_errors: true
  changed_when: false
Installing netcat and redhat-lsb-core: netcat is required by leofs-adm, and redhat-lsb-core is required there to determine the OS version.
Checking the cluster status on the primary LeoManager:
leofs-adm status
The Primary and Secondary managers can be seen in the ansible-playbook logs.
The output will look something like this:
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | a0314afb
previous ring-hash | a0314afb
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | attached | | | | 2019-12-05 10:33:58 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
Create a bucket:
leofs-adm add-bucket leofs 9c2615f32e81e6a1caf5
OK
List of buckets:
leofs-adm get-buckets
cluster id | bucket | owner | permissions | created at
-------------+----------+--------+------------------+---------------------------
leofs_1 | leofs | leofs | Me(full_control) | 2019-12-02 10:44:02 +0000
Configuring s3cmd
In the HTTP Proxy server name field, specify the IP address of your Gateway server.
s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [9c2615f32e81e6a1caf5]:
Secret Key [8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb]:
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]:
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: leofs
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name [172.26.9.180]:
HTTP Proxy server port [8080]:
New settings:
Access Key: 9c2615f32e81e6a1caf5
Secret Key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
Default Region: US
S3 Endpoint: s3.amazonaws.com
DNS-style bucket+hostname:port template for accessing a bucket: leofs
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name: 172.26.9.180
HTTP Proxy server port: 8080
Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'
If you get the error ERROR: S3 error: 403 (AccessDenied): Access Denied:
s3cmd put test.py s3://leofs/
upload: 'test.py' -> 's3://leofs/test.py' [1 of 1]
382 of 382 100% in 0s 3.40 kB/s done
ERROR: S3 error: 403 (AccessDenied): Access Denied
Then you need to set signature_v2 to True in the s3cmd configuration. Details in this issue.
If signature_v2 is False, you will get an error like this:
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 6 sec...
ERROR: Test failed: Request failed for: /?delimiter=%2F
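The change can be scripted; here is a minimal sketch demonstrated on a scratch copy of the config (the real file s3cmd writes is ~/.s3cfg, which contains "signature_v2 = False" by default):

```shell
# Demonstrate the edit on a scratch file; point sed at ~/.s3cfg for real use.
cfg=s3cfg.sample
printf 'signature_v2 = False\n' > "$cfg"
sed -i 's/^signature_v2 = False$/signature_v2 = True/' "$cfg"
grep '^signature_v2' "$cfg"
# prints: signature_v2 = True
```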
Load testing
Create a 1GB file:
fallocate -l 1GB 1gb
Upload it to LeoFS:
time s3cmd put 1gb s3://leofs/
real 0m19.099s
user 0m7.855s
sys 0m1.620s
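That works out to roughly 52 MB/s of sustained upload throughput (1 GB here is 1000 MB, since fallocate's GB suffix is decimal):

```shell
# 1000 MB transferred in ~19.1 s of wall-clock time
awk 'BEGIN { printf "%.1f MB/s\n", 1000 / 19.099 }'
# prints: 52.4 MB/s
```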
Statistics
leofs-adm du for a single node:
leofs-adm du [email protected]
active number of objects: 156
total number of objects: 156
active size of objects: 602954495
total size of objects: 602954495
ratio of active size: 100.0%
last compaction start: ____-__-__ __:__:__
last compaction end: ____-__-__ __:__:__
As you can see, this output is not very informative. Let's see where this file is located:
leofs-adm whereis leofs/1gb
Enabling NFS on the Leo Gateway server 172.26.9.184.
Install nfs-utils on both the server and the client:
sudo yum install nfs-utils
Following the instructions, edit the configuration file /usr/local/leofs/current/leo_gateway/etc/leo_gateway.conf:
protocol = nfs
On the server 172.26.9.184, start rpcbind and restart leofs-gateway:
sudo service rpcbind start
sudo service leofs-gateway restart
On the server running leo_manager, create a bucket for NFS and generate a key for connecting to NFS:
leofs-adm add-bucket test 05236
leofs-adm gen-nfs-mnt-key test 05236 <nfs-client-ip>
Connecting to NFS:
sudo mkdir /mnt/leofs
## for Linux - "sudo mount -t nfs -o nolock <host>:/<bucket>/<token> <dir>"
sudo mount -t nfs -o nolock <nfs-server-ip-where-the-gateway-runs>:/<bucket>/<access_key_id>/<key-from-gen-nfs-mnt-key> /mnt/leofs
sudo mount -t nfs -o nolock 172.26.9.184:/test/05236/bb5034f0c740148a346ed663ca0cf5157efb439f /mnt/leofs
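To make the mount survive a reboot, an equivalent /etc/fstab entry can be added (a sketch using the values from the mount command above):

```
172.26.9.184:/test/05236/bb5034f0c740148a346ed663ca0cf5157efb439f /mnt/leofs nfs nolock 0 0
```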
Checking disk space from the NFS client.
Disk space, given that each storage node has a 40GB disk (3 running nodes and 1 attached node):
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 60G 3.6G 57G 6% /mnt/leofs
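The 60G total that df reports follows from the consistency settings above: usable space is roughly running nodes × disk size / total replicas. A quick check of that arithmetic:

```shell
# 3 running nodes x 40 GB each, with 2 total replicas of every object
awk 'BEGIN { printf "%d GB\n", 3 * 40 / 2 }'
# prints: 60 GB
```

With six running nodes the same formula gives 120 GB, which matches the df output in the six-node setup later in the post.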
Installing LeoFS with 6 storage nodes
Inventory (without a builder):
# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.177
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.176
[leo_storage]
172.26.9.178 [email protected]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
172.26.9.185 [email protected]
[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]
[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage
Output of leofs-adm status:
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | d8ff465e
previous ring-hash | d8ff465e
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:30 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
Disk space, given that each storage node has a 40GB disk (6 running nodes):
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 120G 3.6G 117G 3% /mnt/leofs