Installing the distributed fault-tolerant LeoFS object storage, compatible with clients using S3 and NFS
According to OpenNet: LeoFS is a distributed fault-tolerant object storage system, compatible with clients that use the Amazon S3 API or a REST API, and it also supports an NFS server mode. It includes optimizations for storing both small and very large objects, a built-in caching mechanism, and support for replicating storage between data centers. The project aims to achieve 99.9999999% reliability through redundant replication and the elimination of single points of failure. The project code is written in Erlang.
LeoFS consists of three components:
LeoFS Storage — handles add, retrieve, and delete operations on objects and metadata, and is responsible for replication, recovery, and queuing of client requests.
LeoFS Gateway — serves HTTP requests and returns responses to clients using the REST API or the S3 API, and caches the most frequently needed data in memory and on disk.
LeoFS Manager — monitors the operation of the LeoFS Gateway and LeoFS Storage nodes, tracks node status, and verifies checksums. It guarantees data integrity and high storage availability.
In this post we will install LeoFS using an ansible-playbook and then test S3 and NFS access.
If you try to install LeoFS using the official playbooks, you will run into various errors: 1, 2. In this post I will describe what to do to avoid those errors.
On the host from which you run the ansible-playbook, you need to install netcat.
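For orientation, the overall flow looks roughly like the sketch below. The repository is the official leo-project/leofs_ansible project, but the playbook file name is an assumption here, so check the project README for your version:

# Prepare the control host, then fetch and run the official playbooks.
sudo yum install -y nc
git clone https://github.com/leo-project/leofs_ansible.git
cd leofs_ansible
cp hosts.sample hosts                      # edit the inventory, see below
ansible-playbook -i hosts setup_leofs.yml  # playbook name may differ per version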
Example inventory
An example inventory (hosts.sample in the repository):
# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"
#[builder]
#172.26.9.177
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.176
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.178
[leo_storage]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]
[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage
Server preparation
Disabling SELinux. I hope that the community will eventually create SELinux policies for LeoFS.
- name: Install libselinux as prerequisite for SELinux Ansible module
  yum:
    name: "{{item}}"
    state: latest
  with_items:
    - libselinux-python
    - libsemanage-python

- name: Disable SELinux at next reboot
  selinux:
    state: disabled

- name: Set SELinux in permissive mode until the machine is rebooted
  command: setenforce 0
  ignore_errors: true
  changed_when: false
Installing netcat and redhat-lsb-core: netcat is needed by leofs-adm, and redhat-lsb-core is needed to determine the OS version here.
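A minimal sketch of that step on CentOS/RHEL (the package names are my assumption for these distributions; on CentOS 7 the nc package is provided by nmap-ncat):

sudo yum install -y nc redhat-lsb-core
lsb_release -rs   # the LSB tools are what is used to detect the OS version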
Which manager is primary and which is secondary can be seen in the ansible-playbook logs.
The output of leofs-adm status will look something like this:
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | a0314afb
previous ring-hash | a0314afb
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | attached | | | | 2019-12-05 10:33:58 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
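The access key used below must exist before a bucket can be created. It can be generated with the create-user subcommand of leofs-adm (the user name and password below are only examples):

leofs-adm create-user leofs <password>
# prints the generated access-key-id (used for add-bucket below)
# and secret-access-key (used later in the s3cmd configuration)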
Creating a bucket:
leofs-adm add-bucket leofs 9c2615f32e81e6a1caf5
OK
Bucket list:
leofs-adm get-buckets
cluster id | bucket | owner | permissions | created at
-------------+----------+--------+------------------+---------------------------
leofs_1 | leofs | leofs | Me(full_control) | 2019-12-02 10:44:02 +0000
Configuring s3cmd
In the HTTP Proxy server name field, specify the IP address of the Gateway server.
s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [9c2615f32e81e6a1caf5]:
Secret Key [8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb]:
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]:
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: leofs
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name [172.26.9.180]:
HTTP Proxy server port [8080]:
New settings:
Access Key: 9c2615f32e81e6a1caf5
Secret Key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
Default Region: US
S3 Endpoint: s3.amazonaws.com
DNS-style bucket+hostname:port template for accessing a bucket: leofs
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name: 172.26.9.180
HTTP Proxy server port: 8080
Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'
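For reference, the relevant keys in the resulting ~/.s3cfg look like this (these are standard s3cmd option names; signature_v2 is discussed just below):

# ~/.s3cfg (excerpt)
access_key = 9c2615f32e81e6a1caf5
secret_key = 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
proxy_host = 172.26.9.180
proxy_port = 8080
signature_v2 = True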
If you get the error ERROR: S3 error: 403 (AccessDenied): Access Denied:
s3cmd put test.py s3://leofs/
upload: 'test.py' -> 's3://leofs/test.py' [1 of 1]
382 of 382 100% in 0s 3.40 kB/s done
ERROR: S3 error: 403 (AccessDenied): Access Denied
Then you need to set signature_v2 to True in the s3cmd configuration. Details are in this issue.
If signature_v2 is False, you will get an error like this:
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 6 sec...
ERROR: Test failed: Request failed for: /?delimiter=%2F
Load testing
Create a 1 GB file:
fallocate -l 1GB 1gb
Upload it to LeoFS:
time s3cmd put 1gb s3://leofs/
real 0m19.099s
user 0m7.855s
sys 0m1.620s
Statistics
leofs-adm du for a single node:
leofs-adm du [email protected]
active number of objects: 156
total number of objects: 156
active size of objects: 602954495
total size of objects: 602954495
ratio of active size: 100.0%
last compaction start: ____-__-__ __:__:__
last compaction end: ____-__-__ __:__:__
We can see that the output is not very informative.
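If you need more detail, leofs-adm also provides a verbose variant of this command that reports per-storage-container statistics (du detail, as documented in the LeoFS administration reference):

leofs-adm du detail [email protected]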
Let's see where this file is located:
leofs-adm whereis leofs/1gb
Now we enable NFS on the Leo Gateway server 172.26.9.184.
Install nfs-utils on both the server and the client:
sudo yum install nfs-utils
Following the instructions, we edit the configuration file /usr/local/leofs/current/leo_gateway/etc/leo_gateway.conf:
protocol = nfs
Start rpcbind and restart leofs-gateway on server 172.26.9.184:
sudo service rpcbind start
sudo service leofs-gateway restart
On the server where leo_manager is running, create a bucket for NFS and generate a key for connecting to NFS:
leofs-adm add-bucket test 05236
leofs-adm gen-nfs-mnt-key test 05236 <nfs-client-ip-address>
Connecting to NFS:
sudo mkdir /mnt/leofs
## for Linux - "sudo mount -t nfs -o nolock <host>:/<bucket>/<token> <dir>"
sudo mount -t nfs -o nolock <nfs-server-ip-where-your-gateway-is-installed>:/<bucket>/<access_key_id>/<key-from-gen-nfs-mnt-key> /mnt/leofs
sudo mount -t nfs -o nolock 172.26.9.184:/test/05236/bb5034f0c740148a346ed663ca0cf5157efb439f /mnt/leofs
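A quick sanity check of the mount, a sketch assuming the s3cmd setup from above (hello.txt is just an arbitrary test file):

echo hello | sudo tee /mnt/leofs/hello.txt
ls -l /mnt/leofs
s3cmd ls s3://test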
Checking disk space from the NFS client
Disk space, given that each storage node has a 40 GB disk (3 running nodes, 1 attached node); with two replicas across three running nodes, the reported capacity is 3 × 40 GB / 2 = 60 GB:
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 60G 3.6G 57G 6% /mnt/leofs
Installing LeoFS with 6 storage nodes
Inventory (without the builder):
# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.177
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.176
[leo_storage]
172.26.9.178 [email protected]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
172.26.9.185 [email protected]
[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]
[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage
Output of leofs-adm status:
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | d8ff465e
previous ring-hash | d8ff465e
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:30 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
Disk space, given that each storage node has a 40 GB disk (6 running nodes); with two replicas across six nodes, the reported capacity is 6 × 40 GB / 2 = 120 GB:
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 120G 3.6G 117G 3% /mnt/leofs