Installing LeoFS: a fault-tolerant distributed object storage compatible with S3 and NFS clients

According to Opennet: LeoFS is a fault-tolerant distributed object storage compatible with clients that use the Amazon S3 API and the REST API, and it also supports an NFS server mode. It includes optimizations for storing both small and very large objects, has a built-in caching mechanism, and supports replication between data centers. The project aims to achieve 99.9999999% reliability through redundant replication and the elimination of single points of failure. The project code is written in Erlang.
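The reliability claims above rest on quorum-based replication; later in this post, leofs-adm status reports the settings as Total replicas (N) and number of successes of R/W/D. A minimal sketch of the quorum rule (my own illustration, not LeoFS code):

```python
# Minimal quorum sanity check (illustration only, not LeoFS code).
# N = total replicas, R/W = read/write success quorums, as reported
# by `leofs-adm status` (Total replicas, number of successes of R/W).
def is_strongly_consistent(n: int, r: int, w: int) -> bool:
    """R + W > N guarantees every read quorum overlaps every write quorum."""
    return r + w > n

# The demo cluster in this post runs N=2, R=1, W=1, so stale reads are possible:
print(is_strongly_consistent(2, 1, 1))  # False
print(is_strongly_consistent(3, 2, 2))  # True
```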

LeoFS consists of three components:

  • LeoFS Storage - handles add, retrieve, and delete operations for objects and metadata; it is responsible for replication, recovery, and queuing client requests.
  • LeoFS Gateway - handles HTTP requests and relays responses to clients using the REST API or the S3 API; it also caches the most frequently requested data in memory and on disk.
  • LeoFS Manager - monitors the LeoFS Gateway and LeoFS Storage nodes, tracks node state, and verifies checksums. It ensures data integrity and high storage availability.

In this post, we will install LeoFS using ansible-playbook and then test S3 and NFS access.

If you try to install LeoFS using the official playbooks, you will run into various errors: 1, 2. In this post, I will describe what needs to be done to avoid those errors.

On the host where you will run ansible-playbook, you need to install netcat.

Example configuration

Example inventory file (hosts.sample in the repository):

# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"

#[builder]
#172.26.9.177

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.176

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.178

[leo_storage]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]

[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]

[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage

Preparing the servers

Disable SELinux. I hope the community will eventually create SELinux policies for LeoFS.

    - name: Install libselinux as prerequisite for SELinux Ansible module
      yum:
        name: "{{item}}"
        state: latest
      with_items:
        - libselinux-python
        - libsemanage-python

    - name: Disable SELinux at next reboot
      selinux:
        state: disabled

    - name: Set SELinux in permissive mode until the machine is rebooted
      command: setenforce 0
      ignore_errors: true
      changed_when: false

Install netcat and redhat-lsb-core: netcat is required by leofs-adm, and redhat-lsb-core is required to determine the OS version here.

    - name: Install Packages
      yum: name={{ item }} state=present
      with_items:
        - nmap-ncat
        - redhat-lsb-core

Creating the leofs user and adding it to the wheel group

    - name: Create group leofs
      group:
        name: leofs
        state: present

    - name: Allow 'wheel' group to have passwordless sudo
      lineinfile:
        dest: /etc/sudoers
        state: present
        regexp: '^%wheel'
        line: '%wheel ALL=(ALL) NOPASSWD: ALL'
        validate: 'visudo -cf %s'

    - name: Add the user 'leofs' to group 'wheel'
      user:
        name: leofs
        groups: wheel
        append: yes

Installing Erlang

    - name: Remote erlang-20.3.8.23-1.el7.x86_64.rpm install with yum
      yum: name=https://github.com/rabbitmq/erlang-rpm/releases/download/v20.3.8.23/erlang-20.3.8.23-1.el7.x86_64.rpm

The full corrected version of the Ansible playbook is available here: https://github.com/patsevanton/leofs_ansible

Installation, configuration, and startup

Next, we do everything as described at https://github.com/leo-project/leofs_ansible, except for build_leofs.yml, since we install from prebuilt packages (source="package" in the inventory).

## Install LeoFS
$ ansible-playbook -i hosts install_leofs.yml

## Config LeoFS
$ ansible-playbook -i hosts config_leofs.yml

## Start LeoFS
$ ansible-playbook -i hosts start_leofs.yml

Check the cluster status on the primary LeoManager:

leofs-adm status

The primary and secondary managers can be identified in the ansible-playbook log.


The output will look something like this:

 [System Confiuration]
-----------------------------------+----------
 Item                              | Value    
-----------------------------------+----------
 Basic/Consistency level
-----------------------------------+----------
                    system version | 1.4.3
                        cluster Id | leofs_1
                             DC Id | dc_1
                    Total replicas | 2
          number of successes of R | 1
          number of successes of W | 1
          number of successes of D | 1
 number of rack-awareness replicas | 0
                         ring size | 2^128
-----------------------------------+----------
 Multi DC replication settings
-----------------------------------+----------
 [mdcr] max number of joinable DCs | 2
 [mdcr] total replicas per a DC    | 1
 [mdcr] number of successes of R   | 1
 [mdcr] number of successes of W   | 1
 [mdcr] number of successes of D   | 1
-----------------------------------+----------
 Manager RING hash
-----------------------------------+----------
                 current ring-hash | a0314afb
                previous ring-hash | a0314afb
-----------------------------------+----------

 [State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
 type  |         node         |    state     | rack id |  current ring  |   prev ring    |          updated at         
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
  S    | [email protected]      | running      |         | a0314afb       | a0314afb       | 2019-12-05 10:33:47 +0000
  S    | [email protected]      | running      |         | a0314afb       | a0314afb       | 2019-12-05 10:33:47 +0000
  S    | [email protected]      | running      |         | a0314afb       | a0314afb       | 2019-12-05 10:33:47 +0000
  S    | [email protected]      | attached     |         |                |                | 2019-12-05 10:33:58 +0000
  G    | [email protected]      | running      |         | a0314afb       | a0314afb       | 2019-12-05 10:33:49 +0000
  G    | [email protected]      | running      |         | a0314afb       | a0314afb       | 2019-12-05 10:33:49 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------

Creating a user

Create the leofs user:

leofs-adm create-user leofs leofs

  access-key-id: 9c2615f32e81e6a1caf5
  secret-access-key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb

List the users:

leofs-adm get-users
user_id     | role_id | access_key_id          | created_at                
------------+---------+------------------------+---------------------------
_test_leofs | 9       | 05236                  | 2019-12-02 06:56:49 +0000
leofs       | 1       | 9c2615f32e81e6a1caf5   | 2019-12-02 10:43:29 +0000

Creating a bucket

Create a bucket:

leofs-adm add-bucket leofs 9c2615f32e81e6a1caf5
OK

List the buckets:

 leofs-adm get-buckets
cluster id   | bucket   | owner  | permissions      | created at                
-------------+----------+--------+------------------+---------------------------
leofs_1      | leofs    | leofs  | Me(full_control) | 2019-12-02 10:44:02 +0000

Configuring s3cmd

In the HTTP Proxy server name field, specify the IP address of the Gateway server.

s3cmd --configure 

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [9c2615f32e81e6a1caf5]: 
Secret Key [8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb]: 
Default Region [US]: 

Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 

Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: leofs

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password: 
Path to GPG program [/usr/bin/gpg]: 

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]: 

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name [172.26.9.180]: 
HTTP Proxy server port [8080]: 

New settings:
  Access Key: 9c2615f32e81e6a1caf5
  Secret Key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
  Default Region: US
  S3 Endpoint: s3.amazonaws.com
  DNS-style bucket+hostname:port template for accessing a bucket: leofs
  Encryption password: 
  Path to GPG program: /usr/bin/gpg
  Use HTTPS protocol: False
  HTTP Proxy server name: 172.26.9.180
  HTTP Proxy server port: 8080

Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)

Now verifying that encryption works...
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'

If you get the error ERROR: S3 error: 403 (AccessDenied): Access Denied:

s3cmd put test.py s3://leofs/
upload: 'test.py' -> 's3://leofs/test.py'  [1 of 1]
 382 of 382   100% in    0s     3.40 kB/s  done
ERROR: S3 error: 403 (AccessDenied): Access Denied

Then you need to set signature_v2 to True in the s3cmd configuration. Details in this issue.
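For reference, after the fix the relevant line in the s3cmd configuration file (by default ~/.s3cfg) should look like this:

```ini
# fragment of ~/.s3cfg
signature_v2 = True
```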

If signature_v2 is set to False, you will get an error like this:

WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 6 sec...
ERROR: Test failed: Request failed for: /?delimiter=%2F

Load testing

Create a 1GB file:

fallocate -l 1GB 1gb

Upload it to LeoFS:

time s3cmd put 1gb s3://leofs/
real    0m19.099s
user    0m7.855s
sys 0m1.620s
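From this timing we can estimate the upload throughput (my own back-of-the-envelope arithmetic, not part of the original post):

```python
# Rough upload throughput for the 1 GB test file.
# `fallocate -l 1GB` allocates 10^9 bytes, i.e. 1000 MB (decimal).
size_mb = 1000
elapsed_s = 19.099           # "real" time reported by `time s3cmd put`
throughput = size_mb / elapsed_s
print(round(throughput, 1))  # ~52.4 MB/s
```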

Statistics

leofs-adm du for a single node:

leofs-adm du [email protected]
 active number of objects: 156
  total number of objects: 156
   active size of objects: 602954495
    total size of objects: 602954495
     ratio of active size: 100.0%
    last compaction start: ____-__-__ __:__:__
      last compaction end: ____-__-__ __:__:__

We can see that this output is not very informative.

Let's see where this file actually lives:

leofs-adm whereis leofs/1gb
-------+----------------------+--------------------------------------+------------+--------------+----------------+----------------+----------------+----------------------------
 del?  |         node         |             ring address             |    size    |   checksum   |  has children  |  total chunks  |     clock      |             when            
-------+----------------------+--------------------------------------+------------+--------------+----------------+----------------+----------------+----------------------------
       | [email protected]      | 657a9f3a3db822a7f1f5050925b26270     |    976563K |   a4634eea55 | true           |             64 | 598f2aa976a4f  | 2019-12-05 10:48:15 +0000
       | [email protected]      | 657a9f3a3db822a7f1f5050925b26270     |    976563K |   a4634eea55 | true           |             64 | 598f2aa976a4f  | 2019-12-05 10:48:15 +0000

Enabling NFS

We enable NFS on the Leo Gateway server 172.26.9.184.

Install nfs-utils on both the server and the client:

sudo yum install nfs-utils

Following the instructions, we edit the configuration file /usr/local/leofs/current/leo_gateway/etc/leo_gateway.conf:

protocol = nfs

On server 172.26.9.184, start rpcbind and restart leofs-gateway:

sudo service rpcbind start
sudo service leofs-gateway restart

On the server where leo_manager is running, create a bucket for NFS and generate a key for mounting it over NFS.

leofs-adm add-bucket test 05236
leofs-adm gen-nfs-mnt-key test 05236 <nfs-client-ip-address>

Mounting over NFS

sudo mkdir /mnt/leofs
## for Linux - "sudo mount -t nfs -o nolock <host>:/<bucket>/<token> <dir>"
sudo mount -t nfs -o nolock <nfs-server-ip-where-your-gateway-runs>:/<bucket>/<access_key_id>/<key-from-gen-nfs-mnt-key> /mnt/leofs
sudo mount -t nfs -o nolock 172.26.9.184:/test/05236/bb5034f0c740148a346ed663ca0cf5157efb439f /mnt/leofs

Checking disk space from the NFS client

Disk space, given that each storage node has a 40GB disk (3 running nodes and 1 attached node):

df -hP
Filesystem                                                         Size  Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b   60G  3.6G   57G   6% /mnt/leofs
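The 60G reported by df matches a simple estimate: usable capacity is roughly (running nodes * disk per node) divided by the number of replicas. A quick sketch (my own arithmetic, not LeoFS code):

```python
# Rough usable-capacity estimate for the cluster above (illustration only):
# usable ~= running_nodes * disk_per_node_gb / total_replicas
def usable_capacity_gb(running_nodes: int, disk_per_node_gb: int, replicas: int) -> float:
    return running_nodes * disk_per_node_gb / replicas

print(usable_capacity_gb(3, 40, 2))  # 60.0  -> matches the 60G reported by df
print(usable_capacity_gb(6, 40, 2))  # 120.0 -> matches 120G with 6 running nodes
print(usable_capacity_gb(5, 40, 2))  # 100.0 -> matches 100G with 5 running nodes
```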

Installing LeoFS with 6 storage nodes

Inventory (without the builder):

# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.177

# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.176

[leo_storage]
172.26.9.178 [email protected]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
172.26.9.185 [email protected]

[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]

[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage

Output of leofs-adm status:

 [System Confiuration]
-----------------------------------+----------
 Item                              | Value    
-----------------------------------+----------
 Basic/Consistency level
-----------------------------------+----------
                    system version | 1.4.3
                        cluster Id | leofs_1
                             DC Id | dc_1
                    Total replicas | 2
          number of successes of R | 1
          number of successes of W | 1
          number of successes of D | 1
 number of rack-awareness replicas | 0
                         ring size | 2^128
-----------------------------------+----------
 Multi DC replication settings
-----------------------------------+----------
 [mdcr] max number of joinable DCs | 2
 [mdcr] total replicas per a DC    | 1
 [mdcr] number of successes of R   | 1
 [mdcr] number of successes of W   | 1
 [mdcr] number of successes of D   | 1
-----------------------------------+----------
 Manager RING hash
-----------------------------------+----------
                 current ring-hash | d8ff465e
                previous ring-hash | d8ff465e
-----------------------------------+----------

 [State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
 type  |         node         |    state     | rack id |  current ring  |   prev ring    |          updated at         
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
  S    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:29 +0000
  S    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:29 +0000
  S    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:30 +0000
  S    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:29 +0000
  S    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:29 +0000
  S    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:29 +0000
  G    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:31 +0000
  G    | [email protected]      | running      |         | d8ff465e       | d8ff465e       | 2019-12-06 05:18:31 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------

Disk space, given that each storage node has a 40GB disk (6 running nodes):

df -hP
Filesystem                                                         Size  Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b  120G  3.6G  117G   3% /mnt/leofs

When using 5 storage nodes:

[leo_storage]
172.26.9.178 [email protected]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]

df -hP
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b  100G  3.0G   97G   3% /mnt/leofs

Logs

The logs can be found in the directory /usr/local/leofs/current/*/log

Telegram channel: SDS and Cluster FS

Source: www.habr.com
