Installing the fault-tolerant LeoFS object storage, compatible with clients using S3 and NFS
According to Opennet: LeoFS is a fault-tolerant distributed object storage compatible with clients that use the Amazon S3 API or a REST API, and it can also act as an NFS server. It is optimized for storing both small and very large objects, has a built-in caching mechanism, and supports replication between data centers. The project aims for 99.9999999% reliability through redundant replication and the elimination of single points of failure. The project code is written in Erlang.
LeoFS consists of three components:
LeoFS Storage — handles adding, retrieving and deleting objects and metadata; it is responsible for replication, recovery and serving client requests.
LeoFS Gateway — handles HTTP requests and returns responses to clients via the REST API or the S3 API; it caches the most frequently requested data in memory and on disk.
LeoFS Manager — monitors the LeoFS Gateway and LeoFS Storage nodes, tracks node state and verifies checksums. It guarantees data integrity and high storage availability.
In this post we will install LeoFS with ansible-playbook and then test S3 and NFS access.
If you try to install LeoFS using the official playbooks, you will run into various errors: 1, 2. In this post I will describe what needs to be done to avoid them.
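For orientation, here is a minimal sketch of the workflow, assuming the official leo-project/leofs_ansible repository; the inventory and playbook file names below are assumptions based on its layout and may differ in your checkout.

# clone the official playbooks
git clone https://github.com/leo-project/leofs_ansible.git
cd leofs_ansible

# describe your nodes in the inventory (an example inventory is shown later in this post)
vi hosts

# install and start LeoFS on every host in the inventory
# (playbook name assumed; check the repository for the actual entry point)
ansible-playbook -i hosts setup_leofs.yml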
Checking the cluster status on the Primary LeoManager:
leofs-adm status
Which nodes are Primary and Secondary can be seen in the ansible-playbook inventory files.
The output will look something like this:
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | a0314afb
previous ring-hash | a0314afb
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | attached | | | | 2019-12-05 10:33:58 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
Create a bucket:
leofs-adm add-bucket leofs 9c2615f32e81e6a1caf5
OK
List the buckets:
leofs-adm get-buckets
cluster id | bucket | owner | permissions | created at
-------------+----------+--------+------------------+---------------------------
leofs_1 | leofs | leofs | Me(full_control) | 2019-12-02 10:44:02 +0000
Configuring s3cmd
In the HTTP Proxy server name field, specify the IP of the Gateway server:
s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [9c2615f32e81e6a1caf5]:
Secret Key [8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb]:
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]:
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: leofs
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name [172.26.9.180]:
HTTP Proxy server port [8080]:
New settings:
Access Key: 9c2615f32e81e6a1caf5
Secret Key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
Default Region: US
S3 Endpoint: s3.amazonaws.com
DNS-style bucket+hostname:port template for accessing a bucket: leofs
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name: 172.26.9.180
HTTP Proxy server port: 8080
Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'
s3cmd put test.py s3://leofs/
upload: 'test.py' -> 's3://leofs/test.py' [1 of 1]
382 of 382 100% in 0s 3.40 kB/s done
ERROR: S3 error: 403 (AccessDenied): Access Denied
Next, you need to set signature_v2 to True in the s3cmd config; details in this article.
If signature_v2 is set to False, you will get an error like this:
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 6 sec...
ERROR: Test failed: Request failed for: /?delimiter=%2F
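Instead of re-running the interactive wizard you can set this directly in ~/.s3cfg. A sketch of the settings relevant to this setup (the values are the ones used in this post; keep the rest of the file as generated by s3cmd --configure):

# ~/.s3cfg (fragment)
access_key = 9c2615f32e81e6a1caf5
secret_key = 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
proxy_host = 172.26.9.180
proxy_port = 8080
use_https = False
signature_v2 = True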
Load testing
Create a 1 GB file:
fallocate -l 1GB 1gb
Upload it to LeoFS:
time s3cmd put 1gb s3://leofs/
real 0m19.099s
user 0m7.855s
sys 0m1.620s
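To make sure the upload arrived intact, you can download the object back and compare checksums; this is just an illustration using the file names from above:

# download the object back and compare it with the original
s3cmd get s3://leofs/1gb 1gb.check
md5sum 1gb 1gb.check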
Statistics
leofs-adm du for a single node:
leofs-adm du [email protected]
active number of objects: 156
total number of objects: 156
active size of objects: 602954495
total size of objects: 602954495
ratio of active size: 100.0%
last compaction start: ____-__-__ __:__:__
last compaction end: ____-__-__ __:__:__
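The same statistics can be collected for every storage node with a small loop; the node names are taken from the leofs-adm status output above:

# per-node usage for all storage nodes
for node in [email protected] [email protected] \
            [email protected] [email protected]; do
    echo "== ${node} =="
    leofs-adm du "${node}"
done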
Now let's enable NFS on the Leo Gateway server 172.26.9.184.
Install nfs-utils on both the server and the client:
sudo yum install nfs-utils
Following the documentation, edit the configuration file /usr/local/leofs/current/leo_gateway/etc/leo_gateway.conf:
protocol = nfs
On server 172.26.9.184, start rpcbind and restart leofs-gateway:
sudo service rpcbind start
sudo service leofs-gateway restart
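Whether the gateway now answers NFS requests can be checked from the client with the standard nfs-utils tools; treat this as an optional sanity check, since how much showmount reports depends on the gateway's NFS implementation:

rpcinfo -p 172.26.9.184      # the nfs and mountd services should be registered
showmount -e 172.26.9.184    # may list the exported buckets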
On the server where leo_manager is running, create a bucket for NFS and generate a key for mounting it over NFS:
leofs-adm add-bucket test 05236
leofs-adm gen-nfs-mnt-key test 05236 <nfs-client-ip-address>
Mounting over NFS
sudo mkdir /mnt/leofs
## for Linux - "sudo mount -t nfs -o nolock <host>:/<bucket>/<token> <dir>"
sudo mount -t nfs -o nolock <ip-of-the-nfs-server-where-the-gateway-runs>:/<bucket>/<access_key_id>/<key-returned-by-gen-nfs-mnt-key> /mnt/leofs
sudo mount -t nfs -o nolock 172.26.9.184:/test/05236/bb5034f0c740148a346ed663ca0cf5157efb439f /mnt/leofs
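If the mount should come back after a reboot, the same parameters can be put into /etc/fstab on the client; a sketch with the values from the mount command above:

# /etc/fstab (on the NFS client)
172.26.9.184:/test/05236/bb5034f0c740148a346ed663ca0cf5157efb439f  /mnt/leofs  nfs  nolock  0  0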
Check the available disk space from the NFS client.
Disk space, given that each storage node has a 40 GB disk (3 running nodes, 1 attached node):
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 60G 3.6G 57G 6% /mnt/leofs
Installing LeoFS with 6 storage nodes
Inventory (without the builder):
# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.177
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.176
[leo_storage]
172.26.9.178 [email protected]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
172.26.9.185 [email protected]
[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]
[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage
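With this inventory saved, the cluster is installed with the same playbook run as in the sketch at the beginning of the post (the playbook name there is an assumption), after which the state can be checked on the primary manager:

ansible-playbook -i hosts setup_leofs.yml
# on the primary manager (leo_manager_0, 172.26.9.177 in this inventory)
leofs-adm status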
Output of leofs-adm status:
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | d8ff465e
previous ring-hash | d8ff465e
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:30 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
Disk space, given that each storage node has a 40 GB disk (6 running nodes):
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 120G 3.6G 117G 3% /mnt/leofs