Installing the fault-tolerant, distributed LeoFS storage, compatible with clients using S3, NFS
According to Opennet: LeoFS is a fault-tolerant distributed object storage, compatible with clients that use the Amazon S3 API and a REST API, and it also supports operating as an NFS server. There are optimizations for storing both small and very large objects, there is a built-in caching mechanism, and storage can be replicated between data centers. The project's goals include achieving 99.9999999% reliability through redundant replication and the elimination of single points of failure. The project code is written in Erlang.
LeoFS consists of three components:
LeoFS Storage - handles adding, retrieving and deleting objects and metadata, and is responsible for replication, recovery and queuing of client requests.
LeoFS Gateway - handles HTTP requests and returns responses to clients using the REST API or S3 API, and caches the most frequently requested data in memory and on disk.
LeoFS Manager - monitors the operation of the LeoFS Gateway and LeoFS Storage nodes, tracking node status and verifying checksums. It guarantees data integrity and high storage availability.
In this post we will install LeoFS using ansible-playbook and test S3 and NFS.
If you try to install LeoFS using the official playbooks, you will run into various errors: 1, 2. In this post I will describe what has to be done to avoid those errors.
You need to install netcat on the host where you will run the playbook.
Inventory example
Inventory example (hosts.sample in the repository):
# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"
#[builder]
#172.26.9.177
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.176
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.178
[leo_storage]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]
[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage
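With the inventory ready, the playbooks are run in the usual Ansible way. A minimal sketch, assuming a checkout of the leo-project/leofs_ansible repository (the repository URL and the playbook file name are assumptions; check your copy):

```shell
# Clone the playbooks and run them against the inventory above
# (URL and playbook name are assumptions; adjust to your checkout)
git clone https://github.com/leo-project/leofs_ansible.git
cd leofs_ansible
cp hosts.sample hosts        # then edit hosts as shown above
ansible-playbook -i hosts setup-leofs.yml
```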
Preparing the servers
Disable SELinux. I hope the community will create SELinux policies for LeoFS.
- name: Install libselinux as prerequisite for SELinux Ansible module
  yum:
    name: "{{ item }}"
    state: latest
  with_items:
    - libselinux-python
    - libsemanage-python

- name: Disable SELinux at next reboot
  selinux:
    state: disabled

- name: Set SELinux in permissive mode until the machine is rebooted
  command: setenforce 0
  ignore_errors: true
  changed_when: false
Install netcat and redhat-lsb-core. netcat is needed for leofs-adm, and redhat-lsb-core is needed to determine the OS version here.
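On CentOS/RHEL this can be done with yum. A sketch, assuming CentOS/RHEL 7, where the netcat package is named nmap-ncat (older releases ship it as nc):

```shell
# netcat is used by leofs-adm; redhat-lsb-core is used to detect the OS version
# (the package name "nmap-ncat" is a CentOS/RHEL 7 assumption; use "nc" on 6)
sudo yum install -y nmap-ncat redhat-lsb-core
```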
Which manager is Primary and which is Secondary can be looked up in the ansible playbooks.
The output will look like this:
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | a0314afb
previous ring-hash | a0314afb
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:47 +0000
S | [email protected] | attached | | | | 2019-12-05 10:33:58 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
G | [email protected] | running | | a0314afb | a0314afb | 2019-12-05 10:33:49 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
Create a bucket:
leofs-adm add-bucket leofs 9c2615f32e81e6a1caf5
OK
Bucket list:
leofs-adm get-buckets
cluster id | bucket | owner | permissions | created at
-------------+----------+--------+------------------+---------------------------
leofs_1 | leofs | leofs | Me(full_control) | 2019-12-02 10:44:02 +0000
Configuring s3cmd
In the HTTP Proxy server name field, enter the IP address of the Gateway server.
s3cmd --configure
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.
Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [9c2615f32e81e6a1caf5]:
Secret Key [8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb]:
Default Region [US]:
Use "s3.amazonaws.com" for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]:
Use "%(bucket)s.s3.amazonaws.com" to the target Amazon S3. "%(bucket)s" and "%(location)s" vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: leofs
Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:
When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:
On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can't connect to S3 directly
HTTP Proxy server name [172.26.9.180]:
HTTP Proxy server port [8080]:
New settings:
Access Key: 9c2615f32e81e6a1caf5
Secret Key: 8aaaa35c1ad78a2cbfa1a6cd49ba8aaeb3ba39eb
Default Region: US
S3 Endpoint: s3.amazonaws.com
DNS-style bucket+hostname:port template for accessing a bucket: leofs
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name: 172.26.9.180
HTTP Proxy server port: 8080
Test access with supplied credentials? [Y/n] Y
Please wait, attempting to list all buckets...
Success. Your access key and secret key worked fine :-)
Now verifying that encryption works...
Not configured. Never mind.
Save settings? [y/N] y
Configuration saved to '/home/user/.s3cfg'
If you get ERROR: S3 error: 403 (AccessDenied): Access Denied:
s3cmd put test.py s3://leofs/
upload: 'test.py' -> 's3://leofs/test.py' [1 of 1]
382 of 382 100% in 0s 3.40 kB/s done
ERROR: S3 error: 403 (AccessDenied): Access Denied
Then you need to change signature_v2 to True in the s3cmd config. Details in this issue.
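The flag can be flipped in place, assuming the default ~/.s3cfg location that s3cmd --configure writes to:

```shell
# Enable v2 signatures in the s3cmd configuration
sed -i 's/^signature_v2 = False/signature_v2 = True/' ~/.s3cfg
grep '^signature_v2' ~/.s3cfg    # should now show: signature_v2 = True
```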
If signature_v2 is False, you will get an error like this:
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 3 sec...
WARNING: Retrying failed request: /?delimiter=%2F (getaddrinfo() argument 2 must be integer or string)
WARNING: Waiting 6 sec...
ERROR: Test failed: Request failed for: /?delimiter=%2F
Load testing
Create a 1GB file:
fallocate -l 1GB 1gb
Upload it to LeoFS:
time s3cmd put 1gb s3://leofs/
real 0m19.099s
user 0m7.855s
sys 0m1.620s
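From these numbers you can estimate the upload throughput; fallocate's GB suffix is decimal, so the file is 10^9 bytes:

```shell
# 10^9 bytes in 19.099 s of wall-clock time is roughly 52.4 MB/s
awk 'BEGIN { printf "%.1f MB/s\n", 1e9 / 19.099 / 1e6 }'
```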
Stats
leofs-adm du for node 1:
leofs-adm du [email protected]
active number of objects: 156
total number of objects: 156
active size of objects: 602954495
total size of objects: 602954495
ratio of active size: 100.0%
last compaction start: ____-__-__ __:__:__
last compaction end: ____-__-__ __:__:__
We can see that the compaction lines at the end contain no information yet.
Let's see where this file is located:
leofs-adm whereis leofs/1gb
Enable NFS on the Leo Gateway server 172.26.9.184.
Install nfs-utils on the server and on the client:
sudo yum install nfs-utils
Following the instructions, edit the configuration file /usr/local/leofs/current/leo_gateway/etc/leo_gateway.conf:
protocol = nfs
On server 172.26.9.184, start rpcbind and leofs-gateway:
sudo service rpcbind start
sudo service leofs-gateway restart
On the server where leo_manager is running, create a bucket for NFS and generate a key for connecting to NFS:
leofs-adm add-bucket test 05236
leofs-adm gen-nfs-mnt-key test 05236 <nfs-client-ip-address>
Mount NFS:
sudo mkdir /mnt/leofs
## for Linux - "sudo mount -t nfs -o nolock <host>:/<bucket>/<token> <dir>"
sudo mount -t nfs -o nolock <nfs-server-ip-where-the-gateway-is-installed>:/<bucket>/<access_key_id>/<key-obtained-from-gen-nfs-mnt-key> /mnt/leofs
sudo mount -t nfs -o nolock 172.26.9.184:/test/05236/bb5034f0c740148a346ed663ca0cf5157efb439f /mnt/leofs
Checking disk space from the NFS client
Disk space, given that each storage node has a 40GB disk (3 running nodes, 1 attached node); with Total replicas = 2, the usable capacity is 3 × 40GB / 2 = 60GB:
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 60G 3.6G 57G 6% /mnt/leofs
Installing LeoFS with 6 storage nodes
Inventory (without the builder):
# Please check roles/common/vars/leofs_releases for available versions
[all:vars]
leofs_version=1.4.3
build_temp_path="/tmp/leofs_builder"
build_install_path="/tmp/"
build_branch="master"
source="package"
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_0]
172.26.9.177
# nodename of leo_manager_0 and leo_manager_1 are set at group_vars/all
[leo_manager_1]
172.26.9.176
[leo_storage]
172.26.9.178 [email protected]
172.26.9.179 [email protected]
172.26.9.181 [email protected]
172.26.9.182 [email protected]
172.26.9.183 [email protected]
172.26.9.185 [email protected]
[leo_gateway]
172.26.9.180 [email protected]
172.26.9.184 [email protected]
[leofs_nodes:children]
leo_manager_0
leo_manager_1
leo_gateway
leo_storage
leofs-adm status output
[System Confiuration]
-----------------------------------+----------
Item | Value
-----------------------------------+----------
Basic/Consistency level
-----------------------------------+----------
system version | 1.4.3
cluster Id | leofs_1
DC Id | dc_1
Total replicas | 2
number of successes of R | 1
number of successes of W | 1
number of successes of D | 1
number of rack-awareness replicas | 0
ring size | 2^128
-----------------------------------+----------
Multi DC replication settings
-----------------------------------+----------
[mdcr] max number of joinable DCs | 2
[mdcr] total replicas per a DC | 1
[mdcr] number of successes of R | 1
[mdcr] number of successes of W | 1
[mdcr] number of successes of D | 1
-----------------------------------+----------
Manager RING hash
-----------------------------------+----------
current ring-hash | d8ff465e
previous ring-hash | d8ff465e
-----------------------------------+----------
[State of Node(s)]
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
type | node | state | rack id | current ring | prev ring | updated at
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:30 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
S | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:29 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
G | [email protected] | running | | d8ff465e | d8ff465e | 2019-12-06 05:18:31 +0000
-------+----------------------+--------------+---------+----------------+----------------+----------------------------
Disk space, given that each storage node has a 40GB disk (6 running nodes); with Total replicas = 2, the usable capacity is 6 × 40GB / 2 = 120GB:
df -hP
Filesystem Size Used Avail Use% Mounted on
172.26.9.184:/test/05236/e7298032e78749149dd83a1e366afb328811c95b 120G 3.6G 117G 3% /mnt/leofs