This article covers two cases of replacing disks and migrating the data to new, larger disks, with subsequent extension of the array and the filesystem. The first case concerns replacing disks with the same partitioning scheme (MBR to MBR, or GPT to GPT); the second concerns replacing MBR disks with disks larger than 2 TB, which requires creating a GPT table and a biosboot partition. In both cases, the disks we migrate the data to are already installed in the server. The filesystem on the root partition is ext4.
Case 1: Replacing smaller disks with larger disks (up to 2 TB)
Task: replace the current disks with larger ones (up to 2 TB) while migrating the data. Here we have 2 x 240 GB SSDs (RAID-1) with the system installed, and 2 x 1 TB SATA disks that the system needs to be moved to.
Let's look at the current disk layout.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sda2 8:2 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
sdd 8:48 0 931,5G 0 disk
Let's check the filesystem space currently in use.
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 9,6M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 204G 1,3G 192G 1% /
/dev/md126 1007M 120M 837M 13% /boot
tmpfs 6,3G 0 6,3G 0% /run/user/0
The filesystem size before the disk replacement is 204 GB. Two software RAID arrays are in use: md126, which is mounted at /boot, and md127, which serves as the physical volume for the vg0 volume group.
1. Removing the first disk's partitions from the arrays
Checking the array status.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda1[0] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sda2[0] sdb2[1]
233206784 blocks super 1.2 [2/2] [UU]
bitmap: 0/2 pages [0KB], 65536KB chunk
unused devices: <none>
The system uses two arrays: md126 (mount point /boot), consisting of /dev/sda1 and /dev/sdb1, and md127 (LVM for swap and the filesystem root), consisting of /dev/sda2 and /dev/sdb2.
We mark the first disk's partitions used in each array as failed.
mdadm /dev/md126 --fail /dev/sda1
mdadm /dev/md127 --fail /dev/sda2
We remove the /dev/sda partitions from the arrays.
mdadm /dev/md126 --remove /dev/sda1
mdadm /dev/md127 --remove /dev/sda2
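The fail/remove pairs above can be generated per disk so the two arrays stay in step. A minimal sketch that only prints the commands for review (the array names md126/md127 and the partition numbering follow this article; adjust them to your layout):

```shell
# Print (do not run) the mdadm fail/remove commands for one disk.
disk=/dev/sda
cmds=""
for pair in md126:1 md127:2; do
  md=${pair%%:*}      # array name
  part=${pair##*:}    # partition number on $disk
  cmds="${cmds}mdadm /dev/$md --fail ${disk}${part}
mdadm /dev/$md --remove ${disk}${part}
"
done
printf '%s' "$cmds"
```

Run the printed commands by hand only after double-checking them against `cat /proc/mdstat`.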
After removing the disk from the arrays, the block device information looks like this.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
sdd 8:48 0 931,5G 0 disk
Array status after removing the disk.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb1[1]
1047552 blocks super 1.2 [2/1] [_U]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
bitmap: 1/2 pages [4KB], 65536KB chunk
unused devices: <none>
2. Copying the partition table to the new disk
You can check the partition table in use on a disk with the following command.
fdisk -l /dev/sdb | grep 'Disk label type'
For MBR the output will be:
Disk label type: dos
for GPT:
Disk label type: gpt
Copying the partition table for MBR:
sfdisk -d /dev/sdb | sfdisk /dev/sdc
In this command, the first disk listed is the one the table is copied from, and the second is the one it is copied to.
Caution: with GPT the first disk listed is the one the table is copied to, and the second is the one it is copied from. If you mix the disks up, the originally intact partition table will be overwritten and destroyed.
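Since the argument order is opposite between the two tools, it helps to spell both commands out before running them. A sketch that only prints them, taking /dev/sdb as the intact source disk and /dev/sdc as the new target (as in this article):

```shell
# The two copy commands, printed for comparison; note the reversed order.
src=/dev/sdb   # disk with the good partition table
dst=/dev/sdc   # new, empty disk
mbr_cmd="sfdisk -d $src | sfdisk $dst"   # MBR: source first, target second
gpt_cmd="sgdisk -R $dst $src"            # GPT: target first, source second
echo "$mbr_cmd"
echo "$gpt_cmd"
```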
Copying the partition table for GPT:
sgdisk -R /dev/sdc /dev/sdb
Next, give the disk new random GUIDs (GPT only).
sgdisk -G /dev/sdc
After the command completes, the partitions should appear on /dev/sdc.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
└─sdc2 8:34 0 222,5G 0 part
sdd 8:48 0 931,5G 0 disk
If the partitions on /dev/sdc are not detected after these steps, run the command to re-read the partition table.
sfdisk -R /dev/sdc
If the current disks use an MBR table and the data needs to be migrated to disks larger than 2 TB, the new disks will require a manually created GPT layout with a biosboot partition. That case is covered in Part 2 of this article.
3. Adding the new disk's partitions to the arrays
Let's add the disk's partitions to the corresponding arrays.
mdadm /dev/md126 --add /dev/sdc1
mdadm /dev/md127 --add /dev/sdc2
We verify that the partitions have been added.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
After that, we wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdc1[2] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdc2[2] sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
[==>..................] recovery = 10.6% (24859136/233206784) finish=29.3min speed=118119K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
You can keep an eye on the synchronization process with the watch utility.
watch -n 2 cat /proc/mdstat
The -n parameter specifies the interval, in seconds, at which the command is rerun to check the progress.
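If you only want the rebuild percentage (for a monitoring script, say), it can be cut out of the mdstat text with sed. The sketch below parses a sample string rather than the live /proc/mdstat:

```shell
# Extract "recovery = NN.N%" from mdstat-style text (sample input).
mdstat='md127 : active raid1 sdc2[2] sdb2[1]
      [==>..................]  recovery = 10.6% (24859136/233206784) finish=29.3min speed=118119K/sec'
pct=$(printf '%s\n' "$mdstat" | sed -n 's/.*recovery = \([0-9.]*%\).*/\1/p')
echo "rebuild at $pct"
```

On a real system, replace the sample string with the contents of /proc/mdstat.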
4. Repeat steps 1 - 3 for the second disk being replaced.
We mark the second disk's partitions used in each array as failed.
mdadm /dev/md126 --fail /dev/sdb1
mdadm /dev/md127 --fail /dev/sdb2
We remove the /dev/sdb partitions from the arrays.
mdadm /dev/md126 --remove /dev/sdb1
mdadm /dev/md127 --remove /dev/sdb2
After removing the disk from the arrays, the block device information looks like this.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
Array status after removing the disk.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdc1[2]
1047552 blocks super 1.2 [2/1] [U_]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdc2[2]
233206784 blocks super 1.2 [2/1] [U_]
bitmap: 1/2 pages [4KB], 65536KB chunk
unused devices: <none>
Copy the MBR partition table from /dev/sdc to /dev/sdd.
sfdisk -d /dev/sdc | sfdisk /dev/sdd
After the command completes, the partitions should appear on /dev/sdd.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
└─sdd2 8:50 0 222,5G 0 part
Adding the disk's partitions to the arrays.
mdadm /dev/md126 --add /dev/sdd1
mdadm /dev/md127 --add /dev/sdd2
We verify that the partitions have been added.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
After that, we wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdd1[3] sdc1[2]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdd2[3] sdc2[2]
233206784 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.5% (1200000/233206784) finish=35.4min speed=109090K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
5. Installing GRUB on the new disks
For CentOS:
grub2-install /dev/sdX
For Debian/Ubuntu:
grub-install /dev/sdX
where X is the block device letter. In this case, GRUB must be installed on /dev/sdc and /dev/sdd.
6. Extending the root partition's filesystem (ext4)
The new disks /dev/sdc and /dev/sdd are 931.5 GB each, but because the partition table was copied from the smaller disks, partitions /dev/sdc2 and /dev/sdd2 are only 222.5 GB.
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
We need to:
- extend partition 2 on each disk,
- grow the md127 array,
- extend the PV (physical volume),
- extend the LV (logical volume) vg0-root,
- extend the filesystem.
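The five steps above boil down to the following command sequence. The sketch stores it as a plan and prints it instead of executing anything (the device, array, and VG names are the ones used in this article):

```shell
# The case-1 grow chain, printed for review rather than executed.
plan=$(cat <<'EOF'
parted /dev/sdc resizepart 2 100%
parted /dev/sdd resizepart 2 100%
mdadm --grow /dev/md127 --size=max
pvresize /dev/md127
lvextend -l +100%FREE /dev/mapper/vg0-root
resize2fs /dev/mapper/vg0-root
EOF
)
printf '%s\n' "$plan"
```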
Using the parted utility, we extend /dev/sdc2 to its maximum size. Run parted /dev/sdc (1) and view the current partition table with the p command (2). As you can see, partition 2 currently ends at 240 GB. We extend it with the resizepart 2 command, where 2 is the partition number (3). The new size can be given numerically, for example 1000GB, or as a fraction of the disk: 100%. We then check that the partition has the new size (4).
Repeat the steps above for /dev/sdd. After the extension, /dev/sdc2 and /dev/sdd2 are 930.5 GB each.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 930,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 930,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
After that, we grow the md127 array to its maximum size.
mdadm --grow /dev/md127 --size=max
We check that the array has grown; its size is now 930.4 GB.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 930,5G 0 part
└─md127 9:127 0 930,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 930,5G 0 part
└─md127 9:127 0 930,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Extending the physical volume. Before extending it, let's check the current state of the PV.
[root@localhost ~]# pvscan
PV /dev/md127 VG vg0 lvm2 [222,40 GiB / 0 free]
Total: 1 [222,40 GiB] / in use: 1 [222,40 GiB] / in no VG: 0 [0 ]
As you can see, PV /dev/md127 occupies 222.4 GB. We extend the PV with the following command.
pvresize /dev/md127
Checking the result of the PV extension.
[root@localhost ~]# pvscan
PV /dev/md127 VG vg0 lvm2 [930,38 GiB / 707,98 GiB free]
Total: 1 [930,38 GiB] / in use: 1 [930,38 GiB] / in no VG: 0 [0 ]
Extending the logical volume. Before extending it, let's check the current state of the LV (1).
[root@localhost ~]# lvscan
ACTIVE '/dev/vg0/swap' [<16,00 GiB] inherit
ACTIVE '/dev/vg0/root' [<206,41 GiB] inherit
LV /dev/vg0/root occupies 206.41 GB. We extend the LV with the following command (2).
lvextend -l +100%FREE /dev/mapper/vg0-root
We check the completed step (3).
[root@localhost ~]# lvscan
ACTIVE '/dev/vg0/swap' [<16,00 GiB] inherit
ACTIVE '/dev/vg0/root' [<914,39 GiB] inherit
As you can see, after extending the LV it occupies 914.39 GB of disk space. The LV has grown (4), but the filesystem still occupies 204 GB (5). Let's extend the filesystem.
resize2fs /dev/mapper/vg0-root
After the command completes, we check the filesystem size.
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 9,5M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 900G 1,3G 860G 1% /
/dev/md126 1007M 120M 837M 13% /boot
tmpfs 6,3G 0 6,3G 0% /run/user/0
The root filesystem has grown to 900 GB. With all steps complete, the old disks can be removed.
Case 2: Replacing smaller disks with larger disks (over 2 TB)
Task: replace the current disks with larger ones (2 x 3 TB) while preserving the data. Here we have 2 x 240 GB SSDs (RAID-1) with the system installed, and 2 x 3 TB SATA disks the system needs to be moved to. The current disks use an MBR partition table. Since the new disks are larger than 2 TB, they must use a GPT table: MBR cannot address disks larger than 2 TB.
Let's look at the current disk layout.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sda2 8:2 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 2,7T 0 disk
sdd 8:48 0 2,7T 0 disk
Let's check the partition table used on /dev/sda.
[root@localhost ~]# fdisk -l /dev/sda | grep 'Disk label type'
Disk label type: dos
/dev/sdb uses the same partition table. Let's check the disk space in use on the system.
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9,5M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 204G 1,3G 192G 1% /
/dev/md126 1007M 120M 837M 13% /boot
tmpfs 3,2G 0 3,2G 0% /run/user/0
As you can see, the filesystem root occupies 204 GB.
1. Creating a GPT partition table and partitioning the disk
Let's examine the disk layout and sector sizes.
[root@localhost ~]# parted /dev/sda print
Model: ATA KINGSTON SVP200S (scsi)
Disk /dev/sda: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End    Size   Type     File system  Flags
 1      1049kB  1076MB 1075MB primary               boot, raid
 2      1076MB  240GB  239GB  primary               raid
On the new 3 TB disk we need to create three partitions:
- a 2 MiB bios_grub partition, for BIOS compatibility with GPT,
- a RAID-array partition that will be mounted at /boot,
- a RAID-array partition that will hold the root LV and the swap LV.
Install the parted utility with yum install -y parted (for CentOS) or apt install -y parted (for Debian/Ubuntu).
Using parted, we run the following commands to partition the disk. Run parted /dev/sdc to enter the disk-layout editing mode.
Create a GPT partition table.
(parted) mktable gpt
Create partition 1, the bios_grub partition, and set its flag.
(parted) mkpart primary 1MiB 3MiB
(parted) set 1 bios_grub on
Create partition 2 and set its flag. This partition will be used as a block device for a RAID array and mounted at /boot.
(parted) mkpart primary ext2 3MiB 1028MiB
(parted) set 2 boot on
We create partition 3, which will also serve as an array member and hold the LVM.
(parted) mkpart primary 1028MiB 100%
In this case setting a flag is not required, but if needed it can be set with the following command.
(parted) set 3 raid on
We check the resulting table.
(parted) p
Model: ATA TOSHIBA DT01ACA3 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3146kB  2097kB               primary  bios_grub
 2      3146kB  1077MB  1074MB               primary  boot
 3      1077MB  3001GB  3000GB               primary
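The same layout can be created non-interactively with parted's -s option. This sketch only assembles and prints the one-shot command (assuming the empty 3 TB disk is /dev/sdc), so nothing is written to disk:

```shell
# Build the scripted parted command equivalent to the interactive session.
dev=/dev/sdc
script="parted -s $dev mktable gpt \
mkpart primary 1MiB 3MiB set 1 bios_grub on \
mkpart primary ext2 3MiB 1028MiB set 2 boot on \
mkpart primary 1028MiB 100%"
echo "$script"
```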
We assign the disk a new random GUID.
sgdisk -G /dev/sdd
2. Removing the first disk's partitions from the arrays
Checking the array status.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda1[0] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sda2[0] sdb2[1]
233206784 blocks super 1.2 [2/2] [UU]
bitmap: 0/2 pages [0KB], 65536KB chunk
unused devices: <none>
The system uses two arrays: md126 (mount point /boot), consisting of /dev/sda1 and /dev/sdb1, and md127 (LVM for swap and the filesystem root), consisting of /dev/sda2 and /dev/sdb2.
We mark the first disk's partitions used in each array as failed.
mdadm /dev/md126 --fail /dev/sda1
mdadm /dev/md127 --fail /dev/sda2
We remove the /dev/sda partitions from the arrays.
mdadm /dev/md126 --remove /dev/sda1
mdadm /dev/md127 --remove /dev/sda2
Checking the array status after removing the disk.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb1[1]
1047552 blocks super 1.2 [2/1] [_U]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
3. Adding the new disk's partitions to the arrays
The next step is to add the new disk's partitions to the arrays for synchronization. First, let's look at the current disk layout.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
└─sdc3 8:35 0 2,7T 0 part
sdd 8:48 0 2,7T 0 disk
Partition /dev/sdc1 is the bios_grub partition and takes no part in the arrays; only /dev/sdc2 and /dev/sdc3 will be used. We add these partitions to the corresponding arrays.
mdadm /dev/md126 --add /dev/sdc2
mdadm /dev/md127 --add /dev/sdc3
Then we wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdc2[2] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdc3[2] sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
[>....................] recovery = 0.2% (619904/233206784) finish=31.2min speed=123980K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
The disk layout after adding the partitions to the arrays.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
4. Removing the second disk's partitions from the arrays
We mark the second disk's partitions used in each array as failed.
mdadm /dev/md126 --fail /dev/sdb1
mdadm /dev/md127 --fail /dev/sdb2
We remove the /dev/sdb partitions from the arrays.
mdadm /dev/md126 --remove /dev/sdb1
mdadm /dev/md127 --remove /dev/sdb2
5. Copying the GPT partition table and synchronizing the arrays
To copy the GPT partition table we use the sgdisk utility, which ships in the gdisk package for working with disk partitions and GPT tables.
Installing gdisk on CentOS:
yum install -y gdisk
Installing gdisk on Debian/Ubuntu:
apt install -y gdisk
Caution: with GPT the first disk listed is the one the table is copied to, and the second is the one it is copied from. If you mix the disks up, the originally intact partition table will be overwritten and destroyed.
Copy the GPT partition table.
sgdisk -R /dev/sdd /dev/sdc
The disk layout after the table has been transferred to /dev/sdd.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
└─sdd3 8:51 0 2,7T 0 part
Next, we add the partitions that take part in the software RAID arrays.
mdadm /dev/md126 --add /dev/sdd2
mdadm /dev/md127 --add /dev/sdd3
We wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdd2[3] sdc2[2]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md127 : active raid1 sdd3[3] sdc3[2]
233206784 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.0% (148224/233206784) finish=26.2min speed=148224K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
Once the second new disk's partitions have joined the arrays, the layout looks like this.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd3 8:51 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Next, reinstall GRUB on the new disks.
For CentOS:
grub2-install /dev/sdX
For Debian/Ubuntu:
grub-install /dev/sdX
where X is the drive letter; in our case, the drives /dev/sdc and /dev/sdd.
We update the array information. For CentOS:
mdadm --detail --scan --verbose > /etc/mdadm.conf
For Debian/Ubuntu:
echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
mdadm --detail --scan --verbose | awk '/ARRAY/ {print}' >> /etc/mdadm/mdadm.conf
We update the initrd image. For CentOS:
dracut -f -v --regenerate-all
For Debian/Ubuntu:
update-initramfs -u -k all
We update the GRUB configuration. For CentOS:
grub2-mkconfig -o /boot/grub2/grub.cfg
For Debian/Ubuntu:
update-grub
With these steps complete, the old disks can be removed.
6. Extending the root partition's filesystem (ext4)
The disk layout before the filesystem extension, after migrating the system to the 2 x 3 TB disks (RAID-1).
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
sdb 8:16 0 223,6G 0 disk
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md126 9:126 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdd3 8:51 0 2,7T 0 part
└─md126 9:126 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Partitions /dev/sdc3 and /dev/sdd3 now occupy 2.7 TB each. Since we created the GPT layout on the new disks from scratch, partition 3 was immediately sized to the maximum available disk space, so there is no need to extend the partition.
We need to:
- grow the md126 array,
- extend the PV (physical volume),
- extend the LV (logical volume) vg0-root,
- extend the filesystem.
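Unlike case 1, there is no partition to resize, so the chain is one step shorter. Again a print-only sketch using this article's names:

```shell
# The case-2 grow chain (no resizepart needed), printed for review.
plan2=$(cat <<'EOF'
mdadm --grow /dev/md126 --size=max
pvresize /dev/md126
lvextend -l +100%FREE /dev/mapper/vg0-root
resize2fs /dev/mapper/vg0-root
EOF
)
printf '%s\n' "$plan2"
```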
1. Grow the md126 array to its maximum size.
mdadm --grow /dev/md126 --size=max
After growing the md126 array, the occupied space has increased to 2.7 TB.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
sdb 8:16 0 223,6G 0 disk
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md126 9:126 0 2,7T 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdd3 8:51 0 2,7T 0 part
└─md126 9:126 0 2,7T 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Extending the physical volume. Before extending it, check the currently occupied size of PV /dev/md126.
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md126 vg0 lvm2 a-- 222,40g 0
We extend the PV with the following command.
pvresize /dev/md126
We check the completed step.
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md126 vg0 lvm2 a-- <2,73t 2,51t
Extending the logical volume vg0-root. After extending the PV, let's check the space used by the VG.
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 2 0 wz--n- <2,73t 2,51t
Let's check the space used by the LV.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root vg0 -wi-ao---- <206,41g
swap vg0 -wi-ao---- <16,00g
The vg0-root volume occupies 206.41 GB. We extend the LV to the maximum available space.
lvextend -l +100%FREE /dev/mapper/vg0-root
Checking the LV after the extension.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root vg0 -wi-ao---- 2,71t
swap vg0 -wi-ao---- <16,00g
Extending the filesystem (ext4). Let's check the current filesystem size.
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9,6M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 204G 1,4G 192G 1% /
/dev/md127 1007M 141M 816M 15% /boot
tmpfs 3,2G 0 3,2G 0% /run/user/0
The /dev/mapper/vg0-root volume still shows 204 GB after the LV extension. We extend the filesystem.
resize2fs /dev/mapper/vg0-root
Checking the filesystem size after extending it.
[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9,6M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 2,7T 1,4G 2,6T 1% /
/dev/md127 1007M 141M 816M 15% /boot
tmpfs 3,2G 0 3,2G 0% /run/user/0
The filesystem has been resized to span the entire volume.
Source: mapenzi.com