Replacing smaller disks with larger disks in Linux

Hello! On the eve of the start of a new cohort of the "Linux Administrator" course, we are publishing useful material written by our student and mentor, a technical support specialist for REG.RU corporate products, Roman Travin.

This article covers two cases of replacing disks, migrating the data to new disks of larger capacity, and then growing the array and the file system. The first case concerns replacing disks that use the same partitioning scheme (MBR/MBR or GPT/GPT). The second concerns replacing MBR-partitioned disks with disks larger than 2 TB, on which a GPT partition table with a bios_grub partition has to be created. In both cases, the disks we migrate the data to are already installed in the server. The file system used on the root partition is ext4.

Case 1: Replacing smaller disks with larger disks (up to 2 TB)

Task: Replace the current disks with larger disks (up to 2 TB) while migrating the data. We have two 240 GB SSDs (RAID-1) with the installed system, and two 1 TB SATA disks to which the system has to be migrated.

Let's look at the current disk layout.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sda2           8:2    0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
sdd              8:48   0 931,5G  0 disk  

Let's check the file system space currently in use.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                32G            0   32G            0% /dev
tmpfs                   32G            0   32G            0% /dev/shm
tmpfs                   32G         9,6M   32G            1% /run
tmpfs                   32G            0   32G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   204G         1,3G  192G            1% /
/dev/md126            1007M         120M  837M           13% /boot
tmpfs                  6,3G            0  6,3G            0% /run/user/0

Before the disk replacement, the file system size is 204 GB. Two software RAID arrays are in use: md126, which is mounted at /boot, and md127, which is used as the physical volume for the volume group vg0.

1. Removing the first disk's partitions from the arrays

Check the state of the arrays.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sda1[0] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sda2[0] sdb2[1]
      233206784 blocks super 1.2 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>

The system uses two arrays: md126 (mount point /boot), consisting of the partitions /dev/sda1 and /dev/sdb1, and md127 (LVM for swap and the root file system), consisting of /dev/sda2 and /dev/sdb2.
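
The `[2/2] [UU]` status fields in /proc/mdstat are what we will watch throughout this procedure. As a small aid, here is a sketch of a POSIX-shell helper (assuming the mdstat format shown in the listings of this article) that prints the name of every array with a missing mirror half:

```shell
#!/bin/sh
# degraded_arrays FILE
# Print the name of every md array in an mdstat-style FILE whose
# status field (e.g. [_U]) shows a missing member.
degraded_arrays() {
  awk '/^md[0-9]+ :/ { name = $1 }
       /\[[U_]+\]$/  { if ($NF ~ /_/) print name }' "$1"
}
```

Pointing it at /proc/mdstat itself (`degraded_arrays /proc/mdstat`) gives a quick one-line health check during the replacement.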

Mark the first disk's partitions used in each array as failed.

mdadm /dev/md126 --fail /dev/sda1

mdadm /dev/md127 --fail /dev/sda2

Remove the partitions of the /dev/sda block device from the arrays.

mdadm /dev/md126 --remove /dev/sda1

mdadm /dev/md127 --remove /dev/sda2

After removing the disk from the arrays, the block devices will look like this.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
sdd              8:48   0 931,5G  0 disk  

Array state after removing the disk.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdb1[1]
      1047552 blocks super 1.2 [2/1] [_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>

2. Copying the partition table to the new disk

You can check which partition table a disk uses with the following command.

fdisk -l /dev/sdb | grep 'Disk label type'

For MBR the output will be:

Disk label type: dos

and for GPT:

Disk label type: gpt
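
When scripting this check across several disks, the label line can be extracted directly. A sketch, assuming the `fdisk -l` output format shown above:

```shell
#!/bin/sh
# label_type FILE
# Extract the partition table type ("dos" or "gpt") from saved
# `fdisk -l` output, e.g.:  fdisk -l /dev/sdb > dump.txt
label_type() {
  sed -n 's/^Disk label type: //p' "$1"
}
```

This lets a migration script branch between the sfdisk (MBR) and sgdisk (GPT) copy commands described below.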

Copying an MBR partition table:

sfdisk -d /dev/sdb | sfdisk /dev/sdc

In this command, the first disk is the one the layout is copied from, the second is the one it is copied to.

NOTE: for GPT the order is reversed: the first disk given is the one the layout is copied to, the second is the one it is copied from. If you mix up the disks, the initially intact partition table will be overwritten and destroyed.
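
Because mixing up the argument order is destructive, it can be worth guarding the clone with a check that the target still looks empty. A sketch with a pure helper (the `lsblk -rno NAME` output format is the assumption; device names are the ones used in this article):

```shell
#!/bin/sh
# has_partitions LISTING
# True if an `lsblk -rno NAME /dev/sdX` listing shows anything
# beyond the bare disk itself (i.e. at least one partition).
has_partitions() {
  [ "$(printf '%s\n' "$1" | wc -l)" -gt 1 ]
}

# Intended use (not run here):
#   if has_partitions "$(lsblk -rno NAME /dev/sdc)"; then
#     echo "refusing to clone onto /dev/sdc" >&2; exit 1
#   fi
#   sfdisk -d /dev/sdb | sfdisk /dev/sdc
```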

Copying a GPT partition table:

sgdisk -R /dev/sdc /dev/sdb

Next, assign a random UUID to the disk (for GPT).

sgdisk -G /dev/sdc

After the command completes, the partitions should appear on the /dev/sdc disk.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
└─sdc2           8:34   0 222,5G  0 part  
sdd              8:48   0 931,5G  0 disk  

If, after this, the partitions on /dev/sdc are not detected by the system, run the command to re-read the partition table.

sfdisk -R /dev/sdc

If the current disks use an MBR table and the data has to be migrated to disks larger than 2 TB, the new disks will need a manually created GPT layout with a bios_grub partition. That case is covered in Part 2 of this article.

3. Adding the new disk's partitions to the arrays

Add the disk's partitions to the corresponding arrays.

mdadm /dev/md126 --add /dev/sdc1

mdadm /dev/md127 --add /dev/sdc2

Check that the partitions were added.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  

After this, wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdc1[2] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdc2[2] sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      [==>..................]  recovery = 10.6% (24859136/233206784) finish=29.3min speed=118119K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

You can continuously monitor the synchronization process with the watch utility.

watch -n 2 cat /proc/mdstat

The -n flag specifies the interval, in seconds, at which the command is re-run to check the progress.
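
watch only displays progress; to make a script block until the rebuild is done, a small polling loop is enough. A sketch (the recovery/resync keywords are the ones mdraid prints, as in the listing above):

```shell
#!/bin/sh
# sync_in_progress FILE
# True while an mdstat-style FILE still reports an active rebuild.
sync_in_progress() {
  grep -qE 'recovery|resync' "$1"
}

# Intended use (not run here):
#   while sync_in_progress /proc/mdstat; do sleep 10; done
```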

4. Repeating steps 1-3 for the second disk being replaced

Mark the second disk's partitions used in each array as failed.

mdadm /dev/md126 --fail /dev/sdb1

mdadm /dev/md127 --fail /dev/sdb2

Remove the partitions of the /dev/sdb block device from the arrays.

mdadm /dev/md126 --remove /dev/sdb1

mdadm /dev/md127 --remove /dev/sdb2

After removing the disk from the arrays, the block devices will look like this.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  

Array state after removing the disk.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdc1[2]
      1047552 blocks super 1.2 [2/1] [U_]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdc2[2]
      233206784 blocks super 1.2 [2/1] [U_]
      bitmap: 1/2 pages [4KB], 65536KB chunk

unused devices: <none>

Copy the MBR partition table from disk /dev/sdc to disk /dev/sdd.

sfdisk -d /dev/sdc | sfdisk /dev/sdd

After the command completes, the partitions should appear on the /dev/sdd disk.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
└─sdd2           8:50   0 222,5G  0 part  

Add the disk's partitions to the arrays.

mdadm /dev/md126 --add /dev/sdd1

mdadm /dev/md127 --add /dev/sdd2

Check that the partitions were added.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

After this, wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdd1[3] sdc1[2]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdd2[3] sdc2[2]
      233206784 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.5% (1200000/233206784) finish=35.4min speed=109090K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

5. Installing GRUB on the new disks

For CentOS:

grub2-install /dev/sdX

For Debian/Ubuntu:

grub-install /dev/sdX

where X is the letter of the block device. In this case, GRUB has to be installed on /dev/sdc and /dev/sdd.
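
Since GRUB has to go onto every new disk, the two invocations can be generated from a list. A sketch (CentOS command name; substitute grub-install for Debian/Ubuntu):

```shell
#!/bin/sh
# grub_install_cmds DISK...
# Emit one grub2-install command per disk, ready to review and run.
grub_install_cmds() {
  for d in "$@"; do
    printf 'grub2-install %s\n' "$d"
  done
}
```

Piping the output to `sh` runs it; printing it first gives a chance to double-check the device names.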

6. Extending the file system (ext4) on the root partition

The new disks /dev/sdc and /dev/sdd are 931.5 GB each. Because the partition table was copied from the smaller disks, the partitions /dev/sdc2 and /dev/sdd2 are only 222.5 GB.

sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

It is necessary to:

  1. Enlarge partition 2 on each of the disks,
  2. Grow the md127 array,
  3. Grow the PV (physical volume),
  4. Grow the LV (logical volume) vg0-root,
  5. Grow the file system.

Using the parted utility, enlarge the /dev/sdc2 partition to its maximum size. Run parted /dev/sdc and print the current partition table with the p command.

As you can see, partition 2 ends at 240 GB. Enlarge it with the command resizepart 2, where 2 is the partition number. The new end can be given as a numeric value, for example 1000GB, or as a share of the disk: 100%. Check that the partition now has its new size.

Repeat the steps above for the /dev/sdd disk. After enlargement, the partitions /dev/sdc2 and /dev/sdd2 are 930.5 GB.

[root@localhost ~]# lsblk                                                 
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 930,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 930,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

After this, grow the md127 array to the maximum.

mdadm --grow /dev/md127 --size=max

Check that the array has grown. Its size is now 930.4 GB.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0 931,5G  0 disk  
├─sdc1           8:33   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc2           8:34   0 930,5G  0 part  
  └─md127        9:127  0 930,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0 931,5G  0 disk  
├─sdd1           8:49   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd2           8:50   0 930,5G  0 part  
  └─md127        9:127  0 930,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

Grow the physical volume. Before growing, check the current state of the PV.

[root@localhost ~]# pvscan
  PV /dev/md127   VG vg0             lvm2 [222,40 GiB / 0    free]
  Total: 1 [222,40 GiB] / in use: 1 [222,40 GiB] / in no VG: 0 [0   ]

As you can see, the PV /dev/md127 occupies 222.4 GB.

Grow the PV with the following command.

pvresize /dev/md127

Check the result of growing the PV.

[root@localhost ~]# pvscan
  PV /dev/md127   VG vg0             lvm2 [930,38 GiB / 707,98 GiB free]
  Total: 1 [930,38 GiB] / in use: 1 [930,38 GiB] / in no VG: 0 [0   ]

Grow the logical volume. Before growing, check the current state of the LV.

[root@localhost ~]# lvscan
  ACTIVE            '/dev/vg0/swap' [<16,00 GiB] inherit
  ACTIVE            '/dev/vg0/root' [<206,41 GiB] inherit

The LV /dev/vg0/root occupies 206.41 GB.

Grow the LV with the following command.

lvextend -l +100%FREE /dev/mapper/vg0-root

Check the completed operation.

[root@localhost ~]# lvscan 
  ACTIVE            '/dev/vg0/swap' [<16,00 GiB] inherit
  ACTIVE            '/dev/vg0/root' [<914,39 GiB] inherit

As you can see, after growing the LV its size became 914.39 GB.

The LV size has increased, but the file system still occupies 204 GB.

Let's grow the file system.

resize2fs /dev/mapper/vg0-root

After the command completes, check the file system size.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                32G            0   32G            0% /dev
tmpfs                   32G            0   32G            0% /dev/shm
tmpfs                   32G         9,5M   32G            1% /run
tmpfs                   32G            0   32G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   900G         1,3G  860G            1% /
/dev/md126            1007M         120M  837M           13% /boot
tmpfs                  6,3G            0  6,3G            0% /run/user/0

The root file system size has increased to 900 GB. After completing these steps, the old disks can be removed.

Case 2: Replacing smaller disks with larger disks (over 2 TB)

Task: Replace the current disks with larger disks (2 x 3 TB) while preserving the data. We have two 240 GB SSDs (RAID-1) with the installed system, and two 3 TB SATA disks to which the system has to be migrated. The current disks use an MBR partition table. Since the new disks are larger than 2 TB, they will have to use a GPT table, because MBR cannot address disks larger than 2 TB.

Let's look at the current disk layout.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sda2           8:2    0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0   2,7T  0 disk  
sdd              8:48   0   2,7T  0 disk  

Check the partition table used on the /dev/sda disk.

[root@localhost ~]# fdisk -l /dev/sda | grep 'Disk label type'
Disk label type: dos

The /dev/sdb disk uses a similar table. Let's check the disk space in use on the system.

[root@localhost ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
devtmpfs                16G            0   16G            0% /dev
tmpfs                   16G            0   16G            0% /dev/shm
tmpfs                   16G         9,5M   16G            1% /run
tmpfs                   16G            0   16G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   204G         1,3G  192G            1% /
/dev/md126            1007M         120M  837M           13% /boot
tmpfs                  3,2G            0  3,2G            0% /run/user/0

As you can see, the root file system occupies 204 GB.

1. Creating a GPT partition table and partitioning the disk

Let's look at the disk's partition layout.

[root@localhost ~]# parted /dev/sda print
Model: ATA KINGSTON SVP200S (scsi)
Disk /dev/sda: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags: 

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1076MB  1075MB  primary               boot, raid
 2      1076MB  240GB   239GB   primary               raid

On the new 3 TB disk we need to create 3 partitions:

  1. A 2 MiB bios_grub partition for BIOS compatibility with GPT,
  2. A partition for the RAID array that will be mounted at /boot,
  3. A partition for the RAID array that will hold the LV root and LV swap.

Install the parted utility with the command yum install -y parted (for CentOS) or apt install -y parted (for Debian/Ubuntu).

Using parted, run the following commands to partition the disk.

Run parted /dev/sdc to enter the disk layout editing mode.

Create a GPT partition table.

(parted) mktable gpt

Create partition 1, the bios_grub partition, and set its flag.

(parted) mkpart primary 1MiB 3MiB
(parted) set 1 bios_grub on  

Create partition 2 and set its flag. This partition will be used as a building block for the RAID array and will be mounted at /boot.

(parted) mkpart primary ext2 3MiB 1028MiB
(parted) set 2 boot on

Create partition 3, which will also be used as a building block for the array that will hold the LVM.

(parted) mkpart primary 1028MiB 100% 

In this case setting the flag is not required, but if necessary it can be set with the following command.

(parted) set 3 raid on

Check the created table.

(parted) p                                                                
Model: ATA TOSHIBA DT01ACA3 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3146kB  2097kB               primary  bios_grub
 2      3146kB  1077MB  1074MB               primary  boot
 3      1077MB  3001GB  3000GB               primary
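
The interactive session above can also be expressed as a single non-interactive invocation, which is convenient when preparing several disks. A sketch with the same offsets and flags as above (parted's -s flag suppresses prompts; the helper only builds the command string for review):

```shell
#!/bin/sh
# gpt_layout_cmd DISK
# Emit the one-shot parted command that reproduces the GPT layout
# created interactively above: bios_grub, /boot block, LVM block.
gpt_layout_cmd() {
  printf 'parted -s %s mklabel gpt mkpart primary 1MiB 3MiB set 1 bios_grub on mkpart primary ext2 3MiB 1028MiB set 2 boot on mkpart primary 1028MiB 100%%\n' "$1"
}
```

Running `gpt_layout_cmd /dev/sdc | sh` would apply it; printing it first lets you verify the device name before touching the disk.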

Assign a new random GUID to the disk.

sgdisk -G /dev/sdd

2. Removing the first disk's partitions from the arrays

Check the state of the arrays.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sda1[0] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sda2[0] sdb2[1]
      233206784 blocks super 1.2 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>

The system uses two arrays: md126 (mount point /boot), consisting of /dev/sda1 and /dev/sdb1, and md127 (LVM for swap and the root file system), consisting of /dev/sda2 and /dev/sdb2.

Mark the first disk's partitions used in each array as failed.

mdadm /dev/md126 --fail /dev/sda1

mdadm /dev/md127 --fail /dev/sda2

Remove the partitions of the /dev/sda block device from the arrays.

mdadm /dev/md126 --remove /dev/sda1

mdadm /dev/md127 --remove /dev/sda2

Check the state of the arrays after removing the disk.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdb1[1]
      1047552 blocks super 1.2 [2/1] [_U]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

3. Adding the new disk's partitions to the arrays

The next step is to add the new disk's partitions to the arrays for synchronization. Let's look at the current disk layout.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
└─sdc3           8:35   0   2,7T  0 part  
sdd              8:48   0   2,7T  0 disk  

The /dev/sdc1 partition is the bios_grub partition and does not take part in creating the arrays. Only /dev/sdc2 and /dev/sdc3 will be used. Add these partitions to the corresponding arrays.

mdadm /dev/md126 --add /dev/sdc2

mdadm /dev/md127 --add /dev/sdc3

Then wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdc2[2] sdb1[1]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdc3[2] sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]
      [>....................]  recovery =  0.2% (619904/233206784) finish=31.2min speed=123980K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>

Disk layout after adding the partitions to the arrays.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdb2           8:18   0 222,5G  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  

4. Removing the second disk's partitions from the arrays

Mark the second disk's partitions used in each array as failed.

mdadm /dev/md126 --fail /dev/sdb1

mdadm /dev/md127 --fail /dev/sdb2

Remove the partitions of the /dev/sdb block device from the arrays.

mdadm /dev/md126 --remove /dev/sdb1

mdadm /dev/md127 --remove /dev/sdb2

5. Copying the GPT partition table and synchronizing the arrays

To copy the GPT partition table we will use the sgdisk utility, which is included in the gdisk package for working with disk partitions and GPT tables.

Installing gdisk for CentOS:

yum install -y gdisk

Installing gdisk for Debian/Ubuntu:

apt install -y gdisk

NOTE: for GPT, the first disk given is the one the layout is copied to, the second is the one it is copied from. If you mix up the disks, the initially intact partition table will be overwritten and destroyed.

Copy the GPT partition table.

sgdisk -R /dev/sdd /dev/sdc

Disk layout after transferring the table to the /dev/sdd disk.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
└─sdd3           8:51   0   2,7T  0 part  

Next, add each of the partitions that take part in software RAID to its array.

mdadm /dev/md126 --add /dev/sdd2

mdadm /dev/md127 --add /dev/sdd3

Wait for the arrays to synchronize.

[root@localhost ~]# cat /proc/mdstat 
Personalities : [raid1] 
md126 : active raid1 sdd2[3] sdc2[2]
      1047552 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md127 : active raid1 sdd3[3] sdc3[2]
      233206784 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.0% (148224/233206784) finish=26.2min speed=148224K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>

After copying the GPT layout to the second new disk, the partitioning will look like this.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
├─sda1           8:1    0     1G  0 part  
└─sda2           8:2    0 222,5G  0 part  
sdb              8:16   0 223,6G  0 disk  
├─sdb1           8:17   0     1G  0 part  
└─sdb2           8:18   0 222,5G  0 part  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
│ └─md126        9:126  0  1023M  0 raid1 /boot
└─sdd3           8:51   0   2,7T  0 part  
  └─md127        9:127  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

After that, install GRUB on the new disks.

Installation for CentOS:

grub2-install /dev/sdX

Installation for Debian/Ubuntu:

grub-install /dev/sdX

where X is the drive letter; in our case the disks are /dev/sdc and /dev/sdd.

Update the information about the arrays.

For CentOS:

mdadm --detail --scan --verbose > /etc/mdadm.conf

For Debian/Ubuntu:

echo "DEVICE partitions" > /etc/mdadm/mdadm.conf

mdadm --detail --scan --verbose | awk '/ARRAY/ {print}' >> /etc/mdadm/mdadm.conf
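
A quick sanity check after regenerating the file is to count the ARRAY lines; with two arrays there should be two. A sketch (path shown for the Debian/Ubuntu case; use /etc/mdadm.conf on CentOS):

```shell
#!/bin/sh
# array_lines FILE
# Count ARRAY definitions in an mdadm.conf-style FILE.
array_lines() {
  grep -c '^ARRAY' "$1"
}

# Intended use (not run here):
#   [ "$(array_lines /etc/mdadm/mdadm.conf)" -eq 2 ] || echo "check mdadm.conf" >&2
```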

Updating the initrd image:
For CentOS:

dracut -f -v --regenerate-all

For Debian/Ubuntu:

update-initramfs -u -k all

Update the GRUB configuration.

For CentOS:

grub2-mkconfig -o /boot/grub2/grub.cfg

For Debian/Ubuntu:

update-grub

After completing these steps, the old disks can be removed.

6. Extending the file system (ext4) on the root partition

Disk layout before extending the file system, after migrating the system to 2 x 3 TB disks (RAID-1).

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
sdb              8:16   0 223,6G  0 disk  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md126        9:126  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdd3           8:51   0   2,7T  0 part  
  └─md126        9:126  0 222,4G  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

The /dev/sdc3 and /dev/sdd3 partitions are now 2.7 TB. Since we created the new disk layout with a GPT table, partition 3 was immediately sized to the maximum available disk space, so there is no need to expand the partition itself.

We now need to:

  1. Expand the md126 array,
  2. Expand the PV (physical volume),
  3. Expand the LV (logical volume) vg0-root,
  4. Expand the file system.
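The four steps boil down to four commands. As a condensed sketch they can be chained in order; `RUN` defaults to `echo` here so the sketch is a dry run that only prints the commands (set `RUN=` to an empty value to execute them as root), and all device and LV names follow this example's layout.

```shell
# Dry-run sketch of the whole expansion, in order. RUN=echo only prints
# the commands; set RUN= (empty) to actually execute them as root.
RUN="${RUN:-echo}"

expand_all() {
    $RUN mdadm --grow /dev/md126 --size=max         && \
    $RUN pvresize /dev/md126                        && \
    $RUN lvextend -l +100%FREE /dev/mapper/vg0-root && \
    $RUN resize2fs /dev/mapper/vg0-root
}

expand_all
```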

1. Expand the md126 array to the maximum size.

mdadm --grow /dev/md126 --size=max

After expanding the md126 array, its size has grown to 2.7 TB.

[root@localhost ~]# lsblk
NAME           MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda              8:0    0 223,6G  0 disk  
sdb              8:16   0 223,6G  0 disk  
sdc              8:32   0   2,7T  0 disk  
├─sdc1           8:33   0     2M  0 part  
├─sdc2           8:34   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdc3           8:35   0   2,7T  0 part  
  └─md126        9:126  0   2,7T  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]
sdd              8:48   0   2,7T  0 disk  
├─sdd1           8:49   0     2M  0 part  
├─sdd2           8:50   0     1G  0 part  
│ └─md127        9:127  0  1023M  0 raid1 /boot
└─sdd3           8:51   0   2,7T  0 part  
  └─md126        9:126  0   2,7T  0 raid1 
    ├─vg0-root 253:0    0 206,4G  0 lvm   /
    └─vg0-swap 253:1    0    16G  0 lvm   [SWAP]

2. Expand the PV (physical volume).

Before expanding, check the current size of the PV /dev/md126.

[root@localhost ~]# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/md126 vg0 lvm2 a--  222,40g    0 

Expand the PV with the following command.

pvresize /dev/md126

Verify the result of the operation.

[root@localhost ~]# pvs
  PV         VG  Fmt  Attr PSize  PFree
  /dev/md126 vg0 lvm2 a--  <2,73t 2,51t

3. Expand the logical volume vg0-root.

After expanding the PV, check the free space in the VG.

[root@localhost ~]# vgs
  VG  #PV #LV #SN Attr   VSize  VFree
  vg0   1   2   0 wz--n- <2,73t 2,51t

Check the space currently occupied by the LVs.

[root@localhost ~]# lvs
  LV   VG  Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root vg0 -wi-ao---- <206,41g                                                    
  swap vg0 -wi-ao----  <16,00g            

The vg0-root LV occupies 206.41 GB.

Expand the LV to use all available free space.

lvextend -l +100%FREE /dev/mapper/vg0-root 

Check the LV size after expanding.

[root@localhost ~]# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root vg0 -wi-ao----   2,71t                                                    
  swap vg0 -wi-ao---- <16,00g

4. Expand the file system (ext4).

Check the current file system size.

[root@localhost ~]# df -h
Filesystem            Size         Used  Avail Use%  Mounted on
devtmpfs                16G            0   16G            0% /dev
tmpfs                   16G            0   16G            0% /dev/shm
tmpfs                   16G         9,6M   16G            1% /run
tmpfs                   16G            0   16G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   204G         1,4G  192G            1% /
/dev/md127            1007M         141M  816M           15% /boot
tmpfs                  3,2G            0  3,2G            0% /run/user/0

The /dev/mapper/vg0-root volume still shows 204 GB after expanding the LV, since the file system itself has not yet been resized.

Expand the file system.

resize2fs /dev/mapper/vg0-root 

Check the file system size after expanding it.

[root@localhost ~]# df -h
Filesystem            Size         Used  Avail Use%  Mounted on
devtmpfs                16G            0   16G            0% /dev
tmpfs                   16G            0   16G            0% /dev/shm
tmpfs                   16G         9,6M   16G            1% /run
tmpfs                   16G            0   16G            0% /sys/fs/cgroup
/dev/mapper/vg0-root   2,7T         1,4G  2,6T            1% /
/dev/md127            1007M         141M  816M           15% /boot
tmpfs                  3,2G            0  3,2G            0% /run/user/0

The file system has been expanded to span the entire volume.

source: www.habr.com
