This article covers two cases of replacing disks and migrating data to new, larger disks while further extending the array and the file system. The first case concerns replacing disks with the same partition table type (MBR to MBR, or GPT to GPT); the second concerns replacing MBR-partitioned disks with disks larger than 2 TB, on which you will need to set up a GPT partition table with a biosboot partition. In both cases, the disks to which the data is migrated are already installed in the server. The file system used on the root partition is ext4.
Case 1: Replacing smaller disks with larger disks (up to 2 TB)
Task: replace the current disks with larger disks (up to 2 TB) while migrating the data. We have 2 x 240 GB SSDs (RAID-1) with the installed system and 2 x 1 TB SATA disks to which the system must be migrated.
Let's look at the current disk layout.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sda2 8:2 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
sdd 8:48 0 931,5G 0 disk
Check the file system space currently in use.
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 9,6M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 204G 1,3G 192G 1% /
/dev/md126 1007M 120M 837M 13% /boot
tmpfs 6,3G 0 6,3G 0% /run/user/0
Before replacing the disks, the file system occupies 204 GB. Two software arrays are in use: md126, which is mounted at /boot, and md127, which is used as the physical volume for the volume group vg0.
1. Removing disk partitions from the arrays
Checking the array status.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda1[0] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sda2[0] sdb2[1]
233206784 blocks super 1.2 [2/2] [UU]
bitmap: 0/2 pages [0KB], 65536KB chunk
unused devices: <none>
The system uses two arrays: md126 (mount point /boot), consisting of the partitions /dev/sda1 and /dev/sdb1, and md127 (swap and the root file system on LVM), consisting of /dev/sda2 and /dev/sdb2.
Mark the first disk's partitions in each array as failed.
mdadm /dev/md126 --fail /dev/sda1
mdadm /dev/md127 --fail /dev/sda2
Remove the /dev/sda block device partitions from the arrays.
mdadm /dev/md126 --remove /dev/sda1
mdadm /dev/md127 --remove /dev/sda2
After removing the disk from the arrays, the block device information looks like this.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
sdd 8:48 0 931,5G 0 disk
The array status after removing the disk.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb1[1]
1047552 blocks super 1.2 [2/1] [_U]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
bitmap: 1/2 pages [4KB], 65536KB chunk
unused devices: <none>
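The degraded state can also be checked programmatically: in /proc/mdstat a healthy two-disk RAID-1 shows [UU], while a degraded one shows [_U] or [U_]. A minimal sketch, using a sample string in place of /proc/mdstat so the check runs anywhere:

```shell
# Count degraded md arrays by scanning for incomplete member markers.
# mdstat_sample stands in for the contents of /proc/mdstat here.
mdstat_sample='md127 : active raid1 sdb2[1]
      233206784 blocks super 1.2 [2/1] [_U]'
degraded=$(printf '%s\n' "$mdstat_sample" | grep -c '\[_U\]\|\[U_\]')
echo "degraded arrays: $degraded"
```

On a live system, replace the sample with `cat /proc/mdstat`.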
2. Copying the partition table to the new disk
You can check which partition table type a disk uses with the following command.
fdisk -l /dev/sdb | grep 'Disk label type'
For MBR the output will be:
Disk label type: dos
For GPT:
Disk label type: gpt
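The choice between the two copy commands shown below can be scripted from the detected label type. A sketch with an assumed `label` value, so the branching is visible without a real disk (on a live system `label` would come from `fdisk -l "$SRC"`):

```shell
# Pick the partition-table copy command based on the label type.
# Assumption: label is set directly here for illustration; normally:
#   label=$(fdisk -l "$SRC" | sed -n 's/.*Disk label type: //p')
SRC=/dev/sdb
DST=/dev/sdc
label="dos"
if [ "$label" = "dos" ]; then
  copy_cmd="sfdisk -d $SRC | sfdisk $DST"
else
  # note the sgdisk argument order: target first, then source
  copy_cmd="sgdisk -R $DST $SRC && sgdisk -G $DST"
fi
echo "$copy_cmd"
```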
Copying the partition table for MBR:
sfdisk -d /dev/sdb | sfdisk /dev/sdc
In this command, the first disk specified is the one FROM which the layout is copied, and the second is the one TO which it is copied.
Caution: for GPT the order is reversed. The first disk listed is the one ONTO which the table is copied; the second disk is the one FROM which the table is copied. If you mix up the disks, the partition table on the originally good disk will be overwritten and destroyed.
Copying the partition table for GPT:
sgdisk -R /dev/sdc /dev/sdb
Next, assign a random UUID to the disk (GPT only).
sgdisk -G /dev/sdc
After the command completes, the partitions should appear on disk /dev/sdc.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
└─sdc2 8:34 0 222,5G 0 part
sdd 8:48 0 931,5G 0 disk
If the system does not see the new partitions on disk /dev/sdc after this, run the command to re-read the partition table.
sfdisk -R /dev/sdc
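Note: recent sfdisk releases dropped the -R option. If it is unavailable, either of these standard commands forces the kernel to re-read the partition table. Shown as a dry run, since they require root and the real device:

```shell
# Alternatives to "sfdisk -R" for re-reading a partition table
# (printed rather than executed in this sketch).
for cmd in "partprobe /dev/sdc" "blockdev --rereadpt /dev/sdc"; do
  echo "alternative: $cmd"
done
```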
If the current disks use an MBR table and the data must be migrated to disks larger than 2 TB, you will need to manually create a GPT partition table with a biosboot partition on the new disks. This case is covered in part 2 of this article.
3. Adding the new disk's partitions to the arrays
Add the disk's partitions to the corresponding arrays.
mdadm /dev/md126 --add /dev/sdc1
mdadm /dev/md127 --add /dev/sdc2
Verify that the partitions have been added.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
Then wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdc1[2] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdc2[2] sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
[==>..................] recovery = 10.6% (24859136/233206784) finish=29.3min speed=118119K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
You can monitor the synchronization progress continuously with the watch utility.
watch -n 2 cat /proc/mdstat
The -n parameter specifies the interval, in seconds, at which the command is re-run to check progress.
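The polling shown above can also be scripted. A sketch of a simple wait loop, using a sample string in place of /proc/mdstat so the logic runs anywhere; on a real system you can simply run `mdadm --wait /dev/md127`, which blocks until the resync finishes:

```shell
# Poll the mdstat text until no recovery/resync line remains.
# MDSTAT is a sample (already-synced) excerpt; on a live system read
# /proc/mdstat inside the loop instead.
MDSTAT='md127 : active raid1 sdc2[2] sdb2[1]
      233206784 blocks super 1.2 [2/2] [UU]'
while printf '%s\n' "$MDSTAT" | grep -qE 'recovery|resync'; do
  sleep 2
done
echo "sync complete"
```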
4. Replacing the second disk
Repeat steps 1-3 for the next disk.
Mark the second disk's partitions in each array as failed.
mdadm /dev/md126 --fail /dev/sdb1
mdadm /dev/md127 --fail /dev/sdb2
Remove the /dev/sdb block device partitions from the arrays.
mdadm /dev/md126 --remove /dev/sdb1
mdadm /dev/md127 --remove /dev/sdb2
After removing the disk from the arrays, the block device information looks like this.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
The array status after removing the disk.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdc1[2]
1047552 blocks super 1.2 [2/1] [U_]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdc2[2]
233206784 blocks super 1.2 [2/1] [U_]
bitmap: 1/2 pages [4KB], 65536KB chunk
unused devices: <none>
Copy the MBR partition table from disk /dev/sdc to disk /dev/sdd.
sfdisk -d /dev/sdc | sfdisk /dev/sdd
After the command completes, the partitions should appear on disk /dev/sdd.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
└─sdd2 8:50 0 222,5G 0 part
Add the disk's partitions to the arrays.
mdadm /dev/md126 --add /dev/sdd1
mdadm /dev/md127 --add /dev/sdd2
Verify that the partitions have been added.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Then wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdd1[3] sdc1[2]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdd2[3] sdc2[2]
233206784 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.5% (1200000/233206784) finish=35.4min speed=109090K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
5. Installing GRUB on the new disks
For CentOS:
grub2-install /dev/sdX
For Debian/Ubuntu:
grub-install /dev/sdX
where X is the block device letter. In this case, GRUB must be installed on /dev/sdc and /dev/sdd.
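Since GRUB must go onto both new disks, a small loop covers them in one go. A dry-run sketch (the commands are echoed rather than executed, since they require root; on Debian/Ubuntu the command is grub-install):

```shell
# Install the boot loader on every new disk (dry run: echoed only).
for disk in /dev/sdc /dev/sdd; do
  echo "grub2-install $disk"
done
```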
6. Extending the file system (ext4) of the root partition
The new disks /dev/sdc and /dev/sdd are 931.5 GB each. Because the partition table was copied from the smaller disks, the partitions /dev/sdc2 and /dev/sdd2 are only 222.5 GB.
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
To use the full capacity, we need to:
- extend partition 2 on each disk,
- extend the md127 array,
- extend the PV (physical volume),
- extend the LV (logical volume) vg0-root,
- extend the file system.
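The steps above can be collected as the concrete commands used in the rest of this section. A dry-run sketch: the commands are printed rather than executed, since they require root and the live devices (parted's resizepart can also be run non-interactively, as shown):

```shell
# The full expansion chain for case 1, in order (printed as a dry run).
steps=$(printf '%s\n' \
  "parted /dev/sdc resizepart 2 100%" \
  "parted /dev/sdd resizepart 2 100%" \
  "mdadm --grow /dev/md127 --size=max" \
  "pvresize /dev/md127" \
  "lvextend -l +100%FREE /dev/mapper/vg0-root" \
  "resize2fs /dev/mapper/vg0-root")
printf '%s\n' "$steps"
```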
Using the parted utility, extend the partition /dev/sdc2 to its maximum size. Run parted /dev/sdc and view the current partition table with the p command.
As you can see, partition 2 ends at 240 GB. Extend it with the command resizepart 2, where 2 is the partition number. Specify the new end either as a numeric value, for example 1000GB, or as a share of the disk: 100%. Then check once more that the partition has its new size.
Repeat the steps above for disk /dev/sdd. After the extension, the partitions /dev/sdc2 and /dev/sdd2 are 930.5 GB each.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 930,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 930,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Next, extend the md127 array to its maximum size.
mdadm --grow /dev/md127 --size=max
Verify that the array has been extended. Its size is now 930.4 GB.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 931,5G 0 disk
├─sdc1 8:33 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc2 8:34 0 930,5G 0 part
└─md127 9:127 0 930,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 931,5G 0 disk
├─sdd1 8:49 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd2 8:50 0 930,5G 0 part
└─md127 9:127 0 930,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Extending the physical volume. Before extending, check the current state of the PV.
[root@localhost ~]# pvscan
PV /dev/md127 VG vg0 lvm2 [222,40 GiB / 0 free]
Total: 1 [222,40 GiB] / in use: 1 [222,40 GiB] / in no VG: 0 [0 ]
As you can see, PV /dev/md127 uses 222.4 GB of space.
Extend the PV with the following command.
pvresize /dev/md127
Check the result of the PV extension.
[root@localhost ~]# pvscan
PV /dev/md127 VG vg0 lvm2 [930,38 GiB / 707,98 GiB free]
Total: 1 [930,38 GiB] / in use: 1 [930,38 GiB] / in no VG: 0 [0 ]
Extending the logical volume. Before extending, check the current state of the LV.
[root@localhost ~]# lvscan
ACTIVE '/dev/vg0/swap' [<16,00 GiB] inherit
ACTIVE '/dev/vg0/root' [<206,41 GiB] inherit
LV /dev/vg0/root uses 206.41 GB.
Extend the LV with the following command.
lvextend -l +100%FREE /dev/mapper/vg0-root
Verify the change.
[root@localhost ~]# lvscan
ACTIVE '/dev/vg0/swap' [<16,00 GiB] inherit
ACTIVE '/dev/vg0/root' [<914,39 GiB] inherit
As you can see, after the extension the LV occupies 914.39 GB. The LV size has grown, but the file system still occupies 204 GB.
Extend the file system.
resize2fs /dev/mapper/vg0-root
After the command completes, check the file system size.
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 9,5M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 900G 1,3G 860G 1% /
/dev/md126 1007M 120M 837M 13% /boot
tmpfs 6,3G 0 6,3G 0% /run/user/0
The root file system has grown to 900 GB. After completing these steps, you can remove the old disks.
Case 2: Replacing smaller disks with larger disks (over 2 TB)
Task: replace the current disks with larger disks (2 x 3 TB) while preserving the data. We have 2 x 240 GB SSDs (RAID-1) with the installed system and 2 x 3 TB SATA disks to which the system must be migrated. The current disks use an MBR partition table. Since the new disks are larger than 2 TB, they must use a GPT table, because MBR cannot address disks larger than 2 TB.
Let's look at the current disk layout.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sda2 8:2 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 2,7T 0 disk
sdd 8:48 0 2,7T 0 disk
Check the partition table used on disk /dev/sda.
[root@localhost ~]# fdisk -l /dev/sda | grep 'Disk label type'
Disk label type: dos
Disk /dev/sdb uses the same partition table. Check the disk space used on the system.
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9,5M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 204G 1,3G 192G 1% /
/dev/md126 1007M 120M 837M 13% /boot
tmpfs 3,2G 0 3,2G 0% /run/user/0
As you can see, the root file system occupies 204 GB. Let's check the current state of the software RAID array.
1. Creating a GPT partition table and partitioning the disk
Check the disk layout and sector sizes.
[root@localhost ~]# parted /dev/sda print
Model: ATA KINGSTON SVP200S (scsi)
Disk /dev/sda: 240GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1076MB  1075MB  primary               boot, raid
 2      1076MB  240GB   239GB   primary               raid
On the new 3 TB disk, we will need to create 3 partitions:
- a 2 MiB bios_grub partition for BIOS compatibility with GPT,
- a partition for the RAID array that will be mounted at /boot,
- a partition for the RAID array that will hold the root LV and the swap LV.
Install the parted utility with the command yum install -y parted
(for CentOS) or apt install -y parted
(for Debian/Ubuntu).
Using parted, run the following commands to partition the disk.
Execute parted /dev/sdc
to enter the interactive layout-editing mode.
Create a GPT partition table.
(parted) mktable gpt
Create partition 1, the bios_grub partition, and set its flag.
(parted) mkpart primary 1MiB 3MiB
(parted) set 1 bios_grub on
Create partition 2 and set the boot flag on it. This partition will be used as a building block for the RAID array and will be mounted at /boot.
(parted) mkpart primary ext2 3MiB 1028MiB
(parted) set 2 boot on
Create partition 3, which will also be used as a building block for the array that will hold the LVM.
(parted) mkpart primary 1028MiB 100%
Setting a flag here is not required, but if needed it can be set with the following command.
(parted) set 3 raid on
Check the resulting table.
(parted) p
Model: ATA TOSHIBA DT01ACA3 (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name     Flags
 1      1049kB  3146kB  2097kB               primary  bios_grub
 2      3146kB  1077MB  1074MB               primary  boot
 3      1077MB  3001GB  3000GB               primary
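The interactive parted session above can also be replicated non-interactively with parted's -s flag, which suppresses prompts. A dry-run sketch (the commands are printed, not executed, since they require root and the real disk):

```shell
# Non-interactive equivalent of the parted session above (dry run).
cmds=$(printf '%s\n' \
  "parted -s /dev/sdc mktable gpt" \
  "parted -s /dev/sdc mkpart primary 1MiB 3MiB" \
  "parted -s /dev/sdc set 1 bios_grub on" \
  "parted -s /dev/sdc mkpart primary ext2 3MiB 1028MiB" \
  "parted -s /dev/sdc set 2 boot on" \
  "parted -s /dev/sdc mkpart primary 1028MiB 100%")
printf '%s\n' "$cmds"
```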
Assign a new random GUID to the disk.
sgdisk -G /dev/sdd
2. Removing the first disk's partitions from the arrays
Checking the array status.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sda1[0] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sda2[0] sdb2[1]
233206784 blocks super 1.2 [2/2] [UU]
bitmap: 0/2 pages [0KB], 65536KB chunk
unused devices: <none>
The system uses two arrays: md126 (mount point /boot), consisting of /dev/sda1 and /dev/sdb1, and md127 (swap and the root file system on LVM), consisting of /dev/sda2 and /dev/sdb2.
Mark the first disk's partitions in each array as failed.
mdadm /dev/md126 --fail /dev/sda1
mdadm /dev/md127 --fail /dev/sda2
Remove the /dev/sda block device partitions from the arrays.
mdadm /dev/md126 --remove /dev/sda1
mdadm /dev/md127 --remove /dev/sda2
Check the array status after removing the disk.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdb1[1]
1047552 blocks super 1.2 [2/1] [_U]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
3. Adding the new disk's partitions to the arrays
The next step is to add the new disk's partitions to the arrays for synchronization. Let's look at the current disk layout.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
└─sdc3 8:35 0 2,7T 0 part
sdd 8:48 0 2,7T 0 disk
Partition /dev/sdc1 is the bios_grub partition and does not take part in the arrays. The arrays use only /dev/sdc2 and /dev/sdc3. Add these partitions to the corresponding arrays.
mdadm /dev/md126 --add /dev/sdc2
mdadm /dev/md127 --add /dev/sdc3
Then wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdc2[2] sdb1[1]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 0/1 pages [0KB], 65536KB chunk
md127 : active raid1 sdc3[2] sdb2[1]
233206784 blocks super 1.2 [2/1] [_U]
[>....................] recovery = 0.2% (619904/233206784) finish=31.2min speed=123980K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
The disk layout after adding the partitions to the arrays.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdb2 8:18 0 222,5G 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
4. Removing the second disk's partitions from the arrays
Mark the second disk's partitions in each array as failed.
mdadm /dev/md126 --fail /dev/sdb1
mdadm /dev/md127 --fail /dev/sdb2
Remove the /dev/sdb block device partitions from the arrays.
mdadm /dev/md126 --remove /dev/sdb1
mdadm /dev/md127 --remove /dev/sdb2
5. Copying the GPT partition table and synchronizing the arrays
To copy the GPT partition table we use the sgdisk utility, which ships in the gdisk package for working with disk partitions and GPT tables.
Installing gdisk on CentOS:
yum install -y gdisk
Installing gdisk on Debian/Ubuntu:
apt install -y gdisk
Caution: for GPT, the first disk listed is the one ONTO which the table is copied; the second disk is the one FROM which the table is copied. If you mix up the disks, the partition table on the originally good disk will be overwritten and destroyed.
Copy the GPT partition table.
sgdisk -R /dev/sdd /dev/sdc
The disk layout after transferring the table to disk /dev/sdd.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
└─sdd3 8:51 0 2,7T 0 part
Next, add each partition that participates in the software RAID arrays.
mdadm /dev/md126 --add /dev/sdd2
mdadm /dev/md127 --add /dev/sdd3
Wait for the arrays to synchronize.
[root@localhost ~]# cat /proc/mdstat
Personalities : [raid1]
md126 : active raid1 sdd2[3] sdc2[2]
1047552 blocks super 1.2 [2/2] [UU]
bitmap: 1/1 pages [4KB], 65536KB chunk
md127 : active raid1 sdd3[3] sdc3[2]
233206784 blocks super 1.2 [2/1] [U_]
[>....................] recovery = 0.0% (148224/233206784) finish=26.2min speed=148224K/sec
bitmap: 2/2 pages [8KB], 65536KB chunk
unused devices: <none>
After the GPT layout has been copied to the second new disk, the layout looks like this.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
├─sda1 8:1 0 1G 0 part
└─sda2 8:2 0 222,5G 0 part
sdb 8:16 0 223,6G 0 disk
├─sdb1 8:17 0 1G 0 part
└─sdb2 8:18 0 222,5G 0 part
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
│ └─md126 9:126 0 1023M 0 raid1 /boot
└─sdd3 8:51 0 2,7T 0 part
└─md127 9:127 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Next, install GRUB on the new disks.
For CentOS:
grub2-install /dev/sdX
For Debian/Ubuntu:
grub-install /dev/sdX
where X is the disk letter; in our case the disks are /dev/sdc and /dev/sdd.
Update the array information.
For CentOS:
mdadm --detail --scan --verbose > /etc/mdadm.conf
For Debian/Ubuntu:
echo "DEVICE partitions" > /etc/mdadm/mdadm.conf
mdadm --detail --scan --verbose | awk '/ARRAY/ {print}' >> /etc/mdadm/mdadm.conf
Update the initrd image:
For CentOS:
dracut -f -v --regenerate-all
For Debian/Ubuntu:
update-initramfs -u -k all
Update the GRUB configuration.
For CentOS:
grub2-mkconfig -o /boot/grub2/grub.cfg
For Debian/Ubuntu:
update-grub
After completing these steps, the old disks can be removed.
6. Extending the file system (ext4) of the root partition
The disk layout after migrating the system to 2 x 3 TB disks (RAID-1), before extending the file system.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
sdb 8:16 0 223,6G 0 disk
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md126 9:126 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdd3 8:51 0 2,7T 0 part
└─md126 9:126 0 222,4G 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
The partitions /dev/sdc3 and /dev/sdd3 now occupy 2.7 TB. Since the new disks were partitioned with a GPT table, partition 3 was immediately sized to the maximum available disk space, so there is no need to extend the partition itself.
To use this space, we need to:
- extend the md126 array,
- extend the PV (physical volume),
- extend the LV (logical volume) vg0-root,
- extend the file system.
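The steps above can be collected as the concrete commands used in the rest of this section. A dry-run sketch: the commands are printed rather than executed, since they require root and the live devices:

```shell
# The full expansion chain for case 2, in order (printed as a dry run).
steps=$(printf '%s\n' \
  "mdadm --grow /dev/md126 --size=max" \
  "pvresize /dev/md126" \
  "lvextend -l +100%FREE /dev/mapper/vg0-root" \
  "resize2fs /dev/mapper/vg0-root")
printf '%s\n' "$steps"
```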
1. Extend the md126 array to its maximum size.
mdadm --grow /dev/md126 --size=max
After the extension, the md126 array occupies 2.7 TB.
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 223,6G 0 disk
sdb 8:16 0 223,6G 0 disk
sdc 8:32 0 2,7T 0 disk
├─sdc1 8:33 0 2M 0 part
├─sdc2 8:34 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdc3 8:35 0 2,7T 0 part
└─md126 9:126 0 2,7T 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
sdd 8:48 0 2,7T 0 disk
├─sdd1 8:49 0 2M 0 part
├─sdd2 8:50 0 1G 0 part
│ └─md127 9:127 0 1023M 0 raid1 /boot
└─sdd3 8:51 0 2,7T 0 part
└─md126 9:126 0 2,7T 0 raid1
├─vg0-root 253:0 0 206,4G 0 lvm /
└─vg0-swap 253:1 0 16G 0 lvm [SWAP]
Extending the physical volume.
Before extending, check the current amount of space occupied by PV /dev/md126.
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md126 vg0 lvm2 a-- 222,40g 0
Extend the PV with the following command.
pvresize /dev/md126
Verify the change.
[root@localhost ~]# pvs
PV VG Fmt Attr PSize PFree
/dev/md126 vg0 lvm2 a-- <2,73t 2,51t
Extending the logical volume vg0-root.
After extending the PV, check the space occupied by the VG.
[root@localhost ~]# vgs
VG #PV #LV #SN Attr VSize VFree
vg0 1 2 0 wz--n- <2,73t 2,51t
Check the space occupied by the LVs.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root vg0 -wi-ao---- <206,41g
swap vg0 -wi-ao---- <16,00g
The vg0-root volume occupies 206.41 GB.
Extend the LV to the maximum available disk space.
lvextend -l +100%FREE /dev/mapper/vg0-root
Check the LV size after the extension.
[root@localhost ~]# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
root vg0 -wi-ao---- 2,71t
swap vg0 -wi-ao---- <16,00g
Extending the file system (ext4).
Check the current size of the file system.
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9,6M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 204G 1,4G 192G 1% /
/dev/md127 1007M 141M 816M 15% /boot
tmpfs 3,2G 0 3,2G 0% /run/user/0
After the LV was extended, /dev/mapper/vg0-root still occupies 204 GB.
Extend the file system.
resize2fs /dev/mapper/vg0-root
Check the file system size after the extension.
[root@localhost ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
devtmpfs 16G 0 16G 0% /dev
tmpfs 16G 0 16G 0% /dev/shm
tmpfs 16G 9,6M 16G 1% /run
tmpfs 16G 0 16G 0% /sys/fs/cgroup
/dev/mapper/vg0-root 2,7T 1,4G 2,6T 1% /
/dev/md127 1007M 141M 816M 15% /boot
tmpfs 3,2G 0 3,2G 0% /run/user/0
The file system has been grown to cover the entire volume.
Source: www.habr.com