What do LVM and Matryoshka dolls have in common?

Good day.
I would like to share with the community my practical experience of building a storage system for KVM using md RAID + LVM.

The program includes:

  • Building md RAID 1 from NVMe SSDs.
  • Assembling md RAID 6 from SATA SSDs and regular hard drives.
  • Peculiarities of TRIM/discard operation on SSD RAID 1/6.
  • Creating a bootable md RAID 1/6 array on a common set of disks.
  • Installing the system on NVMe RAID 1 when there is no NVMe support in the BIOS.
  • Using LVM cache and LVM thin.
  • Using BTRFS snapshots and send/receive for backups.
  • Using LVM thin snapshots and thin_delta for BTRFS-style backups.

If you are interested, welcome under the cut.

Disclaimer

The author bears no responsibility for the consequences of using or not using the materials/examples/code/tips/data from this article. By reading or using this material in any way, you assume responsibility for all consequences of these actions. Possible consequences include:

  • NVMe SSDs fried to a crisp.
  • Completely used-up write endurance and failure of SSD drives.
  • Complete loss of all data on all drives, including backup copies.
  • Faulty computer hardware.
  • Wasted time, nerves and money.
  • Any other consequences not listed above.

Hardware

Available were:

A motherboard from around 2013 with the Z87 chipset, paired with an Intel Core i7 (Haswell).

  • 4-core, 8-thread processor
  • 32 GB DDR3 RAM
  • 1 x 16 or 2 x 8 PCIe 3.0
  • 1 x 4 + 1 x 1 PCIe 2.0
  • 6 x SATA 3 connectors (6 Gbps)

An LSI SAS9211-8I SAS adapter flashed to IT / HBA mode. The RAID-enabled firmware was intentionally replaced with the HBA firmware in order to:

  1. Be able to throw this adapter out at any moment and replace it with any other one you come across.
  2. Have TRIM/discard work properly on the disks, because... in the RAID firmware these commands are not supported at all, while the HBA, by and large, does not care what commands are passed through.

Hard drives - 8 pieces of HGST Travelstar 7K1000 with a capacity of 1 TB in the 2.5" form factor, as in laptops. These drives were previously in a RAID 6 array. They will also find a use in the new system: storing local backups.

Additionally added:

6 pieces of SATA SSD, model Samsung 860 QVO 2TB. These SSDs were chosen for their large volume, the presence of an SLC cache, reliability, and a low price. Support for discard/zero was required, which is checked by this line in dmesg:

kernel: ata1.00: Enabling discard_zeroes_data

2 pieces of NVMe SSD, model Samsung SSD 970 EVO 500GB.

For these SSDs, what matters is the random read/write speed and an endurance rating adequate for your needs. Plus a heatsink for them. Necessarily. Absolutely. Otherwise they will get fried to a crisp during the first RAID synchronization.

A StarTech PEX8M2E2 adapter for 2 x NVMe SSDs, installed in a PCIe 3.0 8x slot. This, again, is just an HBA, but for NVMe. It differs from cheap adapters in that it does not require bifurcation support from the motherboard, thanks to a built-in PCIe switch. It will work even in the most ancient system that has PCIe, even if it is an x1 PCIe 1.0 slot. Naturally, at the corresponding speed. There are no RAIDs there. There is no built-in BIOS on board. So, your system will not magically learn to boot from NVMe, let alone do NVMe RAID, thanks to this device.

This component was chosen solely because there is only one free 8x PCIe 3.0 slot in the system; if there are 2 free slots, it can easily be replaced with two cheap PEX4M2E1s or analogues, which can be bought anywhere at a price of 600 rubles.

The rejection of all kinds of hardware or chipset/BIOS RAIDs was made deliberately, so that the entire system could be completely replaced, except for the SSDs/HDDs themselves, while preserving all the data. Ideally, so that you could even keep the installed operating system when moving to completely new/different hardware. The main thing is that there are SATA and PCIe ports. It is like a live CD or a bootable flash drive, only very fast and a little bulky.

Humor: Otherwise, you know what happens - sometimes you urgently need to take the whole array with you. But you do not want to lose the data. To do this, all the mentioned media are conveniently placed on slides in the 5.25" bays of a standard case.

Well, and, of course, for experimenting with different methods of SSD caching in Linux.

Hardware raids are boring. Turn it on. It either works or it doesn't. And with mdadm there are always options.

Software

Previously, Debian 8 Jessie was installed on this hardware, and it is close to EOL. RAID 6 was assembled from the above-mentioned HDDs paired with LVM. It ran virtual machines in kvm/libvirt.

Since the author has suitable experience in creating portable bootable SATA/NVMe flash drives, and also in order not to break the usual apt template, Ubuntu 18.04 was chosen as the target system; it has already stabilized sufficiently, but still has 3 years of support ahead of it.

The said system contains all the hardware drivers we need out of the box. We do not need any third-party software or drivers.

Preparing to install

To install the system we need the Ubuntu Desktop Image. The Server system has some kind of vigorous installer that shows excessive independence which cannot be disabled, always shoving the UEFI system partition onto one of the disks, spoiling all the beauty. Accordingly, it installs only in UEFI mode. It offers no options.

We are not happy with this.

Why? Unfortunately, UEFI boot is extremely poorly compatible with software RAID, because... nobody offers us redundancy for the UEFI ESP partition. There are recipes online that suggest placing the ESP partition on a flash drive in a USB port, but this is a point of failure. There are recipes using software mdadm RAID 1 with metadata version 0.9 that do not prevent the UEFI BIOS from seeing this partition, but this lives until the happy moment when the BIOS or another hardware's OS writes something to the ESP and forgets to synchronize it to the other mirrors.

In addition, UEFI boot depends on NVRAM, which will not move together with the disks to a new system, because it is part of the motherboard.

So, we will not reinvent a new wheel. We already have a ready-made, time-tested grandfather's bicycle, now called Legacy/BIOS boot, bearing the proud name of CSM on UEFI-compatible systems. We will just take it off the shelf, lubricate it, pump up the tires and wipe it with a damp cloth.

The desktop version of Ubuntu also cannot be installed properly with the Legacy bootloader, but here, as they say, at least there are options.

And so, we gather the hardware and boot the system from the Ubuntu Live bootable flash drive. We will need to download packages, so we set up whichever network works for you. If nothing works, you can load the necessary packages onto the flash drive in advance.

We go into the Desktop environment, launch the terminal emulator, and off we go:

#sudo bash

How's that…? The line above is the canonical trigger for holy wars about sudo. With greater power comes greater responsibility. The question is whether you can take it upon yourself. Many believe that using sudo this way is, at the very least, imprudent. However:

#apt-get install mdadm lvm2 thin-provisioning-tools btrfs-tools util-linux lsscsi nvme-cli mc

Why not ZFS...? When we install software on our computer, we essentially lend our hardware to the developers of this software to drive.
When we trust this software with the safety of our data, we take out a loan equal to the cost of restoring this data, which we will have to pay off someday.

From this point of view, ZFS is a Ferrari, and mdadm+lvm is more like a bicycle.

Subjectively, the author prefers to lend a bicycle on credit to strangers instead of a Ferrari. There, the price of the issue is not high. No license needed. Simpler than traffic rules. Parking is free. Cross-country ability is better. You can always attach legs to a bicycle, and you can repair a bicycle with your own hands.

Why BTRFS then...? In order to boot the operating system, we need a file system that is supported in Legacy/BIOS GRUB out of the box and at the same time supports live snapshots. We will use it for the /boot partition. In addition, the author prefers to use this FS for / (root), not forgetting to note that for any other software you can create separate partitions on LVM and mount them into the necessary directories.

We will not store any virtual machine images or databases on this FS.
This FS will only be used to create snapshots of the system without turning it off, and to then transfer these snapshots to a backup disk using send/receive.
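The snapshot-and-transfer flow just described can be sketched as follows. This is a dry run: the commands are only printed, not executed, so it can be inspected without root; the snapshot path and the /mnt/backup target are illustrative assumptions, not taken from the article.

```shell
# Dry run: print the BTRFS snapshot + send/receive steps described above.
# The snapshot name and /mnt/backup are assumed, illustrative paths.
snap="/.snapshots/root-2020-01-01"
echo "btrfs subvolume snapshot -r / $snap"
echo "btrfs send $snap | btrfs receive /mnt/backup"
```

Drop the `echo`s to actually run the steps on a live system (as root, with a mounted backup disk).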

In addition, the author generally prefers to keep a minimum of software directly on the hardware and run all other software in virtual machines, using things like passing GPUs and PCI-USB Host controllers through to KVM via IOMMU.

The only things left on the hardware are data storage, virtualization and backup.

If you trust ZFS more, then, in principle, for the specified application they are interchangeable.

However, the author deliberately ignores the built-in mirroring/RAID and redundancy features that ZFS, BTRFS and LVM have.

As an additional argument, BTRFS has the ability to turn random writes into sequential ones, which has an extremely positive effect on the speed of synchronizing snapshots/backups on an HDD.

Let's rescan all the devices:

#udevadm control --reload-rules && udevadm trigger

Let's look around:

#lsscsi && nvme list
[0:0:0:0] disk ATA Samsung SSD 860 2B6Q /dev/sda
[1:0:0:0] disk ATA Samsung SSD 860 2B6Q /dev/sdb
[2:0:0:0] disk ATA Samsung SSD 860 2B6Q /dev/sdc
[3:0:0:0] disk ATA Samsung SSD 860 2B6Q /dev/sdd
[4:0:0:0] disk ATA Samsung SSD 860 2B6Q /dev/sde
[5:0:0:0] disk ATA Samsung SSD 860 2B6Q /dev/sdf
[6:0:0:0] disk ATA HGST HTS721010A9 A3J0 /dev/sdg
[6:0:1:0] disk ATA HGST HTS721010A9 A3J0 /dev/sdh
[6:0:2:0] disk ATA HGST HTS721010A9 A3J0 /dev/sdi
[6:0:3:0] disk ATA HGST HTS721010A9 A3B0 /dev/sdj
[6:0:4:0] disk ATA HGST HTS721010A9 A3B0 /dev/sdk
[6:0:5:0] disk ATA HGST HTS721010A9 A3B0 /dev/sdl
[6:0:6:0] disk ATA HGST HTS721010A9 A3J0 /dev/sdm
[6:0:7:0] disk ATA HGST HTS721010A9 A3J0 /dev/sdn
Node SN Model Namespace Usage Format FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1 S466NXXXXXXX15L Samsung SSD 970 EVO 500GB 1 0,00 GB / 500,11 GB 512 B + 0 B 2B2QEXE7
/dev/nvme1n1 S5H7NXXXXXXX48N Samsung SSD 970 EVO 500GB 1 0,00 GB / 500,11 GB 512 B + 0 B 2B2QEXE7

Disk layout

NVMe SSD

But we do not want to partition them in any way. Anyway, our BIOS does not see these drives. So, they will go entirely into software RAID. We will not even create partitions there. If you want to do it "by the canon" or "on principle", create one large partition, like on the HDDs.

SATA HDD

There is no need to invent anything special here. We will create one partition on everything. We will create a partition because the BIOS sees these disks and may even try to boot from them. We will even install GRUB on these disks later so that the system can suddenly do this.

#cat >hdd.part << EOF
label: dos
label-id: 0x00000000
device: /dev/sdg
unit: sectors

/dev/sdg1 : start= 2048, size= 1953523120, type=fd, bootable
EOF
#sfdisk /dev/sdg < hdd.part
#sfdisk /dev/sdh < hdd.part
#sfdisk /dev/sdi < hdd.part
#sfdisk /dev/sdj < hdd.part
#sfdisk /dev/sdk < hdd.part
#sfdisk /dev/sdl < hdd.part
#sfdisk /dev/sdm < hdd.part
#sfdisk /dev/sdn < hdd.part

SATA SSD

This is where things get interesting.

Firstly, our drives are 2 TB in size. This is within the acceptable range for MBR, which is what we will use. If necessary, it can be replaced with GPT. GPT disks have a compatibility layer that allows MBR-compatible systems to see the first 4 partitions if they are located within the first 2 terabytes. The main thing is that the boot partition and the bios_grub partition on these disks should be at the beginning. This even allows booting from GPT drives in Legacy/BIOS mode.

But this is not our case.

Here we will create two partitions. The first one will be 1 GB in size and used for RAID 1 /boot.

The second one will be used for RAID 6 and will take up all the remaining free space, except for a small unallocated area at the end of the drive.

What is this unallocated area? According to sources on the net, our SATA SSDs have on board a dynamically expandable SLC cache ranging in size from 6 to 78 gigabytes. We get 6 gigabytes "for free" due to the difference between "gigabytes" and "gibibytes" in the drive's data sheet. The remaining 72 gigabytes are allocated from unused space.

Here it should be noted that the cache is SLC, while the space is normally occupied in 4-bit MLC (QLC) mode. Which for us effectively means that for every 4 gigabytes of free space we get only 1 gigabyte of SLC cache.

Multiply 72 gigabytes by 4 and we get 288 gigabytes. This is the free space that we will not partition, in order to allow the drives to make full use of the SLC cache.

Thus, we will effectively get up to 312 gigabytes of SLC cache in total from six drives. Of all the drives, 2 will be used in RAID for redundancy.

This amount of cache will allow us to only very rarely encounter a real-life situation where a write does not go to the cache. This extremely well compensates for the saddest disadvantage of QLC memory - the extremely low write speed when data is written bypassing the cache. If your loads do not correspond to this, then I recommend thinking hard about how long your SSDs will last under such a load, taking into account the TBW from the data sheet.
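The cache arithmetic above can be summarized in a few lines of shell (the GiB figures are the article's estimates, not datasheet values):

```shell
# Back-of-the-envelope SLC-cache math for this layout.
slc_per_drive=$((6 + 72))   # 6 GiB "free" + 72 GiB dynamic SLC cache, GiB
headroom=$((72 * 4))        # QLC GiB that must stay unallocated per drive
data_drives=$((6 - 2))      # 6-drive RAID 6 minus 2 drives of redundancy
total_slc=$((slc_per_drive * data_drives))
echo "leave ${headroom} GiB unpartitioned per drive"
echo "effective SLC cache: ${total_slc} GiB"
```

This reproduces the 288 GiB of unallocated tail and the 312 GiB of effective SLC cache quoted above.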

#cat >ssd.part << EOF
label: dos
label-id: 0x00000000
device: /dev/sda
unit: sectors

/dev/sda1 : start= 2048, size= 2097152, type=fd, bootable
/dev/sda2 : start= 2099200, size= 3300950016, type=fd
EOF
#sfdisk /dev/sda < ssd.part
#sfdisk /dev/sdb < ssd.part
#sfdisk /dev/sdc < ssd.part
#sfdisk /dev/sdd < ssd.part
#sfdisk /dev/sde < ssd.part
#sfdisk /dev/sdf < ssd.part

Creating the arrays

First, we need to rename the machine. This is necessary because the hostname is part of the array name somewhere inside mdadm and affects something somewhere. Of course, arrays can be renamed later, but this is an unnecessary step.

#mcedit /etc/hostname
#mcedit /etc/hosts
#hostname
vdesk0

NVMe SSD

#mdadm --create --verbose --assume-clean /dev/md0 --level=1 --raid-devices=2 /dev/nvme[0-1]n1

Why --assume-clean...? To avoid initializing the arrays. This is valid for both RAID levels 1 and 6. Everything can work without initialization if it is a new array. Moreover, initializing an SSD array upon creation is a waste of TBW resource. We use TRIM/DISCARD where possible on assembled SSD arrays to "initialize" them.

For SSD RAID 1 arrays, DISCARD is supported out of the box.

For SSD RAID 6 arrays, DISCARD must be enabled via kernel module parameters.

This should only be done if all the SSDs used in level 4/5/6 arrays in this system have working support for discard_zeroes_data. Sometimes you come across strange drives that tell the kernel that this function is supported, when in fact it is not there, or it does not always work. At the moment, support is available almost everywhere; however, there are old drives and firmware with bugs. For this reason, DISCARD support is disabled by default for RAID 6.

Attention, the following command will destroy all data on the NVMe drives by "initializing" the array with "zeros".

#blkdiscard /dev/md0

If something goes wrong, try specifying a step.

#blkdiscard --step 65536 /dev/md0
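A quick sanity check (a sketch, not from the article): confirm that a region really reads back as zeros after the discard. It is demonstrated on a temporary file so it can be tried without root; on the real system, point `check_zeroed` at /dev/md0 — the check only reads.

```shell
# Returns success if the first MiB of the given path reads back as all zeros.
check_zeroed() {
  cmp -s -n 1048576 "$1" /dev/zero
}
tmp=$(mktemp)
dd if=/dev/zero of="$tmp" bs=1M count=1 status=none
check_zeroed "$tmp" && result=zeroed || result=dirty
echo "$result"
rm -f "$tmp"
```

Usage on the array: `check_zeroed /dev/md0 && echo ok`.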

SATA SSD

#mdadm --create --verbose --assume-clean /dev/md1 --level=1 --raid-devices=6 /dev/sd[a-f]1
#blkdiscard /dev/md1
#mdadm --create --verbose --assume-clean /dev/md2 --chunk=512 --level=6 --raid-devices=6 /dev/sd[a-f]2

Why so big...? Increasing the chunk size has a positive effect on the speed of random reads in blocks up to and including the chunk size. This happens because one operation of a suitable size or smaller can be completed entirely on a single device. Therefore, the IOPS from all devices add up. According to statistics, 99% of IO does not exceed 512K.

RAID 6 IOPS per write is always less than or equal to the IOPS of a single drive. Whereas, on random reads, IOPS can be several times greater than that of a single drive, and here the block size is of key importance.
The author does not see the point in trying to optimize a parameter that is bad in RAID 6 by design, and instead optimizes what RAID 6 is good at.
We will compensate for the poor random write performance of RAID 6 with an NVMe cache and thin-provisioning tricks.

We have not yet enabled DISCARD for RAID 6. So we will not "initialize" this array for now. We will do this later, after installing the OS.

SATA HDD

#mdadm --create --verbose --assume-clean /dev/md3 --chunk=512 --level=6 --raid-devices=8 /dev/sd[g-n]1

LVM on NVMe RAID

For speed, we want to place the root file system on NVMe RAID 1, which is /dev/md0.
However, we will still need this fast array for other needs, such as swap, LVM-cache metadata and LVM-thin metadata, so we will create an LVM VG on this array.

#pvcreate /dev/md0
#vgcreate root /dev/md0

Let's create a partition for the root file system.

#lvcreate -L 128G --name root root

Let's create a partition for swap, sized according to the RAM.

#lvcreate -L 32G --name swap root

Installing the OS

In total, we have everything necessary to install the system.

Launch the system installation wizard from the Ubuntu Live environment. A normal installation. Only at the stage of selecting the disks for installation, you need to specify the following:

  • /dev/md1, - mount point /boot, FS - BTRFS
  • /dev/root/root (aka /dev/mapper/root-root), - mount point / (root), FS - BTRFS
  • /dev/root/swap (aka /dev/mapper/root-swap) - use as a swap partition
  • Install the bootloader on /dev/sda

When you select BTRFS as the root file system, the installer will automatically create two BTRFS volumes named "@" for / (root), and "@home" for /home.

Let's start the installation...

The installation will end with a modal dialog box reporting an error installing the bootloader. Unfortunately, you will not be able to exit this dialog by standard means and continue the installation. We log out of the system and log in again, ending up on a clean Ubuntu Live desktop. Open the terminal, and again:

#sudo bash

Let's create a chroot environment to continue the installation:

#mkdir /mnt/chroot
#mount -o defaults,space_cache,noatime,nodiratime,discard,subvol=@ /dev/mapper/root-root /mnt/chroot
#mount -o defaults,space_cache,noatime,nodiratime,discard,subvol=@home /dev/mapper/root-root /mnt/chroot/home
#mount -o defaults,space_cache,noatime,nodiratime,discard /dev/md1 /mnt/chroot/boot
#mount --bind /proc /mnt/chroot/proc
#mount --bind /sys /mnt/chroot/sys
#mount --bind /dev /mnt/chroot/dev

Let's configure the network and hostname in the chroot:

#cat /etc/hostname >/mnt/chroot/etc/hostname
#cat /etc/hosts >/mnt/chroot/etc/hosts
#cat /etc/resolv.conf >/mnt/chroot/etc/resolv.conf

Let's enter the chroot environment:

#chroot /mnt/chroot

First of all, let's deliver the packages:

#apt-get install --reinstall mdadm lvm2 thin-provisioning-tools btrfs-tools util-linux lsscsi nvme-cli mc debsums hdparm

Let's check and fix all the packages that were installed incorrectly due to the incomplete system installation:

#CORRUPTED_PACKAGES=$(debsums -s 2>&1 | awk '{print $6}' | uniq)
#apt-get install --reinstall $CORRUPTED_PACKAGES

If something does not work out, you may need to edit /etc/apt/sources.list first.

Let's adjust the parameters of the RAID 6 module to enable TRIM/DISCARD:

#cat >/etc/modprobe.d/raid456.conf << EOF
options raid456 devices_handle_discard_safely=1
EOF

Let's tweak our arrays a little:

#cat >/etc/udev/rules.d/60-md.rules << EOF
SUBSYSTEM=="block", KERNEL=="md*", ACTION=="change", TEST=="md/stripe_cache_size", ATTR{md/stripe_cache_size}="32768"
SUBSYSTEM=="block", KERNEL=="md*", ACTION=="change", TEST=="md/sync_speed_min", ATTR{md/sync_speed_min}="48000"
SUBSYSTEM=="block", KERNEL=="md*", ACTION=="change", TEST=="md/sync_speed_max", ATTR{md/sync_speed_max}="300000"
EOF
#cat >/etc/udev/rules.d/62-hdparm.rules << EOF
SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", RUN+="/sbin/hdparm -B 254 /dev/%k"
EOF
#cat >/etc/udev/rules.d/63-blockdev.rules << EOF
SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", RUN+="/sbin/blockdev --setra 1024 /dev/%k"
SUBSYSTEM=="block", ACTION=="add|change", KERNEL=="md*", RUN+="/sbin/blockdev --setra 0 /dev/%k"
EOF

What was that..? We have created a set of udev rules that will do the following:

  • Set the stripe cache size for RAID 6 to one adequate for 2020. The default value, it seems, has not changed since the creation of Linux, and has long been inadequate.
  • Reserve a minimum of IO for the duration of array checks/synchronizations. This is so that your arrays do not get stuck in a state of eternal synchronization under load.
  • Limit the maximum IO during checks/synchronization of arrays. This is necessary so that synchronizing/checking SSD RAIDs does not fry your drives to a crisp. This is especially true for NVMe. (Remember the heatsink? I was not joking.)
  • Prohibit the disks (HDD) from stopping spindle rotation via APM, and set the sleep timeout for the disk controllers to 7 hours. You can disable APM completely if your drives can do it (-B 255). With the default value, the drives stop after five seconds. Then the OS wants to flush the disk cache, the disks spin up again, and everything starts over. Disks have a limited maximum number of spindle spin-ups. Such a simple default cycle can easily kill your disks in a couple of years. Not all disks suffer from this, but ours are "laptop" ones, with the corresponding default settings, which turn the RAID into something like a mini-MAID.
  • Set readahead on the (rotating) disks to 1 megabyte - two blocks/chunks of RAID 6
  • Disable readahead on the arrays themselves.
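Where the 1-megabyte readahead figure comes from can be checked with trivial shell arithmetic (a sketch restating the article's own numbers):

```shell
# Readahead and stripe math for the 6-drive RAID 6 with 512K chunks.
chunk_kib=512
readahead_kib=$((2 * chunk_kib))        # 1 MiB = two chunks
data_disks=$((6 - 2))                   # 6 drives minus 2 drives of parity
stripe_kib=$((chunk_kib * data_disks))  # full stripe width of the array
echo "readahead: ${readahead_kib} KiB, full stripe: ${stripe_kib} KiB"
```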

Let's edit /etc/fstab:

#cat >/etc/fstab << EOF
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
# file-system mount-point type options dump pass
/dev/mapper/root-root / btrfs defaults,space_cache,noatime,nodiratime,discard,subvol=@ 0 1
UUID=$(blkid -o value -s UUID /dev/md1) /boot btrfs defaults,space_cache,noatime,nodiratime,discard 0 2
/dev/mapper/root-root /home btrfs defaults,space_cache,noatime,nodiratime,discard,subvol=@home 0 2
/dev/mapper/root-swap none swap sw 0 0
EOF

Why so..? We look up the /boot partition by UUID. The array naming could theoretically change.

We look up the remaining partitions by their LVM names in the /dev/mapper/vg-lv notation, because they identify partitions quite uniquely.

We do not use LVM UUIDs, because the UUIDs of LVM volumes and their snapshots can be identical. Mount /dev/mapper/root-root.. twice? Yes. Exactly. A feature of BTRFS. This file system can be mounted several times with different subvols.

Due to this same feature, I recommend never creating LVM snapshots of active BTRFS volumes. You may get a surprise when you reboot.

Let's regenerate the mdadm config:

#/usr/share/mdadm/mkconf | sed 's/#DEVICE/DEVICE/g' >/etc/mdadm/mdadm.conf

Let's adjust the LVM settings:

#cat >>/etc/lvm/lvmlocal.conf << EOF

activation {
thin_pool_autoextend_threshold=90
thin_pool_autoextend_percent=5
}
allocation {
cache_pool_max_chunks=2097152
}
devices {
global_filter=["r|^/dev/.*_corig$|","r|^/dev/.*_cdata$|","r|^/dev/.*_cmeta$|","r|^/dev/.*gpv$|","r|^/dev/images/.*$|","r|^/dev/mapper/images.*$|","r|^/dev/backup/.*$|","r|^/dev/mapper/backup.*$|"]
issue_discards=1
}
EOF

What was that..? We enabled automatic expansion of LVM thin pools upon reaching 90% occupied space, by 5% of the volume.

We increased the maximum number of cache blocks for LVM cache.

We prohibited LVM from searching for LVM volumes on:

  • devices containing LVM cache data (cdata)
  • devices cached using LVM cache, bypassing the cache (_corig). In this case, the cached device itself will still be scanned through the cache.
  • devices containing LVM cache metadata (cmeta)
  • all devices in the VG named images. Here we will have the disk images of virtual machines, and we do not want LVM on the host to activate volumes belonging to the guest OS.
  • all devices in the VG named backup. Here we will have backup copies of virtual machine images.
  • all devices whose name ends in "gpv" (guest physical volume)

We enabled DISCARD support when freeing space on LVM VGs. Be careful. This makes deleting LVs on the SSD quite time-consuming. This especially applies to SSD RAID 6. However, according to the plan, we will use thin provisioning, so this will not hinder us at all.

Let's update the initramfs image:

#update-initramfs -u -k all

Let's install and configure GRUB:

#apt-get install grub-pc
#apt-get purge os-prober
#dpkg-reconfigure grub-pc

Which disks to choose? All of those that are sd*. The system must be able to boot from any working SATA drive or SSD.

Why purge os-prober..? For excessive independence and playful hands.

It does not work correctly if one of the RAIDs is in a degraded state. It tries to search for an OS on partitions that are used in virtual machines running on this hardware.

If you need it, you can keep it, but bear all of the above in mind. I recommend looking online for recipes for restraining its naughty hands.

With this we have completed the initial installation. It is time to reboot into the newly installed OS. Do not forget to remove the bootable Live CD/USB.

#exit
#reboot

Select any of the SATA SSDs as the boot device.

LVM on SATA SSD

Here we have already booted into the new OS, configured the network and apt, opened the terminal emulator, and run:

#sudo bash

Let's continue.

"Initialize" the array of SATA SSDs:

#blkdiscard /dev/md2

If it does not work, try:

#blkdiscard --step 65536 /dev/md2

Let's create an LVM VG on the SATA SSDs:

#pvcreate /dev/md2
#vgcreate data /dev/md2

Why another VG..? We already have a VG named root. Why not add everything into one VG?

If there are several PVs in a VG, then for the VG to activate correctly, all the PVs must be present. The exception is LVM RAID, which we deliberately do not use.

We really want that, if there is a failure (read: data loss) on any of the RAID 6 arrays, the operating system boots normally and gives us the opportunity to solve the problem.

To do this, at the first level of abstraction we will isolate each type of physical "media" into a separate VG.

Scientifically speaking, different RAID arrays belong to different "reliability domains". You should not create an additional common point of failure for them by cramming them into one VG.

The presence of LVM at the "hardware" level will allow us to arbitrarily cut pieces of different RAID arrays and combine them in different ways. For example - running simultaneously bcache + LVM thin, bcache + BTRFS, LVM cache + LVM thin, a complex ZFS configuration with caches, or any other hellish mixture, in order to try and compare it all.

At the "hardware" level we will not use anything other than good old "thick" LVM volumes. The exception to this rule may be the backup partition.

I think by this moment many readers have already begun to suspect something about the nesting doll.

LVM on SATA HDD

#pvcreate /dev/md3
#vgcreate backup /dev/md3

Another new VG..? We really want that, if the disk array that we will use for data backups fails, our operating system continues to work normally, while maintaining access to the non-backup data as usual. Therefore, to avoid VG activation problems, we create a separate VG.

Setting up the LVM cache

Let's create an LV on NVMe RAID 1 to use as a caching device.

#lvcreate -L 70871154688B --name cache root

Why so little...? The fact is that our NVMe SSDs also have an SLC cache. 4 gigabytes of "free" cache and 18 gigabytes of dynamic cache, thanks to the free space occupied in 3-bit MLC mode. Once this cache is exhausted, the NVMe SSDs will not be much faster than our SATA SSDs with cache. Actually, for this reason, it makes no sense for us to make the LVM cache partition much larger than twice the size of the SLC cache of the NVMe drive. For the NVMe drives used, the author considers it reasonable to make 32-64 gigabytes of cache.

The given partition size is required to organize 64 gigabytes of cache, the cache metadata, and a metadata backup.

Additionally, I will note that after a dirty system shutdown, LVM will mark the entire cache as dirty and synchronize it all over again. Moreover, this will be repeated on every use of lvchange on this device until the next reboot of the system. Therefore, I recommend immediately re-creating the cache using the appropriate script.

Let's create an LV on the SATA RAID 6 to use as the cached device.

#lvcreate -L 3298543271936B --name cache data

Why only three terabytes..? So that, if necessary, you can use the SATA SSD RAID 6 for other needs. The size of the cached space can be increased dynamically, on the fly, without stopping the system. To do this, you need to temporarily stop and re-create the cache, but the distinctive advantage of LVM-cache over, for example, bcache is that this can be done on the fly.

Let's create a new VG for caching.

#pvcreate /dev/root/cache
#pvcreate /dev/data/cache
#vgcreate cache /dev/root/cache /dev/data/cache

Let's create an LV on the cached device.

#lvcreate -L 3298539077632B --name cachedata cache /dev/data/cache

Here we immediately took up all the free space on /dev/data/cache, so that all the other necessary partitions are created right away on /dev/root/cache. If you created something in the wrong place, you can move it using pvmove.

Let's create and enable the cache:

#lvcreate -y -L 64G -n cache cache /dev/root/cache
#lvcreate -y -L 1G -n cachemeta cache /dev/root/cache
#lvconvert -y --type cache-pool --cachemode writeback --chunksize 64k --poolmetadata cache/cachemeta cache/cache
#lvconvert -y --type cache --cachepool cache/cache cache/cachedata

Why such a chunksize..? Through practical experiments, the author was able to find out that the best result is achieved if the LVM cache block size coincides with the LVM thin block size. Moreover, the smaller the size, the better the configuration performs on random writes.

64k is the minimum block size allowed for LVM thin.

Careful, writeback..! Yes. This type of cache defers write synchronization to the cached device. This means that if the cache is lost, you may lose data on the cached device. Later, the author will tell you what measures, in addition to NVMe RAID 1, can be taken to compensate for this risk.

This cache type was chosen deliberately, to compensate for the poor random write performance of RAID 6.

Let's check what we got:

#lvs -a -o lv_name,lv_size,devices --units B cache
LV LSize Devices
[cache] 68719476736B cache_cdata(0)
[cache_cdata] 68719476736B /dev/root/cache(0)
[cache_cmeta] 1073741824B /dev/root/cache(16384)
cachedata 3298539077632B cachedata_corig(0)
[cachedata_corig] 3298539077632B /dev/data/cache(0)
[lvol0_pmspare] 1073741824B /dev/root/cache(16640)

Only [cachedata_corig] should be located on /dev/data/cache. If something is wrong, use pvmove.

If necessary, you can disable the cache with one command:

#lvconvert -y --uncache cache/cachedata

This is done online. LVM will simply sync the cache to the disk, remove it, and rename cachedata_corig back to cachedata.
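The cache re-creation script mentioned earlier can be sketched like this. It is a dry run - the commands (all taken from this article's own setup) are only printed, so the sequence can be reviewed before running it as root without the echoes:

```shell
# Dry run: print the steps to tear down and re-create the writeback cache.
recreate_cache() {
  echo "lvconvert -y --uncache cache/cachedata"
  echo "lvcreate -y -L 64G -n cache cache /dev/root/cache"
  echo "lvcreate -y -L 1G -n cachemeta cache /dev/root/cache"
  echo "lvconvert -y --type cache-pool --cachemode writeback --chunksize 64k --poolmetadata cache/cachemeta cache/cache"
  echo "lvconvert -y --type cache --cachepool cache/cache cache/cachedata"
}
steps=$(recreate_cache | wc -l)   # 5 steps: uncache, two lvcreate, two lvconvert
recreate_cache
```

Running this after a dirty shutdown avoids the repeated full-cache resync described above.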

Setting up LVM thin

Let's roughly estimate how much space we need for the LVM thin metadata:

#thin_metadata_size --block-size=64k --pool-size=6terabytes --max-thins=100000 -u bytes
thin_metadata_size - 3385794560 bytes estimated metadata area size for "--block-size=64kibibytes --pool-size=6terabytes --max-thins=100000"

Round up to 4 gigabytes: 4294967296B

Multiply by two and add 4194304B for the LVM PV metadata: 8594128896B

Let's create a separate partition on NVMe RAID 1 in order to place the LVM thin metadata and their backup copy on it:

#lvcreate -L 8594128896B --name images root
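The size used in the lvcreate above can be sanity-checked with shell arithmetic:

```shell
# Reproduce the 8594128896B figure: metadata + its backup copy + PV metadata.
meta_b=4294967296        # thin metadata, rounded up to 4 GiB
pv_meta_b=4194304        # 4 MiB reserve for LVM PV metadata
lv_size_b=$((meta_b * 2 + pv_meta_b))
echo "$lv_size_b"
```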

What for..? Here the question may arise: why place the LVM thin metadata separately if it will be cached on NVMe anyway and will work quickly.

Although speed is important here, it is far from the main reason. The thing is that the cache is a point of failure. Something could happen to it, and if the LVM thin metadata is cached, this will cause everything to be completely lost. Without complete metadata, it will be almost impossible to assemble the thin volumes.

By moving the metadata to a separate non-cached, but fast, volume, we guarantee the safety of the metadata in the event of cache loss or corruption. In this case, all the damage from the cache loss will be localized inside the thin volumes, which simplifies the recovery procedure by orders of magnitude. With high probability, these damages will be repaired using the FS journals.

Moreover, if a snapshot of a thin volume was previously taken, and after that the cache was fully synchronized at least once, then, due to the internal design of LVM thin, the integrity of the snapshot will be guaranteed in the event of cache loss.

Let's create a new VG that will be responsible for thin provisioning:

#pvcreate /dev/root/images
#pvcreate /dev/cache/cachedata
#vgcreate images /dev/root/images /dev/cache/cachedata

Let's create a pool:

#lvcreate -L 274877906944B --poolmetadataspare y --poolmetadatasize 4294967296B --chunksize 64k -Z y -T images/thin-pool
Why -Z y..? Besides what this mode is actually intended for - preventing data from one virtual machine leaking to another virtual machine when space is redistributed - zeroing is additionally used to increase the speed of random writes in blocks smaller than 64k. Any write of less than 64k to a previously unallocated area of a thin volume becomes a 64K edge-aligned write in the cache. This operation is performed entirely through the cache, bypassing the cached device.
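The rounding just described can be illustrated with a line of arithmetic (the figures are the thin-chunk size from this configuration):

```shell
# A sub-64k write into an unallocated thin area becomes one 64K chunk.
chunk_b=65536
write_b=4096
padded_b=$(( (write_b + chunk_b - 1) / chunk_b * chunk_b ))   # round up
echo "a ${write_b}B write becomes a ${padded_b}B chunk"
```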

Let's move the LVs to the appropriate PVs:

#pvmove -n images/thin-pool_tdata /dev/root/images /dev/cache/cachedata
#pvmove -n images/lvol0_pmspare /dev/cache/cachedata /dev/root/images
#pvmove -n images/thin-pool_tmeta /dev/cache/cachedata /dev/root/images

Let's check:

#lvs -a -o lv_name,lv_size,devices --units B images
LV LSize Devices
[lvol0_pmspare] 4294967296B /dev/root/images(0)
thin-pool 274877906944B thin-pool_tdata(0)
[thin-pool_tdata] 274877906944B /dev/cache/cachedata(0)
[thin-pool_tmeta] 4294967296B /dev/root/images(1024)

Let's create a thin volume for tests:

#lvcreate -V 64G --thin-pool thin-pool --name test images

Let's install packages for testing and monitoring:

#apt-get install sysstat fio

This is how we can observe the behavior of our configuration in real time:

#watch 'lvs --rows --reportformat basic --quiet -ocache_dirty_blocks,cache_settings cache/cachedata && (lvdisplay cache/cachedata | grep Cache) && (sar -p -d 2 1 | grep -E "sd|nvme|DEV|md1|md2|md3|md0" | grep -v Average | sort)'

This is how we can test our configuration:

#fio --loops=1 --size=64G --runtime=4 --filename=/dev/images/test --stonewall --ioengine=libaio --direct=1 \
--name=4kQD32read --bs=4k --iodepth=32 --rw=randread \
--name=8kQD32read --bs=8k --iodepth=32 --rw=randread \
--name=16kQD32read --bs=16k --iodepth=32 --rw=randread \
--name=32KQD32read --bs=32k --iodepth=32 --rw=randread \
--name=64KQD32read --bs=64k --iodepth=32 --rw=randread \
--name=128KQD32read --bs=128k --iodepth=32 --rw=randread \
--name=256KQD32read --bs=256k --iodepth=32 --rw=randread \
--name=512KQD32read --bs=512k --iodepth=32 --rw=randread \
--name=4Kread --bs=4k --rw=read \
--name=8Kread --bs=8k --rw=read \
--name=16Kread --bs=16k --rw=read \
--name=32Kread --bs=32k --rw=read \
--name=64Kread --bs=64k --rw=read \
--name=128Kread --bs=128k --rw=read \
--name=256Kread --bs=256k --rw=read \
--name=512Kread --bs=512k --rw=read \
--name=Seqread --bs=1m --rw=read \
--name=Longread --bs=8m --rw=read \
--name=Longwrite --bs=8m --rw=write \
--name=Seqwrite --bs=1m --rw=write \
--name=512Kwrite --bs=512k --rw=write \
--name=256write --bs=256k --rw=write \
--name=128write --bs=128k --rw=write \
--name=64write --bs=64k --rw=write \
--name=32write --bs=32k --rw=write \
--name=16write --bs=16k --rw=write \
--name=8write --bs=8k --rw=write \
--name=4write --bs=4k --rw=write \
--name=512KQD32write --bs=512k --iodepth=32 --rw=randwrite \
--name=256KQD32write --bs=256k --iodepth=32 --rw=randwrite \
--name=128KQD32write --bs=128k --iodepth=32 --rw=randwrite \
--name=64KQD32write --bs=64k --iodepth=32 --rw=randwrite \
--name=32KQD32write --bs=32k --iodepth=32 --rw=randwrite \
--name=16KQD32write --bs=16k --iodepth=32 --rw=randwrite \
--name=8KQD32write --bs=8k --iodepth=32 --rw=randwrite \
--name=4kQD32write --bs=4k --iodepth=32 --rw=randwrite \
| grep -E 'read|write|test' | grep -v ioengine

Careful! Resource! This code will run 36 different tests, each for 4 seconds. Half of the tests are writes. You can write a lot to NVMe in 4 seconds — up to 3 gigabytes per second. Each full run of the write tests can therefore eat up to 216 gigabytes of your SSDs' write endurance.
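A rough back-of-the-envelope check of that warning (assuming all 18 write tests run the full 4 seconds at the 3 GB/s peak):

```shell
# Upper bound on data written by one full run of the fio job above.
WRITE_TESTS=18       # half of the 36 tests are writes
SECONDS_PER_TEST=4   # --runtime=4
GB_PER_SECOND=3      # peak write speed of the NVMe mirror
BURN=$(( WRITE_TESTS * SECONDS_PER_TEST * GB_PER_SECOND ))
echo "up to ${BURN} GB of SSD endurance consumed per run"
```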

Mixed reads and writes? Yes. It makes sense to run the read and write tests separately. Moreover, it makes sense to ensure that all caches are synchronized first, so that a previously made write does not affect the reads.

The results will vary greatly between the first run and subsequent ones as the cache and the thin volume fill up, and also depending on whether the system managed to synchronize the caches filled during the previous run.

Among other things, I recommend measuring the speed on an already full thin volume from which a snapshot has just been taken. The author had the opportunity to observe random writes accelerating sharply right after the first snapshot is created, especially while the cache is not yet full. This happens thanks to copy-on-write semantics, the alignment of cache and thin volume blocks, and the fact that a random write to RAID 6 turns into a random read from RAID 6 followed by a write to the cache. In our configuration, random reads from RAID 6 are up to 6 times (the number of SATA SSDs in the array) faster than writes. Since blocks for CoW are allocated sequentially from the thin pool, the writes, for the most part, also become sequential.

Both of these features can be used to your advantage.

Cache "cohaeret" snapshots

To reduce the risk of data loss in the event of cache damage, the author proposes a snapshot rotation practice that guarantees their integrity in such a case.

First, because the thin volume metadata resides on an uncached device, the metadata will stay consistent, and possible losses will be isolated within data blocks.

The following snapshot rotation cycle guarantees the integrity of the data inside the snapshots in case of cache loss:

  1. For each thin volume named <nomen>, create a snapshot named <nomen>.cached
  2. Set the migration threshold to a reasonably high value: #lvchange --quiet --cachesettings "migration_threshold=16384" cache/cachedata
  3. In a loop, check the number of dirty blocks in the cache: #lvs --rows --reportformat basic --quiet -ocache_dirty_blocks cache/cachedata | awk '{print $2}' until we get a zero. If the zero does not appear for too long, it can be forced by temporarily switching the cache to writethrough mode. However, given the speed characteristics of our SATA and NVMe SSD arrays and their TBW resource, you will either catch the moment quickly without changing the cache mode, or your hardware will eat up its entire resource in a few days. Due to resource limitations, the system simply cannot stay under 100% write load all the time. Our NVMe SSDs would be completely worn out in about 3-4 days under 100% write load. The SATA SSDs would last only twice as long. We therefore assume that most of the load goes to reads, while writes come as relatively short bursts of extremely high activity combined with a low load on average.
  4. As soon as we caught (or forced) a zero, we rename <nomen>.cached to <nomen>.committed. The old <nomen>.committed is deleted.
  5. Optionally, if the cache is 100% full, it can be recreated by a script, thereby clearing it. With a half-empty cache, the system writes much faster.
  6. Set the migration threshold to zero: #lvchange --quiet --cachesettings "migration_threshold=0" cache/cachedata This will temporarily stop the cache from synchronizing to the main media.
  7. We wait until enough changes accumulate in the cache #lvs --rows --reportformat basic --quiet -ocache_dirty_blocks cache/cachedata | awk '{print $2}' or a timer fires.
  8. We repeat from the beginning.

Why the difficulties with the migration threshold..? The thing is that in real practice, a "random" write is actually not completely random. If we wrote something to a 4-kilobyte sector, there is a high probability that within the next couple of minutes a write will be made to the same or one of the neighboring (+- 32K) sectors.

By setting the migration threshold to zero, we postpone write synchronization to the SATA SSDs and aggregate several changes into one 64K block in the cache. This significantly saves the SATA SSDs' resource.

Where's the code..? Unfortunately, the author considers himself insufficiently competent at developing bash scripts, because he is 100% self-taught and practices "google"-driven development, and therefore believes that the terrible code that comes out of his hands should not be used by anyone else.

I think professionals in this field can independently write out all the logic described above if needed, and perhaps even design it beautifully as a systemd service, as the author tried to do.

Such a simple snapshot rotation scheme not only lets us constantly have one snapshot fully synchronized to the SATA SSDs, but also lets us, using the thin_delta utility, find out which blocks were changed after its creation, thus localizing damage on the main volumes and greatly simplifying recovery.
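Since the article deliberately ships no script for this, here is a minimal dry-run sketch of the rotation cycle described above (VG/LV names follow the article; run() only prints the commands instead of executing them, and the dirty-block polling from steps 3 and 7 is left as comments):

```shell
#!/bin/bash
# Dry-run sketch of the snapshot rotation cycle. Nothing is executed:
# run() just prints the command it would invoke.
run() { echo "+ $*"; }

rotate() {
    local VG="$1" LV="$2"
    # 1. Take a fresh snapshot of the thin volume.
    run lvcreate --quiet --snapshot --name "$LV.cached" "$VG/$LV"
    # 2. Raise the migration threshold so the cache flushes to the SATA array.
    run lvchange --quiet --cachesettings "migration_threshold=16384" cache/cachedata
    # 3. ...poll cache_dirty_blocks here until it reads 0...
    # 4. Promote the now fully synchronized snapshot.
    run lvremove -y "$VG/$LV.committed"
    run lvrename "$VG" "$LV.cached" "$LV.committed"
    # 6. Drop the threshold to zero to stop background migration again.
    run lvchange --quiet --cachesettings "migration_threshold=0" cache/cachedata
    # 7. ...wait until enough dirty blocks accumulate, then repeat...
}

rotate images test
```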

Trim/discard in libvirt/KVM

Since the storage will be used for KVM running libvirt, it would be a good idea to teach our VMs not only to take up free space, but also to free up what is no longer needed.

This is done by emulating TRIM/discard support on the virtual disks. To do this, you need to change the controller type to virtio-scsi and edit the xml:

#virsh edit vmname
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='writethrough' io='threads' discard='unmap'/>
<source dev='/dev/images/vmname'/>
<backingStore/>
<target dev='sda' bus='scsi'/>
<alias name='scsi0-0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

<controller type='scsi' index='0' model='virtio-scsi'>
<alias name='scsi0'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</controller>

Such discards from guest OSes are correctly handled by LVM, and blocks are correctly freed both in the cache and in the thin pool. In our case, this happens mostly in a delayed manner, when the next snapshot is deleted.

BTRFS backup

Use ready-made scripts with extreme caution and at your own risk. The author wrote this code himself and only for himself. I am sure that many experienced Linux users have similar tools, and there is no need to copy someone else's.

Let's create a volume on the backup device:

#lvcreate -L 256G --name backup backup

Let's format it as BTRFS:

#mkfs.btrfs /dev/backup/backup

Let's create mount points and mount the root subsections of the file system:

#mkdir /backup
#mkdir /backup/btrfs
#mkdir /backup/btrfs/root
#mkdir /backup/btrfs/back
#ln -s /boot /backup/btrfs
# cat >>/etc/fstab << EOF

/dev/mapper/root-root /backup/btrfs/root btrfs defaults,space_cache,noatime,nodiratime 0 2
/dev/mapper/backup-backup /backup/btrfs/back btrfs defaults,space_cache,noatime,nodiratime 0 2
EOF
#mount -a
#update-initramfs -u
#update-grub

Let's create directories for the backups:

#mkdir /backup/btrfs/back/remote
#mkdir /backup/btrfs/back/remote/root
#mkdir /backup/btrfs/back/remote/boot

Let's create a directory for the backup scripts:

#mkdir /root/btrfs-backup

Let's copy the script:

A lot of scary bash code. Use at your own risk. Don't write angry letters to the author...

#cat >/root/btrfs-backup/btrfs-backup.sh << 'EOF'
#!/bin/bash
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

SCRIPT_FILE="$(realpath $0)"
SCRIPT_DIR="$(dirname $SCRIPT_FILE)"
SCRIPT_NAME="$(basename -s .sh $SCRIPT_FILE)"

LOCK_FILE="/dev/shm/$SCRIPT_NAME.lock"
DATE_PREFIX='%Y-%m-%d'
DATE_FORMAT=$DATE_PREFIX'-%H-%M-%S'
DATE_REGEX='[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]'
BASE_SUFFIX=".@base"
PEND_SUFFIX=".@pend"
SNAP_SUFFIX=".@snap"
MOUNTS="/backup/btrfs/"
BACKUPS="/backup/btrfs/back/remote/"

function terminate ()
{
echo "$1" >&2
exit 1
}

function wait_lock()
{
flock 98
}

function wait_lock_or_terminate()
{
echo "Wating for lock..."
wait_lock || terminate "Failed to get lock. Exiting..."
echo "Got lock..."
}

function suffix()
{
FORMATTED_DATE=$(date +"$DATE_FORMAT")
echo "$SNAP_SUFFIX.$FORMATTED_DATE"
}

function filter()
{
FORMATTED_DATE=$(date --date="$1" +"$DATE_PREFIX")
echo "$SNAP_SUFFIX.$FORMATTED_DATE"
}

function backup()
{
SOURCE_PATH="$MOUNTS$1"
TARGET_PATH="$BACKUPS$1"
SOURCE_BASE_PATH="$MOUNTS$1$BASE_SUFFIX"
TARGET_BASE_PATH="$BACKUPS$1$BASE_SUFFIX"
TARGET_BASE_DIR="$(dirname $TARGET_BASE_PATH)"
SOURCE_PEND_PATH="$MOUNTS$1$PEND_SUFFIX"
TARGET_PEND_PATH="$BACKUPS$1$PEND_SUFFIX"
if [ -d "$SOURCE_BASE_PATH" ] then
echo "$SOURCE_BASE_PATH found"
else
echo "$SOURCE_BASE_PATH File not found creating snapshot of $SOURCE_PATH to $SOURCE_BASE_PATH"
btrfs subvolume snapshot -r $SOURCE_PATH $SOURCE_BASE_PATH
sync
if [ -d "$TARGET_BASE_PATH" ] then
echo "$TARGET_BASE_PATH found out of sync with source... removing..."
btrfs subvolume delete -c $TARGET_BASE_PATH
sync
fi
fi
if [ -d "$TARGET_BASE_PATH" ] then
echo "$TARGET_BASE_PATH found"
else
echo "$TARGET_BASE_PATH not found. Synching to $TARGET_BASE_DIR"
btrfs send $SOURCE_BASE_PATH | btrfs receive $TARGET_BASE_DIR
sync
fi
if [ -d "$SOURCE_PEND_PATH" ] then
echo "$SOURCE_PEND_PATH found removing..."
btrfs subvolume delete -c $SOURCE_PEND_PATH
sync
fi
btrfs subvolume snapshot -r $SOURCE_PATH $SOURCE_PEND_PATH
sync
if [ -d "$TARGET_PEND_PATH" ] then
echo "$TARGET_PEND_PATH found removing..."
btrfs subvolume delete -c $TARGET_PEND_PATH
sync
fi
echo "Sending $SOURCE_PEND_PATH to $TARGET_PEND_PATH"
btrfs send -p $SOURCE_BASE_PATH $SOURCE_PEND_PATH | btrfs receive $TARGET_BASE_DIR
sync
TARGET_DATE_SUFFIX=$(suffix)
btrfs subvolume snapshot -r $TARGET_PEND_PATH "$TARGET_PATH$TARGET_DATE_SUFFIX"
sync
btrfs subvolume delete -c $SOURCE_BASE_PATH
sync
btrfs subvolume delete -c $TARGET_BASE_PATH
sync
mv $SOURCE_PEND_PATH $SOURCE_BASE_PATH
mv $TARGET_PEND_PATH $TARGET_BASE_PATH
sync
}

function list()
{
LIST_TARGET_BASE_PATH="$BACKUPS$1$BASE_SUFFIX"
LIST_TARGET_BASE_DIR="$(dirname $LIST_TARGET_BASE_PATH)"
LIST_TARGET_BASE_NAME="$(basename -s .$BASE_SUFFIX $LIST_TARGET_BASE_PATH)"
find "$LIST_TARGET_BASE_DIR" -maxdepth 1 -mindepth 1 -type d -printf "%fn" | grep "${LIST_TARGET_BASE_NAME/$BASE_SUFFIX/$SNAP_SUFFIX}.$DATE_REGEX"
}

function remove()
{
REMOVE_TARGET_BASE_PATH="$BACKUPS$1$BASE_SUFFIX"
REMOVE_TARGET_BASE_DIR="$(dirname $REMOVE_TARGET_BASE_PATH)"
btrfs subvolume delete -c $REMOVE_TARGET_BASE_DIR/$2
sync
}

function removeall()
{
DATE_OFFSET="$2"
FILTER="$(filter "$DATE_OFFSET")"
while read -r SNAPSHOT ; do
remove "$1" "$SNAPSHOT"
done < <(list "$1" | grep "$FILTER")

}

(
COMMAND="$1"
shift

case "$COMMAND" in
"--help")
echo "Help"
;;
"suffix")
suffix
;;
"filter")
filter "$1"
;;
"backup")
wait_lock_or_terminate
backup "$1"
;;
"list")
list "$1"
;;
"remove")
wait_lock_or_terminate
remove "$1" "$2"
;;
"removeall")
wait_lock_or_terminate
removeall "$1" "$2"
;;
*)
echo "None.."
;;
esac
) 98>$LOCK_FILE

EOF

What does it do? It contains a set of simple commands for creating BTRFS snapshots and copying them to another FS using BTRFS send/receive.

The first run can be relatively long, since all the data will be copied at first. Subsequent runs will be very fast, since only the changes will be copied.

Another script, which we'll put in cron:

Some more bash code

#cat >/root/btrfs-backup/cron-daily.sh << 'EOF'
#!/bin/bash
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

SCRIPT_FILE="$(realpath $0)"
SCRIPT_DIR="$(dirname $SCRIPT_FILE)"
SCRIPT_NAME="$(basename -s .sh $SCRIPT_FILE)"

BACKUP_SCRIPT="$SCRIPT_DIR/btrfs-backup.sh"
RETENTION="-60 day"
$BACKUP_SCRIPT backup root/@
$BACKUP_SCRIPT removeall root/@ "$RETENTION"
$BACKUP_SCRIPT backup root/@home
$BACKUP_SCRIPT removeall root/@home "$RETENTION"
$BACKUP_SCRIPT backup boot/
$BACKUP_SCRIPT removeall boot/ "$RETENTION"
EOF

What does it do? It creates and synchronizes incremental snapshots of the listed BTRFS volumes on the backup FS. Afterwards, it deletes all snapshots created more than 60 days ago. After a run, dated snapshots of the listed volumes will appear in the /backup/btrfs/back/remote subdirectories.

Let's give the code execution rights:

#chmod +x /root/btrfs-backup/cron-daily.sh
#chmod +x /root/btrfs-backup/btrfs-backup.sh

Let's check it and put it in cron:

#/usr/bin/nice -n 19 /usr/bin/ionice -c 3 /root/btrfs-backup/cron-daily.sh 2>&1 | /usr/bin/logger -t btrfs-backup
#cat /var/log/syslog | grep btrfs-backup
#crontab -e
0 2 * * * /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /root/btrfs-backup/cron-daily.sh 2>&1 | /usr/bin/logger -t btrfs-backup

LVM thin backup

Let's create a thin pool on the backup device:

#lvcreate -L 274877906944B --poolmetadataspare y --poolmetadatasize 4294967296B --chunksize 64k -Z y -T backup/thin-pool

Let's install ddrescue, since the scripts will use this tool:

#apt-get install gddrescue

Let's create a directory for the scripts:

#mkdir /root/lvm-thin-backup

Let's copy the scripts:

A lot of bash inside...

#cat >/root/lvm-thin-backup/lvm-thin-backup.sh << 'EOF'
#!/bin/bash
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

SCRIPT_FILE="$(realpath $0)"
SCRIPT_DIR="$(dirname $SCRIPT_FILE)"
SCRIPT_NAME="$(basename -s .sh $SCRIPT_FILE)"

LOCK_FILE="/dev/shm/$SCRIPT_NAME.lock"
DATE_PREFIX='%Y-%m-%d'
DATE_FORMAT=$DATE_PREFIX'-%H-%M-%S'
DATE_REGEX='[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]-[0-9][0-9]'
BASE_SUFFIX=".base"
PEND_SUFFIX=".pend"
SNAP_SUFFIX=".snap"
BACKUPS="backup"
BACKUPS_POOL="thin-pool"

export LVM_SUPPRESS_FD_WARNINGS=1

function terminate ()
{
echo "$1" >&2
exit 1
}

function wait_lock()
{
flock 98
}

function wait_lock_or_terminate()
{
echo "Wating for lock..."
wait_lock || terminate "Failed to get lock. Exiting..."
echo "Got lock..."
}

function suffix()
{
FORMATTED_DATE=$(date +"$DATE_FORMAT")
echo "$SNAP_SUFFIX.$FORMATTED_DATE"
}

function filter()
{
FORMATTED_DATE=$(date --date="$1" +"$DATE_PREFIX")
echo "$SNAP_SUFFIX.$FORMATTED_DATE"
}

function read_thin_id {
lvs --rows --reportformat basic --quiet -othin_id "$1/$2" | awk '{print $2}'
}

function read_pool_lv {
lvs --rows --reportformat basic --quiet -opool_lv "$1/$2" | awk '{print $2}'
}

function read_lv_dm_path {
lvs --rows --reportformat basic --quiet -olv_dm_path "$1/$2" | awk '{print $2}'
}

function read_lv_active {
lvs --rows --reportformat basic --quiet -olv_active "$1/$2" | awk '{print $2}'
}

function read_lv_chunk_size {
lvs --rows --reportformat basic --quiet --units b --nosuffix -ochunk_size "$1/$2" | awk '{print $2}'
}

function read_lv_size {
lvs --rows --reportformat basic --quiet --units b --nosuffix -olv_size "$1/$2" | awk '{print $2}'
}

function activate_volume {
lvchange -ay -Ky "$1/$2"
}

function deactivate_volume {
lvchange -an "$1/$2"
}

function read_thin_metadata_snap {
dmsetup status "$1" | awk '{print $7}'
}

function thindiff()
{
DIFF_VG="$1"
DIFF_SOURCE="$2"
DIFF_TARGET="$3"
DIFF_SOURCE_POOL=$(read_pool_lv $DIFF_VG $DIFF_SOURCE)
DIFF_TARGET_POOL=$(read_pool_lv $DIFF_VG $DIFF_TARGET)

if [ "$DIFF_SOURCE_POOL" == "" ] then
(>&2 echo "Source LV is not thin.")
exit 1
fi

if [ "$DIFF_TARGET_POOL" == "" ] then
(>&2 echo "Target LV is not thin.")
exit 1
fi

if [ "$DIFF_SOURCE_POOL" != "$DIFF_TARGET_POOL" ] then
(>&2 echo "Source and target LVs belong to different thin pools.")
exit 1
fi

DIFF_POOL_PATH=$(read_lv_dm_path $DIFF_VG $DIFF_SOURCE_POOL)
DIFF_SOURCE_ID=$(read_thin_id $DIFF_VG $DIFF_SOURCE)
DIFF_TARGET_ID=$(read_thin_id $DIFF_VG $DIFF_TARGET)
DIFF_POOL_PATH_TPOOL="$DIFF_POOL_PATH-tpool"
DIFF_POOL_PATH_TMETA="$DIFF_POOL_PATH"_tmeta
DIFF_POOL_METADATA_SNAP=$(read_thin_metadata_snap $DIFF_POOL_PATH_TPOOL)

if [ "$DIFF_POOL_METADATA_SNAP" != "-" ] then
(>&2 echo "Thin pool metadata snapshot already exist. Assuming stale one. Will release metadata snapshot in 5 seconds.")
sleep 5
dmsetup message $DIFF_POOL_PATH_TPOOL 0 release_metadata_snap
fi

dmsetup message $DIFF_POOL_PATH_TPOOL 0 reserve_metadata_snap
DIFF_POOL_METADATA_SNAP=$(read_thin_metadata_snap $DIFF_POOL_PATH_TPOOL)

if [ "$DIFF_POOL_METADATA_SNAP" == "-" ] then
(>&2 echo "Failed to create thin pool metadata snapshot.")
exit 1
fi

#We keep output in variable because metadata snapshot need to be released early.
DIFF_DATA=$(thin_delta -m$DIFF_POOL_METADATA_SNAP --snap1 $DIFF_SOURCE_ID --snap2 $DIFF_TARGET_ID $DIFF_POOL_PATH_TMETA)

dmsetup message $DIFF_POOL_PATH_TPOOL 0 release_metadata_snap

echo $"$DIFF_DATA" | grep -E 'different|left_only|right_only' | sed 's/</"/g' | sed 's/ /"/g' | awk -F'"' '{print $6 "t" $8 "t" $11}' | sed 's/different/copy/g' | sed 's/left_only/copy/g' | sed 's/right_only/discard/g'

}

function thinsync()
{
SYNC_VG="$1"
SYNC_PEND="$2"
SYNC_BASE="$3"
SYNC_TARGET="$4"
SYNC_PEND_POOL=$(read_pool_lv $SYNC_VG $SYNC_PEND)
SYNC_BLOCK_SIZE=$(read_lv_chunk_size $SYNC_VG $SYNC_PEND_POOL)
SYNC_PEND_PATH=$(read_lv_dm_path $SYNC_VG $SYNC_PEND)

activate_volume $SYNC_VG $SYNC_PEND

while read -r SYNC_ACTION SYNC_OFFSET SYNC_LENGTH ; do
SYNC_OFFSET_BYTES=$((SYNC_OFFSET * SYNC_BLOCK_SIZE))
SYNC_LENGTH_BYTES=$((SYNC_LENGTH * SYNC_BLOCK_SIZE))
if [ "$SYNC_ACTION" == "copy" ] then
ddrescue --quiet --force --input-position=$SYNC_OFFSET_BYTES --output-position=$SYNC_OFFSET_BYTES --size=$SYNC_LENGTH_BYTES "$SYNC_PEND_PATH" "$SYNC_TARGET"
fi

if [ "$SYNC_ACTION" == "discard" ] then
blkdiscard -o $SYNC_OFFSET_BYTES -l $SYNC_LENGTH_BYTES "$SYNC_TARGET"
fi
done < <(thindiff "$SYNC_VG" "$SYNC_PEND" "$SYNC_BASE")
}

function discard_volume()
{
DISCARD_VG="$1"
DISCARD_LV="$2"
DISCARD_LV_PATH=$(read_lv_dm_path "$DISCARD_VG" "$DISCARD_LV")
if [ "$DISCARD_LV_PATH" != "" ] then
echo "$DISCARD_LV_PATH found"
else
echo "$DISCARD_LV not found in $DISCARD_VG"
exit 1
fi
DISCARD_LV_POOL=$(read_pool_lv $DISCARD_VG $DISCARD_LV)
DISCARD_LV_SIZE=$(read_lv_size "$DISCARD_VG" "$DISCARD_LV")
lvremove -y --quiet "$DISCARD_LV_PATH" || exit 1
lvcreate --thin-pool "$DISCARD_LV_POOL" -V "$DISCARD_LV_SIZE"B --name "$DISCARD_LV" "$DISCARD_VG" || exit 1
}

function backup()
{
SOURCE_VG="$1"
SOURCE_LV="$2"
TARGET_VG="$BACKUPS"
TARGET_LV="$SOURCE_VG-$SOURCE_LV"
SOURCE_BASE_LV="$SOURCE_LV$BASE_SUFFIX"
TARGET_BASE_LV="$TARGET_LV$BASE_SUFFIX"
SOURCE_PEND_LV="$SOURCE_LV$PEND_SUFFIX"
TARGET_PEND_LV="$TARGET_LV$PEND_SUFFIX"
SOURCE_BASE_LV_PATH=$(read_lv_dm_path "$SOURCE_VG" "$SOURCE_BASE_LV")
SOURCE_PEND_LV_PATH=$(read_lv_dm_path "$SOURCE_VG" "$SOURCE_PEND_LV")
TARGET_BASE_LV_PATH=$(read_lv_dm_path "$TARGET_VG" "$TARGET_BASE_LV")
TARGET_PEND_LV_PATH=$(read_lv_dm_path "$TARGET_VG" "$TARGET_PEND_LV")

if [ "$SOURCE_BASE_LV_PATH" != "" ] then
echo "$SOURCE_BASE_LV_PATH found"
else
echo "Source base not found creating snapshot of $SOURCE_VG/$SOURCE_LV to $SOURCE_VG/$SOURCE_BASE_LV"
lvcreate --quiet --snapshot --name "$SOURCE_BASE_LV" "$SOURCE_VG/$SOURCE_LV" || exit 1
SOURCE_BASE_LV_PATH=$(read_lv_dm_path "$SOURCE_VG" "$SOURCE_BASE_LV")
activate_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
echo "Discarding $SOURCE_BASE_LV_PATH as we need to bootstrap."
SOURCE_BASE_POOL=$(read_pool_lv $SOURCE_VG $SOURCE_BASE_LV)
SOURCE_BASE_CHUNK_SIZE=$(read_lv_chunk_size $SOURCE_VG $SOURCE_BASE_POOL)
discard_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
sync
if [ "$TARGET_BASE_LV_PATH" != "" ] then
echo "$TARGET_BASE_LV_PATH found out of sync with source... removing..."
lvremove -y --quiet $TARGET_BASE_LV_PATH || exit 1
TARGET_BASE_LV_PATH=$(read_lv_dm_path "$TARGET_VG" "$TARGET_BASE_LV")
sync
fi
fi
SOURCE_BASE_SIZE=$(read_lv_size "$SOURCE_VG" "$SOURCE_BASE_LV")
if [ "$TARGET_BASE_LV_PATH" != "" ] then
echo "$TARGET_BASE_LV_PATH found"
else
echo "$TARGET_VG/$TARGET_LV not found. Creating empty volume."
lvcreate --thin-pool "$BACKUPS_POOL" -V "$SOURCE_BASE_SIZE"B --name "$TARGET_BASE_LV" "$TARGET_VG" || exit 1
echo "Have to rebootstrap. Discarding source at $SOURCE_BASE_LV_PATH"
activate_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
SOURCE_BASE_POOL=$(read_pool_lv $SOURCE_VG $SOURCE_BASE_LV)
SOURCE_BASE_CHUNK_SIZE=$(read_lv_chunk_size $SOURCE_VG $SOURCE_BASE_POOL)
discard_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
TARGET_BASE_POOL=$(read_pool_lv $TARGET_VG $TARGET_BASE_LV)
TARGET_BASE_CHUNK_SIZE=$(read_lv_chunk_size $TARGET_VG $TARGET_BASE_POOL)
TARGET_BASE_LV_PATH=$(read_lv_dm_path "$TARGET_VG" "$TARGET_BASE_LV")
echo "Discarding target at $TARGET_BASE_LV_PATH"
discard_volume "$TARGET_VG" "$TARGET_BASE_LV"
sync
fi
if [ "$SOURCE_PEND_LV_PATH" != "" ] then
echo "$SOURCE_PEND_LV_PATH found removing..."
lvremove -y --quiet "$SOURCE_PEND_LV_PATH" || exit 1
sync
fi
lvcreate --quiet --snapshot --name "$SOURCE_PEND_LV" "$SOURCE_VG/$SOURCE_LV" || exit 1
SOURCE_PEND_LV_PATH=$(read_lv_dm_path "$SOURCE_VG" "$SOURCE_PEND_LV")
sync
if [ "$TARGET_PEND_LV_PATH" != "" ] then
echo "$TARGET_PEND_LV_PATH found removing..."
lvremove -y --quiet $TARGET_PEND_LV_PATH
sync
fi
lvcreate --quiet --snapshot --name "$TARGET_PEND_LV" "$TARGET_VG/$TARGET_BASE_LV" || exit 1
TARGET_PEND_LV_PATH=$(read_lv_dm_path "$TARGET_VG" "$TARGET_PEND_LV")
SOURCE_PEND_LV_SIZE=$(read_lv_size "$SOURCE_VG" "$SOURCE_PEND_LV")
lvresize -L "$SOURCE_PEND_LV_SIZE"B "$TARGET_PEND_LV_PATH"
activate_volume "$TARGET_VG" "$TARGET_PEND_LV"
echo "Synching $SOURCE_PEND_LV_PATH to $TARGET_PEND_LV_PATH"
thinsync "$SOURCE_VG" "$SOURCE_PEND_LV" "$SOURCE_BASE_LV" "$TARGET_PEND_LV_PATH" || exit 1
sync

TARGET_DATE_SUFFIX=$(suffix)
lvcreate --quiet --snapshot --name "$TARGET_LV$TARGET_DATE_SUFFIX" "$TARGET_VG/$TARGET_PEND_LV" || exit 1
sync
lvremove --quiet -y "$SOURCE_BASE_LV_PATH" || exit 1
sync
lvremove --quiet -y "$TARGET_BASE_LV_PATH" || exit 1
sync
lvrename -y "$SOURCE_VG/$SOURCE_PEND_LV" "$SOURCE_BASE_LV" || exit 1
lvrename -y "$TARGET_VG/$TARGET_PEND_LV" "$TARGET_BASE_LV" || exit 1
sync
deactivate_volume "$TARGET_VG" "$TARGET_BASE_LV"
deactivate_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
}

function verify()
{
SOURCE_VG="$1"
SOURCE_LV="$2"
TARGET_VG="$BACKUPS"
TARGET_LV="$SOURCE_VG-$SOURCE_LV"
SOURCE_BASE_LV="$SOURCE_LV$BASE_SUFFIX"
TARGET_BASE_LV="$TARGET_LV$BASE_SUFFIX"
TARGET_BASE_LV_PATH=$(read_lv_dm_path "$TARGET_VG" "$TARGET_BASE_LV")
SOURCE_BASE_LV_PATH=$(read_lv_dm_path "$SOURCE_VG" "$SOURCE_BASE_LV")

if [ "$SOURCE_BASE_LV_PATH" != "" ] then
echo "$SOURCE_BASE_LV_PATH found"
else
echo "$SOURCE_BASE_LV_PATH not found"
exit 1
fi
if [ "$TARGET_BASE_LV_PATH" != "" ] then
echo "$TARGET_BASE_LV_PATH found"
else
echo "$TARGET_BASE_LV_PATH not found"
exit 1
fi
activate_volume "$TARGET_VG" "$TARGET_BASE_LV"
activate_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
echo Comparing "$SOURCE_BASE_LV_PATH" with "$TARGET_BASE_LV_PATH"
cmp "$SOURCE_BASE_LV_PATH" "$TARGET_BASE_LV_PATH"
echo Done...
deactivate_volume "$TARGET_VG" "$TARGET_BASE_LV"
deactivate_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
}

function resync()
{
SOURCE_VG="$1"
SOURCE_LV="$2"
TARGET_VG="$BACKUPS"
TARGET_LV="$SOURCE_VG-$SOURCE_LV"
SOURCE_BASE_LV="$SOURCE_LV$BASE_SUFFIX"
TARGET_BASE_LV="$TARGET_LV$BASE_SUFFIX"
TARGET_BASE_LV_PATH=$(read_lv_dm_path "$TARGET_VG" "$TARGET_BASE_LV")
SOURCE_BASE_LV_PATH=$(read_lv_dm_path "$SOURCE_VG" "$SOURCE_BASE_LV")

if [ "$SOURCE_BASE_LV_PATH" != "" ] then
echo "$SOURCE_BASE_LV_PATH found"
else
echo "$SOURCE_BASE_LV_PATH not found"
exit 1
fi
if [ "$TARGET_BASE_LV_PATH" != "" ] then
echo "$TARGET_BASE_LV_PATH found"
else
echo "$TARGET_BASE_LV_PATH not found"
exit 1
fi
activate_volume "$TARGET_VG" "$TARGET_BASE_LV"
activate_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
SOURCE_BASE_POOL=$(read_pool_lv $SOURCE_VG $SOURCE_BASE_LV)
SYNC_BLOCK_SIZE=$(read_lv_chunk_size $SOURCE_VG $SOURCE_BASE_POOL)

echo Synchronizing "$SOURCE_BASE_LV_PATH" to "$TARGET_BASE_LV_PATH"

CMP_OFFSET=0
while [[ "$CMP_OFFSET" != "" ]] ; do
CMP_MISMATCH=$(cmp -i "$CMP_OFFSET" "$SOURCE_BASE_LV_PATH" "$TARGET_BASE_LV_PATH" | grep differ | awk '{print $5}' | sed 's/,//g' )
if [[ "$CMP_MISMATCH" != "" ]] ; then
CMP_OFFSET=$(( CMP_MISMATCH + CMP_OFFSET ))
SYNC_OFFSET_BYTES=$(( ( CMP_OFFSET / SYNC_BLOCK_SIZE ) * SYNC_BLOCK_SIZE ))
SYNC_LENGTH_BYTES=$(( SYNC_BLOCK_SIZE ))
echo "Synching $SYNC_LENGTH_BYTES bytes at $SYNC_OFFSET_BYTES from $SOURCE_BASE_LV_PATH to $TARGET_BASE_LV_PATH"
ddrescue --quiet --force --input-position=$SYNC_OFFSET_BYTES --output-position=$SYNC_OFFSET_BYTES --size=$SYNC_LENGTH_BYTES "$SOURCE_BASE_LV_PATH" "$TARGET_BASE_LV_PATH"
else
CMP_OFFSET=""
fi
done
echo Done...
deactivate_volume "$TARGET_VG" "$TARGET_BASE_LV"
deactivate_volume "$SOURCE_VG" "$SOURCE_BASE_LV"
}

function list()
{
LIST_SOURCE_VG="$1"
LIST_SOURCE_LV="$2"
LIST_TARGET_VG="$BACKUPS"
LIST_TARGET_LV="$LIST_SOURCE_VG-$LIST_SOURCE_LV"
LIST_TARGET_BASE_LV="$LIST_TARGET_LV$SNAP_SUFFIX"
lvs -olv_name | grep "$LIST_TARGET_BASE_LV.$DATE_REGEX"
}

function remove()
{
REMOVE_TARGET_VG="$BACKUPS"
REMOVE_TARGET_LV="$1"
lvremove -y "$REMOVE_TARGET_VG/$REMOVE_TARGET_LV"
sync
}

function removeall()
{
DATE_OFFSET="$3"
FILTER="$(filter "$DATE_OFFSET")"
while read -r SNAPSHOT ; do
remove "$SNAPSHOT"
done < <(list "$1" "$2" | grep "$FILTER")

}

(
COMMAND="$1"
shift

case "$COMMAND" in
"--help")
echo "Help"
;;
"suffix")
suffix
;;
"filter")
filter "$1"
;;
"backup")
wait_lock_or_terminate
backup "$1" "$2"
;;
"list")
list "$1" "$2"
;;
"thindiff")
thindiff "$1" "$2" "$3"
;;
"thinsync")
thinsync "$1" "$2" "$3" "$4"
;;
"verify")
wait_lock_or_terminate
verify "$1" "$2"
;;
"resync")
wait_lock_or_terminate
resync "$1" "$2"
;;
"remove")
wait_lock_or_terminate
remove "$1"
;;
"removeall")
wait_lock_or_terminate
removeall "$1" "$2" "$3"
;;
*)
echo "None.."
;;
esac
) 98>$LOCK_FILE

EOF

What does it do? It contains a set of commands for managing thin snapshots and for synchronizing the difference between two thin snapshots, obtained via thin_delta, to another block device using ddrescue and blkdiscard.
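The trickiest step above is turning thin_delta's XML diff into (action, offset, length) triples. The exact sed/awk pipeline in the script depends on thin_delta's output layout, so here is a simplified, self-contained illustration of the same idea on an invented sample (the field layout of real thin_delta output may differ):

```shell
# Map thin_delta-style diff elements to "action<TAB>begin<TAB>length",
# treating different/left_only as blocks to copy and right_only as
# blocks to discard, just as the script does.
parse_delta() {
    awk -F'"' '
        /<different|<left_only/ { print "copy\t" $2 "\t" $4 }
        /<right_only/           { print "discard\t" $2 "\t" $4 }
    '
}

# Invented sample input for illustration:
printf '%s\n' \
  '  <same begin="0" length="128"/>' \
  '  <different begin="128" length="64"/>' \
  '  <left_only begin="256" length="32"/>' \
  '  <right_only begin="512" length="16"/>' | parse_delta
# -> copy 128 64 / copy 256 32 / discard 512 16 (tab-separated)
```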

Another script, which we'll put in cron:

A bit more bash

#cat >/root/lvm-thin-backup/cron-daily.sh << 'EOF'
#!/bin/bash
PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

SCRIPT_FILE="$(realpath $0)"
SCRIPT_DIR="$(dirname $SCRIPT_FILE)"
SCRIPT_NAME="$(basename -s .sh $SCRIPT_FILE)"

BACKUP_SCRIPT="$SCRIPT_DIR/lvm-thin-backup.sh"
RETENTION="-60 days"

$BACKUP_SCRIPT backup images linux-dev
$BACKUP_SCRIPT backup images win8
$BACKUP_SCRIPT backup images win8-data
#etc

$BACKUP_SCRIPT removeall images linux-dev "$RETENTION"
$BACKUP_SCRIPT removeall images win8 "$RETENTION"
$BACKUP_SCRIPT removeall images win8-data "$RETENTION"
#etc

EOF

What does it do? It uses the previous script to create and synchronize backups of the listed thin volumes. The script leaves behind inactive snapshots of the listed volumes, which are needed to track changes since the last synchronization.

This script must be edited to specify the list of thin volumes for which backup copies should be made. The names given are for illustration only. If you wish, you can write a script that synchronizes all volumes.
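As a starting point for such a sync-everything script, the volume selection could look like this sketch (a hypothetical helper, not part of the article's scripts): it reads `lvs --noheadings -olv_name,pool_lv`-style output and keeps only thin volumes that are not the scripts' .base/.pend/.snap working volumes.

```shell
# Keep LVs that belong to a thin pool (non-empty second column) and are
# not one of the backup scripts' .base/.pend/.snap helper volumes.
select_thin_volumes() {
    awk '$2 != "" && $1 !~ /\.(base|pend|snap)/ { print $1 }'
}

# Example on canned lvs-style output:
printf '%s\n' \
  '  linux-dev       thin-pool' \
  '  linux-dev.base  thin-pool' \
  '  backup          ' \
  '  win8            thin-pool' | select_thin_volumes
# -> linux-dev
#    win8
```

One could then pipe real `lvs --noheadings -olv_name,pool_lv images` output through this filter and feed each name to the backup command.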

Let's give rights:

#chmod +x /root/lvm-thin-backup/cron-daily.sh
#chmod +x /root/lvm-thin-backup/lvm-thin-backup.sh

Let's check it and put it in cron:

#/usr/bin/nice -n 19 /usr/bin/ionice -c 3 /root/lvm-thin-backup/cron-daily.sh 2>&1 | /usr/bin/logger -t lvm-thin-backup
#cat /var/log/syslog | grep lvm-thin-backup
#crontab -e
0 3 * * * /usr/bin/nice -n 19 /usr/bin/ionice -c 3 /root/lvm-thin-backup/cron-daily.sh 2>&1 | /usr/bin/logger -t lvm-thin-backup

The first run will be relatively long, since the thin volumes will be fully synchronized by copying all the used space. Thanks to the LVM thin metadata, we know which blocks are actually in use, so only the used blocks of the thin volumes will be copied.

Subsequent runs will copy the data incrementally, thanks to change tracking via the LVM thin metadata.
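The unit conversion behind this is worth spelling out: thin_delta reports offsets and lengths in pool chunks, while ddrescue and blkdiscard expect bytes, so thinsync multiplies by the pool chunk size (64 KiB in this setup; the numbers below are illustrative):

```shell
CHUNK=65536           # pool chunk size in bytes (--chunksize 64k above)
OFFSET_CHUNKS=128     # illustrative values as thin_delta would report them
LENGTH_CHUNKS=64
OFFSET_BYTES=$(( OFFSET_CHUNKS * CHUNK ))   # 8388608
LENGTH_BYTES=$(( LENGTH_CHUNKS * CHUNK ))   # 4194304
echo "ddrescue --input-position=$OFFSET_BYTES --output-position=$OFFSET_BYTES --size=$LENGTH_BYTES SRC DST"
```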

Let's see what we got:

#time /root/btrfs-backup/cron-daily.sh
real 0m2,967s
user 0m0,225s
sys 0m0,353s

#time /root/lvm-thin-backup/cron-daily.sh
real 1m2,710s
user 0m12,721s
sys 0m6,671s

#ls -al /backup/btrfs/back/remote/*
/backup/btrfs/back/remote/boot:
total 0
drwxr-xr-x 1 root root 1260 мар 26 09:11 .
drwxr-xr-x 1 root root 16 мар 6 09:30 ..
drwxr-xr-x 1 root root 322 мар 26 02:00 .@base
drwxr-xr-x 1 root root 516 мар 6 09:39 [email protected]
drwxr-xr-x 1 root root 516 мар 6 09:39 [email protected]
...
/backup/btrfs/back/remote/root:
total 0
drwxr-xr-x 1 root root 2820 мар 26 09:11 .
drwxr-xr-x 1 root root 16 мар 6 09:30 ..
drwxr-xr-x 1 root root 240 мар 26 09:11 @.@base
drwxr-xr-x 1 root root 22 мар 26 09:11 @home.@base
drwxr-xr-x 1 root root 22 мар 6 09:39 @[email protected]
drwxr-xr-x 1 root root 22 мар 6 09:39 @[email protected]
...
drwxr-xr-x 1 root root 240 мар 6 09:39 @[email protected]
drwxr-xr-x 1 root root 240 мар 6 09:39 @[email protected]
...

#lvs -olv_name,lv_size images && lvs -olv_name,lv_size backup
LV LSize
linux-dev 128,00g
linux-dev.base 128,00g
thin-pool 1,38t
win8 128,00g
win8-data 2,00t
win8-data.base 2,00t
win8.base 128,00g
LV LSize
backup 256,00g
images-linux-dev.base 128,00g
images-linux-dev.snap.2020-03-08-10-09-11 128,00g
images-linux-dev.snap.2020-03-08-10-09-25 128,00g
...
images-win8-data.base 2,00t
images-win8-data.snap.2020-03-16-14-11-55 2,00t
images-win8-data.snap.2020-03-16-14-19-50 2,00t
...
images-win8.base 128,00g
images-win8.snap.2020-03-17-04-51-46 128,00g
images-win8.snap.2020-03-18-03-02-49 128,00g
...
thin-pool <2,09t

So what does this have to do with matryoshka dolls?

Most likely you have already guessed: LVM LV logical volumes can serve as LVM PV physical volumes for other VGs. LVM can be nested recursively, like matryoshka dolls. This gives LVM extreme flexibility.

PS

In the next article, we will try to use several similar mobile storage/KVM systems as the basis for building a geo-distributed storage/VM cluster with redundancy across several continents, using home desktops, home Internet, and P2P networks.

Source: www.habr.com
