Backup and Restore LVM in Linux
Recovering LVM (Logical Volume Manager) volumes can be a bit complex and depends on the nature of the issue you're facing.
For our lab environment, we'll start from scratch with a spare 10 GB drive.
The drive here is sdb. Let's give it a little makeover and split it into three partitions!
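The partitioning itself can be done with parted; here's a minimal sketch, assuming the disk is /dev/sdb and a layout of roughly 3 GB, 3 GB, and 4 GB (matching the lsblk output shown later):

```shell
# WARNING: destructive -- double-check the device name before running.
# Create a GPT label and three partitions (~3 GB, ~3 GB, ~4 GB) on /dev/sdb,
# then flag each one for LVM use.
parted -s /dev/sdb mklabel gpt
parted -s /dev/sdb mkpart primary 1MiB 3GiB
parted -s /dev/sdb mkpart primary 3GiB 6GiB
parted -s /dev/sdb mkpart primary 6GiB 100%
parted -s /dev/sdb set 1 lvm on
parted -s /dev/sdb set 2 lvm on
parted -s /dev/sdb set 3 lvm on
```

fdisk works just as well if you prefer an interactive tool.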
Alright, let's keep going by creating a shiny new volume group and logical volume on sdb1.
/dev/sdb1 is now enlisted as the physical volume behind our test-lv logical volume.
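The creation step can be sketched as follows, assuming the names test-vg and test-lv (which match the device-mapper paths used throughout) and that test-lv should take all of sdb1:

```shell
# Label sdb1 as an LVM physical volume, build a volume group on it,
# and carve the whole group into a single logical volume.
pvcreate /dev/sdb1
vgcreate test-vg /dev/sdb1
lvcreate -n test-lv -l 100%FREE test-vg
```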
Then create an ext4 filesystem on the newly created test-lv:
# mkfs.ext4 /dev/mapper/test--vg-test--lv
Before we dive into the next phase, let's play it safe and create a backup of the LVM metadata.
The default location for backup files
Quick heads-up: metadata backups live in the /etc/lvm/backup/ folder, while archives of previous metadata versions live in /etc/lvm/archive/. Plus, whenever you fire off an LVM command that changes metadata, LVM automatically archives the old metadata and writes a fresh backup, just in case things get a little wild.
# vgcfgbackup -f /etc/lvm/backup/testvgbckp test-vg
# ll /etc/lvm/backup/
total 12
-rw-------. 1 root root 1709 Jun 11 12:32 cs
-rw-------. 1 root root 1314 Jun 11 14:09 test-vg
-rw-------. 1 root root 1295 Jun 11 14:09 testvgbckp
A quick sum-up of the /etc/lvm/backup and /etc/lvm/archive folders. Here are a few points to understand the behavior:
/etc/lvm/backup holds the most recent metadata backup for each volume group, one file per VG.
/etc/lvm/archive holds timestamped copies of earlier metadata versions, archived automatically before each change.
How long archives are kept is governed by the retain_min and retain_days settings in /etc/lvm/lvm.conf.
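Both behaviors are driven by the backup section of /etc/lvm/lvm.conf; the excerpt below shows the upstream defaults (your distribution's file may differ):

```
backup {
    # Write a backup of the metadata after every change.
    backup = 1
    backup_dir = "/etc/lvm/backup"
    # Archive the previous metadata before every change.
    archive = 1
    archive_dir = "/etc/lvm/archive"
    # Keep at least 10 archives, for at least 30 days.
    retain_min = 10
    retain_days = 30
}
```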
After backing up the LVM metadata, let's make some extra changes:
# vgextend test-vg /dev/sdb2
# vgs
VG #PV #LV #SN Attr VSize VFree
cs 1 2 0 wz--n- <29.00g 0
test-vg 2 1 0 wz--n- 5.99g <3.00g
# lvcreate -n test-lv2 -l +50%FREE test-vg
# mkfs.ext4 /dev/mapper/test--vg-test--lv2
# mkdir /test-lv2
# mount /dev/mapper/test--vg-test--lv2 /test-lv2/
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 30G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 29G 0 part
├─cs-root 253:0 0 26G 0 lvm /
└─cs-swap 253:1 0 3G 0 lvm [SWAP]
sdb 8:16 0 10G 0 disk
├─sdb1 8:17 0 3G 0 part
│ └─test--vg-test--lv 253:2 0 3G 0 lvm
├─sdb2 8:18 0 3G 0 part
│ └─test--vg-test--lv2 253:3 0 1.5G 0 lvm /test-lv2
└─sdb3 8:19 0 4G 0 part
Create one more backup of the LVM metadata, so we can test switching between states afterward:
# vgcfgbackup -f /etc/lvm/backup/testlv2 test-vg
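Before restoring anything, it helps to see which metadata states are available. vgcfgrestore can list every backup and archive file it knows about for a volume group:

```shell
# List all metadata backups/archives recorded for test-vg,
# including the automatic ones in /etc/lvm/archive.
vgcfgrestore --list test-vg
```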
Now let's revert the volume group to its state before the changes by restoring it from the backup we created earlier, /etc/lvm/backup/testvgbckp.
Before restoring the metadata, unmount any mounted logical volumes and deactivate the volume group:
# umount /test-lv2
# vgchange -an test-vg
0 logical volume(s) in volume group "test-vg" now active
Now restore the metadata from the backup file:
# vgcfgrestore -f /etc/lvm/backup/testvgbckp test-vg
Then run vgck --updatemetadata to ensure the volume group metadata is up-to-date and consistent:
# vgck --updatemetadata test-vg
To redo our changes and return to the state we captured in /etc/lvm/backup/testlv2, restore that backup in the same way:
# vgcfgrestore -f /etc/lvm/backup/testlv2 test-vg
To reactivate the volume group after restoring its metadata, use the vgchange command with the -ay option:
# vgchange -ay test-vg
2 logical volume(s) in volume group "test-vg" now active
After all changes have taken effect, remount the restored logical volumes to their respective directories (e.g. mount /dev/mapper/test--vg-test--lv2 /test-lv2/) and continue from where you left off.
By following these steps, you ensure you have a reliable backup of your volume group metadata before making any changes. Unmounting filesystems and deactivating the volume group before restoring the metadata ensures there are no conflicts with active logical volumes. This approach provides a safe way to revert to the previous state if necessary, preventing potential data loss or configuration issues.