This guide outlines the steps to recover software RAID arrays and LVM volumes after a system reinstall or similar scenario where the storage configuration needs to be reactivated.
1. Prerequisites
First, install the necessary packages:
sudo apt-get update
sudo apt-get install mdadm lvm2
2. Identifying RAID Arrays and LVM Volumes
Check for RAID Arrays:
# Check for existing RAID arrays
sudo mdadm --detail --scan
cat /proc/mdstat
Identify LVM Configuration:
# Examine partition types to identify RAID and LVM members
sudo blkid
# Check for LVM physical volumes
sudo pvs
# Check volume groups
sudo vgs
# Check logical volumes
sudo lvs
# Get detailed information about physical volumes
sudo pvdisplay --maps
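For a single consolidated view of the whole stack (disks, RAID member partitions, assembled md devices, and logical volumes), lsblk from util-linux is handy; the column list below is just a suggestion:

```shell
# Print the block-device tree: RAID member partitions nest under their
# disks, and logical volumes nest under the assembled md device
lsblk -o NAME,TYPE,FSTYPE,SIZE,MOUNTPOINT
```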
3. Activating RAID Arrays
# Assemble all detected RAID arrays
sudo mdadm --assemble --scan
# Verify successful assembly
cat /proc/mdstat
# Get detailed information about a specific RAID array
# Replace md0 with the actual device name if different
sudo mdadm --detail /dev/md0
If normal assembly fails, you can try forcing it (use with care: --force can assemble an array from stale or mismatched members):
# Only use this if normal assembly fails
sudo mdadm --assemble --force /dev/md0 /dev/nvme1n1p3 /dev/nvme3n1p3
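Once the array assembles correctly, you may want to persist its definition so it is assembled automatically at boot. On Debian/Ubuntu the config file is /etc/mdadm/mdadm.conf (some other distros use /etc/mdadm.conf); this sketch assumes the Debian layout:

```shell
# Append the scanned array definition(s) to mdadm's config so the array
# assembles automatically at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Rebuild the initramfs so the early boot environment sees the new config
sudo update-initramfs -u
```

Review /etc/mdadm/mdadm.conf afterwards to make sure no duplicate ARRAY lines were appended.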
4. Activating LVM Volumes
# Scan for volume groups
sudo vgscan
# Activate all volume groups
sudo vgchange -ay
# Verify all logical volumes are available
sudo lvs -a
If specific volumes need to be activated:
sudo lvchange -ay vg_name/lv_name
# For example:
sudo lvchange -ay vg_home/lv_home
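To confirm that every logical volume actually activated, you can inspect the fifth character of the lv_attr field (an 'a' there means active). This one-liner prints any LV that is not active, so empty output means all is well:

```shell
# The 5th lv_attr character is the activation state ('a' = active);
# print any vg/lv pair whose state is not 'a'
sudo lvs --noheadings -o vg_name,lv_name,lv_attr | \
    awk 'substr($3,5,1) != "a" {print $1 "/" $2}'
```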
5. Mounting the Volumes
# Create mount points if they don't exist
sudo mkdir -p /mnt/home /mnt/home_lfs /mnt/storage /mnt/trash
# Mount the logical volumes
sudo mount /dev/vg_home/lv_home /mnt/home
sudo mount /dev/vg_home_lfs/home_lfs /mnt/home_lfs
sudo mount /dev/vg_storage/lv_storage /mnt/storage
sudo mount /dev/vg_trash/lv_trash /mnt/trash
# Verify successful mounting
df -h
6. Making Mounts Permanent (Optional)
To make these mounts permanent across reboots, add them to the /etc/fstab file:
# First, get the UUID of each logical volume
sudo blkid | grep /dev/mapper
# Then add entries to /etc/fstab
sudo vim /etc/fstab
Add lines similar to the following (the /dev/mapper paths are stable for LVM, so they can be used directly; alternatively, use the UUID=... form with the UUIDs from blkid):
/dev/mapper/vg_home-lv_home /mnt/home ext4 defaults 0 2
/dev/mapper/vg_home_lfs-home_lfs /mnt/home_lfs ext4 defaults 0 2
/dev/mapper/vg_storage-lv_storage /mnt/storage ext4 defaults 0 2
/dev/mapper/vg_trash-lv_trash /mnt/trash ext4 defaults 0 2
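The four entries can also be generated rather than typed, and the edited fstab can be tested without a reboot (findmnt --verify requires util-linux 2.31 or newer):

```shell
# Print the fstab lines for the mapper names used above; review the
# output, then append it to /etc/fstab
for lv in vg_home-lv_home:/mnt/home vg_home_lfs-home_lfs:/mnt/home_lfs \
          vg_storage-lv_storage:/mnt/storage vg_trash-lv_trash:/mnt/trash; do
    printf '/dev/mapper/%s %s ext4 defaults 0 2\n' "${lv%%:*}" "${lv#*:}"
done
# Sanity-check the edited fstab, then mount everything it lists
sudo findmnt --verify
sudo mount -a
```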
7. Troubleshooting
If RAID Assembly Fails:
# Check RAID details even if not active
sudo mdadm --examine /dev/nvme1n1p3 /dev/nvme3n1p3
# LAST RESORT: re-create the array with the ORIGINAL parameters.
# --create overwrites the RAID metadata; if the level, device order, or
# chunk size differs from the original, the data will be unrecoverable.
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1p3 /dev/nvme3n1p3
If LVM Volumes Are Not Found:
# Scan physical devices for LVM metadata
sudo pvscan
# Force a volume group scan
sudo vgscan --mknodes
# Check for specific volume group info
sudo vgdisplay vg_name
If Volumes Won’t Mount:
# Check the filesystem type
sudo file -s /dev/mapper/vg_name-lv_name
# Try to repair the filesystem if needed (unmount the volume first)
sudo fsck -f /dev/mapper/vg_name-lv_name
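Running fsck on a mounted filesystem can corrupt it, so a safer sequence is to unmount first and do a read-only pass before committing to repairs (vg_name/lv_name are placeholders, as above):

```shell
# Unmount first; never fsck a mounted filesystem
sudo umount /dev/mapper/vg_name-lv_name
# Read-only pass: report problems but answer 'no' to every repair prompt
sudo fsck -n /dev/mapper/vg_name-lv_name
# Only then repair for real
sudo fsck -f /dev/mapper/vg_name-lv_name
```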
8. Quick Recovery Commands
For quick recovery, the minimal command set is:
sudo apt-get update && sudo apt-get install -y mdadm lvm2
sudo mdadm --assemble --scan
sudo vgscan
sudo vgchange -ay
sudo lvs -a
# Create mount points and mount volumes as needed
This guide covers the basics of recovering your specific RAID and LVM configuration. Adjust commands as needed for your particular setup.
For your specific configuration with RAID-0 on nvme1n1p3 and nvme3n1p3, and LVM volume groups vg_home, vg_home_lfs, vg_storage, and vg_trash, these commands should restore your storage setup after a system reinstall.