These are two methods to move a complete hard drive to a new, bigger hard drive live on a running system with no downtime. Cool, eh? This guide could be applied to moving things to a smaller drive, but that will require some additional trickery, mostly LVM stuff; I will add it here if and when I have to do it.
Method 1 mdadm
This method tends to take forever, but slow and steady sometimes wins the race, right? Right! Plus the "use it if you got it" mentality is at work here, mdadm that is. It does a sector-by-sector copy, can be done completely online, and also takes care of /etc/mdadm/mdadm.conf. Since I am already running most of my systems with RAID 1 in degraded mode anyway, this method was a no-brainer.
Do an fdisk -l to get the lay of the land; I am working with sda and sdb. I should probably be using cfdisk with a fancy partition-table copy command, but partition sizes are usually changing anyway when moving to new drives and such. Call me old school, but I can wield fdisk with maximum proficiency, so I still use it. The only place I can see any problem is if /dev/sdb1 somehow gets rounded to a smaller size, which also might not matter. However, I have only ever seen it get larger, and it should not matter as it is not full and I do a bunch of resizing stuff to it later.
Please excuse the brevity of this post as my eyes are tired from too much computing.
delete all existing partitions if any exist
d 1 d 2 d 3 d
make /boot or sdb1 or md0 partition
n p 1 enter +1G (should i make 1.1G just in case?)
/ partition or sdb2 or md1 and lvm root swap data and PVE KVMs
n p 2 enter enter
Set both partition types to raid, make /dev/sdb1 bootable, write changes and exit
t 1 fd t 2 fd a 1 w
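The keystrokes above can be bundled into one reviewable script. This is only a sketch under my assumptions (the new drive is /dev/sdb, and your fdisk prompts in the same order as mine); it uses `o` to wipe the old partition table in place of the individual `d` deletes. It is built as a heredoc so you can eyeball it before piping anything into fdisk.

```shell
#!/bin/sh
# Scripted version of the fdisk keystrokes above (sketch; review before use).
# Assumes /dev/sdb is the new drive. Piping this to fdisk WIPES that drive.
NEW_DISK=/dev/sdb
FDISK_INPUT=$(cat <<'EOF'
o
n
p
1

+1G
n
p
2


t
1
fd
t
2
fd
a
1
w
EOF
)
# Print the plan; blank lines mean "accept the default sector".
echo "$FDISK_INPUT"
# To actually apply it:  echo "$FDISK_INPUT" | fdisk "$NEW_DISK"
```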
RAID Phase 1
mdadm --zero-superblock /dev/sdb
mdadm /dev/md0 --add /dev/sdb1
mdadm /dev/md0 --add /dev/sdb2
--zero-superblock gives an error if no mdadm superblock is found; this is a good error. Then wait forever for the RAID sync to finish: cat /proc/mdstat
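Rather than re-running cat by hand, the waiting can be sketched as a little poll loop. This is safe to run anywhere: on a box with no resync in progress (or no md arrays at all) it just prints the final message and exits.

```shell
#!/bin/sh
# Poll /proc/mdstat until no resync/recovery is in progress (sketch).
while grep -Eq 'resync|recover' /proc/mdstat 2>/dev/null; do
    grep -A 2 '^md' /proc/mdstat    # show the progress bars
    sleep 60
done
echo "raid sync finished (or none running)"
```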
While waiting for raid_sync…
This was a bit scary; grub (or grub2) is a bit like black magic, but I just accepted the defaults and then added grub to /dev/sda and /dev/sdb. Note you used to be able to add grub to /dev/md0, but some in Debian said it was a bug not a feature, so it cannot be easily done anymore. It was in the list with the command above, but when I tried it, it gave me errors. If you are feeling brave, don't care about downtime, or are in a hurry to see if your changes worked, you can reboot to test this. You must leave both drives attached, and the mdadm raid sync will continue where it left off.
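For the record, on Debian adding grub to both drives comes down to roughly the commands below. This is a sketch, not exactly what the installer ran for me: drive names are my sda/sdb and may differ for you, so it is guarded with a dry-run flag and does nothing unless you set APPLY=yes.

```shell
#!/bin/sh
# Reinstall GRUB on both drives so either one can boot alone (guarded sketch).
OLD_DISK=/dev/sda
NEW_DISK=/dev/sdb
if [ "${APPLY:-no}" = "yes" ]; then
    grub-install "$OLD_DISK"
    grub-install "$NEW_DISK"
    update-grub              # regenerate /boot/grub/grub.cfg
else
    echo "dry run: grub-install $OLD_DISK; grub-install $NEW_DISK; update-grub"
fi
```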
…after raid_sync is finished
RAID Phase 2
mdadm --grow /dev/md0 -z max
I know it is tempting, but growing /dev/md1 like above is not what you need to do here, as you are working with LVM.
Logical volume management
This single magical command grows the physical volume extents to the full size of the new /dev/md1 device, and the logical volume group enlarges to match. Very nice indeed, and it took me literally years to find, probably because I didn't really know what I was looking for 🙂
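From that description, the magical command is presumably pvresize; that is an inference on my part, so treat this as a guarded sketch and check your own PV path with pvs first.

```shell
#!/bin/sh
# Presumed "magical command": grow the LVM physical volume on /dev/md1
# to fill the enlarged device; the VG's free extents grow with it.
PV=/dev/md1
if [ "${APPLY:-no}" = "yes" ]; then
    pvresize "$PV"
    vgdisplay            # confirm the new free extent count
else
    echo "dry run: pvresize $PV"
fi
```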
Done! Now would probably be a good time to do some fancy lvcreate or lvextend commands to use up some of that extra space, if you got it.
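For example, handing all the new free space to the root logical volume might look like this. The LV path vg0/root and the ext3/ext4 filesystem are assumptions of mine; list your real names with lvs and vgs before touching anything, and nothing runs without APPLY=yes.

```shell
#!/bin/sh
# Hypothetical example: give every free extent to the root LV, then grow its fs.
# The path vg0/root is made up; check yours with: lvs
LV=/dev/vg0/root
if [ "${APPLY:-no}" = "yes" ]; then
    lvextend -l +100%FREE "$LV"   # take all remaining free extents in the VG
    resize2fs "$LV"               # grow the ext3/ext4 filesystem to match
else
    echo "dry run: lvextend -l +100%FREE $LV && resize2fs $LV"
fi
```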
If you are not running hot-swap drives, there will be some downtime during testing.
shutdown -h now, unplug the old drive, and test booting on the new drive.
You're welcome, because you will not find a better how-to for this anywhere on the net. Trust me, I looked, and that is why I wrote this one.
Method 2 faster fsarchiver
STILL WORKING ON THIS NOT TESTED YET
fdisk the new disk /dev/sdb like above
do not create the array
umount /boot should not be a problem; Gentoo leaves it unmounted after bootup by default
fsarchiver /dev/sda1 to /dev/sdb1 using the copy-boot.sh snapshot script in my home directory.
fsarchiver /dev/sda2 to /dev/sdb2 using copy-root.sh
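I have not posted those scripts, but the fsarchiver calls inside them presumably look something like this. Note fsarchiver copies via an archive file rather than partition-to-partition; the scratch path below is made up, and the guard keeps it a dry run unless APPLY=yes.

```shell
#!/bin/sh
# Rough shape of a copy-boot.sh style step with fsarchiver (sketch).
# fsarchiver saves the filesystem to an archive, then restores it to the target.
SRC=/dev/sda1
DST=/dev/sdb1
ARCHIVE=/tmp/boot.fsa        # made-up scratch location
if [ "${APPLY:-no}" = "yes" ]; then
    fsarchiver savefs "$ARCHIVE" "$SRC"
    fsarchiver restfs "$ARCHIVE" id=0,dest="$DST"
else
    echo "dry run: fsarchiver savefs $ARCHIVE $SRC && fsarchiver restfs $ARCHIVE id=0,dest=$DST"
fi
```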
do grub from above (should I go back and add grub to /dev/md0?)
Also see this post
reboot with new drive