Migrate RAID 1 to RAID 5

Reshaping takes forever, and unless the drives are full it might not be worth your time.
mdadm
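Before stopping the mirror, it's worth noting what you're starting from, and unmounting the filesystem on it if you can (if it holds your root filesystem, you'll be doing all of this from a rescue environment anyway). The mount point below is just an example; substitute your own:

root# umount /mnt/storage
root# mdadm --detail /dev/md0
root# cat /proc/mdstat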
root# mdadm --stop /dev/md0
mdadm: stopped /dev/md0
root# mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm: /dev/sda1 appears to contain an ext2fs file system
    size=1048512K  mtime=Fri Dec 18 13:23:04 2009
mdadm: /dev/sda1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
mdadm: /dev/sdb1 appears to contain an ext2fs file system
    size=1048512K  mtime=Fri Dec 18 13:23:04 2009
mdadm: /dev/sdb1 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
Continue creating array? y
mdadm: array /dev/md0 started.
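Before going any further, it doesn't hurt to confirm the new array really is a two-device RAID 5 with one slot missing (look for a "Raid Level" of raid5 and a degraded, recovering state in the output):

root# mdadm --detail /dev/md0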

If you do a “cat /proc/mdstat” now, you’ll see the RAID array start to recover as a RAID-5:

root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 sdb1[2] sda1[0]
      1048512 blocks level 5, 64k chunk, algorithm 2 [2/1] [U_]
      [==>.................]  recovery = 12.5% (132096/1048512) finish=0.8min speed=18870K/sec
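If you'd rather not keep re-running that by hand, watch (assuming it's installed) will refresh it for you:

root# watch -n 5 cat /proc/mdstat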

Once it has finished rebuilding, we add the third disk and “grow” the array to encompass all three disks:

root# mdadm --add /dev/md0 /dev/sdc1
mdadm: added /dev/sdc1
root# mdadm --grow /dev/md0 --raid-devices=3
mdadm: Need to backup 128K of critical section..
mdadm: ... critical section passed.
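If mdadm refuses to grow the array because it has nowhere to stash that critical section, it can, as far as I know, be pointed at a backup file on a disk outside the array instead; the path below is just an example:

root# mdadm --grow /dev/md0 --raid-devices=3 --backup-file=/root/md0-grow.backup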

At this point, the array will re-distribute or “re-shape” the current data on the disks. This part can take a substantial amount of time. On 1TB disks, this took around 18 hours to complete. You can continue to use the array, although file performance and re-shaping performance will be significantly degraded. The re-shaping process can be monitored via “cat /proc/mdstat”:

root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md0 : active raid5 sdc1[2] sdb1[1] sda1[0]
      1048512 blocks super 0.91 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
      [==>.................]  reshape = 12.5% (131520/1048512) finish=2.5min speed=5978K/sec
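If the reshape is crawling and you don't mind the array being less responsive, you can (at least on the kernels I've used) raise the md rebuild speed limits; the values below are just examples, and the defaults can be put back the same way:

root# echo 50000 > /proc/sys/dev/raid/speed_limit_min
root# echo 200000 > /proc/sys/dev/raid/speed_limit_max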

Once completed, you should run a file system check and then resize the file system on the RAID volume to encompass the additional space:

root# e2fsck -f /dev/md0
root# resize2fs /dev/md0
resize2fs 1.41.9 (22-Aug-2009)
Resizing the filesystem on /dev/md0 to 524256 (4k) blocks.
The filesystem on /dev/md0 is now 524256 blocks long.
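To double-check that the extra capacity actually showed up, compare the array size reported by mdadm before and after the grow:

root# mdadm --detail /dev/md0 | grep 'Array Size'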

On my actual machine the array sits under LVM and holds the root filesystem, so there was some extra work to get it booting again:

Boot from a Debian net install disk, jump through the hoops, let it auto-assemble the RAID, and mount /dev/vg0/root as /.
Check /etc/mdadm/mdadm.conf; it is probably wrong, so fix it. I also read some posts about just removing this file entirely, since auto-assembly is supposedly better.

mdadm --detail --scan >> /etc/mdadm/mdadm.conf
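For what it's worth, the line that command appends should look something like this (the exact fields depend on your mdadm version, and the UUID here is just a placeholder):

ARRAY /dev/md0 level=raid5 num-devices=3 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx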

Not sure if you have to do this, but I did it just to make sure. After a 900+ minute reshape, safety first, right?
update-initramfs -u

reboot
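Once it comes back up, it's worth a quick check that the array assembled as a three-disk RAID 5 and that mdadm is happy with it:

root# cat /proc/mdstat
root# mdadm --detail /dev/md0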
