RAID1 on an existing system

WORK IN PROGRESS

Introduction

Backup first! As you will see below, I have a Western Digital Green 2 TB (slow) disk called /dev/sdc with partition /dev/sdc1 mounted at /mnt/2T. This is probably the last time I will mention it, so take heed of my warning and back up first!
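
If you have nowhere else to put a backup, even a one-shot rsync to a spare disk is better than nothing. A minimal sketch along the lines of the copies we do later in this post (the target directories are illustrative):

rsync -aAXv --one-file-system / /mnt/2T/pre-raid-backup/root/
rsync -aAXv --one-file-system /boot/ /mnt/2T/pre-raid-backup/boot/
rsync -aAXv --one-file-system /var/lib/vz/ /mnt/2T/pre-raid-backup/vz/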

In this tutorial I’m using a Debian Squeeze (Proxmox) system with two hard drives, /dev/sda and /dev/sdb, which are identical in size. /dev/sdb is currently unused, and /dev/sda has the following partitions:
/dev/sda1: /boot partition, ext4;
/dev/sda2: LVM physical volume for the pve volume group, holding the root, swap and data logical volumes.
In the end I want to have the following situation:

/dev/md1 (made up of /dev/sda1 and /dev/sdb1): /boot partition, ext4;
/dev/md2 (made up of /dev/sda2 and /dev/sdb2): LVM physical volume for the new pve1 volume group, holding the root, swap and data logical volumes.

This is the current situation:

root@proxmox:/mnt/2T# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/pve-root 95G 49G 42G 54% /
tmpfs 2.0G 0 2.0G 0% /lib/init/rw
udev 1.9G 224K 1.9G 1% /dev
tmpfs 2.0G 3.1M 2.0G 1% /dev/shm
/dev/mapper/pve-data 803G 207G 596G 26% /var/lib/vz
/dev/sda1 495M 72M 398M 16% /boot
/dev/sdc1 1.8T 105G 1.6T 7% /mnt/2T
/dev/fuse 30M 20K 30M 1% /etc/pve

root@proxmox:/mnt/2T# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000762db

Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 121602 976237568 8e Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000762db

Device Boot Start End Blocks Id System

WARNING: GPT (GUID Partition Table) detected on ‘/dev/sdc’! The util fdisk doesn’t support GPT. Use GNU Parted.

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdc1 1 243202 1953514583+ ee GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/dm-0: 103.1 GB, 103079215104 bytes
255 heads, 63 sectors/track, 12532 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn’t contain a valid partition table

Disk /dev/dm-1: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn’t contain a valid partition table

Disk /dev/dm-2: 875.1 GB, 875116363776 bytes
255 heads, 63 sectors/track, 106393 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn’t contain a valid partition table

Installing mdadm

As you can see from the first command, this was already done on my system by default, but I include it here for completeness.

root@proxmox:/mnt/2T# lsmod | grep raid
raid10 29304 0
raid456 69041 0
async_raid6_recov 6427 1 raid456
async_pq 4759 2 raid456,async_raid6_recov
raid6_pq 79925 2 async_raid6_recov,async_pq
async_xor 3707 3 raid456,async_raid6_recov,async_pq
async_memcpy 2060 2 raid456,async_raid6_recov
async_tx 2972 5 raid456,async_raid6_recov,async_pq,async_xor,async_memcpy
raid1 30092 0
raid0 11886 0

root@server1:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
root@server1:~#

If you do not see the raid kernel modules or the (empty) mdadm personalities line, you will need to install them:

apt-get install initramfs-tools mdadm

When the installer asks “MD arrays needed for the root file system:”, answering all is a safe choice.

Afterwards, we load a few kernel modules (to avoid a reboot):

modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

To create a RAID1 array on our already running system, we must prepare the /dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive to it, and finally add /dev/sda to the RAID1 array.

First, we copy the partition table from /dev/sda to /dev/sdb so that both disks have exactly the same layout:

root@proxmox:/mnt/2T# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now …
OK

Disk /dev/sdb: 121601 cylinders, 255 heads, 63 sectors/track
Old situation:
Units = cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0

Device Boot Start End #cyls #blocks Id System
/dev/sdb1 0 - 0 0 0 Empty
/dev/sdb2 0 - 0 0 0 Empty
/dev/sdb3 0 - 0 0 0 Empty
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 1048575 1046528 83 Linux
/dev/sdb2 1048576 1953523711 1952475136 8e Linux LVM
/dev/sdb3 0 - 0 0 Empty
/dev/sdb4 0 - 0 0 Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table …

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

Check things with fdisk

root@proxmox:/mnt/2T# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000762db

Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 121602 976237568 8e Linux LVM

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000762db

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2 66 121602 976237568 8e Linux LVM

WARNING: GPT (GUID Partition Table) detected on ‘/dev/sdc’! The util fdisk doesn’t support GPT. Use GNU Parted.

Disk /dev/sdc: 2000.4 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00000000

Device Boot Start End Blocks Id System
/dev/sdc1 1 243202 1953514583+ ee GPT
Partition 1 does not start on physical sector boundary.

Disk /dev/dm-0: 103.1 GB, 103079215104 bytes
255 heads, 63 sectors/track, 12532 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-0 doesn’t contain a valid partition table

Disk /dev/dm-1: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-1 doesn’t contain a valid partition table

Disk /dev/dm-2: 875.1 GB, 875116363776 bytes
255 heads, 63 sectors/track, 106393 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/dm-2 doesn’t contain a valid partition table

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)

Next we must change the partition type of our two partitions on /dev/sdb to Linux raid autodetect:

root@proxmox:/mnt/2T# fdisk /dev/sdb

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): p

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000762db

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 66 523264 83 Linux
Partition 1 does not end on cylinder boundary.
/dev/sdb2 66 121602 976237568 8e Linux LVM

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000762db

Device Boot Start End Blocks Id System
/dev/sdb1 * 1 66 523264 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sdb2 66 121602 976237568 fd Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
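
If the kernel refuses to pick up the new partition table (for example because the disk was busy), you can force a re-read without rebooting; partprobe ships with the parted package:

partprobe /dev/sdb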

To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands. If there are no remains, each command will throw an error like the one below (which is nothing to worry about); otherwise the commands will print nothing at all.

root@proxmox:/mnt/2T# mdadm --zero-superblock /dev/sdb1
mdadm: Unrecognised md component device - /dev/sdb1
root@proxmox:/mnt/2T# mdadm --zero-superblock /dev/sdb2
mdadm: Unrecognised md component device - /dev/sdb2

Now let’s create our RAID arrays /dev/md1 and /dev/md2. /dev/sdb1 will be added to /dev/md1 and /dev/sdb2 to /dev/md2. /dev/sda1 and /dev/sda2 can’t be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following two commands:

root@proxmox:/mnt/2T# mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store ‘/boot’ on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
root@proxmox:/mnt/2T# mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store ‘/boot’ on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
root@proxmox:/mnt/2T# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active (auto-read-only) raid1 sdb2[1]
976236408 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb1[1]
523252 blocks super 1.2 [2/1] [_U]

unused devices: <none>
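
For a more verbose view of a degraded array than /proc/mdstat gives, query it with mdadm; the missing /dev/sda member will show up as “removed” until we add it later:

mdadm --detail /dev/md1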

The standard Proxmox install uses /dev/sda1 for the boot partition and LVM on /dev/sda2 for the root, swap and data volumes. We are going to change all that: below we create a new volume group called pve1 on our new RAID1 array, make filesystems there, and copy our existing data over.

root@proxmox:/mnt/2T# lvscan
ACTIVE ‘/dev/pve/swap’ [4.00 GiB] inherit
ACTIVE ‘/dev/pve/root’ [96.00 GiB] inherit
ACTIVE ‘/dev/pve/data’ [815.02 GiB] inherit
root@proxmox:/mnt/2T# pvcreate /dev/md2
Writing physical volume data to disk “/dev/md2”
Physical volume “/dev/md2” successfully created
root@proxmox:/mnt/2T# pvscan
PV /dev/sda2 VG pve lvm2 [931.01 GiB / 16.00 GiB free]
PV /dev/md2 lvm2 [931.01 GiB]
Total: 2 [1.82 TiB] / in use: 1 [931.01 GiB] / in no VG: 1 [931.01 GiB]
root@proxmox:/mnt/2T# vgcreate pve1 /dev/md2
Volume group “pve1” successfully created
root@proxmox:/mnt/2T# lvcreate --name swap --size 4.00G pve1
Logical volume "swap" created
root@proxmox:/mnt/2T# lvcreate --name root --size 96.00G pve1
Logical volume "root" created
root@proxmox:/mnt/2T# lvcreate --name data --size 815.02G pve1
Rounding up size to full physical extent 815.02 GiB
Logical volume “data” created
root@proxmox:/mnt/2T# lvscan
ACTIVE ‘/dev/pve1/swap’ [4.00 GiB] inherit
ACTIVE ‘/dev/pve1/root’ [96.00 GiB] inherit
ACTIVE ‘/dev/pve1/data’ [815.02 GiB] inherit
ACTIVE ‘/dev/pve/swap’ [4.00 GiB] inherit
ACTIVE ‘/dev/pve/root’ [96.00 GiB] inherit
ACTIVE ‘/dev/pve/data’ [815.02 GiB] inherit
root@proxmox:/mnt/2T# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 523252 blocks
26162 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 21 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
root@proxmox:/mnt/2T# mkswap /dev/pve1/swap -f
Setting up swapspace version 1, size = 4194300 KiB
no label, UUID=8eb60ea0-31ef-498f-b813-98b20e29ecf9
root@proxmox:/mnt/2T# mkfs.ext4 /dev/pve1/root
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
6291456 inodes, 25165824 blocks
1258291 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
768 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 38 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
root@proxmox:/mnt/2T# mkfs.ext4 /dev/pve1/data
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
53420032 inodes, 213653504 blocks
10682675 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
6521 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
root@proxmox:/mnt/2T# mkdir /mnt/boot
root@proxmox:/mnt/2T# mkdir /mnt/root
root@proxmox:/mnt/2T# mkdir /mnt/data
root@proxmox:/mnt/2T# mount /dev/md1 /mnt/boot/
root@proxmox:/mnt/2T# mount /dev/pve1/root /mnt/root
root@proxmox:/mnt/2T# mount /dev/pve1/data /mnt/data

We will copy the data over after adjusting fstab and GRUB below.

Now we edit /etc/fstab, changing /boot to /dev/md1 and all instances of pve to pve1:

vi /etc/fstab

# <file system> <mount point> <type> <options> <dump> <pass>
/dev/pve1/root / ext4 errors=remount-ro 0 1
/dev/pve1/data /var/lib/vz ext4 defaults 0 1
/dev/md1 /boot ext4 defaults 0 1
/dev/pve1/swap none swap sw 0 0
proc /proc proc defaults 0 0
/dev/sdc1 /mnt/2T   ext4  defaults 0 2
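
Before moving on, you can let mount parse the new fstab without actually mounting anything; -f fakes the mount calls, -a walks all entries, -v prints what would be done:

mount -fav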

GRUB

Now up to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:

cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Proxmox, with RAID1' --class proxmox --class gnu-linux --class gnu --class os {
    insmod raid
    insmod mdraid
    insmod part_msdos
    insmod ext2
    set root='(md/1)'
    echo    'Loading Proxmox with RAID ...'
    linux   /vmlinuz-2.6.32-16-pve root=/dev/mapper/pve1-root ro  quiet
    echo    'Loading initial ramdisk ...'
    initrd  /initrd.img-2.6.32-16-pve
}

Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it out by running:

uname -r

or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section in /boot/grub/grub.cfg. Also make sure that you use root=/dev/mapper/pve1-root in the linux line.

The important part in our new menuentry stanza is the line set root='(md/1)': it makes sure that we boot from our RAID1 array /dev/md1 (which will hold the /boot partition) instead of /dev/sda or /dev/sdb, so that the system can still boot if one of the hard drives fails.

Because we don’t use UUIDs anymore for our block devices, open /etc/default/grub and uncomment the line GRUB_DISABLE_LINUX_UUID=true:
vi /etc/default/grub
Run

update-grub

to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.
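
You can confirm the stanza made it into the generated config; the title is the one we set in 09_swraid1_setup:

grep -A 8 'Proxmox, with RAID1' /boot/grub/grub.cfg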

Next we adjust our ramdisk to the new situation:

update-initramfs -u
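
If your initramfs-tools provides lsinitramfs, you can verify that mdadm and its configuration were picked up by the new ramdisk:

lsinitramfs /boot/initrd.img-$(uname -r) | grep -i mdadm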

Copy files. I like to use rsync; it seems to handle the BackupPC cpool better. Do not forget that rsync treats bind and rbind mounts as part of the file system, so you must exclude those mounts (see below). Note that it is much better to do this before you run backups, otherwise it takes forever.

rsync -aAxv --delete --one-file-system --exclude /home/c0mputerking/mondoarchives --progress / /mnt/root/
rsync -aAxz --delete --one-file-system --progress /boot/ /mnt/boot/
rsync -aAxv --delete --one-file-system --exclude /var/lib/vz/root/203/srv/samba --exclude /var/lib/vz/root/202/var/lib/backuppc --progress /var/lib/vz/ /mnt/data/ | tee /home/c0mputerking/$(date +"%F_%R")-data-rsync.log

Update: shut down into single-user mode, then run the copies again for a more accurate result:

time rsync --archive --one-file-system --hard-links --acls --xattrs --human-readable --inplace --numeric-ids --delete --delete-excluded --progress --exclude /home/c0mputerking/mondoarchives --log-file=/home/c0mputerking/rsync.log / /mnt/root/

time rsync --archive --one-file-system --hard-links --acls --xattrs --human-readable --inplace --numeric-ids --delete --delete-excluded --progress --log-file=/home/c0mputerking/rsync.log /boot/ /mnt/boot/

time ionice -c2 -n7 rsync --archive --one-file-system --hard-links --acls --xattrs --human-readable --inplace --numeric-ids --delete --delete-excluded --progress --exclude-from=/home/c0mputerking/rsync-exclude --log-file=/home/c0mputerking/rsync.log /var/lib/vz/ /mnt/data/
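
The rsync-exclude file referenced above is just one pattern per line. A sketch matching the bind mounts visible in the mount output further down (patterns are anchored at the transfer root, /var/lib/vz/ here; adjust to your own containers):

/root/202/var/lib/backuppc
/root/203/srv/samba/earth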

reboot

If all goes well, you should be able to see our new logical volumes root and data, and /dev/md1 mounted on /boot:

mount

root@proxmox:~# mount
/dev/mapper/pve1-root on / type ext4 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve1-data on /var/lib/vz type ext4 (rw)
/dev/md1 on /boot type ext4 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)
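
Before removing anything, double-check that the running system really lives on the new pve1 volume group (vgs comes with LVM2):

df -h /
vgs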

Now we need to remove the volume group pve:

lvremove /dev/pve/root
lvremove /dev/pve/swap
lvremove /dev/pve/data
vgremove /dev/pve
pvremove /dev/sda2

root@proxmox:~# lvremove /dev/pve/root
Do you really want to remove active logical volume root? [y/n]: y
Logical volume “root” successfully removed
root@proxmox:~# lvremove /dev/pve/swap
Do you really want to remove active logical volume swap? [y/n]: y
Logical volume “swap” successfully removed
root@proxmox:~# lvremove /dev/pve/data
Do you really want to remove active logical volume data? [y/n]: y
Logical volume “data” successfully removed
root@proxmox:~# vgremove /dev/pve
Volume group “pve” successfully removed
root@proxmox:~# pvremove /dev/sda2
Labels on physical volume “/dev/sda2” successfully wiped

Now we must change the partition types of our two partitions on /dev/sda to Linux raid autodetect as well:

fdisk /dev/sda

root@proxmox:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): p

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009f7a7

Device Boot Start End Blocks Id System
/dev/sda1 * 1 66 523264 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2 66 121602 976237568 fd Linux raid autodetect

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now we can add /dev/sda1 and /dev/sda2 to /dev/md1 and /dev/md2:

mdadm --add /dev/md1 /dev/sda1
mdadm --add /dev/md2 /dev/sda2

Now take a look at:

cat /proc/mdstat

… and you should see that the RAID arrays are being synchronized.
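
You can follow the rebuild live with watch (press CTRL+C to leave it):

watch cat /proc/mdstat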

Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Now we delete /etc/grub.d/09_swraid1_setup…

rm -f /etc/grub.d/09_swraid1_setup

… and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u

Now if you take a look at /boot/grub/grub.cfg, you should find that the menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section look pretty much the same as what we had in /etc/grub.d/09_swraid1_setup (they should now also be set to boot from /dev/md1 instead of (hd0) or (hd1)), which is why we don’t need /etc/grub.d/09_swraid1_setup anymore.

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Reboot the system:

reboot

It should boot without problems.

That’s it – you’ve successfully set up software RAID1 on your Proxmox system!

Enjoy!

CREATE A NEW POST STARTING HERE FOR TESTING AND RECOVERY

Create filesystems on our RAID arrays

root@proxmox:/mnt/2T# mkfs.ext4 /dev/md1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 523252 blocks
26162 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
64 block groups
8192 blocks per group, 8192 fragments per group
2048 inodes per group
Superblock backups stored on blocks:
8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 29 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.
root@proxmox:/mnt/2T# mkfs.ext4 /dev/md2
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
61022208 inodes, 244059102 blocks
12202955 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
7449 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first. Use tune2fs -c or -i to override.

Next we must adjust /etc/mdadm/mdadm.conf (which doesn’t contain any information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Display the contents of the file:

root@proxmox:/mnt/2T# cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 17 Sep 2012 00:08:22 -0600
# by mkconf 3.1.4-1+8efb9d1+squeeze1
ARRAY /dev/md/1 metadata=1.2 UUID=17c01d59:0bcc870c:de50d85f:84d8a42f name=proxmox:1
ARRAY /dev/md/2 metadata=1.2 UUID=137902b6:1abd960b:812477e6:bc4bddb3 name=proxmox:2

Adjusting The System To RAID1

Now let’s mount /dev/md1 and /dev/md2

mkdir /mnt/md1
mkdir /mnt/md2
mount /dev/md1 /mnt/md1
mount /dev/md2 /mnt/md2

You should now find both arrays in the output of mount:

root@proxmox:/mnt/2T# mount
/dev/mapper/pve-root on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw)
/dev/sda1 on /boot type ext3 (rw)
/dev/sdc1 on /mnt/2T type ext4 (rw)
fusectl on /sys/fs/fuse/connections type fusectl (rw)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,default_permissions,allow_other)
beancounter on /proc/vz/beancounter type cgroup (rw,name=beancounter)
container on /proc/vz/container type cgroup (rw,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,name=fairsched)
/var/lib/vz/solar/venus on /var/lib/vz/root/202/var/lib/backuppc type none (rw,bind)
/var/lib/vz/solar/earth on /var/lib/vz/root/203/srv/samba/earth type none (rw,bind)
/dev/md1 on /mnt/md1 type ext4 (rw)
/dev/md2 on /mnt/md2 type ext4 (rw)

Next we modify /etc/fstab. Comment out the current /, /boot, and swap partitions and add new lines for them where you replace the UUIDs with /dev/md0 (for the /boot partition), /dev/md1 (for the swap partition) and /dev/md2 (for the / partition) so that the file looks as follows:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
# / was on /dev/sda3 during installation
#UUID=e4e38871-0115-477d-94f9-34b079d26248 /               ext4    errors=remount-ro 0       1
/dev/md2 /               ext4    errors=remount-ro 0       1
# /boot was on /dev/sda1 during installation
#UUID=7e2fb013-073e-4312-a669-f34b35069bfb /boot           ext4    defaults        0       2
/dev/md0 /boot           ext4    defaults        0       2
# swap was on /dev/sda2 during installation
#UUID=1a5951f8-d0ab-4e0e-b42a-871f81b6fd82 none            swap    sw              0       0
/dev/md1 none            swap    sw              0       0
/dev/scd0       /media/cdrom0   udf,iso9660 user,noauto     0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

Next replace /dev/sda1 with /dev/md0 and /dev/sda3 with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext4 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext4 rw 0 0
/dev/md0 /mnt/md0 ext4 rw 0 0
/dev/md2 /mnt/md2 ext4 rw 0 0

Now up to the GRUB2 boot loader. Create the file /etc/grub.d/09_swraid1_setup as follows:

cp /etc/grub.d/40_custom /etc/grub.d/09_swraid1_setup
vi /etc/grub.d/09_swraid1_setup

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries.  Simply type the
# menu entries you want to add after this comment.  Be careful not to change
# the 'exec tail' line above.
menuentry 'Debian GNU/Linux, with Linux 2.6.32-5-amd64' --class debian --class gnu-linux --class gnu --class os {
        insmod raid
        insmod mdraid
        insmod part_msdos
        insmod ext2
        set root='(md/0)'
        echo    'Loading Linux 2.6.32-5-amd64 ...'
        linux   /vmlinuz-2.6.32-5-amd64 root=/dev/md2 ro  quiet
        echo    'Loading initial ramdisk ...'
        initrd  /initrd.img-2.6.32-5-amd64
}

Make sure you use the correct kernel version in the menuentry stanza (in the linux and initrd lines). You can find it out by running

uname -r

or by taking a look at the current menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section in /boot/grub/grub.cfg. Also make sure that you use root=/dev/md2 in the linux line.

The important part in our new menuentry stanza is the line set root='(md/0)': it makes sure that we boot from our RAID1 array /dev/md0 (which will hold the /boot partition) instead of /dev/sda or /dev/sdb, so that the system can still boot if one of the hard drives fails.

Because we don’t use UUIDs anymore for our block devices, open /etc/default/grub…

vi /etc/default/grub

… and uncomment the line GRUB_DISABLE_LINUX_UUID=true:

# If you change this file, run 'update-grub' afterwards to update
# /boot/grub/grub.cfg.

GRUB_DEFAULT=0
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""

# Uncomment to enable BadRAM filtering, modify to suit your needs
# This works with Linux (no patch required) and with any kernel that obtains
# the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
#GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"

# Uncomment to disable graphical terminal (grub-pc only)
#GRUB_TERMINAL=console

# The resolution used on graphical terminal
# note that you can use only modes which your graphic card supports via VBE
# you can see them in real GRUB with the command `vbeinfo'
#GRUB_GFXMODE=640x480

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
GRUB_DISABLE_LINUX_UUID=true

# Uncomment to disable generation of recovery mode menu entries
#GRUB_DISABLE_LINUX_RECOVERY="true"

# Uncomment to get a beep at grub start
#GRUB_INIT_TUNE="480 440 1"

Run

update-grub

to write our new kernel stanza from /etc/grub.d/09_swraid1_setup to /boot/grub/grub.cfg.

Next we adjust our ramdisk to the new situation:

update-initramfs -u

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0
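
To spot-check the copy, a dry-run rsync that only lists differences is cheap (a sketch, assuming rsync is installed; -n makes it a no-op):

rsync -axvn --delete / /mnt/md2/ | head -n 20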

Preparing GRUB2 (Part 1)

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Now we reboot the system and hope that it boots ok from our RAID arrays:

reboot

Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

root@server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 4.0G 714M 3.1G 19% /
tmpfs 249M 0 249M 0% /lib/init/rw
udev 244M 132K 244M 1% /dev
tmpfs 249M 0 249M 0% /dev/shm
/dev/md0 472M 25M 423M 6% /boot
root@server1:~#

The output of

cat /proc/mdstat

should be as follows:

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[1]
4241396 blocks super 1.2 [2/1] [_U]

md1 : active (auto-read-only) raid1 sdb2[1]
499700 blocks super 1.2 [2/1] [_U]

md0 : active raid1 sdb1[1]
498676 blocks super 1.2 [2/1] [_U]

unused devices: <none>
root@server1:~#

Now we must change the partition types of our three partitions on /dev/sda to Linux raid autodetect as well:

fdisk /dev/sda

root@server1:~# fdisk /dev/sda

WARNING: DOS-compatible mode is deprecated. It’s strongly recommended to
switch off the mode (command ‘c’) and change display units to
sectors (command ‘u’).

Command (m for help): t
Partition number (1-4): 1
Hex code (type L to list codes): fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 2
Hex code (type L to list codes): fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): t
Partition number (1-4): 3
Hex code (type L to list codes): fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the arrays:

mdadm -a /dev/md0 /dev/sda1
mdadm -a /dev/md1 /dev/sda2
mdadm -a /dev/md2 /dev/sda3

Take a look at cat /proc/mdstat and you should see the arrays synchronizing:

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
4241396 blocks super 1.2 [2/1] [_U]
[==========>..........] recovery = 54.6% (2319808/4241396) finish=0.7min speed=45058K/sec

md1 : active raid1 sda2[2] sdb2[1]
499700 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[1]
498676 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@server1:~#

(You can run

watch cat /proc/mdstat

to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished; the output should then look like this:

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
4241396 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[2] sdb2[1]
499700 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[2] sdb1[1]
498676 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@server1:~#


Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Tue, 24 May 2011 14:09:09 +0200
# by mkconf 3.1.4-1+8efb9d1
ARRAY /dev/md/0 metadata=1.2 UUID=b40c3165:17089af7:5d5ee79b:8783491b name=server1.example.com:0
ARRAY /dev/md/1 metadata=1.2 UUID=62e4a606:878092a0:212209c5:c91b8fef name=server1.example.com:1
ARRAY /dev/md/2 metadata=1.2 UUID=94e51099:d8475c57:4ff1c60f:9488a09a name=server1.example.com:2

Preparing GRUB2 (Part 2)

Now we delete /etc/grub.d/09_swraid1_setup…

rm -f /etc/grub.d/09_swraid1_setup

… and update our GRUB2 bootloader configuration:

update-grub
update-initramfs -u

Now if you take a look at /boot/grub/grub.cfg, you should find that the menuentry stanzas in the ### BEGIN /etc/grub.d/10_linux ### section look pretty much the same as what we had in /etc/grub.d/09_swraid1_setup (they should now also be set to boot from /dev/md0 instead of (hd0) or (hd1)), that’s why we don’t need /etc/grub.d/09_swraid1_setup anymore.

Afterwards we must make sure that the GRUB2 bootloader is installed on both hard drives, /dev/sda and /dev/sdb:

grub-install /dev/sda
grub-install /dev/sdb

Reboot the system:

reboot

It should boot without problems.

That’s it – you’ve successfully set up software RAID1 on your running Debian Squeeze system!

Testing

Now let’s simulate a hard drive failure. It doesn’t matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and remove /dev/sdb from the system, or you (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you should now put /dev/sdb in /dev/sda’s place and connect the new HDD as /dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[2]
4241396 blocks super 1.2 [2/1] [U_]

md1 : active (auto-read-only) raid1 sda2[2]
499700 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[2]
498676 blocks super 1.2 [2/1] [U_]

unused devices: <none>
root@server1:~#

The output of

fdisk -l

should look as follows:

root@server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000e0f78

Device Boot Start End Blocks Id System
/dev/sda1 * 1 63 498688 fd Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2 63 125 499712 fd Linux raid autodetect
Partition 2 does not end on cylinder boundary.
/dev/sda3 125 653 4242432 fd Linux raid autodetect
Partition 3 does not end on cylinder boundary.

Disk /dev/sdb: 5368 MB, 5368709120 bytes
255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn’t contain a valid partition table

Disk /dev/md0: 510 MB, 510644224 bytes
2 heads, 4 sectors/track, 124669 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md0 doesn’t contain a valid partition table

Disk /dev/md1: 511 MB, 511692800 bytes
2 heads, 4 sectors/track, 124925 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn’t contain a valid partition table

Disk /dev/md2: 4343 MB, 4343189504 bytes
2 heads, 4 sectors/track, 1060349 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn’t contain a valid partition table
root@server1:~#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

root@server1:~# sfdisk -d /dev/sda | sfdisk --force /dev/sdb
Checking that no-one is using this disk right now …
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature
/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 999423 997376 fd Linux raid autodetect
/dev/sdb2 999424 1998847 999424 fd Linux raid autodetect
/dev/sdb3 1998848 10483711 8484864 fd Linux raid autodetect
/dev/sdb4 0 - 0 0 Empty
Warning: partition 1 does not end at a cylinder boundary
Successfully wrote the new partition table

Re-reading the partition table …

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
root@server1:~#

Afterwards we remove any remains of a previous RAID array from /dev/sdb…

mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3

… and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1
mdadm -a /dev/md1 /dev/sdb2
mdadm -a /dev/md2 /dev/sdb3

Now take a look at

cat /proc/mdstat

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[3] sda3[2]
4241396 blocks super 1.2 [2/1] [U_]
[======>..............] recovery = 32.2% (1367168/4241396) finish=1.0min speed=44102K/sec

md1 : active raid1 sdb2[3] sda2[2]
499700 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[2]
498676 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@server1:~#

Wait until the synchronization has finished:

root@server1:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[3] sda3[2]
4241396 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[3] sda2[2]
499700 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[3] sda1[2]
498676 blocks super 1.2 [2/2] [UU]

unused devices: <none>
root@server1:~#

Then install the bootloader on both HDDs:

grub-install /dev/sda
grub-install /dev/sdb

That’s it. You’ve just replaced a failed hard drive in your RAID1 array.
