Well, yesterday I received my daily status e-mail and saw that my RAID array was failing.
cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
2102464 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
264960 blocks [2/2] [UU]
md2 : active raid1 sdb3[2](F) sda3[0]
1462766336 blocks [2/1] [U_]
So that means sdb3 is marked as a failed drive (the (F) flag), and [U_] means the array is degraded: only one of the two mirrors is still up.
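Failed devices can also be spotted mechanically by grepping for the (F) flag. A minimal sketch, fed a sample line here rather than the live /proc/mdstat:

```shell
# Sketch: extract devices flagged (F) from mdstat-style output.
# Replace the sample with: grep -o '...' /proc/mdstat
sample='md2 : active raid1 sdb3[2](F) sda3[0]'
echo "$sample" | grep -o '[a-z0-9]*\[[0-9]*\](F)'
# prints: sdb3[2](F)
```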
The sdb1 and sdb2 partitions were still healthy, but since I wanted to replace the whole disk I removed them from their arrays as well. Before removing them I first marked them as failed:
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 -r /dev/sdb1
mdadm /dev/md1 -r /dev/sdb2
mdadm /dev/md2 -r /dev/sdb3
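The fail/remove steps above can be sketched as a single loop. The md/partition pairs are the ones from this article; MDADM defaults to a harmless echo here, so set it to the real mdadm binary (as root) when you actually mean it:

```shell
#!/bin/sh
# Dry-run by default: prints the mdadm commands instead of running them.
MDADM=${MDADM:-"echo mdadm"}   # set MDADM=mdadm to execute for real

for pair in md0:sdb1 md1:sdb2 md2:sdb3; do
    md=${pair%%:*}      # array name, e.g. md0
    part=${pair##*:}    # member partition, e.g. sdb1
    $MDADM --manage /dev/$md --fail   /dev/$part
    $MDADM --manage /dev/$md --remove /dev/$part
done
```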
After replacing the hard drive I had to recreate the same partition layout on the new sdb and add the partitions back to their arrays:
sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb3
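Before re-adding, it is worth double-checking that the cloned partition table really matches the original. A hypothetical check, assuming the sfdisk -d dumps only differ in the device names; the sample here-strings stand in for real "sfdisk -d /dev/sdX" output:

```shell
# Sample dumps (replace with real `sfdisk -d /dev/sda` / `sfdisk -d /dev/sdb`
# output). The sda dump, with names rewritten, should match the sdb dump.
sda_dump='/dev/sda1 : start=     2048, size=  4204928, Id=fd'
sdb_dump='/dev/sdb1 : start=     2048, size=  4204928, Id=fd'
if [ "$(echo "$sda_dump" | sed 's/sda/sdb/g')" = "$sdb_dump" ]; then
    echo "partition tables match"
fi
```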
Now watch the array rebuild:
watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Tue Sep 4 09:52:52 2012
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
2102464 blocks [2/2] [UU]
md1 : active raid1 sdb2[1] sda2[0]
264960 blocks [2/2] [UU]
md2 : active raid1 sdb3[2] sda3[0]
1462766336 blocks [2/1] [U_]
[===>.................] recovery = 16.0% (234480192/1462766336) finish=412.8min speed=49580K/sec
unused devices: <none>
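The finish estimate in that output is easy to sanity-check: it is simply the remaining blocks divided by the current speed. Using the numbers from the mdstat line above (blocks are 1K units):

```shell
total=1462766336       # total blocks of md2
done_blocks=234480192  # blocks already recovered
speed=49580            # K/sec, from the mdstat line
echo "$(( (total - done_blocks) / speed / 60 )) minutes left"
# prints: 412 minutes left  (mdstat itself says finish=412.8min)
```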
However, the rebuild speed may be low, so how do we increase it? First check the current limits:
cat /proc/sys/dev/raid/speed_limit_max
200000
cat /proc/sys/dev/raid/speed_limit_min
1000
Now I increase the minimum limit to 50000 (roughly 50 MB/s):
echo 50000 >/proc/sys/dev/raid/speed_limit_min
Now if you watch cat /proc/mdstat again you will see that the speed has improved and the estimated finish time has dropped.
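Note that a value echoed into /proc only lasts until the next reboot. The same knobs are exposed through sysctl as dev.raid.speed_limit_min and dev.raid.speed_limit_max, so the change can be made permanent (both commands need root):

```shell
# Equivalent one-off change via sysctl:
sysctl -w dev.raid.speed_limit_min=50000
# Persist it across reboots:
echo "dev.raid.speed_limit_min = 50000" >> /etc/sysctl.conf
```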