How to, and other stuff about Linux, photo, PHP … A Linux and photography blog, to remember some Linux situations and fix them quickly.

September 22, 2015

CentOS /dev/md127 problem after reboot

Filed under: Linux — admin @ 1:06 pm

Hello
Well, today I want to write about my RAID experience.
I had to set up a hybrid server at Hetzner. I set up the SSDs in RAID 1 using installimage, and from the other hard drives I created a RAID array in Linux and mounted it on the /raid1 directory.
However, after a reboot my /dev/md4 disappeared and a /dev/md127 appeared instead.
To create the array I ran fdisk on /dev/sdc and /dev/sdd, made a primary partition on each, and set its type to fd (Linux raid autodetect).

mdadm --create /dev/md4 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mkfs.ext4 /dev/md4
mkdir /raid1
mount /dev/md4 /raid1

And added this line to the /etc/fstab file:
/dev/md4 /raid1 ext4 noatime,rw 0 0
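
To check that the fstab entry is right without waiting for the next reboot, you can remount through it; a quick sanity test would be:

umount /raid1
mount /raid1    # mounting by mount point picks up the options from /etc/fstab
df -h /raid1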

So how do we fix this?
First I tried putting the array information into /etc/mdadm.conf, but without luck. It appears that Linux reads this file too late; CentOS/Ubuntu initialize the arrays from the initrd. So a few more steps have to be done.
After the reboot, fill in the information in /etc/mdadm.conf with
ARRAY /dev/md/4 UUID=b3c33fe5:3b078681:e2776e37:4f9fd991
I took the UUID from

mdadm --detail /dev/md4
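
If you would rather not type the ARRAY line by hand, mdadm can generate it for you; appending the scan output and then tidying the file is a common shortcut:

mdadm --detail --scan >> /etc/mdadm.conf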

First unmount the filesystem and stop the wrongly numbered array:

umount /raid1
mdadm --stop /dev/md127

After this, assemble it again under the name you want:
mdadm --assemble /dev/md4 /dev/sdc1 /dev/sdd1

After this I copied
/boot/initramfs-2.6.32-573.3.1.el6.x86_64.img to a backup file ( initramfs-2.6.32-573.3.1.el6.x86_64.img-back ) in case something goes wrong.
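In shell terms the backup is just a copy; a version that works for whatever kernel you are currently running would be:

cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img-back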

After this run
dracut --force
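
dracut --force rebuilds the initramfs for the running kernel so that it picks up the new /etc/mdadm.conf. If you prefer to be explicit about which image and kernel version get rebuilt, the equivalent long form is:

dracut --force /boot/initramfs-$(uname -r).img $(uname -r)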

If it is CentOS 5 or older there is no dracut, so copy the initrd file the same way and recreate it with
mkinitrd -f -v /boot/initrd-$(uname -r).img $(uname -r)

On Ubuntu you have to run
sudo update-initramfs -u
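
Note that on Ubuntu/Debian the mdadm configuration normally lives in /etc/mdadm/mdadm.conf rather than /etc/mdadm.conf, so put the ARRAY line (or the scan output) there before regenerating the initramfs:

mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u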

And restart the server.

September 4, 2012

Replacing a defective drive in a RAID 1

Filed under: Linux — admin @ 10:51 am

Well, yesterday I received my daily e-mail report and saw that my RAID was failing.

cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
2102464 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
264960 blocks [2/2] [UU]

md2 : active raid1 sdb3[2](F) sda3[0]
1462766336 blocks [2/1] [U_]

So that means sdb3 is marked as a failed drive, and U_ means the array is degraded.
From this point I removed sdb1 and sdb2 from their arrays as well, but before that I marked them as failed:

mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md0 --fail /dev/sdb1

mdadm /dev/md0 -r /dev/sdb1
mdadm /dev/md1 -r /dev/sdb2
mdadm /dev/md2 -r /dev/sdb3
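
Before physically pulling the disk it helps to be sure which one it is; a quick way (assuming smartmontools is installed) is to note the serial number of the failing drive and match it against the label on the hardware:

smartctl -i /dev/sdb | grep -i serial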

After replacing the hard drive I had to recreate the same partition layout on the new sdb and add its partitions back to the arrays.

sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb3
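
Before trusting the rebuild, it does not hurt to confirm that the partition table was really copied and that the array picked up the new member; for example:

sfdisk -l /dev/sdb          # should match sfdisk -l /dev/sda
mdadm --detail /dev/md2     # state is typically "clean, degraded, recovering"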

Now watch how the array is recovering
watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Tue Sep 4 09:52:52 2012

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
2102464 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
264960 blocks [2/2] [UU]

md2 : active raid1 sdb3[2] sda3[0]
1462766336 blocks [2/1] [U_]
[===>.................] recovery = 16.0% (234480192/1462766336) finish=412.8min speed=49580K/sec

unused devices: <none>

However, the rebuild speed may be low, so how do we increase that speed?

cat /proc/sys/dev/raid/speed_limit_max
200000
cat /proc/sys/dev/raid/speed_limit_min
1000

Now I increase the minimum limit to 50000:
echo 50000 >/proc/sys/dev/raid/speed_limit_min

Now if you watch cat /proc/mdstat again you will see that the speed has improved and the estimated finish time has dropped.
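
The echo only lasts until the next reboot; if you want the higher minimum to persist, the usual sysctl route is something like:

echo 'dev.raid.speed_limit_min = 50000' >> /etc/sysctl.conf
sysctl -p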

January 13, 2012

RAID10 ephemeral storage on AWS EC2

Filed under: Linux — admin @ 3:54 pm

If you’re thinking of doing RAID 10 on the ephemeral storage disks attached to an EC2 instance, this post is for you. First of all you have to choose an instance with 4 drives.

You may choose m1.xlarge, c1.xlarge or cc2.8xlarge; these are the only instances with 4 drives. On other instances you may choose to make RAID 0, which in some cases is also fine.

Well, first of all you have to boot your instance, then check whether one of the drives is already mounted (in some cases only one is mounted as ephemeral0).

You have to unmount that drive.

After that, run fdisk on all of them and create one full-size partition per drive, so the RAID 10 gets all the space; a scripted version of these two steps is sketched below.
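
A minimal sketch of the unmount-and-partition steps, assuming the pre-mounted ephemeral volume sits at /mnt (on some AMIs it is /media/ephemeral0) and using sfdisk as a non-interactive stand-in for fdisk; note that on some kernels the same disks show up as /dev/xvdb … /dev/xvde instead of /dev/sdb … /dev/sde:

umount /mnt                      # wherever ephemeral0 was mounted
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo ',,fd' | sfdisk $d      # one full-size partition of type fd (Linux raid autodetect)
done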

To see your drives just run

ls -1 /dev/sd*

and you will see something like :
/dev/sda1
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde

What I want to do is to make sdb, sdc, sdd and sde into one RAID 10 array.

I’ll just create a single partition on each one. Using fdisk, I choose the fd (Linux raid auto) partition type and create partitions using the entire disk on each one. When I’m done, each drive looks like this:

fdisk -l /dev/sdb

Disk /dev/sdb: 450.9 GB, 450934865920 bytes
255 heads, 63 sectors/track, 54823 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc696d4f6

Device Boot Start End Blocks Id System
/dev/sdb1 1 54823 440365716 fd Linux raid autodetect

Now I create the RAID (on this instance the kernel actually exposes the disks as /dev/xvdb … /dev/xvde, which is why the command below uses the xvd names):

mdadm  -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1 /dev/xvde1

This takes some time, so to verify whether the construction of the RAID is ready, run

 watch cat /proc/mdstat

When it is ready you should see something like this:

Personalities : [raid10]
md127 : active raid10 xvdc1[1] xvdb1[0] xvdd1[2] xvde1[3]
880729088 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>

OK, now we have to create a new partition. Just run fdisk /dev/md0 and create your partition, then format and mount it:

fdisk /dev/md0 

mkfs -t ext4 /dev/md0p1

mkdir /mnt/raidd

mount  /dev/md0p1  /mnt/raidd

After you have done this, reboot your server. After the server is up and running, on Amazon it appears that the device gets renamed, so md0p1 will become something like md127p1.

You may run

grep md /var/log/dmesg

and you will see something like this
[ 0.436792] md: bind<xvde1>
[ 0.444720] md: bind<xvdd1>
[ 0.526356] md: bind<xvdb1>
[ 0.543458] md: bind<xvdc1>
[ 0.547763] md: raid10 personality registered for level 10
[ 0.548234] md/raid10:md127: active with 4 out of 4 devices
[ 0.548311] md127: detected capacity change from 0 to 901866586112

After this you may add the line below to /etc/fstab:

/dev/md127p1 /mnt/raidd auto defaults,comment=cloudconfig 0 2

Now if you run

mount /mnt/raidd 

you should have the RAID mounted:

df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 2.4G 7.0G 26% /
tmpfs 7.4G 0 7.4G 0% /dev/shm
/dev/md127p1 827G 6.7G 779G 1% /mnt/raidd

 
