If you’re thinking of doing RAID10 on the ephemeral storage disks attached to an EC2 instance, this post is for you. First of all you have to choose an instance type with 4 drives.
You may choose m1.xlarge, c1.xlarge or cc2.8xlarge; these are the only instance types with 4 ephemeral drives. On other instance types you may choose to make a RAID0 instead, which in some cases is also a good option.
First boot your instance, then check whether one of the ephemeral drives is already mounted (in some cases one of them is mounted as ephemeral0).
If it is, you have to unmount that drive.
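A quick way to check is shown below; note that the /media/ephemeral0 mount point is an assumption (it is the usual location on Amazon Linux AMIs, yours may differ):
# see whether one of the ephemeral disks is already mounted
df -h
# if so, unmount it before repartitioning (use the mount point that df actually shows)
umount /media/ephemeral0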
After that, run fdisk on all of them and create a single partition on each drive that uses the full size of the disk; those full-size partitions are what we will use to build the RAID10.
To see your drives just run
ls -1 /dev/sd*
and you will see something like:
/dev/sda1
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde
What I want to do is make sdb, sdc, sdd and sde into one RAID10 array.
I’ll just create a single partition on each one. Using fdisk, I choose the fd (Linux raid autodetect) partition type and create a partition using the entire disk on each drive. When I’m done, each drive looks like this:
fdisk -l /dev/sdb
Disk /dev/sdb: 450.9 GB, 450934865920 bytes
255 heads, 63 sectors/track, 54823 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc696d4f6
Device Boot Start End Blocks Id System
/dev/sdb1 1 54823 440365716 fd Linux raid autodetect
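If you prefer to script the partitioning instead of walking through fdisk interactively, something along these lines should give the same result (a sketch, assuming sfdisk is available and the device names match yours):
# create one partition per disk, spanning the whole disk, type fd (Linux raid autodetect)
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    echo ',,fd' | sfdisk $d
done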
Now I create the RAID array (on this instance the sd* drives also appear under the xvd* names, which is what the command below uses):
mdadm  -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1 /dev/xvde1
This takes some time, so to check whether the array has finished building run
 watch cat /proc/mdstat
When it is ready you should see something like this
Personalities : [raid10]
md127 : active raid10 xvdc1[1] xvdb1[0] xvdd1[2] xvde1[3]
880729088 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
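You can also double-check the level, chunk size and member devices with mdadm itself:
mdadm --detail /dev/md0   # or /dev/md127 if the array has already come up under that name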
Ok, now we have to create a new partition on the array. Just run fdisk /dev/md0, create your partition, then format and mount it:
fdisk /dev/md0
mkfs -t ext4 /dev/md0p1
mkdir /mnt/raidd
mount  /dev/md0p1  /mnt/raidd
After you have done this, reboot your server. After the server is up and running, it appears that on Amazon the md device gets renamed, so md0p1 will become something like md127p1.
You may run
grep md /var/log/dmesg
and you will see something like this
[ 0.436792] md: bind<xvde1>
[ 0.444720] md: bind<xvdd1>
[ 0.526356] md: bind<xvdb1>
[ 0.543458] md: bind<xvdc1>
[ 0.547763] md: raid10 personality registered for level 10
[ 0.548234] md/raid10:md127: active with 4 out of 4 devices
[ 0.548311] md127: detected capacity change from 0 to 901866586112
After this you may add the line below to /etc/fstab:
/dev/md127p1 /mnt/raidd auto defaults,comment=cloudconfig 0 2
Now if you run
mount /mnt/raidd
you should have the RAID mounted
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 2.4G 7.0G 26% /
tmpfs 7.4G 0 7.4G 0% /dev/shm
/dev/md127p1 827G 6.7G 779G 1% /mnt/raidd
Hi!
Great tutorial. I want to ask actually about the root volume. How did you set the cc2.8xlarge to 10 Gb (default is 8 Gb)?
I have been trying without success so far to set the root volume to 30 Gb. The ephemeral volumes are much easier to work with.
If you have any insight, that will be great.
thanks,
mw
Comment by mw — November 1, 2013 @ 9:01 am
@mw
The easiest way to increase the root volume is to create a new instance and specify a larger volume size at creation time. You will probably have to extend the underlying file system as well.
In case you have an instance already running and want to extend its root volume, do this (a rough CLI sketch follows the steps):
1) Create a snapshot of the volume
2) From the snapshot create a new volume with larger size
3) Attach the new volume to a running instance (can be a different one). You will also have to extend the underlying file system (resize2fs for ext based filesystems).
4) Once happy with the changes detach the new volume.
5) Stop your instance and swap old and new volumes
NB: you will have to pay attention to block device names, UUIDs, LABELs, /etc/fstab and such, depending on how your system and storage are configured.
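For reference, the same steps with the AWS CLI might look roughly like this (all IDs, the availability zone and the device names are placeholders, adjust them to your setup):
# 1) snapshot the current root volume
aws ec2 create-snapshot --volume-id vol-11111111 --description "root volume backup"
# 2) create a larger volume from that snapshot (size in GiB)
aws ec2 create-volume --snapshot-id snap-22222222 --size 30 --availability-zone us-east-1a
# 3) attach it to a running instance and grow the filesystem
aws ec2 attach-volume --volume-id vol-33333333 --instance-id i-44444444 --device /dev/sdf
resize2fs /dev/xvdf   # ext2/3/4; use the partition (e.g. /dev/xvdf1) if the volume is partitioned
# 4) when done, detach the new volume
aws ec2 detach-volume --volume-id vol-33333333
# 5) stop your instance and swap the old and new root volumes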
To shrink the ROOT volume size is a bit more complicated, see:
http://atodorov.org/blog/2014/02/07/aws-tip-shrinking-ebs-root-volume-size/
Comment by Alex — March 13, 2014 @ 3:24 am
@admin – have you tried RAID configurations where one/some of the volumes are EBS? I’m looking into having EBS for persistence and ephemeral storage for performance and ultimately be able to run everything as a Spot instance. Do you have any experience in this regard ?
Thanks.
Comment by Alex — March 13, 2014 @ 3:27 am
Sorry, but I haven’t tested EBS in RAID. In theory you should gain some speed, however I haven’t run any tests and it wasn’t on my to-do list. But if I have the occasion I will run some tests.
Radu
Comment by admin — April 16, 2014 @ 12:58 pm