How to, and other stuff about Linux, photo, PHP … A Linux and photography blog, to remember some Linux situations and fix them quickly.

February 9, 2012

EBS hangs in “attaching” mode

Filed under: Linux — admin @ 11:30 am

With the automatic script below for backing up Redis to an Amazon EBS volume, I noticed a problem: from time to time the EBS volume hangs in the "attaching" state, the mount command dies in D state, and the load on that server grows. So we modified the script to wait for the volume to actually be "attached".

This is an improved version of the script.


#!/bin/bash
# Back up Redis to an EBS volume: wait until the volume is free, attach it,
# wait until it is really attached, mount it, rsync, then unmount and detach.
date
instance=i-xxxxx
volume=vol-xxxx
iplocalhost=10.00.00.111

export EC2_HOME=/path/ec2-1.4.4.2
export JAVA_HOME=/usr/lib/jvm/jre
export CLASSPATH=${EC2_HOME}/lib
export EC2_PRIVATE_KEY=/root/path/pk.pem
export EC2_CERT=/root/path/cert.pem
export PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/opt/aws/bin:/root/bin

# make sure the mount point exists
if [ ! -d /mnt/backup ]; then
mkdir /mnt/backup
fi

# simple lock file so two cron runs do not overlap
if [ ! -f /tmp/redis1 ]; then
echo "Cron Started"
touch /tmp/redis1
else
echo "Cron already running"
exit 0
fi
# wait until the volume is free, then attach it
attached=1
while [ $attached -eq 1 ];do
echo "">/root/status.txt
/opt/aws/bin/ec2-describe-volumes -K /root/path/pk.pem -C /root/path/cert.pem $volume > /root/status.txt
if grep -q "available" "/root/status.txt" ; then
echo "Volume is available."
echo "Attaching Volume"
/opt/aws/bin/ec2-attach-volume -K /root/path/pk.pem -C /root/path/cert.pem $volume -i $instance -d /dev/sdh
attached=0
else
echo "Volume is still attached to another instance. Sleeping for 60 seconds"
sleep 60
fi
done
# wait until the volume reports "attached", then mount and sync
available=1
COUNTER=1
while [ $available -eq 1 ];do
/opt/aws/bin/ec2-describe-volumes -K /root/path/pk.pem -C /root/path/cert.pem $volume > /root/status.txt
if grep -q "attached" "/root/status.txt" ; then
echo "Volume is attached."
available=0
echo "Mounting Volume"
mount /dev/sdh /mnt/backup
sleep 10
# the marker file confirms the right volume is mounted before syncing
if [ -f /mnt/backup/montat ]; then
rsync -vrplogDtH /var/lib/redis/ /mnt/backup/redis
fi
else
echo "Volume is attaching. Sleeping for 60 seconds"
sleep 60
fi
if [ $COUNTER -eq 5 ]; then
echo "Quitting after 5 minutes"
break
fi
let COUNTER+=1
done
# unmount, detach the volume and release the lock file
umount -lf /mnt/backup
sleep 5
/opt/aws/bin/ec2-detach-volume -f -K /root/path/pk.pem -C /root/path/cert.pem $volume -i $instance -d /dev/sdh
sleep 5
rm -rf /tmp/redis1
exit 0
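The script is meant to be run from cron (hence the /tmp/redis1 lock file). A possible crontab entry, assuming the script is saved as /root/backup-redis.sh (the path, schedule and log file are my assumptions, not from the original setup):

# run the EBS Redis backup every night at 03:00 and keep a log
0 3 * * * /root/backup-redis.sh >> /var/log/redis-ebs-backup.log 2>&1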

February 7, 2012

service php-fpm restart fails after upgrading from PHP 5.3.8 to 5.3.10

Filed under: Linux — admin @ 4:32 pm

Well, I had to upgrade a server from PHP 5.3.8 to 5.3.10 on CentOS. After that, service php-fpm restart no longer worked:
Stopping php-fpm: [FAILED]

After a quick look at the problem, I discovered that the cause was in this file:
/etc/init.d/php-fpm

What I had was
pidfile=${PIDFILE-/var/run/php-fpm/php-fpm.pid}
and this line should be
pidfile=${PIDFILE-/var/run/php-fpm.pid}
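The path in the init script has to match the pid directive that your php-fpm build actually uses. A quick sanity check (the config file location is an assumption and may differ on your build):

# where does php-fpm say it writes its pid file?
grep -E '^\s*pid' /etc/php-fpm.conf
# and what does the init script expect?
grep pidfile /etc/init.d/php-fpm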

January 26, 2012

autoscaling with amazon ec2 and elb

Filed under: Linux — admin @ 11:19 am

Creating an auto-scaled system behind an Amazon load balancer is an interesting task that I did recently.

Here is the list of commands that I used to set it up from the command line:
as-create-launch-config ec2elbconfig --image-id ami-xxxxx --instance-type m1.large --key key_name
Also, if you don't have credentials already set up, you may append
-I amazonid -S secretkey
Do not forget to add --key, because otherwise you won't be able to log in to the instance after it is up.

Next command is:
as-create-auto-scaling-group MyAutoScalingGroup --launch-configuration ec2elbconfig --availability-zones us-east-1c --min-size 2 --max-size 6 --load-balancers MyLoadBalancer

Where MyLoadBalancer is the name of your ELB (Elastic Load Balancer) in Amazon.
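To check that the group was created and that the minimum number of instances is being launched, you can describe it (the --headers flag just labels the output columns):

as-describe-auto-scaling-groups MyAutoScalingGroup --headers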

Next we create two scaling policies, one for adding a server and one for removing one:
[root@server]# as-put-scaling-policy HighCpuPolicy --auto-scaling-group MyAutoScalingGroup --adjustment=1 --type ChangeInCapacity --cooldown 300
arn:aws:autoscaling:us-east-1:xxx:scalingPolicy:xxx-xx-xx-xx-xxxxx:autoScalingGroupName/MyAutoScalingGroup:policyName/HighCpuPolicy
[root@server]# as-put-scaling-policy LowCpuPolicy --auto-scaling-group MyAutoScalingGroup --adjustment=-1 --type ChangeInCapacity --cooldown 300
arn:aws:autoscaling:us-east-1:xxx:scalingPolicy:xxx-xx-xx-xx-xxxxx:autoScalingGroupName/MyAutoScalingGroup:policyName/LowCpuPolicy

It is important to save the output here: the policy ARNs are what we pass as alarm actions below.
After this we must create two CloudWatch alarms that will trigger the scaling policies, so we need two more rules:
mon-put-metric-alarm HighCpuAlarm --comparison-operator GreaterThanThreshold --evaluation-periods 4 --metric-name CPUUtilization --namespace "AWS/EC2" --period 60 --statistic Average --threshold 30 --alarm-actions arn:aws:autoscaling:us-east-1:xxx:scalingPolicy:xxx-xx-xx-xx-xxxxx:autoScalingGroupName/MyAutoScalingGroup:policyName/HighCpuPolicy --dimensions "AutoScalingGroupName=MyAutoScalingGroup"

and
mon-put-metric-alarm LowCpuAlarm --comparison-operator LessThanThreshold --evaluation-periods 4 --metric-name CPUUtilization --namespace "AWS/EC2" --period 60 --statistic Average --threshold 20 --alarm-actions arn:aws:autoscaling:us-east-1:xxx:scalingPolicy:xxx-xx-xx-xx-xxxxx:autoScalingGroupName/MyAutoScalingGroup:policyName/LowCpuPolicy --dimensions "AutoScalingGroupName=MyAutoScalingGroup"

After this you will always have at least 2 instances up. If the CPU load stays above 30% for more than 4 minutes, the system will bring up one more server; if it drops below 20% for more than 4 minutes, one server will be removed from the load balancer.
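To verify that the alarms exist and to watch the scaling decisions being taken, something like the following should work with the same command-line tools:

# list the CloudWatch alarms and their current state
mon-describe-alarms --headers
# show the scaling activities performed on the group
as-describe-scaling-activities MyAutoScalingGroup --headers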

If you need to tear this down, you must delete things in the reverse order of creation, using the commands below with the names of your own rules (see the example after the list).
mon-delete-alarms
as-delete-policy
as-delete-auto-scaling-group
as-delete-launch-config
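For example, with the names used above (note that the group usually has to be scaled down to 0 instances first, or you can pass --force-delete to as-delete-auto-scaling-group):

mon-delete-alarms HighCpuAlarm LowCpuAlarm
as-delete-policy HighCpuPolicy --auto-scaling-group MyAutoScalingGroup
as-delete-policy LowCpuPolicy --auto-scaling-group MyAutoScalingGroup
as-delete-auto-scaling-group MyAutoScalingGroup --force-delete
as-delete-launch-config ec2elbconfig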

January 13, 2012

RAID10 ephemeral storage on AWS EC2

Filed under: Linux — admin @ 3:54 pm

If you’re thinking of doing RAID10 on the ephemeral storage disks attached to an EC2 instance, this post is for you. First of all you have to choose an instance type with 4 drives.

You may choose m1.xlarge, c1.xlarge or cc2.8xlarge; these are the instance types with 4 ephemeral drives. On other instances you may choose to make a RAID0 instead, which in some cases is also good.

First boot your instance, then check whether one of the drives is already mounted (in some cases only one is mounted, as ephemeral0).

You have to umount that drive.

After that, run fdisk on all of them and create a single full-size partition on each drive (this way the RAID10 will use all the space).

To see your drives, just run

ls -1 /dev/sd*

and you will see something like:
/dev/sda1
/dev/sdb
/dev/sdc
/dev/sdd
/dev/sde

What I want to do is combine sdb, sdc, sdd and sde into one RAID10 array (the same disks show up below under their xvd* names).

I’ll just create a single partition on each one. Using fdisk, I choose the fd (Linux raid auto) partition type and create partitions using the entire disk on each one. When I’m done, each drive looks like this:

fdisk -l /dev/sdb

Disk /dev/sdb: 450.9 GB, 450934865920 bytes
255 heads, 63 sectors/track, 54823 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xc696d4f6

Device Boot Start End Blocks Id System
/dev/sdb1 1 54823 440365716 fd Linux raid autodetect
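If you would rather not step through fdisk interactively four times, a scripted sketch like the one below should do the same thing. It assumes the disks appear as /dev/xvdb through /dev/xvde (as in the mdadm command below); double-check the device names before running it, since it repartitions the disks, and note that fdisk prompts can vary slightly between versions:

# create one full-size primary partition of type fd (Linux raid autodetect) on each disk
for d in xvdb xvdc xvdd xvde; do
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk /dev/$d
done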

Now I create the RAID:

mdadm  -v --create /dev/md0 --level=raid10 --raid-devices=4 /dev/xvdb1 /dev/xvdc1 /dev/xvdd1 /dev/xvde1

This takes some time, so to check whether the RAID build has finished, run

 watch cat /proc/mdstat

When it is ready you should see something like this:

Personalities : [raid10]
md127 : active raid10 xvdc1[1] xvdb1[0] xvdd1[2] xvde1[3]
880729088 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]

unused devices: <none>
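You can also get a more detailed view of the array (state, chunk size, and the member devices) with:

mdadm --detail /dev/md0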

OK, now we have to create a new partition on the array, format it and mount it. Just run fdisk /dev/md0 and create your partition, then:

fdisk /dev/md0 

mkfs -t ext4 /dev/md0p1

mkdir /mnt/raidd

mount  /dev/md0p1  /mnt/raidd

After you have done this, reboot your server. After the server is up and running, it appears that on Amazon the device gets renamed, so md0p1 will be something like md127p1.

You may run

grep md /var/log/dmesg

and you will see something like this
[ 0.436792] md: bind<xvde1>
[ 0.444720] md: bind<xvdd1>
[ 0.526356] md: bind<xvdb1>
[ 0.543458] md: bind<xvdc1>
[ 0.547763] md: raid10 personality registered for level 10
[ 0.548234] md/raid10:md127: active with 4 out of 4 devices
[ 0.548311] md127: detected capacity change from 0 to 901866586112
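If you want the array to keep a stable name across reboots, you can also record it in mdadm.conf (on CentOS/Amazon Linux the file is usually /etc/mdadm.conf; the exact path is an assumption for your distribution):

# append the array definition so mdadm assembles it with the same name after reboot
mdadm --detail --scan >> /etc/mdadm.conf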

After this you may add the line below to fstab:

/dev/md127p1 /mnt/raidd auto defaults,comment=cloudconfig 0 2

Now if you run

mount /mnt/raidd 

you should have the RAID mounted:

df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 9.9G 2.4G 7.0G 26% /
tmpfs 7.4G 0 7.4G 0% /dev/shm
/dev/md127p1 827G 6.7G 779G 1% /mnt/raidd

 
