How to, and other stuff about Linux, photo, PHP … Another blog just like the others on the internet

January 20, 2017

grub-install error: no such disk

Filed under: Linux — admin @ 6:42 pm

Well, a new problem. After one hard drive failed in the RAID array, running grub-install /dev/sda produced this error:

/usr/sbin/grub-probe: error: no such disk.
Auto-detection of a filesystem of /dev/md1 failed.
Please report this together with the output of "/usr/sbin/grub-probe --device-map=/boot/grub/device.map --target=fs -v /boot/grub" to <bug-grub@gnu.org>

The fix: after you mount the partitions as described at http://matrafox.info/reinstall-grub-after-raid-crash.html, run:

1. mv /boot/grub/device.map /boot/grub/device.map.old
2. grub-mkdevicemap
3. update-grub2 && grub-install /dev/sda && grub-install /dev/sdb
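Steps 1 and 2 above can be wrapped in a small function with the paths parameterized (a sketch only; the function name `regen_device_map` and the default /boot/grub location are my own assumptions about your layout):

```shell
# Sketch of steps 1-2: retire the stale device map and let GRUB write a
# fresh one. Run update-grub2 / grub-install afterwards, as in step 3.
regen_device_map() {
    local grub_dir="${1:-/boot/grub}"
    # Step 1: keep the old map around for comparison instead of deleting it
    mv "$grub_dir/device.map" "$grub_dir/device.map.old"
    # Step 2: rescan the disks and write a fresh device.map
    grub-mkdevicemap
}
```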

July 9, 2015

How To Restore a cPanel Server from old hard drive

Filed under: Linux — admin @ 11:52 am

First of all, you follow this tutorial at your own risk. It assumes you are in this situation: a reinstall was done, and you have a snapshot mounted or the old hard drive attached.
It also assumes you don't have WHM backups that you could simply import.

We assume your old hard drive is mounted at /old-drive.

First we sync the important configuration from /etc. After this step, do not log out; keep your current session open and test logging in from a new one. Because we also sync the shadow files, you will log in with your old passwords.
cd /old-drive/etc/
rsync -avHz user* trueuser* domainips secondarymx domainalias valiases vfilters exim* backupmxhosts proftpd* pure-ftpd* logrotate.conf passwd* group* *domain* *named* wwwacct.conf cpbackup.conf cpupdate.conf quota.conf shadow* *rndc* ips* ipaddrpool* ssl hosts spammer* skipsmtpcheckhosts relay* localdomains remotedomains my.cnf /etc
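Given the at-your-own-risk warning above, it is worth previewing what a sync this large would overwrite before running it for real. A minimal sketch (the function name `preview_etc_sync` is mine, and the file list is trimmed to the password/group/shadow files; extend it to match the full command above):

```shell
# Dry-run preview: the -n flag makes rsync only print what it *would*
# transfer, without touching the destination.
preview_etc_sync() {
    local src="${1:-/old-drive/etc/}" dst="${2:-/etc}"
    rsync -avHzn "$src"passwd* "$src"group* "$src"shadow* "$dst"
}
```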

Next, the Apache configuration:
rsync -avHz /old-drive/usr/local/apache/conf /usr/local/apache
rsync -avHz /old-drive/usr/local/apache/modules /usr/local/apache
rsync -avHz /old-drive/usr/local/apache/domlogs /usr/local/apache

DNS configuration
rsync -avHz /old-drive/var/named /var

Cpanel restore
rsync -avHz /old-drive/usr/local/cpanel /usr/local

MySQL restoration
rsync -avHz /old-drive/var/lib/mysql /var/lib

cPanel files and templates
rsync -avHz /old-drive/var/cpanel /var

SSL certificates
rsync -avHz /old-drive/usr/share/ssl /usr/share

User bandwidth
rsync -avHz /old-drive/var/log/bandwidth /var/log

Cron jobs
rsync -avHz /old-drive/var/spool/cron /var/spool

MySQL root password
rsync -avHz /old-drive/root/.my.cnf /root

User home directories (this will take some time if you have huge websites)
rsync -avHz --exclude=virtfs/ /old-drive/home/* /home

After copying all this, we need to recompile and rebuild a few things:

/scripts/upcp --force
/scripts/easyapache
/scripts/initquotas
/scripts/eximup --force
/scripts/mysqlup --force
/etc/init.d/cpanel restart
/scripts/restartsrv_apache
/scripts/restartsrv_exim
/scripts/restartsrv_named

September 4, 2012

Replacing a defective drive from a raid 1

Filed under: Linux — admin @ 10:51 am

Well, yesterday I received the daily e-mail report and saw that my RAID was failing.

cat /proc/mdstat
Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
2102464 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
264960 blocks [2/2] [UU]

md2 : active raid1 sdb3[2](F) sda3[0]
1462766336 blocks [2/1] [U_]

So sdb3 is marked as a failed drive (the (F) flag), and [U_] means the array is degraded.
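The degraded state can also be spotted programmatically, which is handy for scripting checks. A sketch (the function name `degraded_arrays` is mine) that reports arrays whose status bitmap contains an underscore:

```shell
# Sketch: print the name of each md array with a missing or failed
# member, by looking for an underscore in the [UU]-style bitmap.
degraded_arrays() {
    awk '/^md[0-9]+ :/ { array = $1 }
         / blocks / && /\[[^]]*_[^]]*\]/ { print array }' "${1:-/proc/mdstat}"
}
```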
From this point I remove sdb1 and sdb2 from their arrays as well, but first I mark them as failed:

mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md0 --fail /dev/sdb1

mdadm /dev/md0 -r /dev/sdb1
mdadm /dev/md1 -r /dev/sdb2
mdadm /dev/md2 -r /dev/sdb3

After replacing the hard drive, I have to recreate the same partition layout on the new sdb and add its partitions back to the arrays.

sfdisk -d /dev/sda | sfdisk /dev/sdb
mdadm /dev/md0 -a /dev/sdb1
mdadm /dev/md1 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb3
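Before (or right after) re-adding the members, it can be reassuring to verify that the sfdisk copy really produced identical partition tables. A sketch (`same_ptable` is an illustrative name of my own), relying on the fact that `sfdisk -d` dumps the table as plain text:

```shell
# Sketch: compare the dumped partition tables of two drives.
# Returns success (0) when they match.
same_ptable() {
    [ "$(sfdisk -d "$1")" = "$(sfdisk -d "$2")" ]
}
```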

Now watch the RAID recover:
watch cat /proc/mdstat
Every 2.0s: cat /proc/mdstat Tue Sep 4 09:52:52 2012

Personalities : [raid1] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 sdb1[1] sda1[0]
2102464 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
264960 blocks [2/2] [UU]

md2 : active raid1 sdb3[2] sda3[0]
1462766336 blocks [2/1] [U_]
[===>.................] recovery = 16.0% (234480192/1462766336) finish=412.8min speed=49580K/sec

unused devices: <none>

However, the speed may be low, so how do we increase it?

cat /proc/sys/dev/raid/speed_limit_max
200000
cat /proc/sys/dev/raid/speed_limit_min
1000

Now I increase the minimum limit to 50000:
echo 50000 >/proc/sys/dev/raid/speed_limit_min
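The same knob is exposed through sysctl, which is handy if you want the setting to survive a reboot; `dev.raid.speed_limit_min` is the sysctl name corresponding to the /proc path above (a sketch of the config fragment, adjust the value to taste):

```shell
# /etc/sysctl.conf fragment: floor for md resync speed, in KiB/s per device
dev.raid.speed_limit_min = 50000
```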

If you watch cat /proc/mdstat again, you will see the speed improve and the estimated finish time drop.

Powered by WordPress