Grub
There may be an occasion when an issue with grub causes your server not to boot. There is no need to reinstall the system; you just need to repair the grub installation.
Fix the GRUB from the startup command line
Normally, if grub cannot start and your system is not damaged, you will get a basic system prompt ... otherwise go to Chapter 2.
I presume that your SME Server is using a RAID1 of two disks, otherwise you have to adapt this HOWTO to your setup.
In a terminal at the command prompt issue the following commands
grub
root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
reboot
To install grub to /dev/sdb you issue the following commands; for further disks use the matching label (sdc, sdd, sde ....)
grub
root (hd1,0)
setup (hd1)
grub must be installed on all the other drives so that the system can still boot if one disk fails. How to reach the BIOS boot menu depends on your hardware ... F12 on Dell, Esc on Acer, F11 on MSI .... My SME Server uses RAID1 so I have two drives; you will need to adapt my example to your number of disks.
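For example, with a hypothetical third member disk /dev/sdc that the BIOS sees as (hd2) and that already carries the /boot RAID partition, the same sequence would be:
grub
device (hd2) /dev/sdc
root (hd2,0)
setup (hd2)
quit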
Fix the GRUB with the systemrescuecd
I presume that your SME Server is on a RAID1 of two disks, otherwise you will have to adapt this HOWTO.
- start the system with your system rescue CD or your USB stick, and choose your keyboard settings
- then start the X server
startx
- open a terminal to verify whether your RAID is assembled.
cat /proc/mdstat
- if you are lucky the output will look like this
# cat /proc/mdstat
Personalities : [raid1]
md99 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]

md100 : active raid1 sdb2[1] sda2[0]
      262036096 blocks [2/2] [UU]

unused devices: <none>
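If the arrays are not listed, they have probably not been assembled yet; you can usually bring them up first (a hedged aside; device names depend on your setup):
mdadm --assemble --scan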
- so we need to activate the LVM volume groups
vgchange -ay
- afterwards, if the LVM activated without error messages, check the logical volumes before mounting them under /mnt
lvdisplay
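On a default SME Server install the logical volumes live in the "main" volume group (the /dev/main/root path is used below); an abridged sketch of what lvdisplay should show, with sizes and UUIDs omitted and the swap volume name assumed:
  --- Logical volume ---
  LV Name                /dev/main/root
  VG Name                main
  LV Status              available
  ...
  --- Logical volume ---
  LV Name                /dev/main/swap
  VG Name                main
  LV Status              available
  ...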
- now that the LVM is active, mount the root filesystem and chroot into it
mkdir /mnt/sysimage
mount /dev/main/root /mnt/sysimage
mount -o bind /dev /mnt/sysimage/dev
mount -o bind /proc /mnt/sysimage/proc
chroot /mnt/sysimage /bin/bash
- We now have to mount the /boot of your system, which is normally contained in /dev/md1.
To find the right array, run
cat /proc/mdstat
- note the smallest md(X), about 100 MB, then in your root terminal do this:
mount /dev/md(X) /boot
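For example, if the small array turns out to be md1 (the usual case on a default install, but check your own mdstat output):
mount /dev/md1 /boot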
and then
grub
root (hd0,0)
setup (hd0)
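Before rebooting, it is cleanest to leave the chroot and unmount what was mounted earlier; a minimal sketch assuming the mount points used above:
exit                          # leave the chroot shell
umount /mnt/sysimage/boot
umount /mnt/sysimage/proc
umount /mnt/sysimage/dev
umount /mnt/sysimage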
you can now restart and continue with the tutorial
Installation of grub on the other disks
- once your SME Server has started, log in as root
then type
grub
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
quit
- do the same for the other drives, one at a time. My SME Server uses RAID1 so I have two drives; you will need to adapt my example to your number of disks.
From there you can reboot your server and check that grub is installed on each hard disk. It is simple: in the BIOS boot menu you choose which disk to boot from. How to reach the BIOS boot menu depends on your hardware ... F12 on Dell, Esc on Acer, F11 on MSI ....
Installation of grub on a new disk
When your system is started you can install grub from the command line on any other disk you want. Replace /dev/sdb in the example below with the name of your new disk.
[root@sme8dev64 ~]# grub
Probing devices to guess BIOS drives. This may take a long time.

    GNU GRUB  version 0.97  (640K lower / 3072K upper memory)

 [ Minimal BASH-like line editing is supported.  For the first word, TAB
   lists possible command completions.  Anywhere else TAB lists the possible
   completions of a device/filename.]
grub> device (hd0) /dev/sdb
device (hd0) /dev/sdb
grub> root (hd0,0)
root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd
grub> setup (hd0)
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  15 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
grub> quit
quit
Fix Grub on an HP Proliant DL380 G2
see bugzilla:7024 for more information
The server is configured using hardware RAID in RAID5: 6 drives as 1 logical drive (1 spare).
The forum post describing the issue is here:
The solution posted by CharlieBrady on the forums can be found here:
System information:
 HP Proliant DL380 G2
 Dual P3 1.266GHz (original system processors)
 Memory: 2GB RAM
 Hard Drives: 6x Compaq Hot-swappable 36.4GB, 10k or 15k RPM (mixed set) (BF03664664)
To re-configure GRUB, follow these steps
1. Back up /etc/grub.conf to /etc/grub.old
cp /etc/grub.conf /etc/grub.old
2. Edit /etc/grub.conf
nano /etc/grub.conf
3. Uncomment the line that starts with #boot=
4. Edit the boot= line to read
boot=/dev/cciss/c0d0
This line previously read (if I remember correctly) #boot=/dev/md0
5. Open /boot/grub/device.map
nano /boot/grub/device.map
5a. It SHOULD read
(hd0) /dev/cciss/c0d0
6. Run GRUB with the following parameters:
/sbin/grub --batch --device-map=/boot/grub/device.map --config-file=/boot/grub/grub.conf --no-floppy
7. Run the following commands in grub
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
8. Restart the server