Revision as of 10:36, 23 June 2013
Sometimes you can run into problems with GRUB, typically when your server doesn't want to start. There is no need to reinstall the system; you just have to fix GRUB.
Fix the GRUB from the startup command line

Normally, if GRUB cannot start your system and you are lucky, you still get a minimal GRUB command prompt ... otherwise go to the next section.
root (hd0,0)
 Filesystem type is ext2fs, partition type 0xfd
setup (hd0)
 Checking if "/boot/grub/stage1" exists... no
 Checking if "/grub/stage1" exists... yes
 Checking if "/grub/stage2" exists... yes
 Checking if "/grub/e2fs_stage1_5" exists... yes
 Running "embed /grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
succeeded
 Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
Done.
reboot
For /dev/sdb you do the same, and likewise for any further disks (sdc, sdd, sde ...):

root (hd1,0)
setup (hd1)
We must install GRUB on all the other drives so that any of them can boot the system. How to reach the BIOS boot menu depends on your hardware: F12 on Dell, Esc on Acer, F11 on MSI ... My SME Server uses RAID1, so I have two drives; you will need to adapt my example to your number of disks.
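The per-disk grub commands above can also be scripted. Here is a minimal sketch (not from the original howto; the helper name and device arguments are assumptions) that generates the GRUB Legacy batch commands for one disk, so you can review them before piping them into the grub shell:

```shell
# Hypothetical helper (not part of SME Server): print the GRUB Legacy
# batch commands that install the boot loader on one disk.
#   $1 = BIOS drive number (0, 1, ...), $2 = Linux device (e.g. /dev/sdb)
gen_grub_batch() {
    printf 'device (hd%s) %s\nroot (hd%s,0)\nsetup (hd%s)\nquit\n' \
        "$1" "$2" "$1" "$1"
}

# Review the commands first, then pipe them into the real grub shell:
#   gen_grub_batch 1 /dev/sdb                  # just prints the commands
#   gen_grub_batch 1 /dev/sdb | grub --batch   # actually installs
```

Generating the script and piping it into `grub --batch` keeps the interactive steps and the scripted steps identical, which makes it easy to extend to sdc, sdd, and so on.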
Fix the GRUB with the SystemRescueCd

Note: we will work with SystemRescueCd (http://www.sysresccd.org/SystemRescueCd_Homepage), a Linux system rescue disk available as a bootable CD-ROM or USB stick for administering or repairing your system and data after a crash; download it from http://www.sysresccd.org/Download. The goal is to mount your logical volumes on /mnt, from where you can save them to a USB disk.
- start the system with your SystemRescueCd or USB stick and choose your keyboard settings
- then start the X server
startx
- open a terminal to check whether your RAID is assembled
cat /proc/mdstat
- if you are lucky, the output will look like this:
# cat /proc/mdstat
Personalities : [raid1]
md99 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
md100 : active raid1 sdb2[1] sda2[0]
      262036096 blocks [2/2] [UU]
unused devices: <none>
- next, we need to activate the LVM:
vgchange -ay
- afterwards, if the LVM activates without error messages, we can mount it under /mnt

Tip: if your logical volume is not named /dev/main/root, the following command lists all your logical volumes; adapt the instructions to your configuration.

lvdisplay
- to mount your LVM and enter the installed system, do this:
mkdir /mnt/sysimage
mount /dev/main/root /mnt/sysimage
mount -o bind /dev /mnt/sysimage/dev
mount -o bind /proc /mnt/sysimage/proc
chroot /mnt/sysimage /bin/bash
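The five commands above can be wrapped in a small dry-run helper (a sketch, not from the howto; the function name and defaults are assumptions). With RUN unset it only prints what it would do, which is handy to review before executing as root:

```shell
# Hypothetical dry-run wrapper around the mount-and-chroot steps above.
# With RUN unset it only echoes the commands; set RUN=1 to execute them.
setup_chroot() {
    target=${1:-/mnt/sysimage}
    rootlv=${2:-/dev/main/root}   # assumed default SME LVM name
    for cmd in \
        "mkdir -p $target" \
        "mount $rootlv $target" \
        "mount -o bind /dev $target/dev" \
        "mount -o bind /proc $target/proc" \
        "chroot $target /bin/bash"; do
        if [ -n "$RUN" ]; then $cmd; else echo "$cmd"; fi
    done
}
```

Running `setup_chroot` with no arguments prints the exact command sequence; `RUN=1 setup_chroot` performs it.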
- we now have to mount the /boot of your system, which is normally contained in /dev/md1. To find it, run:
cat /proc/mdstat
- note the smallest md(X) (about 100 MB), then in your chroot terminal run:
mount /dev/md(X) /boot
and then
grub
root (hd0,0)
setup (hd0)
You can now restart and continue with the tutorial.
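Picking the smallest md(X) by eye from /proc/mdstat can also be scripted. Below is a hedged awk sketch (the helper name is an assumption, and it assumes the two-line mdstat format shown earlier) that prints the array with the fewest blocks, which is normally the small /boot array:

```shell
# Hypothetical helper: print the md array with the fewest blocks listed
# in /proc/mdstat -- normally the small (~100 MB) /boot array. Assumes
# the two-line format shown earlier ("mdN : active ..." / "NNN blocks").
smallest_md() {
    awk '/^md/      { name = $1 }
         / blocks / { if (min == "" || $1 + 0 < min) { min = $1 + 0; dev = name } }
         END        { if (dev != "") print "/dev/" dev }' "${1:-/proc/mdstat}"
}

# Example: mount "$(smallest_md)" /boot
```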
Installation of GRUB on the other disks
- once your SME Server has started, log in as root
grub
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
quit
- do the same for the other drives, installing on one at a time
From there you can reboot your server and check that GRUB is installed on each hard disk. It is simple: from the boot menu or the BIOS, choose which hard disk to boot from.
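As a quick sanity check before rebooting (a sketch under assumptions, not part of the original howto): GRUB Legacy's stage1 embeds the string "GRUB" in the first sector of the disk, so you can peek at each MBR. The function name is made up, and actually booting from each disk remains the real test:

```shell
# Hypothetical check: GRUB Legacy stage1 leaves the string "GRUB" in the
# first sector of the disk, so grepping the MBR gives a rough indication.
#   $1 = disk device or image file (e.g. /dev/sda)
has_grub_mbr() {
    dd if="$1" bs=512 count=1 2>/dev/null | grep -q GRUB
}

# Example:
#   for d in /dev/sda /dev/sdb; do
#       has_grub_mbr "$d" && echo "$d: GRUB in MBR" || echo "$d: no GRUB in MBR"
#   done
```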