==Recovering SME Server with lvm drives==

The purpose of this howto is to show you how to access your data when SME Server 8 is broken and cannot start normally.
Several methods are given below. All of them assume the default RAID over LVM layout, so you may need to adapt them to your configuration.

If your problem concerns grub, you should look at this [[Grub|wiki page]]


===Method with the official SME Server CDROM===
This method assumes that your SME Server uses RAID over LVM; otherwise you will have to adapt it.

* start the system with your official SME Server CDROM
* at the boot prompt, type: '''linux rescue'''
* set your language and your keyboard
* answer '''no''' when asked to start the network interfaces
* choose '''continue''' when asked to mount the system on /mnt/sysimage
* select '''ok'''
* at the shell prompt, type:
 chroot /mnt/sysimage
 su -

Your LVM is now mounted and you can read your data in the chroot environment, for example to save it to a usb disk.

* to exit:
 exit
 exit
 halt
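Once a usb disk is mounted, the data can be saved with <code>cp -a</code>, which preserves permissions and ownership. A minimal sketch, demonstrated on throwaway directories so the commands can be tried safely; on the real system the source would be a data directory such as /home/e-smith/files and the destination your mounted usb disk (both paths are assumptions, adapt them):

```shell
# Sketch: save data with cp -a (preserves permissions, owners, timestamps).
# SRC stands in for a data directory in the chroot (e.g. /home/e-smith/files),
# DST for a mounted usb disk (e.g. /media/usb) -- both are assumed paths.
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "important data" > "$SRC/ibay.txt"
cp -a "$SRC"/. "$DST"/
ls "$DST"                      # the copied files, with ownership intact
```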

===Method with SystemRescueCd===

{{note box| We will work with [http://www.sysresccd.org/SystemRescueCd_Homepage SystemRescueCd], a Linux rescue system available as a bootable CD-ROM or USB stick for administering or repairing your system and data after a crash. [http://www.sysresccd.org/Download Download]. The goal is to mount your logical volumes on /mnt so you can save them to a usb disk.}}

start the system with your SystemRescueCd CD or usb stick and choose your keyboard settings

then start the X server

 startx

open a terminal and verify that your raid was initiated:

 cat /proc/mdstat

if you are lucky, the output will look like this:

 # cat /proc/mdstat
 Personalities : [raid1]
 md99 : active raid1 sdb1[1] sda1[0]
       104320 blocks [2/2] [UU]
 
 md100 : active raid1 sdb2[1] sda2[0]
       262036096 blocks [2/2] [UU]
 
 unused devices: <none>
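A healthy member shows as U in the [UU] column; a failed one shows as _. As a quick sanity check you can scan the status lines for an underscore, sketched here against a sample of the output above (on the real system read /proc/mdstat directly):

```shell
# Sketch: flag a degraded array ("_" in the [UU] status means a failed member).
# The sample mirrors the output above; on a real system use /proc/mdstat.
cat <<'EOF' > /tmp/mdstat.sample
md99 : active raid1 sdb1[1] sda1[0]
      104320 blocks [2/2] [UU]
md100 : active raid1 sdb2[1] sda2[0]
      262036096 blocks [2/2] [UU]
EOF
awk '/\[[U_]+\]/ { if ($0 ~ /_/) bad=1 } END { print (bad ? "DEGRADED" : "all arrays healthy") }' /tmp/mdstat.sample
# -> all arrays healthy
```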

so we need to activate the LVM volume group:

 vgchange -ay

afterwards, if the LVM is activated without error messages, we can mount it under /mnt:

 mkdir /mnt/recover
 mount /dev/main/root /mnt/recover

{{tip box|if your logical volume is not named '''/dev/main/root''', run the following command to list all your logical volumes, then adapt the commands above to your configuration.}}

 lvdisplay

Your LVM is now mounted and you can read your data under /mnt/recover and save it to a usb disk, with the file browser for example.
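Before copying anything, it is worth confirming that the volume really is mounted. A small check, assuming the /mnt/recover mount point used above:

```shell
# Sketch: confirm the logical volume is mounted before copying.
# /mnt/recover is the mount point used above.
df -h /mnt/recover            # should list /dev/mapper/main-root
mount | grep /mnt/recover     # prints the mount line; empty if not mounted
```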

===Method===
Let’s try starting the raid and see what we get:

 user@user-desktop:~$

===Method with a ubuntu Cdrom===
based on http://www.linuxjournal.com/article/8874?page=0,0

on ubuntu (non lvm), install mdadm and lvm2, then attach the server drive

find the UUIDs:

 $ sudo mdadm --examine --scan /dev/sdb1 /dev/sdb2
 ARRAY /dev/md2 level=raid1 num-devices=2 UUID=895293be:9cfa7672:f1761508:386417bc
 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=10573599:841f46aa:4c068816:67364324

add the ARRAY lines to mdadm.conf:

 $ cat /etc/mdadm/mdadm.conf
 # mdadm.conf
 #
 # Please refer to mdadm.conf(5) for information about this file.
 #
 
 # by default, scan all partitions (/proc/partitions) for MD superblocks.
 # alternatively, specify devices to scan, using wildcards if desired.
 DEVICE partitions
 ARRAY /dev/md2 level=raid1 num-devices=2 UUID=895293be:9cfa7672:f1761508:386417bc
 ARRAY /dev/md1 level=raid1 num-devices=2 UUID=10573599:841f46aa:4c068816:67364324
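Rather than pasting the ARRAY lines by hand, the scan output can be appended directly with <code>sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf</code>. A sketch of that step against a throwaway copy of the file, using the scan output shown above, so it can be tried safely:

```shell
# On the real system: sudo mdadm --examine --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Demonstrated here on a throwaway file instead of /etc/mdadm/mdadm.conf.
CONF=$(mktemp)
printf 'DEVICE partitions\n' > "$CONF"
# stand-in for the "mdadm --examine --scan" output shown above
cat >> "$CONF" <<'EOF'
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=895293be:9cfa7672:f1761508:386417bc
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=10573599:841f46aa:4c068816:67364324
EOF
grep -c '^ARRAY' "$CONF"     # -> 2
```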

check the physical volumes:

 $ sudo pvscan
   PV /dev/md2   VG main   lvm2 [148.94 GB / 64.00 MB free]
   Total: 1 [148.94 GB] / in use: 1 [148.94 GB] / in no VG: 0 [0   ]

check the logical volumes:

 $ sudo lvscan
   ACTIVE            '/dev/main/root' [146.94 GB] inherit
   ACTIVE            '/dev/main/swap' [1.94 GB] inherit
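The device paths to mount can be pulled straight out of the lvscan output: everything between the single quotes is the LV path. A sketch over a sample of the output above (on the real system, pipe <code>sudo lvscan</code> instead of the printf):

```shell
# Sketch: extract the LV device paths from lvscan-style output.
# The sample mirrors the output above; for real use: sudo lvscan | awk -F"'" '{print $2}'
printf "  ACTIVE            '/dev/main/root' [146.94 GB] inherit\n  ACTIVE            '/dev/main/swap' [1.94 GB] inherit\n" \
  | awk -F"'" '{print $2}'
# -> /dev/main/root
# -> /dev/main/swap
```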

mount, check and copy to a safe location ...

 $ sudo mkdir /mnt/ga
 $ sudo mount /dev/main/root /mnt/ga
 $ sudo ls -la /mnt/ga/var/log/messages
 lrwxrwxrwx 1 root root 32 2010-03-24 18:00 /mnt/ga/var/log/messages -> /var/log/messages.20*

<noinclude>[[Category:Howto]][[Category:Administration:Storage]]</noinclude>