==Recovering SME Server with lvm drives==

==Method A==

Let’s try starting the raid and see what we get:

mdadm -E /dev/sdb1

The output of “mdadm -E /dev/sdb1” shows whether the partition is part of a RAID array, the RAID level, how many member devices it has, and so on.

user@user-desktop:/mnt$ mdadm -E /dev/sdb2
/dev/sdb2:
         Magic : a92b4efc
       Version : 00.90.00
          UUID : 550e0406:c9ce50d2:825b32e4:4a9d3549
 Creation Time : Sat Sep  8 12:15:29 2007
    Raid Level : raid1
 Used Dev Size : 1991936 (1945.58 MiB 2039.74 MB)
    Array Size : 1991936 (1945.58 MiB 2039.74 MB)
  Raid Devices : 2
 Total Devices : 1
Preferred Minor : 2
   Update Time : Sat Sep  8 12:22:05 2007
         State : clean
Active Devices : 1
Working Devices : 1
Failed Devices : 1
 Spare Devices : 0
      Checksum : 22e3837f - correct
        Events : 0.991
     Number   Major   Minor   RaidDevice State
this     0       8        2        0      active sync   /dev/sda2
   0     0       8        2        0      active sync   /dev/sda2
   1     1       0        0        1      faulty removed
user@user-desktop:/mnt$

Since this is a RAID 1 array, we only need one member to start it.

You can also use any md device to assemble the array, as long as it is not already in use. To check which md devices are free, type:

cat /proc/mdstat
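
For illustration, the output might look something like this (the device names and block counts here are hypothetical); any mdX name that does not appear in the list is free to use:

 Personalities : [raid1]
 md0 : active raid1 sda1[0] sdc1[1]
       104320 blocks [2/2] [UU]
 unused devices: <none>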

We have now found a free md device; for our example we will use “md8”.

What we will do now is assemble and run the array (-A assembles it, -R runs it even though it is degraded):

mdadm -AR /dev/md8 /dev/sdb2

If you are running something other than RAID 1, you may need to include additional members from other drives:

mdadm -AR /dev/md8 /dev/sdb2 /dev/sdd2 /dev/sde3

Now see if the array is assembled:

cat /proc/mdstat
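
If assembly succeeded, an entry like the following should appear (the member list and block count are illustrative; [2/1] [U_] indicates a two-member array running degraded on one member):

 md8 : active raid1 sdb2[0]
       1991936 blocks [2/1] [U_]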

See if it detects the physical volumes:

user@user-desktop:~$ pvs
 PV         VG   Fmt  Attr PSize PFree
 /dev/md8   main lvm2 a-   1.88G 32.00M
user@user-desktop:~$

Activate the volume group so its logical volumes become available; the example below first deactivates (-a n) and then activates (-a y) the group “main”:

user@user-desktop:~$ vgchange main -a n
 0 logical volume(s) in volume group "main" now active
user@user-desktop:~$ vgchange main -a y
 2 logical volume(s) in volume group "main" now active
user@user-desktop:~$
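
As a quick check (the same one used in Method B below), lvscan should now list the logical volumes as ACTIVE:

 user@user-desktop:~$ lvscan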

Now we should be able to mount the drive:

user@user-desktop:~$ mount /dev/main/root /mnt/oldsmeserver/
user@user-desktop:~$
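
If the mount fails because the mount point does not exist, create it first (using the same path as above):

 user@user-desktop:~$ mkdir -p /mnt/oldsmeserver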

Looking good, so let’s list the files:

user@user-desktop:~$ cd /mnt/oldsmeserver/
user@user-desktop:/mnt/oldsmeserver$ dir
aquota.group  boot     etc     lib         mnt      proc  selinux  sys  var
aquota.user   command  home    lost+found  opt      root  service  tmp
bin           dev      initrd  media       package  sbin  srv      usr
user@user-desktop:/mnt/oldsmeserver$

You have now successfully assembled your array and are able to recover your data.
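
For example, to copy the user data off to another disk (SME Server keeps most of its data under /home/e-smith; the destination path /backup/sme is hypothetical):

 rsync -a /mnt/oldsmeserver/home/e-smith/ /backup/sme/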

Notes:

* If the existing system already uses LVM and has a volume group called "main", there may be naming conflicts (see the sketch after these notes).
* If you installed an SME Server version earlier than 7.0, your volume group name will be different. To find out your volume group name, type:
user@user-desktop:~$ vgdisplay
 --- Volume group ---
 VG Name               main        <-- the volume group name
 System ID
 Format                lvm2
[..]
user@user-desktop:~$
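
If you hit the "main" naming conflict mentioned above, one way out (a sketch, not part of the original procedure) is to rename the attached volume group by its UUID before activating it; vgrename accepts a VG UUID in place of a name, and “oldmain” here is just an example:

 vgdisplay                        # note the VG UUID of the attached disk's "main"
 vgrename <VG-UUID> oldmain       # rename by UUID to avoid the name clash
 vgchange oldmain -a y            # then activate under the new name

After that, the logical volumes appear as /dev/oldmain/root and /dev/oldmain/swap.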

==Method B==

This method is based on http://www.linuxjournal.com/article/8874?page=0,0

On Ubuntu (a non-LVM install), install mdadm and lvm2, then attach the server drive.

Find the array UUIDs:

$ sudo mdadm --examine --scan  /dev/sdb1 /dev/sdb2
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=895293be:9cfa7672:f1761508:386417bc
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=10573599:841f46aa:4c068816:67364324
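
These lines can be appended to the configuration file in one step (a convenience one-liner, assuming the same device names; tee -a appends with root rights):

 $ sudo mdadm --examine --scan /dev/sdb1 /dev/sdb2 | sudo tee -a /etc/mdadm/mdadm.conf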


Either way, after adding the ARRAY lines, mdadm.conf should look like this:

$ cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=895293be:9cfa7672:f1761508:386417bc
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=10573599:841f46aa:4c068816:67364324
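
The arrays still have to be started before LVM can see them; this step is implicit in the original. With the ARRAY lines in place, mdadm can assemble everything from the configuration file:

 $ sudo mdadm --assemble --scan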


Check that the physical volume is detected:

$ sudo pvscan
 PV /dev/md2   VG main   lvm2 [148.94 GB / 64.00 MB free]
 Total: 1 [148.94 GB] / in use: 1 [148.94 GB] / in no VG: 0 [0   ]

Check that the logical volumes are detected:

$ sudo lvscan
 ACTIVE            '/dev/main/root' [146.94 GB] inherit
 ACTIVE            '/dev/main/swap' [1.94 GB] inherit
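
If lvscan reports the volumes as inactive instead of ACTIVE, activate the volume group first (an extra step, not needed in the original run):

 $ sudo vgchange -a y main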

Mount, check, and copy the data to a safe location:

$ sudo mkdir /mnt/ga
$ sudo mount /dev/main/root /mnt/ga
$ sudo ls -la /mnt/ga/var/log/messages
lrwxrwxrwx 1 root root 32 2010-03-24 18:00 /mnt/ga/var/log/messages -> /var/log/messages.20*
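
Note that /mnt/ga/var/log/messages is an absolute symlink, so it points at the rescue system's own /var/log rather than the mounted disk; read the rotated file directly instead, for example (the exact filename will differ):

 $ sudo ls /mnt/ga/var/log/messages.*

<noinclude>[[Category:Howto]]</noinclude>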