*How should I set up my hard drives?

We never recommend anything other than a '''single disk install''' or '''multiple disks of the same type'''. Anything else is an unsupported setup, and you will need to navigate it for yourself. To repeat: we never recommend anything other than a '''single disk install''' or '''multiple disks of the same type'''. If you are thinking of doing anything else (such as setting up your own partitions), read this section again.

*How should I set up my RAID?

A full article on RAID is found here: [[:Raid]]

*I want to use a hardware RAID. What do you suggest?

Please see the notes in the RAID article: [[:Raid#Raid_Notes]]

*How do I recover an SME Server with lvm drives?

A full article on the recovery method is found here: [[:Recovering_SME_Server_with_lvm_drives]]

*I'm installing a RAID 5 but it seems to take a long time. Is there something wrong?

Some USB drives need to be plugged into the server twice before they are recognized.

===Recovering SME Server with lvm drives===

Let's try starting the raid. First, examine one of the raid members to see what we get:

 mdadm -E /dev/sdb2

The "mdadm -E /dev/sdb2" command shows whether the partition is part of a raid array, what raid level it uses, how many members the array has, and so on. A full transcript from an example system:

 user@user-desktop:/mnt$ mdadm -E /dev/sdb2
 /dev/sdb2:
           Magic : a92b4efc
         Version : 00.90.00
            UUID : 550e0406:c9ce50d2:825b32e4:4a9d3549
   Creation Time : Sat Sep  8 12:15:29 2007
      Raid Level : raid1
   Used Dev Size : 1991936 (1945.58 MiB 2039.74 MB)
      Array Size : 1991936 (1945.58 MiB 2039.74 MB)
    Raid Devices : 2
   Total Devices : 1
 Preferred Minor : 2
     Update Time : Sat Sep  8 12:22:05 2007
           State : clean
  Active Devices : 1
 Working Devices : 1
  Failed Devices : 1
   Spare Devices : 0
        Checksum : 22e3837f - correct
          Events : 0.991
 
       Number   Major   Minor   RaidDevice State
 this     0       8        2        0      active sync   /dev/sda2
 
    0     0       8        2        0      active sync   /dev/sda2
    1     1       0        0        1      faulty removed
 user@user-desktop:/mnt$

With it being a raid 1, we only need one member to start it.

You can use any md device to assemble the array, but you need to make sure it isn't already in use. To check which md devices are in use, type:

 cat /proc/mdstat

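As an illustration only (the array and device names below are hypothetical), the output lists every md device currently in use, so any md number that does not appear is free:

 Personalities : [raid1]
 md0 : active raid1 sda1[0] sdc1[1]
       104320 blocks [2/2] [UU]
 
 unused devices: <none>
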
So we have now found an md device we can use; for our example we will use "md8".

What we will do now is assemble and run the array:

 mdadm -AR /dev/md8 /dev/sdb2

If you are running something other than raid1, you may need to include additional members from other drives:

 mdadm -AR /dev/md8 /dev/sdb2 /dev/sdd2 /dev/sde3

Now see if the array is assembled:

 cat /proc/mdstat

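Purely as an illustration (the member name and block count here are taken from the example above, but yours will differ), a raid1 assembled from a single member shows up as a running but degraded array:

 md8 : active raid1 sdb2[0]
       1991936 blocks [2/1] [U_]
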
See if LVM detects the physical volume on the newly assembled array:

 user@user-desktop:~$ pvs
   PV         VG   Fmt  Attr PSize PFree
   /dev/md8   main lvm2 a-   1.88G 32.00M
 user@user-desktop:~$

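If the physical volume does not show up straight away, you can ask LVM to rescan all block devices; pvscan is a standard LVM command, although on most systems this extra step is not needed:

 pvscan
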
Now activate the volume group. Running vgchange -a y without naming a volume group activates all known volume groups in the system; here the group "main" is named explicitly, and the transcript first deactivates it and then activates it:

 user@user-desktop:~$ vgchange main -a n
   0 logical volume(s) in volume group "main" now active
 user@user-desktop:~$ vgchange main -a y
   2 logical volume(s) in volume group "main" now active
 user@user-desktop:~$

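The mount point has to exist before you can mount onto it. Assuming /mnt/oldsmeserver is not already present on your system, create it first:

 mkdir -p /mnt/oldsmeserver
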
Now we should be able to mount the drive:

 user@user-desktop:~$ mount /dev/main/root /mnt/oldsmeserver/
 user@user-desktop:~$

Looking good, so let's show where our files are:

 user@user-desktop:~$ cd /mnt/oldsmeserver/
 user@user-desktop:/mnt/oldsmeserver$ dir
 aquota.group  boot     etc     lib         mnt      proc  selinux  sys  var
 aquota.user   command  home    lost+found  opt      root  service  tmp
 bin           dev      initrd  media       package  sbin  srv      usr
 user@user-desktop:/mnt/oldsmeserver$

You have now successfully assembled your array and are able to recover your data.

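When you are done copying data off, it is tidy to undo the steps in reverse; a minimal sketch using the names from this example (unmount, deactivate the volume group, stop the array):

 umount /mnt/oldsmeserver
 vgchange main -a n
 mdadm -S /dev/md8
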
Notes:

*If the system you are working from already uses LVM itself and also has a volume group called "main", there may be a naming conflict (a possible way around this is mentioned below).

*If you installed a SME Server version older than 7.0, your volume group name will be different. To find out your volume group name, type:

 user@user-desktop:~$ vgdisplay
   --- Volume group ---
   VG Name               main        (this is the volume group name)
   System ID
   Format                lvm2
 [..]
 user@user-desktop:~$

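Whatever name vgdisplay reports, substitute it for "main" in the commands above. As a sketch, with a hypothetical volume group called "vg_sme" (your name will differ):

 vgchange vg_sme -a y
 mount /dev/vg_sme/root /mnt/oldsmeserver/

If you do hit a clash where both the running system and the recovered disk use a volume group called "main", the standard vgrename command can rename one of them; it accepts a VG UUID instead of a name, which is how you tell two groups named "main" apart.
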
===Backups & Restores===