{{Warning box| Please read this article before buying and deploying drives: https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

The newer type of SMR drives is NOT suitable for RAID arrays. Beware of WD Red NAS drives, though Western Digital has recently made it clearer which models use SMR.

'''A drive failure can corrupt an entire array: RAID does not replace backup!'''}}

{{Note box| SME Server's RAID options are largely automated, but even with the best laid plans things don't always go according to plan. See also: [[Raid:Manual Rebuild]], [[Raid:Growing]] and [[Hard Disk Partitioning]]. There is also a wiki on Linux software RAID, where you will find many [https://raid.wiki.kernel.org/index.php/Linux_Raid useful tips].}}
===Hard Drives===
A software RAID array will be automatically configured as part of the installation process for servers which contain multiple hard drives. This is to ensure redundancy, so that if one disk fails the system will still function.

{{Note box|As per the release notes, SME Server 10 RAID configuration is slightly different to previous versions. See Default RAID Rationale below for more details.}}

The specifics of the RAID setup depend on the number of drives available, to balance redundancy and capacity.

The root and swap volumes are configured using LVM on the RAID device /dev/md1 as follows:

*1 drive - no RAID
*2 drives - RAID 1
*3 drives - RAID 1 + hot spare
*4 drives - RAID 6
*5+ drives - RAID 6 + hot spare

The /boot volume (and the EFI partition, if needed) is always a non-LVM RAID 1 array on the device /dev/md0.

If you use a hardware RAID controller to manage your drives, it should be configured to present a single volume, which SME Server will configure without software RAID.
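Once the installation has finished, you can verify the resulting layout from a root shell. This is only a quick sanity check; the exact device and volume names will vary with your hardware:
<syntaxhighlight lang="bash">
# software RAID arrays and their member partitions
cat /proc/mdstat

# block devices, including the LVM logical volumes built on /dev/md1
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
</syntaxhighlight>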
===Default RAID Rationale===
The differences in RAID layout between SME Server 10 and previous versions are summarised below:

{| class="wikitable"
|+
!Number of Drives
!SME Server 10
!Previous Versions
|-
|1
|No software RAID
|Degraded RAID 1
|-
|2
| colspan="2" |Software RAID 1
|-
|3
| colspan="2" |RAID 1 + hot spare
|-
|4
|RAID 6
| rowspan="3" |RAID 5 + hot spare
|-
|5
| rowspan="3" |RAID 6 + hot spare
|-
|6
|-
|7+
|RAID 6 + hot spare
|}

The main differences are: no degraded RAID 1 for a single-disk install, which better supports virtualised and hardware RAID use cases, and a preference for RAID 6 over RAID 5.

The preference for RAID 6 is to reduce the risk of a single disk failure bringing down the array. While consumer hard drives have become significantly larger over time, their unrecoverable read error (URE) rate has remained at about 1 per 10^14 bits, i.e. roughly one error per 12 TB read.

As an example, imagine a server with 5 x 4 TB drives. Under previous versions of SME Server this would have been configured as a 4-disk RAID 5 array with 1 hot spare.

If one drive failed, the hot spare would become active and the array would begin to rebuild. This would require reading all 3 remaining disks and, at some point during that 12 TB operation, it is quite likely that an unrecoverable read error would be encountered. At that point, the whole array would fail.

In comparison, a RAID 6 array is tolerant of two disk failures. While this does not entirely eliminate the risk of a URE during a rebuild, it significantly reduces the likelihood of it taking down the array.
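As a rough back-of-the-envelope check of the figures above (a sketch only, assuming the quoted URE rate of 1 per 10^14 bits and 4 TB read from each surviving disk):
<syntaxhighlight lang="bash">
# expected number of unrecoverable read errors while rebuilding a
# 4-disk RAID 5 of 4 TB drives: 3 surviving disks x 4 TB x 8 bits/byte
echo "scale=2; (3 * 4 * 10^12 * 8) / 10^14" | bc
# prints .96  -- i.e. roughly one URE expected during the rebuild
</syntaxhighlight>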
'''Note:''' RAID is a convenient method of protecting server availability from a drive failure. It does not remove the need for regular backups, which can be configured using the Server Manager.
  
===Disk Layout===
Mirroring drives on the same IDE channel (e.g. hda and hdb) is not desirable. If that channel goes out, you may lose both drives. Performance will also suffer slightly.

The preferred method is to use the primary location on each IDE channel (e.g. hda and hdc). This ensures that if you lose one channel, the other will still operate. It will also give you the best performance.

In a 2-drive setup, put each drive on a different IDE channel:

IDE 1 Primary - Drive 1 <br />
IDE 1 Secondary - CDROM  <br />
IDE 2 Primary - Drive 2

'''Obviously this section is obsolete with SATA hard drives because each disk has its own channel.'''

===Identifying Hard Drives===
It may not always be obvious which physical hard drive maps to which logical device.

The first step is to identify all the block devices present on your server. This can be done with either of two commands:

 lsblk

or the following:

 findmnt

Then, once you have identified a block device, the simplest way to verify which physical drive it is, is to use the S.M.A.R.T. capability to match the serial number on the physical package with that displayed by smartctl. Assuming the device of interest is '''sda''' (a SCSI drive), you would issue the following command as root:

 smartctl -i /dev/sda

Or, if an IDE drive:

 smartctl -i /dev/hda

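On reasonably recent systems, lsblk can also print the model and serial number directly, which may save a round of smartctl calls (a convenience sketch; column availability depends on your util-linux version):
<syntaxhighlight lang="bash">
lsblk -d -o NAME,SIZE,MODEL,SERIAL
</syntaxhighlight>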
===Adding Additional Drives===
For servers which were installed with 2+ drives and have a working RAID array, it is possible to add an additional drive, which will become a hot spare ready to be activated in case of a drive failure.

See also [[AddExtraHardDisk]]. That is an alternative for part of the data if you have only one drive and you want to use RAID 1, but the better solution is to reinstall SME10 with 2 drives.

'''Ensure that any new drives are the same size or larger than your existing drives.'''

*Shut down the machine
*Install one additional drive at a time
*Boot up
*At the login prompt log on as admin with the root password to get to the admin console
*Go to #5 Manage disk redundancy
*Accept the option to add an additional drive

If the Manage disk redundancy page displays the messages "The free disk count must equal one" and "Manual intervention may be required", then you probably have additional hard drives that need to be disconnected while the RAID is set up. An external USB drive will have this effect, and should be unplugged.
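After the console has added the drive, you can confirm that it is attached as a hot spare; /proc/mdstat marks spare members with an (S) suffix (a quick check, your md device names may differ):
<syntaxhighlight lang="bash">
cat /proc/mdstat                 # spare members appear with an (S) suffix
mdadm --detail /dev/md1          # 'Spare Devices' should now be 1
</syntaxhighlight>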
====Reusing Hard Drives====
 
 
*MBR formatted disks

If the disk was ever used in a Windows machine, or any of the *BSDs (or in some cases in an old system with RAID and/or LVM), then you will need to clear the MBR before installing it.

From the Linux command prompt, type the following:

 #dd if=/dev/zero of=/dev/hdx bs=512 count=1
or
 #dd if=/dev/zero of=/dev/sdx bs=512 count=1

You MUST reboot so that the empty partition table gets read correctly.

For more information, check: http://bugs.contribs.org/show_bug.cgi?id=2154

*For disks previously formatted as GPT this is insufficient. It is probably best to use gdisk, parted or partx to delete the partitions; there are other tools that will also work. Parted has limited support for LVM.

*To remove a (hardware) RAID configuration that is stored at the end of the drive, do:
 #dd if=/dev/zero of=/dev/sdx bs=512 count=2048 seek=$((`blockdev --getsz /dev/sdx` - 2048))
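On current systems, wipefs (part of util-linux) is a convenient alternative that removes the filesystem, RAID and partition-table signatures it can detect in one pass. A sketch; replace sdx with the disk to be cleared, and double-check the device name first:
<syntaxhighlight lang="bash">
wipefs /dev/sdx        # list the signatures found (non-destructive)
wipefs -a /dev/sdx     # erase all detected signatures (destructive!)
</syntaxhighlight>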
  
 
====Upgrading the Hard Drive Size====

Note: these instructions are only applicable if you have a RAID system with more than one drive. They are not applicable to a single-drive RAID 1 system, and increasing the useable space on such a system by cloning the existing single drive to a larger drive is not supported. See http://bugs.contribs.org/show_bug.cgi?id=5311

*CAUTION: MAKE A FULL BACKUP!
*Ensure you have e-smith-base-4.16.0-33 or newer installed [or update to at least 7.1.3].

HD Scenario - current 250 GB drives, new larger 500 GB drives

#Shut down and swap one larger drive into the system in place of one old HD. Unplug any USB-connected drives.
#Boot up, log in to the admin console and use option 5 to add the new (larger) drive to the system.
#Wait for the RAID to fully sync.
#Repeat steps 1-3 until all drives in the system have been upgraded to the larger capacity.
#Ensure all drives have been replaced with larger drives and the array is in sync and redundant!
#Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}

 mdadm --grow /dev/md2 --size=max
 pvresize /dev/md2
 lvresize -l +100%FREE main/root
 ext2online -C0 /dev/main/root

In the last command above, the -C0 is: dash C zero.

If you receive a "command not found" error, try this:

 resize2fs /dev/mapper/main-root &

TIP: the "&" at the end lets the resize keep running in the background even if the ssh session is closed.

Notes :
*All of this can be done while the server is up and running, with the exception of step 1.
*These instructions should work for any RAID level as long as you have >= 2 drives.
*If you have disabled LVM, you don't need the pvresize or lvresize commands, and the final line becomes
 ext2online -C0 /dev/md2 <nowiki>#</nowiki>(or whatever / is mounted to)
or, if you receive a "command not found" error, try this:
 resize2fs /dev/md2 &
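To confirm that the extra space has been picked up at each layer, a quick check can be run afterwards (a sketch, using the device and volume names from the commands above):
<syntaxhighlight lang="bash">
mdadm --detail /dev/md2 | grep 'Array Size'   # the grown md device
pvdisplay /dev/md2                            # the resized LVM physical volume
df -h /                                       # the enlarged root filesystem
</syntaxhighlight>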
====Replacing and Upgrading a Hard Drive after HD fail====
  
 
Note: See [[Bugzilla:6632]] and [[Bugzilla:6630]]; a suggested sequence for upgrading the hard drive size is detailed there, following an issue when attempting to sync a new drive that was added first as sda.

Note: These instructions are applicable if you have a faulty HD on a RAID system with more than one drive and intend to upgrade the sizes as well as replacing the failed HD. They are not applicable to a single-drive RAID 1 system, and increasing the useable space on such a system by cloning the existing single drive to a larger drive is not supported. See http://bugs.contribs.org/show_bug.cgi?id=5311

*CAUTION: MAKE A FULL BACKUP!
*Ensure you have e-smith-base-4.16.0-33 or newer installed [or update to at least 7.1.3].

HD Scenario - current 250 GB drives, new larger 500 GB drives

#Remove the failed HDD from the system, ensure the remaining drive is on sda on its own, and boot up.
#Shut down, connect one new 500 GB drive as sdb and boot up.
#Login to the admin console and manage disk redundancy to add the new (larger) drive to the system.
#Wait for the RAID to fully sync.
#Do a full reboot with those 2 drives in place (1 original, 1 new).
#Shut down again, disconnect the original drive, and connect the new drive just synced as sda (in place of the original).
#Boot up again with just the one new drive in place, and confirm it boots OK.
#Shut down, and connect the other 500 GB drive as sdb.
#Boot up, login to the admin console and add sdb to the array, then wait for the RAID to fully sync.
#Reboot and ensure all drives have been replaced with larger drives and the array is in sync and redundant!
#Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}

 mdadm --grow /dev/md2 --size=max
 pvresize /dev/md2
 lvresize -l +100%FREE main/root
 ext2online -C0 /dev/main/root

In the last command above, the -C0 is: dash C zero.

If you receive a "command not found" error, try this:

 resize2fs /dev/mapper/main-root &

TIP: the "&" at the end lets the resize keep running in the background even if the ssh session is closed.

Notes :
*These instructions should work for any RAID level as long as you have >= 2 drives.
*If you have disabled LVM, you don't need the pvresize or lvresize commands, and the final line becomes
 ext2online -C0 /dev/md2 <nowiki>#</nowiki>(or whatever / is mounted to)
or, if you receive a "command not found" error, try this:
 resize2fs /dev/md2 &
====RAID Notes====
Many on-board hardware RAID cards are in fact software RAID ("fakeraid"). Turn this off, as cheap fakeraid cards aren't good for Linux; you will generally get better performance and reliability with Linux software RAID (http://linux-ata.org/faq-sata-raid.html). Linux software RAID is fast and robust.

If you insist on hardware RAID, buy a well-supported RAID card with a proper RAID BIOS. This hides the disks and presents a single disk to Linux (http://linuxmafia.com/faq/Hardware/sata.html). Check that it is supported by the kernel and has some form of management, and avoid anything which requires a proprietary driver. Try searching for the exact model of RAID controller before buying it. Note that you won't get a real hardware RAID controller cheap.

It rarely happens, but sometimes when a device has finished rebuilding, its state doesn't change from "dirty" to "clean" until a reboot occurs. This is cosmetic.
====Periodic scrub of RAID arrays====
A periodic scrub of the RAID arrays (a weekly RAID check) is performed every week on Sunday at 04:22 local time; refer to [[Bugzilla:3535]] and [[Bugzilla:6160]] for more information.

These operations are logged; however, no emails are sent to admin as of the release of the packages associated with Bug #6160 or the release of the 8.1 ISO.
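If you do not want to wait for the weekly job, a check can also be started by hand through the standard md sysfs interface (a sketch; adjust the md device name to your system):
<syntaxhighlight lang="bash">
echo check > /sys/block/md1/md/sync_action   # start a check of /dev/md1
cat /proc/mdstat                             # progress appears here
cat /sys/block/md1/md/mismatch_cnt           # mismatches found by the last check
</syntaxhighlight>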
====Receive periodic check of RAID by email====

There are routines in SME Server to check the RAID and send mail to the admin user when the RAID is degraded or when the RAID is resynchronising. But the admin user may receive a lot of emails, and sometimes messages can be forgotten. The purpose here is to have a routine which sends an email to the user of your choice each week.

 nano /etc/cron.weekly/raid-status.sh

You have to change the variable '''DEST="stephane@your-domaine-name.org"''' to the email address you decide to use.

 #!/bin/sh
 # cron.weekly/mdadm-status -- weekly status of the RAID
 # 2013 Pierre-Alain Bandinelli
 # distributed under the terms of the Artistic Licence 2.0
 
 # Get status from the RAID array and send the details by email.
 # Email will go to the address specified in the commandline.
 set -eu
 
 MDADM=/sbin/mdadm
 [ -x $MDADM ] || exit 0 # package may be removed but not purged
 
 DEST="stephane@your-domaine-name.org"
 exec $MDADM --detail  $(ls /dev/md*) | mail -s "RAID status SME Server" $DEST

Save with ctrl+x, then make the script executable:

 chmod +x /etc/cron.weekly/raid-status.sh

Each Sunday at about 4:00 AM you will receive a mail which looks like this:

 /dev/md1:
         Version : 0.90
   Creation Time : Sun Jan  6 20:50:41 2013
      Raid Level : raid1
      Array Size : 104320 (101.89 MiB 106.82 MB)
   Used Dev Size : 104320 (101.89 MiB 106.82 MB)
    Raid Devices : 2
   Total Devices : 2
 Preferred Minor : 1
     Persistence : Superblock is persistent
 
     Update Time : Sun Dec 22 04:22:42 2013
           State : clean
  Active Devices : 2
 Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0
 
            UUID : 28745adb:d9cff1f4:fcb31dd8:ff24cb0c
          Events : 0.208
 
     Number   Major   Minor   RaidDevice State
        0       8        1        0      active sync   /dev/sda1
        1       8       17        1      active sync   /dev/sdb1
 /dev/md2:
         Version : 0.90
   Creation Time : Sun Jan  6 20:50:42 2013
      Raid Level : raid1
      Array Size : 262036096 (249.90 GiB 268.32 GB)
   Used Dev Size : 262036096 (249.90 GiB 268.32 GB)
    Raid Devices : 2
   Total Devices : 2
 Preferred Minor : 2
     Persistence : Superblock is persistent
 
     Update Time : Sun Dec 22 05:30:36 2013
           State : clean
  Active Devices : 2
 Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0
 
            UUID : c343c79e:91c01009:fcde78b4:bad0b497
          Events : 0.224
 
     Number   Major   Minor   RaidDevice State
        0       8        2        0      active sync   /dev/sda2
        1       8       18        1      active sync   /dev/sdb2

If you want to test the message without waiting for the next Sunday, you can run the script directly:

 /etc/cron.weekly/raid-status.sh
  
 
====nospare====

If you use the commandline parameter nospare during installation ("sme nospare"), the system will still count the missing spare towards the number of drives. A system with 6 physically present hard drives will thus be formatted as RAID 6, ''not'' RAID 5. The resulting capacity will of course be "n-2".

'''Note:''' with the release of versions 7.6 and 8.0, the commandline parameter "sme nospare" has been changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1 or 2].
====remove the degraded RAID message====
When you install SME Server with one drive with a degraded RAID, you will see a 'U_' state but without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded RAID state, then:

 mdadm --grow /dev/md0 --force --raid-devices=1
 mdadm --grow /dev/md1 --force --raid-devices=1

After that you will see this:

 # cat /proc/mdstat
 Personalities : [raid1]
 md0 : active raid1 sda1[0]
       255936 blocks super 1.0 [1/1] [U]
       
 md1 : active raid1 sda2[0]
       268047168 blocks super 1.1 [1/1] [U]
       bitmap: 2/2 pages [8KB], 65536KB chunk
 
 unused devices: <none>
  
 
====Resynchronising a Failed RAID====

{{Tip box|You can refer to 'man mdadm' or the Mdadm Man page or [[Raid:Manual_Rebuild]]}}

Sometimes a partition will be taken offline automatically. Admin will receive an email '''DegradedArray event on /dev/md2'''.

{{note box|This will happen if, for example, a read or write error is detected in a disk in the RAID set, or a disk does not respond fast enough, causing a timeout. As a precaution, verify the health of your disks as documented in [[Monitor_Disk_Health]], and specifically with the command:}}

 smartctl -a /dev/hda

Where hda is the device to be checked; check all of them.

Login as root, type console. Select item 5, "Manage disk redundancy".

 <nowiki>--------Disk Redundancy status as of Thursday Dec 22 -------
            Current RAID status:
            
            Personalities : [raid1]
            md2 : active raid1 hda2[0] <-- NOTICE hdb2[#] is missing. Means hdb2[#] failed.
                                        38973568 blocks [2/1] [U_]
            
            md1 : active raid1 hda1[0] hdb1[1]
                  104320 blocks [2/2] [UU]
            
            unused devices: <none>
            Only Some of the RAID devices are unclean.  <-- NOTICE This message and 
            Manual intervention may be required.</nowiki> <-- this message.

Notice the last 2 sentences of the window above. You have some problems. <br>
If your system is healthy, however, the message you will see at the bottom of the RAID console window is:

 [root@sme]# mdadm --add /dev/md2 /dev/hda2

Once you type the command, the following message will appear, appropriate for your device:

 [root@sme]  mdadm: hot added /dev/hda2

It is important to know that your devices are likely to be different; e.g. your device could be /dev/sda2, or you may have more than two disks, including a hot standby. These details can always be determined from the mdstat file. Once the RAID resync has been started, the progress will be noted in mdstat. You can see this in real time with:

 [root@sme]# watch -n .1 cat /proc/mdstat

Also check your driver cards, since a faulty card can destroy the data on a full RAID set as easily as it can a single disk.

{{Tip box|A shortcut for the raid rebuild is to fail, remove and re-add the partition in one command:
mdadm -f /dev/md2 /dev/hda2 -r /dev/hda2 -a /dev/hda2}}
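Once the resync has finished, you can confirm the array is healthy again (a quick check; use the md device from your own mdstat output):
<syntaxhighlight lang="bash">
mdadm --detail /dev/md2 | grep -E 'State|Devices'
cat /proc/mdstat           # should show [UU] for the repaired array
</syntaxhighlight>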
  
 
====Convert Software RAID1 to RAID5====
{{Note box|msg=these instructions are only applicable if you have SME8 or greater and a RAID1 system with 2 hard drives in sync; new drive(s) must be the same size or larger than the current drive(s)}}
{{Warning box|msg=Please make a full backup before proceeding}}
{{Warning box|msg=Newer versions of mdadm use the v1.x superblocks stored at the beginning of the block device, which could overwrite the filesystem metadata. You'll need to be starting with a v0.9 metadata device for the instructions below to work (which was the default for years). First, check the existing superblock version (see the sketch after this box).

Then, when re-creating the RAID 5 array, make sure you add the --metadata=0.9 tag so the superblock is recreated in the right place.

Unfortunately, v1.0 gives a new size for the md device (smaller than the original array), and v1.1 and v1.2 corrupt the filesystem outright, so it is best to avoid these cases entirely. Creating a new array with v1.x superblocks when the original was v0.9 is likewise outright destructive.}}
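To check the superblock (metadata) version of the existing array, something like the following can be used (a sketch; the Version line shows 0.90, 1.0, 1.1 or 1.2):
<syntaxhighlight lang="bash">
mdadm --detail /dev/md2 | grep Version
# or per member partition
mdadm --examine /dev/sda2 | grep Version
</syntaxhighlight>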
<ol><li>Login as root

</li><li>Move to /boot (we must create a new initrd image to load the raid5 driver).
  cd /boot

</li><li>Now, create the correct partition table on the new drive(s).
  sfdisk -d /dev/sda > tmp.out
  sfdisk /dev/sdc < tmp.out

</li><li>Repeat the last step for each new hard drive (sdd, sde etc.).

</li><li>Create the new array
  mdadm --create /dev/md2 -c 256 --level=5 --raid-devices=2 /dev/sda2 /dev/sdb2
  mdadm: /dev/sda2 appears to be part of a raid array:
      level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
  mdadm: /dev/sdb2 appears to be part of a raid array:
      level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
  Continue creating array? y
  mdadm: array /dev/md2 started.

</li><li>Wait for resync; monitor the status with
  cat /proc/mdstat
  
  root# cat /proc/mdstat
  Personalities : [raid0] [raid1] [raid5]
  md2 : active raid5 sdb1[2] sda1[0]
        1048512 blocks level 5, 256k chunk, algorithm 2 [2/1] [U_]
        [==>..................]  recovery = 12.5% (132096/1048512) finish=0.8min speed=18870K/sec

</li><li>Reboot
  exit

</li><li>Add the new drive to the array
  mdadm --add /dev/md2 /dev/sdc2

</li><li>Repeat the last step for each new hard drive (sdd2, sde2 etc.)

</li><li>Grow the array

</li><li>Wait for array reshaping. This part can take a substantial amount of time; monitor it with
  cat /proc/mdstat
  
  root# cat /proc/mdstat
  Personalities : [raid0] [raid1] [raid5]
  md2 : active raid5 sdc1[2] sdb1[1] sda1[0]
        1048512 blocks super 0.91 level 5, 256k chunk, algorithm 2 [3/3] [UUU]
        [==>..................]  reshape = 12.5% (131520/1048512) finish=2.5min speed=5978K/sec

</li><li>Issue the following commands:
  pvresize /dev/md2
  lvresize -l +100%FREE main/root
  resize2fs /dev/main/root
</li></ol>

Notes :
*If you have disabled LVM:
#you don't need the pvresize or lvresize commands
#the final line becomes resize2fs /dev/md2 (or whatever / is mounted to)
#More info: http://www.arkf.net/blog/?p=47
== Add another Raid to mount to /home/e-smith/files ==
This is inspired by previous content of [[AddExtraHardDisk]], particularly the part [[AddExtraHardDisk#Additional steps to create a raid array from multiple disks]], but updated to 2022 and SME10.

First you need to check which disks you want to use, using lsblk:<syntaxhighlight lang="bash">
# lsblk --fs
NAME   FSTYPE            LABEL                    UUID                                 MOUNTPOINT
sda                                                                                    
├─sda1 vfat                                       B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                       64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb    
sdc    
</syntaxhighlight>Then you can create the Raid array. We assume you only need one Raid partition, and hence do not need to partition the disks.<syntaxhighlight lang="bash">
#create array
mdadm --create --verbose /dev/md11 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# add to mdadm.conf
mdadm --detail --scan --verbose /dev/md11 >> /etc/mdadm.conf
</syntaxhighlight>Then format it and enable quotas. If you want to add an LVM layer, it goes just before this step.<syntaxhighlight lang="bash">
mkfs.xfs /dev/md11
</syntaxhighlight>Now you have:<syntaxhighlight lang="bash">
# lsblk --fs
NAME   FSTYPE            LABEL                    UUID                                 MOUNTPOINT
sda                                                                                    
├─sda1 vfat                                       B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                       64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a 
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8 
sdc    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a 
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8 
</syntaxhighlight>Then you need to mount it temporarily to move your content:<syntaxhighlight lang="bash">
mkdir /mnt/newdisk
mount /dev/md11 /mnt/newdisk
rsync -arv /home/e-smith/files/ /mnt/newdisk
</syntaxhighlight>When happy with the result, simply add an entry to your fstab; according to the last lsblk output, in this case you should add:<syntaxhighlight lang="bash">
UUID=0ab4fe2a-aa81-4728-90d8-2f96d4624af8 /home/e-smith/files            xfs     uquota,gquota        0 0
</syntaxhighlight>To have the disk mounted on reboot, you need to alter grub:<syntaxhighlight lang="bash">
vim /etc/default/grub
</syntaxhighlight>and alter the command line to add either "rd.md=1 rd.md.conf=1 rd.auto=1" or specifically add the uuid to mount (obviously, if you add an LVM layer you will instead need to add something like rd.lvm.lv=mylvm/video rd.lvm.lv=mylvm/files):<syntaxhighlight lang="bash" line="1">
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="gfxterm"
GRUB_CMDLINE_LINUX="rhgb quiet rootflags=uquota,pquota rd.md=1 rd.md.conf=1 rd.auto=1"
GRUB_DISABLE_RECOVERY="false"
GRUB_BACKGROUND="/boot/grub2/smeserver10.png"
GRUB_GFXMODE="1024x768"
GRUB_THEME="/boot/grub2/themes/koozali/theme.txt"
</syntaxhighlight>Then you need to rebuild grub.cfg; depending on whether your system is EFI or legacy, use the appropriate command:<syntaxhighlight lang="bash">
#EFI
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
#Legacy
grub2-mkconfig -o /boot/grub2/grub.cfg
</syntaxhighlight>Then you need to make sure dracut will add the drivers:<syntaxhighlight lang="bash">
vim /etc/dracut.conf
</syntaxhighlight>and alter the lines needed (you will probably need to uncomment this line and add mdraid between the quotes):<syntaxhighlight lang="bash" line="1" start="19">
# dracut modules to add to the default
add_dracutmodules+="lvm mdraid"

# install local /etc/mdadm.conf
mdadmconf="yes"

# install local /etc/lvm/lvm.conf
lvmconf="yes"
</syntaxhighlight>Finally rebuild the initramfs:<syntaxhighlight lang="bash">
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
dracut --add="lvm mdraid" /boot/initramfs-$(uname -r).img $(uname -r) --force
</syntaxhighlight>
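After the next reboot, a quick way to confirm that the array came up and the mount is in place (a minimal check, using the device and mount point from this example):
<syntaxhighlight lang="bash">
cat /proc/mdstat                # md11 should be listed as active
findmnt /home/e-smith/files     # should show /dev/md11 with the quota options
</syntaxhighlight>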
== Copy data from one disk of an old Raid mirror disk ==
Let's say you excluded a huge amount of data from the backup used to migrate from SME9 to SME10, and now you want to copy that data to your new server from one of the old mirror disks.

This How-To assumes your current install is without LVM. An extra trick is needed if you have LVM and the previous SME9 did too: you simply need to rename the volume group of either the old SME or the new one using a rescue disk or another Linux distro, see [[UpgradeDisk#Moving from SME 8.x to SME 9.x]].

# put one of the old drives in the server or in an external case and connect it
# use lsblk to identify the drive
# adapt the following commands
<syntaxhighlight lang="bash">
# lsblk
sdd        8:48   0 931,5G  0 disk  
├─sdd1     8:49   0   250M  0 part  
└─sdd2     8:50   0 931,3G  0 part  
</syntaxhighlight>We assume that sdd1 was the boot partition and the data we want is on sdd2.<syntaxhighlight lang="bash">
#assemble and run on degraded
mdadm -A /dev/md126 /dev/sdd2 --run
</syntaxhighlight>Now let's try to mount it. This will only work if you had no LVM; otherwise it will return this:<syntaxhighlight lang="bash">
# mkdir /mnt/olddisk/
# mount /dev/md126 /mnt/olddisk/
mount: unknown filesystem type 'LVM2_member'
</syntaxhighlight>You can skip this step if you did not get the LVM error. Otherwise we need to activate the LVM, and you may also need to install the LVM tools first:<syntaxhighlight lang="bash">
# yum install lvm2 -y
vgchange -a y main
mount /dev/mapper/main-root  /mnt/olddisk/
</syntaxhighlight>It is now time to copy your data:<syntaxhighlight lang="bash">
rsync -arvHAX  /mnt/olddisk/home/e-smith/files/ /home/e-smith/files
</syntaxhighlight>Then, to remove your disk safely:<syntaxhighlight lang="bash">
umount /dev/mapper/main-root
vgchange -a n main
mdadm --stop /dev/md126
</syntaxhighlight>
 
----
<noinclude>
[[Category:Howto]]
[[Category:Administration:Storage]]
</noinclude>

Latest revision as of 23:57, 5 December 2023

Warning.png Warning:
Please read this article before buying and deploying drives. https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

The new type of SMR drives are NOT suitable for RAID arrays. Beware WD Red NAS drives, though recently they have made it clearer which models use SMR.

A drive failure can corrupt an entire array: RAID does not replace backup!



Important.png Note:
SME Servers RAID Options are largely automated, but even with the best laid plans things don't always go according to plan. See also: Raid:Manual Rebuild, Raid:Growing and Hard Disk Partitioning. There is a wiki on the Linux software raid, you will find many cool Tips here


Hard Drives

A software RAID array will be automatically configured as part of the installation process for servers which contain multiple hard drives. This is to ensure redundancy, so if one disk fails the system will still function.


Important.png Note:
As per the release notes, SME Server 10 RAID configuration is slightly different to previous versions. See Default RAID Rationale below for more details.


The specifics of the RAID setup depends on the number of drives available, to balance redundancy and capacity.

The root and swap volumes are configured using LVM on the RAID device /dev/md1 as follows:

  • 1 drive - no RAID
  • 2 drives - RAID 1
  • 3 drives - RAID 1 + hot spare
  • 4 drives - RAID 6
  • 5+ drives - RAID 6 + hot spare

The /boot volume and EFI partition if necessary is always a non-LVM RAID 1 array on the device /dev/md0.

If you use a hardware RAID controller to manage your drives, this should be configured to present a single volume, which SME will configure without software RAID.


Default RAID Rationale

The differences in RAID layout between SME Server 10 and previous versions is summarised below:

Number of Drives SME Server 10 Previous Versions
1 No software RAID Degraded RAID 1
2 Software RAID 1
3 RAID 1 + hot spare
4 RAID 6 RAID 5 + hot spare
5 RAID 6 + hot spare
6
7+ RAID 6 + hot spare

The main differences are no degraded RAID 1 for a single disk install, which better supports virtualised and hardware RAID use cases, and a preference for RAID 6 over RAID 5. This is to reduce the risk of a single disk failure bringing down the array. While consumer hard drives have got significantly larger over time, their unrecoverable read error rate (URE) has remained at 1 per 10^14 bits, or 12TB. As an example, imagine a server with 5 x 4TB drives. Under previous versions of SME Server this would have been configured as a 4 disk RAID 5 array with 1 hot spare. If one drive failed, the hot spare would become active and the array would begin to rebuild. This would require reading all 3 disks and, at some point during that 12TB operation, it’s very likely that an unrecoverable error would be encountered. At this point, the whole array would fail. In comparison, a RAID 6 array is tolerant to two disk failures. While this does not entirely solve the risk of a URE during rebuild, it significantly reduces the likelihood of it taking down the array. Note: RAID is a convenient method of protecting server availability from a drive failure. It does not remove the need for regular backups, which can be configured using the Server Manager.

Disk Layout

Mirroring drives in the same IDE channel (eg. hda and hdb) is not desirable. If that channel goes out, you may lose both drives. Also, performance will suffer slightly.

The preferred method is to use the primary location on each IDE channel (eg. hda and hdc). This will ensure that if you lose one channel, the other will still operate. It will also give you the best performance.

In a 2 drive setup put each drive on a different IDE channel:

IDE 1 Primary - Drive 1
IDE 1 Secondary - CDROM
IDE 2 Primary - Drive 2

Obviously this section is obsolete with SATA hard drives because each disk has its own channel.


Identifying Hard Drives

It may not always be obvious which physical hard drive maps to which logical device. The first step would be to be able to identify all block devices present on your server. This could be done by using two commands

 lsblk

or the following

 findmnt


Then, once you identified a block device , the simplest method to verify which physical drive it is, is using S.M.A.R.T. capability to map the serial number on the physical package with that displayed by smartctl. Assuming the device of interest is sda, (a SCSI drive), then you would issue the following command as root:

smartctl -i /dev/sda

Or if an IDE Drive

smartctl -i /dev/hda


Adding Additional Drives

For servers which were installed with 2+ drives and have a working RAID array, it is possible to add an additional drive which will become a hot spare, ready to be activated in case of drive failure.

See also AddExtraHardDisk. It's an alternative for part of the data if you have only one drive and you want to use RAID1. But better solution is to reinstall SME10 with 2 drives.

Ensure that any new drives are the same size or larger than your existing drives.

  • Shut down the machine
  • Install one additional drive at a time
  • Boot up
  • At the login prompt log on as admin with the root password to get to the admin console
  • Go to #5 Manage disk redundancy
  • Accept the option to add an additional drive

If the Manage disk redundancy page displays the message "The free disk count must equal one" and "Manual intervention may be required", then you probably have additional hard drives that need to be disconnected while the RAID is set up. An external USB drive will have this effect, and should be unplugged.

Reusing Hard Drives

  • MBR formatted disks

If it was ever installed on a Windows machine, or any of the *BSDs, (or in some cases an old system with RAID and/or LVM) then you will need to clear the MBR first before installing it.

From the linux command prompt, type the following:

#dd if=/dev/zero of=/dev/hdx bs=512 count=1

or

#dd if=/dev/zero of=/dev/sdx bs=512 count=1

You MUST reboot so that the empty partition table gets read correctly.

For more information, check: http://bugs.contribs.org/show_bug.cgi?id=2154

  • For disks previously formatted as GPT this is insufficient. It's probably best to use gdisk or parted or partx to delete the partitions; there are other tools that will work. Parted has limited support for LVM.
  • To remove the (hardware) RAID configuration that is stored at the end of the drive, do:
#dd if=/dev/zero of=/dev/sdx bs=512 count=2048 seek=$((`blockdev --getsz /dev/sdx` - 2048))

Upgrading the Hard Drive Size

Note: these instructions are only applicable if you have a RAID system with more than one drive. They are not applicable to a single-drive RAID 1 system, and increasing the useable space on such a system by cloning the existing single drive to a larger drive is not supported. See http://bugs.contribs.org/show_bug.cgi?id=5311

  • CAUTION MAKE A FULL BACKUP!
  • Ensure you have e-smith-base-4.16.0-33 or newer installed. [or Update to at least 7.1.3]

HD Scenario - Current 250gb drives, new larger 500gb drives

  1. Shut down and install one larger drive in system for one old HD. Unplug any USB-connected drives.
  2. Boot up and login to the admin console and use option 5 to add the new (larger) drive to system.
  3. Wait for raid to fully sync.
  4. Repeat steps 1-3 until all drives in system are upgraded to larger capacity.
  5. Ensure all drives have been replace with larger drives and array is in sync and redundant!
  6. Issue the following commands:


Important.png Note:
SME9 uses /dev/md1 not /dev/md2.


mdadm --grow /dev/md2 --size=max
pvresize /dev/md2
lvresize -l +100%FREE main/root
ext2online -C0 /dev/main/root   

In the last command above, the -C0 is: dash C zero

If you receive an "command not found" error, try this:

resize2fs /dev/mapper/main-root &

TIP: I put an "&" at end to allow it to run in background even if I close ssh session.


Notes :

  • All of this can be done while the server is up and running with the exception of #1.
  • These instructions should work for any raid level you have as long as you have >= 2 drives
  • If you have disabled lvm , you don't need the pvresize or lvresize command, therefore the final line becomes
ext2online -C0 /dev/md2 #(or whatever / is mounted to)

or If you receive an "command not found" error, try this:

resize2fs /dev/md2 &

Replacing and Upgrading a Hard Drive after HD fail

Note: See Bugzilla: 6632 and Bugzilla:6630 a suggested sequence for Upgrading a Hard Drive size is detailed after issue when attempting to sync a new drive when added first as sda.

Note: These instructions are applicable if you have a faulty HD on a RAID system with more than one drive and intend to upgrade the sizes as well as replacing the failed HD. They are not applicable to a single-drive RAID 1 system, and increasing the useable space on such a system by cloning the existing single drive to a larger drive is not supported. See http://bugs.contribs.org/show_bug.cgi?id=5311

  • CAUTION MAKE A FULL BACKUP!
  • Ensure you have e-smith-base-4.16.0-33 or newer installed. [or Update to at least 7.1.3]

HD Scenario - Current 250gb drives, new larger 500gb drives

  1. Remove failed HDD from system, ensure remaining drive is on sda on its own and boot up.
  2. Shutdown, connect one new 500gb drive as sdb and boot up
  3. Login to the admin panel and manage raid to add new (larger) drive to system.
  4. Wait for raid to fully sync
  5. Do full reboot with those 2 drives in place (1 original, 1 new)
  6. Shutdown again, disconnect the original drive, and connect the new drive just sync'd as sda (in place of original)
  7. Boot up again with just the one new drive in place, and confirm it boots OK.
  8. Shutdown, and connected the other 500gb drive as sdb
  9. Boot up login to admin panel and add sdb to the array, and wait for raid to fully sync.
  10. Reboot and ensure all drives have been replaced with larger drives and array is in sync and redundant!
  11. Issue the following commands:


Important.png Note:
SME9 uses /dev/md1 not /dev/md2.


mdadm --grow /dev/md2 --size=max
pvresize /dev/md2
lvresize -l +100%FREE main/root
ext2online -C0 /dev/main/root   

In the last command above, the -C0 is: dash C zero

If you receive an "command not found" error, try this:

resize2fs /dev/mapper/main-root &

TIP: I put an "&" at end to allow it to run in background even if I close ssh session.

Notes :

  • These instructions should work for any raid level you have as long as you have >= 2 drives
  • If you have disabled lvm , you don't need the pvresize or lvresize command, therefore the final line becomes
ext2online -C0 /dev/md2 #(or whatever / is mounted to)

or If you receive an "command not found" error, try this:

resize2fs /dev/md2 &

RAID Notes

Many on-board hardware raid cards are in fact software RAID. Turn it off as cheap "fakeraid" cards aren't good for Linux. You will generally get better performance and reliability with Linux Software RAID (http://linux-ata.org/faq-sata-raid.html). Linux software RAID is fast and robust.

If you are insistent on getting a hardware RAID, buy a well supported RAID card which has a proper RAID BIOS. This hides the disks and presents a single disk to Linux (http://linuxmafia.com/faq/Hardware/sata.html). Please check that it is supported by the kernel and has some form of management. Also avoid anything which requires a driver. Try googling for the exact model of RAID controller before buying it. Please note that you won't get a real hardware raid controller cheap.

It rarely happens, but sometimes when a device has finished rebuilding, its state doesn't change from "dirty" to "clean" until a reboot occurs. This is cosmetic

Periodic scrub of RAID arrays

A Periodic scrub of RAID arrays (weekly raid check) is performed every week on Sunday at 04:22 local time, refer Bugzilla:3535 and Bugzilla:6160 for more information.

Theses operations are logged, however, no emails will be sent to admin as of the release of packages associated with Bug #6160 or the release of the 8.1 ISO.

Receive periodic check of RAID by email

There are routines in SME Server to check the RAID and sent mail to the admin user, when the RAID is degraded or when the RAID is resynchronizing. But the admin user may receive a lot of emails and sometimes messages can be forgotten. So the purpose is to have a routine which sends email to the user of your choice each week.

nano /etc/cron.weekly/raid-status.sh

You have to change the variable DEST="stephane@your-domaine-name.org" to the email address you want to use.

#!/bin/sh
# cron.weekly/mdadm-status -- weekly status of the RAID
# 2013 Pierre-Alain Bandinelli
# distributed under the terms of the Artistic Licence 2.0

# Get status from the RAID array and send the details by email.
# Email will go to the address specified in the commandline.
set -eu

MDADM=/sbin/mdadm
[ -x $MDADM ] || exit 0 # package may be removed but not purged

DEST="stephane@your-domaine-name.org"
exec $MDADM --detail  $(ls /dev/md*) | mail -s "RAID status SME Server" $DEST

Save with Ctrl+X, then make the script executable:

chmod +x /etc/cron.weekly/raid-status.sh

Each Sunday at around 4:00 AM you will receive an email which looks like this:

/dev/md1:
       Version : 0.90
 Creation Time : Sun Jan  6 20:50:41 2013
    Raid Level : raid1
    Array Size : 104320 (101.89 MiB 106.82 MB)
 Used Dev Size : 104320 (101.89 MiB 106.82 MB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 1
   Persistence : Superblock is persistent

   Update Time : Sun Dec 22 04:22:42 2013
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

          UUID : 28745adb:d9cff1f4:fcb31dd8:ff24cb0c
        Events : 0.208

   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync   /dev/sda1
      1       8       17        1      active sync   /dev/sdb1
/dev/md2:
       Version : 0.90
 Creation Time : Sun Jan  6 20:50:42 2013
    Raid Level : raid1
    Array Size : 262036096 (249.90 GiB 268.32 GB)
 Used Dev Size : 262036096 (249.90 GiB 268.32 GB)
  Raid Devices : 2
 Total Devices : 2
Preferred Minor : 2
   Persistence : Superblock is persistent

   Update Time : Sun Dec 22 05:30:36 2013
         State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
 Spare Devices : 0

          UUID : c343c79e:91c01009:fcde78b4:bad0b497
        Events : 0.224

   Number   Major   Minor   RaidDevice State
      0       8        2        0      active sync   /dev/sda2
      1       8       18        1      active sync   /dev/sdb2

If you want to test the message without waiting for next Sunday, you can run:

/etc/cron.weekly/raid-status.sh

nospare

If you use the commandline parameter nospare during installation ("sme nospare"), the system will still count the missing spare towards the number of drives. A system with 6 physically present hard drives will thus be formatted as RAID 6, not RAID 5, and the resulting capacity will of course be "n-2". Note: with the release of versions 7.6 and 8.0, the commandline parameter "sme nospare" was changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1, or 2].
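To see how the installer actually laid the arrays out (how many active devices and spares each one has), a quick sketch, assuming /dev/md1 exists on your version:

# summary of all arrays, then per-array level and device counts
cat /proc/mdstat
mdadm --detail /dev/md1 | grep -E 'Raid Level|Devices'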

remove the degraded RAID message

When you install SME Server with only one drive, the RAID is created degraded and you will see a 'U_' state, although without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded RAID state, then:

mdadm --grow /dev/md0 --force --raid-devices=1
mdadm --grow /dev/md1 --force --raid-devices=1

after that you will see this

# cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sda1[0]
     255936 blocks super 1.0 [1/1] [U]
     
md1 : active raid1 sda2[0]
     268047168 blocks super 1.1 [1/1] [U]
     bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>
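If you later fit a second disk and want redundancy back, the arrays can be grown to two devices again; a sketch, assuming the new disk is sdb and it can take a copy of sda's partition table (the usual route on SME Server is simply to add the drive via the admin console, so treat this as a manual fallback):

# copy the partition table from sda to the new disk
sfdisk -d /dev/sda | sfdisk /dev/sdb
# add the new partitions and grow both arrays back to 2 devices
mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2
mdadm --grow /dev/md0 --raid-devices=2
mdadm --grow /dev/md1 --raid-devices=2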

Resynchronising a Failed RAID

Information.png Tip:
You can refer to 'man mdadm' or Mdadm Man page or Raid:Manual_Rebuild


Sometimes a partition will be taken offline automatically. Admin will receive an email reporting a "DegradedArray event on /dev/md2".


Important.png Note:
This will happen if, for example, a read or write error is detected in a disk in the RAID set, or a disk does not respond fast enough, causing a timeout. As a precaution, verify the health of your disks as documented in: Monitor_Disk_Health and specifically with the command:


smartctl -a /dev/hda

Where hda is the device to be checked; check all of them.

You may check the health of your array using the admin console. Login as root, type "console", and select item 5, "Manage disk redundancy".

--------Disk Redundancy status as of Thursday Dec 22 -------
             	Current RAID status:
             
             	Personalities : [raid1]
             		md2 : active raid1 hda2[0] <-- NOTICE hdb2[#] is missing. Means hdb2[#] failed.	
                                     38973568 blocks [2/1] [U_]
             
             		md1 : active raid1 hda1[0] hdb1[1]
                   			104320 blocks [2/2] [UU]
             
             			unused devices: <none>
             	Only Some of the RAID devices are unclean.  <-- NOTICE This message and 
             	Manual intervention may be required. <-- this message.

Notice the last 2 sentences of the output above: you have some problems.
If your system is healthy, however, the message you will see at the bottom of the RAID console window is:

All RAID devices are in clean state

If you have no software RAID devices you will see the message at the bottom of the Console window:

Your system only has a single disk drive installed or is using hardware 
mirroring. If you would like to enable software mirroring, please shut
down, install a second disk drive (of the same capacity) and then return
to this screen.

Additionally, the details of the raid can be seen by inspecting the mdstat file from the shell prompt.

[root@sme]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda3[0] hdb3[1]
     38837056 blocks [2/2] [UU]

md2 : active raid1 hdb2[1]        <--    Shows current active partition - note there is one missing
     1048704 blocks [2/1] [_U]    <--    '_' = partition missing from array

md0 : active raid1 hda1[0] hdb1[1]
     255936 blocks [2/2] [UU]

Make a note of the RAID partition that has failed, shown by [_U].
In this case it is md2, the device being /dev/md2.

The failed drive partition is indicated by the '_' (underscore) character. In the above example, [_U] indicates that the first drive partition of md2 (multi-device 2) has failed; the second drive partition, represented by the 'U', is still part of md2. If the second drive partition (hdb2) had failed instead, the details would be reversed, i.e. [U_], placing the underscore in the second position.

Determine the missing physical partition: look carefully at the sample above and work out which drive partition is missing from the failed array.
In this example it is hda2, the device being /dev/hda2.

md1 : active raid1 hda3[0] hdb3[1]
md2 : active raid1 hda2[0] hdb2[1]   <--- In the above sample hda2[0] is missing
md0 : active raid1 hda1[0] hdb1[1]

If the raid has a failed disk that has not yet been kicked out of the array then mdstat will show something like the following:

md2 : active raid1 hda2[0](F) hdb2[1]   <--    Shows current active partition - with one FAILED (F)
     1048704 blocks [2/1] [_U]          <--    '_' = partition missing from array

In this case, before you add the disk back in, you will need to remove it as follows:

[root@sme]# mdadm --remove /dev/md2 /dev/hda2

However, if the drive has already been removed by the operating system, then removing it is unnecessary. To determine this, use the command:

mdadm --query --detail /dev/md2

Of course, use the proper md# for your configuration. This command will output several lines of data, including the size of the array. If the drive has already been removed, you will see the following near the end of the output; in that case there is no need to remove it again.

   Number   Major   Minor   RaidDevice State
      0       3        2        0      active sync   /dev/hda2
      1       0        0        -      removed      <-- NOTE THIS 


To add the physical partition back and rebuild the raid partition.

[root@sme]# mdadm --add /dev/md2 /dev/hda2

Once you type the command, a message like the following will appear, appropriate for your device.

 [root@sme]  mdadm: hot added /dev/hda2

It is important to know that your devices are likely to be different, e.g. your device could be /dev/sda2, or you may have more than two disks, including a hot standby. These details can always be determined from the mdstat file. Once the RAID resync has started, its progress will be noted in mdstat. You can watch this in real time with:

[root@sme]# watch -n .1 cat /proc/mdstat

or you can see this in a snapshot by:

[root@sme]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda3[0] hdb3[1]
      38837056 blocks [2/2] [UU]

md2 : active raid1 hda2[2] hdb2[1]
      1048704 blocks [2/1] [_U]
      [=>...................]  recovery =  6.4% (67712/1048704) finish=1.2min speed=13542K/sec
md0 : active raid1 hda1[0] hdb1[1]
      255936 blocks [2/2] [UU]

When recovery is complete, the partitions will all be up:

[root@sme]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda3[0] hdb3[1]
      38837056 blocks [2/2] [UU]

md2 : active raid1 hda2[0] hdb2[1]
     1048704 blocks [2/2] [UU]

md0 : active raid1 hda1[0] hdb1[1]
      255936 blocks [2/2] [UU]

If this action is required regularly, you should test your disks for SMART errors and physical errors, check your disk cables, and make sure no two hard drives share the same IDE port. See also: http://wiki.contribs.org/Monitor_Disk_Health

Also check your driver cards, since a faulty card can destroy the data on a full RAID set as easily as it can a single disk.
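A short sketch for checking a suspect member disk (device name assumed; repeat for each disk):

# run a long SMART self-test, then review the results once it has finished
smartctl -t long /dev/sda
smartctl -a /dev/sda
# a rising CRC error count usually points to a cable problem rather than the disk itself
smartctl -A /dev/sda | grep -i crc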


Information.png Tip:
A shortcut for the RAID rebuild is to fail, remove and re-add the partition in a single command:
mdadm -f /dev/md2 /dev/hda2 -r /dev/hda2 -a /dev/hda2


Convert Software RAID1 to RAID5

Important.png Note:
These instructions are only applicable if you have SME 8 or greater and a RAID 1 system with 2 drives in sync; the new drive(s) must be the same size as, or larger than, the current drive(s).


Warning.png Warning:
Please make a full backup before proceeding


Warning.png Warning:
Newer versions of mdadm use the v1.x superblocks stored at the beginning of the block device, which could overwrite the filesystem metadata. You need to be starting with a v0.9 metadata device for the following instructions to work (v0.9 was the default for years). First, check the existing superblock version with:

mdadm --detail /dev/md0

Then, when re-creating the RAID 5 array, make sure you add the --metadata=0.9 option so the superblock is recreated in the right place. Unfortunately, v1.0 gives the md device a new size (smaller than the original array), and v1.1 and v1.2 corrupt the filesystem outright, so it is best to avoid these cases entirely. Creating a new array with v1.x superblocks when the original was v0.9 is likewise outright destructive.
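A minimal sketch of that check, assuming md2 holds the root filesystem as on a stock install:

# anything other than 0.90 here means the re-create procedure below is not safe as written
mdadm --detail /dev/md2 | grep -i version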


  1. Login as root.
  2. Move to /boot (we must create a new initrd image to load the raid5 driver):
cd /boot
  3. Make a backup copy:
mv initrd-`uname -r`.img initrd-`uname -r`.img.old
  4. Create the new image:
mkinitrd --preload raid5 initrd-`uname -r`.img `uname -r`
  5. Shut down and install the new drive(s) in the system.
  6. Boot up with the SME CD and enter rescue mode:
sme rescue
  7. Skip network setup.
  8. Skip mounting the current SME installation.
  9. Now create the correct partition table on the new drive(s):
sfdisk -d /dev/sda > tmp.out
sfdisk /dev/sdc < tmp.out
  10. Repeat the last step for each new drive (sdd, sde etc.).
  11. Create the new array:
mdadm --create /dev/md2 -c 256 --level=5 --raid-devices=2 /dev/sda2 /dev/sdb2
mdadm: /dev/sda2 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
mdadm: /dev/sdb2 appears to be part of a raid array:
    level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
Continue creating array? y
mdadm: array /dev/md2 started.
  12. Wait for the resync to finish; monitor the status with cat /proc/mdstat:
root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md2 : active raid5 sdb1[2] sda1[0]
      1048512 blocks level 5, 256k chunk, algorithm 2 [2/1] [U_]
      [==>..................]  recovery = 12.5% (132096/1048512) finish=0.8min speed=18870K/sec
  13. Reboot by typing:
exit
  14. Login as root.
  15. Add the new drive(s) to the array:
mdadm --add /dev/md2 /dev/sdc2
  16. Repeat the last step for each new drive (sdd2, sde2 etc.).
  17. Grow the array:
mdadm --grow /dev/md2 --raid-devices=N
  18. N is the total number of drives: the minimum is 3.
  19. Wait for the array reshape to complete. This can take a substantial amount of time; monitor it with cat /proc/mdstat:
root# cat /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
md2 : active raid5 sdc1[2] sdb1[1] sda1[0]
      1048512 blocks super 0.91 level 5, 256k chunk, algorithm 2 [3/3] [UUU]
      [==>..................]  reshape = 12.5% (131520/1048512) finish=2.5min speed=5978K/sec
  20. Issue the following commands:
pvresize /dev/md2
lvresize -l +100%FREE main/root
resize2fs /dev/main/root

Notes :

  • If you have disabled lvm
  1. you don't need the pvresize or lvresize command
  2. the final line becomes resize2fs /dev/md2 (or whatever / is mounted to)
  3. More info: http://www.arkf.net/blog/?p=47
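After the reshape and resize, it is worth confirming the new level and sizes before relying on the array; a quick sketch, assuming the stock LVM layout:

cat /proc/mdstat
mdadm --detail /dev/md2 | grep -E 'Raid Level|Array Size'
df -h /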

Add another Raid to mount to /home/e-smith/files

This is inspired by the previous content of AddExtraHardDisk, and particularly the section AddExtraHardDisk#Additional steps to create a raid array from multiple disks, but updated for 2022 and SME 10.

First, check which disks you want to use, using lsblk:

# lsblk --fs
NAME   FSTYPE            LABEL                    UUID                                 MOUNTPOINT
sda                                                                                    
├─sda1 vfat                                       B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                       64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb    
sdc

Then you can create the RAID array. We assume you only need one RAID partition, and hence do not need to partition the disks first.

#create array
mdadm --create --verbose /dev/md11 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# add to mdadm.conf
mdadm --detail --scan --verbose /dev/md11 >> /etc/mdadm.conf

Then format it and enable quotas (the quota options are set via the fstab entry below). If you want to add an LVM layer, it goes in just before this step; see the sketch after the mkfs command below.

mkfs.xfs /dev/md11
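If you do want an LVM layer underneath, a minimal sketch would replace the plain mkfs above (the volume group and logical volume names here are only examples):

pvcreate /dev/md11
vgcreate vg_files /dev/md11                 # example VG name
lvcreate -l 100%FREE -n files vg_files      # example LV name
mkfs.xfs /dev/vg_files/files

In that case the fstab entry and the grub command line below would reference the logical volume (e.g. rd.lvm.lv=vg_files/files) rather than the md UUID.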

now you have

# lsblk --fs
NAME   FSTYPE            LABEL                    UUID                                 MOUNTPOINT
sda                                                                                    
├─sda1 vfat                                       B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                       64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a 
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8 
sdc    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a 
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8

Then you need to mount it temporarily to move your content:

mkdir /mnt/newdisk
mount /dev/md11 /mnt/newdisk
rsync -arv /home/e-smith/files/ /mnt/newdisk

When you are happy with the result, simply add an entry to your fstab. According to the last lsblk output above, in this case you should add:

UUID=0ab4fe2a-aa81-4728-90d8-2f96d4624af8 /home/e-smith/files            xfs     uquota,gquota        0 0
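Before rebooting, you can verify the fstab entry by mounting the array in its final location; a sketch:

# release the temporary mount and let fstab mount the array in place
umount /mnt/newdisk
mount -a
df -h /home/e-smith/files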

To have the disk mounted on reboot, you also need to alter grub:

vim /etc/default/grub

and alter the kernel command line to add either "rd.md=1 rd.md.conf=1 rd.auto=1" or specifically the UUID to mount (obviously, if you add an LVM layer you will instead need to add something like rd.lvm.lv=mylvm/video rd.lvm.lv=mylvm/files):

 GRUB_TIMEOUT=5
 GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
 GRUB_DEFAULT=saved
 GRUB_DISABLE_SUBMENU=true
 GRUB_TERMINAL_OUTPUT="gfxterm"
 GRUB_CMDLINE_LINUX="rhgb quiet rootflags=uquota,pquota rd.md=1 rd.md.conf=1 rd.auto=1"
 GRUB_DISABLE_RECOVERY="false"
 GRUB_BACKGROUND="/boot/grub2/smeserver10.png"
 GRUB_GFXMODE="1024x768"
 GRUB_THEME="/boot/grub2/themes/koozali/theme.txt"

then you need to make sure dracut will add the drivers

vim /etc/dracut.conf

and alter the line needed (you will probably need to uncomment this line and add mdraid between the quotes):

# dracut modules to add to the default
add_dracutmodules+="lvm mdraid"

# install local /etc/mdadm.conf
mdadmconf="yes"

# install local /etc/lvm/lvm.conf
lvmconf="yes"

Then you need to rebuild the grub.cfg; depending on whether your system boots via EFI or legacy BIOS, use the appropriate command:

#EFI
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
#Legacy
grub2-mkconfig -o /boot/grub2/grub.cfg

Finally, rebuild the initramfs:

cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
dracut --add="lvm mdraid" /boot/initramfs-$(uname -r).img $(uname -r) --force
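To check that the rebuilt image really contains the RAID bits before rebooting, a sketch:

# list mdraid-related content inside the new initramfs
lsinitrd /boot/initramfs-$(uname -r).img | grep -i -E 'mdraid|mdadm'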

Copy data from one disk of an old RAID mirror

Let's say you excluded a huge amount of data from the backup when migrating from SME 9 to SME 10, and now you want to copy it to your new server.

This How-To assumes your current install is without LVM. An extra trick is needed if your new install uses LVM and the old SME 9 disk does too: you need to rename the volume group of either the old SME or the new one, using a rescue disk or another Linux distro; see UpgradeDisk#Moving from SME 8.x to SME 9.x.

  1. put one of the old drives in the server or in an external case and connect it
  2. use lsblk to identify the drive
  3. adapt the following commands
# lsblk
sdd         8:48   0 931,5G  0 disk  
├─sdd1      8:49   0   250M  0 part  
└─sdd2      8:50   0 931,3G  0 part

We assume that sdd1 was the boot partition and the stuff we want is on sdd2.

#assemble and run on degraded 
mdadm -A /dev/md126 /dev/sdd2 --run

Now let's try to mount it. This will work only if you had no LVM; otherwise it will return this:

# mkdir /mnt/olddisk/
# mount /dev/md126 /mnt/olddisk/
mount: unknown filesystem type 'LVM2_member'

You can skip the following step if you did not get the LVM error. Otherwise we need to activate the LVM, and you may also need to install the LVM tools first:

# yum install lvm2 -y
vgchange -a y main
mount /dev/mapper/main-root  /mnt/olddisk/

It is now time to copy your data:

rsync -arvHAX  /mnt/olddisk/home/e-smith/files/ /home/e-smith/files

Then, to safely remove your disk:

umount /dev/mapper/main-root
vgchange -a n main
mdadm --stop /dev/md126
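If you plan to reuse the old disk afterwards, wipe its old RAID signature first so it is not auto-assembled again; a sketch, assuming the disk is still connected as sdd:

# remove the md superblock and any remaining filesystem signatures
mdadm --zero-superblock /dev/sdd2
wipefs -a /dev/sdd2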