'''A drive failure can corrupt an entire array: RAID does not replace backup!'''}}

{{Note box| SME Server's RAID options are largely automated, but even with the best-laid plans things don't always go according to plan. See also: [[Raid:Manual Rebuild]], [[Raid:Growing]] and [[Hard Disk Partitioning]]. There is a wiki on Linux software RAID; you will find many [https://raid.wiki.kernel.org/index.php/Linux_Raid useful tips here]. }}

===Hard Drives===
====Reusing Hard Drives====

*MBR formatted disks

If a disk was ever used in a Windows machine or any of the *BSDs (or, in some cases, an old system with RAID and/or LVM), then you will need to clear the MBR before installing it.

From the Linux command prompt, type the following:
<br />

*For disks formatted as GPT this is insufficient, because GPT also keeps a backup partition table at the end of the disk. It is probably best to use gdisk, parted or partx to delete the partitions; other tools will also work. Note that parted has only limited support for LVM.<br />
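As a hedged illustration of the point above, the zeroing approach can be exercised safely on a scratch image file first. The file name disk.img is made up for the demo; on a real device (e.g. /dev/sdb) these commands are destructive:

```shell
# Sketch: clear both MBR and GPT structures from a disk.
# Demonstrated on a scratch image file; substitute your real
# device for disk.img -- on a real disk this is DESTRUCTIVE.
DISK=disk.img

# Create a 10 MiB image standing in for the disk (demo only)
dd if=/dev/zero of="$DISK" bs=1M count=10 status=none

# Pretend there is a GPT header at LBA 1
printf 'EFI PART (fake)' | dd of="$DISK" bs=512 seek=1 conv=notrunc status=none

# The MBR lives in sector 0 and the primary GPT follows it, but GPT
# also keeps a backup table at the *end* of the disk, so zero both ends.
dd if=/dev/zero of="$DISK" bs=1M count=1 conv=notrunc status=none
dd if=/dev/zero of="$DISK" bs=1M count=1 seek=9 conv=notrunc status=none
```

On a real device you would follow this with a reboot, or something like partprobe, so the kernel forgets the old partition table.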

====Upgrading the Hard Drive Size====
 resize2fs /dev/md2 &

====Replacing and Upgrading a Hard Drive after a HD Failure====

Note: See [[Bugzilla: 6632]] and [[Bugzilla:6630]]; a suggested sequence for upgrading a hard drive's size is detailed there, following issues encountered when the new drive was first added as sda and synced.
 resize2fs /dev/md2 &

====RAID Notes====
Many on-board "hardware" RAID cards are in fact software RAID. Turn it off: cheap "fakeraid" cards aren't good for Linux. You will generally get better performance and reliability with Linux software RAID (http://linux-ata.org/faq-sata-raid.html). Linux software RAID is fast and robust.

If you insist on hardware RAID, buy a well-supported RAID card which has a proper RAID BIOS. This hides the disks and presents a single disk to Linux (http://linuxmafia.com/faq/Hardware/sata.html). Check that it is supported by the kernel and has some form of management, and avoid anything which requires a proprietary driver. Try searching for the exact model of RAID controller before buying it. Note that you won't get a real hardware RAID controller cheap.

It rarely happens, but sometimes when a device has finished rebuilding,
These operations are logged; however, no emails will be sent to admin as of the release of the packages associated with Bug #6160 or the release of the 8.1 ISO.

====Receive a periodic RAID check by email====

There are routines in SME Server to check the RAID and send mail to the admin user when the RAID is degraded or resynchronizing. But the admin user may receive a lot of emails, and sometimes messages can be forgotten. The purpose here is to have a routine which sends an email to the user of your choice each week.
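A minimal sketch of such a weekly report is shown below. This is illustrative, not the official SME Server script: the recipient address is a made-up assumption, and the degraded-array test simply looks for an underscore inside the [UU]-style status brackets of /proc/mdstat.

```shell
#!/bin/sh
# Sketch of a weekly RAID status mail (illustrative only).
# MAILTO is a made-up address -- change it to a real user.
MAILTO="admin@example.com"
MDSTAT="${MDSTAT:-/proc/mdstat}"      # overridable for testing

if [ -r "$MDSTAT" ]; then
    # a degraded member shows as "_" inside the [UU] brackets
    if grep -Eq '\[[U_]*_[U_]*\]' "$MDSTAT"; then
        SUBJECT="RAID DEGRADED on $(hostname)"
    else
        SUBJECT="RAID OK on $(hostname)"
    fi
    if command -v mail >/dev/null 2>&1; then
        mail -s "$SUBJECT" "$MAILTO" < "$MDSTAT"
    else
        echo "$SUBJECT"   # fallback when mail(1) is unavailable
    fi
fi
```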
 nano /etc/cron.weekly/raid-status.sh
'''Note:''' with the release of versions 7.6 and 8.0, the command line parameter "sme nospare" has been changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1, or 2].
====Remove the degraded RAID message====

When you install SME Server with only one drive, giving a degraded RAID, you will see a 'U_' state but without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded RAID state, then:
 mdadm --grow /dev/md0 --force --raid-devices=1
 mdadm --grow /dev/md1 --force --raid-devices=1
Login as root and type "console". Select item 5, "Manage disk redundancy".
<nowiki>--------Disk Redundancy status as of Thursday Dec 22 -------
Current RAID status:

Personalities : [raid1]
md2 : active raid1 hda2[0] <-- NOTICE hdb2[#] is missing. Means hdb2[#] failed.
38973568 blocks [2/1] [U_]

md1 : active raid1 hda1[0] hdb1[1]
104320 blocks [2/2] [UU]

unused devices: <none>
Only Some of the RAID devices are unclean. <-- NOTICE This message and
Manual intervention may be required.</nowiki> <-- this message.
Notice the last two sentences of the window above: you have some problems. <br>
If your system is healthy, however, the message you will see at the bottom of the RAID console window is:
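The '[2/1]' counters in the status above (devices expected / devices active) can also be checked mechanically. A small sketch follows; check_md_line is a made-up helper, not an SME Server command:

```shell
# Report whether an mdstat status line describes a clean or degraded
# array by comparing the expected/active counters in "[n/m]".
check_md_line() {
    # extract the two numbers from the first "[n/m]" group
    counts=$(printf '%s\n' "$1" | sed -n 's/.*\[\([0-9][0-9]*\)\/\([0-9][0-9]*\)\].*/\1 \2/p')
    set -- $counts
    if [ "$1" = "$2" ]; then echo clean; else echo degraded; fi
}

check_md_line '38973568 blocks [2/1] [U_]'    # prints: degraded
check_md_line '104320 blocks [2/2] [UU]'      # prints: clean
```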