Raid
Hard Drives – Raid
SME Server 7 introduces a new feature: automatic configuration of software RAID 1, 5 or 6. RAID is a way of storing data on more than one hard drive at once, so that if one drive fails, the system will still function.
Your server will be automatically configured as follows:
- 1 Drive - Software RAID 1 (ready to accept a second drive).
- 2 Drives - Software RAID 1
- 3 Drives - Software RAID 1 + 1 Hot-spare
- 4-6 Drives - Software RAID 5 + 1 Hot-spare
- 7+ Drives - Software RAID 6 + 1 Hot-spare
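Once the install has finished, you can confirm which layout was chosen by inspecting the md arrays from the command line. These are standard mdadm/proc commands; md2 is shown only as an example, so check /proc/mdstat for the device names actually in use on your system.
[root@sme]# cat /proc/mdstat
[root@sme]# mdadm --detail /dev/md2
mdadm --detail reports the RAID level, the number of active devices and any hot spares for the array you name.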
Hard Drive Layout
Mirroring drives on the same IDE channel (e.g. hda and hdb) is not desirable. If that channel fails, you may lose both drives, and performance will also suffer slightly.
The preferred method is to use the master location on each IDE channel (e.g. hda and hdc). This ensures that if you lose one channel, the other will still operate, and it also gives the best performance.
In a two-drive setup, put each drive on a different IDE channel:
IDE 1 Master - Drive 1
IDE 1 Slave - CDROM
IDE 2 Master - Drive 2
Adding another Hard Drive Later
ENSURE THAT THE NEW DRIVE IS THE SAME SIZE AS OR LARGER THAN THE CURRENT DRIVE(S)
- Shut down the machine
- Install drive as master on the second IDE channel (hdc)
- Boot up
- Log on as admin to get to the admin console
- Go to #5 Manage disk redundancy
This screen will show whether the drives are syncing. Do not turn off the server until the sync is complete, or it will start again from the beginning. When the sync is finished it will show a good working RAID 1.
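If you prefer to follow the resync from the command line rather than the admin console, you can watch /proc/mdstat directly. This is standard md behaviour, not specific to SME Server, and md2 is only an example device name.
[root@sme]# watch -n 10 cat /proc/mdstat
[root@sme]# mdadm --detail /dev/md2 | grep -i rebuild
Press Ctrl-C to stop watching once the arrays show [UU].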
Reusing Hard Drives
If the drive was ever used in a Windows machine (or, in some cases, an older system), you will need to clear the MBR before installing it.
From the Linux command prompt, type the following (replace hdx with the drive to be cleared):
#dd if=/dev/zero of=/dev/hdx bs=512 count=1
You MUST reboot so that the empty partition table gets read correctly. For more information, check: http://bugs.contribs.org/show_bug.cgi?id=2154
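As a rough check after the reboot, a drive with a cleared MBR should report no valid partition table. For example (hdc is only an illustration; substitute the drive you cleared):
[root@sme]# fdisk -l /dev/hdc
If partitions are still listed, the dd command was run against a different drive than you intended.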
Upgrading the Hard Drive Size
- CAUTION MAKE A FULL BACKUP!
- Ensure you have e-smith-base-4.16.0-33 or newer installed (or update to at least SME Server 7.1.3).
- Shut down and install larger drive in system.
- Boot up and use the Manage disk redundancy screen in the admin console to add the new (larger) drive to the array.
- Wait for raid to fully sync.
- Repeat the shut down / install / sync steps until all drives in the system have been upgraded to the larger capacity.
- Ensure all drives have been replaced with larger drives and the array is in sync and redundant!
- Issue the following commands:
- mdadm --grow /dev/md2 --size=max
- pvresize /dev/md2
- lvresize -l +$(vgdisplay -c main | cut -d: -f16) main/root [-l (lower case L)]
- ext2online -C0 /dev/main/root [is -C0 (zero)]
Notes :
- All of this can be done while the server is up and running, with the exception of shutting the machine down to install each drive.
- These instructions should work for any RAID level as long as you have two or more drives.
- If you have disabled lvm
- you don't need the pvresize or lvresize command
- the final line becomes ext2online -C0 /dev/md2 (or whatever / is mounted to)
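Once the resize commands have completed, it is worth sanity-checking the result. These are standard mdadm, LVM and filesystem tools and assume the default layout with the root filesystem in the main volume group on /dev/md2; adjust device names if yours differ.
[root@sme]# mdadm --detail /dev/md2 | grep 'Array Size'
[root@sme]# pvdisplay /dev/md2
[root@sme]# df -h /
The array size, physical volume size and root filesystem should all now reflect the larger drives.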
Raid Notes
Many onboard hardware RAID controllers are in fact software RAID. Turn them off: cheap "fakeraid" cards are not worth using, and you will get better performance and reliability with Linux software RAID (http://linux-ata.org/faq-sata-raid.html), which is fast and robust.
If you are set on hardware RAID, buy a well-supported RAID card with a proper RAID BIOS. This hides the disks and presents a single disk to Linux (http://linuxmafia.com/faq/Hardware/sata.html). Check that it is supported by the kernel and has some form of management software, and avoid anything that requires a proprietary driver. Search for the exact model of RAID controller before buying it. Note that you will not get a real hardware RAID controller cheaply.
It rarely happens, but sometimes when a device has finished rebuilding, its state does not change from "dirty" to "clean" until a reboot occurs. This is cosmetic only.
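If you want to check the reported state without rebooting, mdadm can show it directly (md2 is only an example; repeat for each array listed in /proc/mdstat):
[root@sme]# mdadm --detail /dev/md2 | grep -i state
A state of clean or active with all members present means the array is healthy, whatever was reported at the end of the rebuild.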
Resynchronising a Failed RAID
You can refer to 'man mdadm' or http://www.linuxmanpages.com/man8/mdadm.8.php
Sometimes a partition will be taken offline automatically, and admin will receive an email reporting a "DegradedArray event on /dev/md2".
This will happen if, for example, a read or write error is detected on a disk in the RAID set, or a disk does not respond fast enough and causes a timeout. When this happens, the details of the RAID can be seen by inspecting the mdstat file.
[root@sme]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda3[0] hdb3[1]
      38837056 blocks [2/2] [UU]
md2 : active raid1 hdb2[1]           <-- missing partition
      1048704 blocks [2/1] [_U]      <-- failed
md0 : active raid1 hda1[0] hdb1[1]
      255936 blocks [2/2] [UU]
Make a note of the RAID partition that has failed, shown by [_U]. In this case it is md2, the device being /dev/md2.
Determine the missing physical partition: look carefully at the output and fill in the gap. In this example it is hda2, the device being /dev/hda2.
md1 : active raid1 hda3[0] hdb3[1]
md2 : active raid1 hda2[0] hdb2[1]
md0 : active raid1 hda1[0] hdb1[1]
To add the physical partition back into that RAID device:
[root@sme]# mdadm --add /dev/md2 /dev/hda2
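If the failed partition still appears in mdstat flagged as faulty (F) rather than simply missing, it may need to be removed from the array before it can be re-added. This is standard mdadm behaviour; the device names follow the example above.
[root@sme]# mdadm --remove /dev/md2 /dev/hda2
[root@sme]# mdadm --add /dev/md2 /dev/hda2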
Your devices are likely to be different, and you may have more than two disks (including a hot spare), but the correct ones can always be determined from the mdstat file. Once the RAID resync has started, its progress will be shown in mdstat.
[root@sme]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda3[0] hdb3[1]
      38837056 blocks [2/2] [UU]
md2 : active raid1 hda2[2] hdb2[1]
      1048704 blocks [2/1] [_U]
      [=>...................]  recovery =  6.4% (67712/1048704) finish=1.2min speed=13542K/sec
md0 : active raid1 hda1[0] hdb1[1]
      255936 blocks [2/2] [UU]
When recovery is complete, the partitions will all be up:
[root@sme]# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 hda3[0] hdb3[1]
      38837056 blocks [2/2] [UU]
md2 : active raid1 hda2[0] hdb2[1]
      1048704 blocks [2/2] [UU]
md0 : active raid1 hda1[0] hdb1[1]
      255936 blocks [2/2] [UU]
If this action is required regularly, you should test your disks for SMART errors and physical errors, check your disk cables, and make sure no two hard drives share the same IDE channel. Also check your disk controller cards, since a faulty controller can destroy the data on a full RAID set as easily as it can a single disk.
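smartctl, from the smartmontools package, is one way to run the SMART checks mentioned above (hda is only an example device, and the package may need to be installed first):
[root@sme]# smartctl -a /dev/hda
[root@sme]# smartctl -t short /dev/hda
The first command prints the drive's SMART attributes and error log; the second starts a short self-test whose results appear in the -a output a few minutes later.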