The root and swap volumes are configured using LVM on the RAID device /dev/md1 as follows:

*1 drive - no RAID
*2 drives - RAID 1
*3 drives - RAID 1 + hot spare
*4 drives - RAID 6
*5+ drives - RAID 6 + hot spare
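
As a quick read-only check of this layout, the standard LVM tools can be run from a root shell. This is a minimal sketch; the exact output will vary by install:

 pvdisplay /dev/md1   # /dev/md1 should be the physical volume backing the system
 vgdisplay            # the volume group built on top of it
 lvdisplay            # the root and swap logical volumes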

The /boot volume (and EFI partition, if necessary) is always a non-LVM RAID 1 array on the device /dev/md0.
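
To confirm this arrangement, /proc/mdstat and mdadm can be consulted (a sketch only; the device names follow the description above):

 cat /proc/mdstat          # lists the arrays with their member partitions
 mdadm --detail /dev/md0   # level, state and members of the /boot array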

'''Note:''' RAID is a convenient method of protecting server availability from a drive failure. It does not remove the need for regular backups, which can be configured using the Server Manager.

===Disk Layout===
Mirroring drives on the same IDE channel (e.g. hda and hdb) is not desirable. If that channel goes out, you may lose both drives. Also, performance will suffer slightly.

The preferred method is to use the primary location on each IDE channel (e.g. hda and hdc). This will ensure that if you lose one channel, the other will still operate. It will also give you the best performance.
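
As a sanity check after installation, the member devices of each array can be listed to verify that a mirror spans two channels (the partition numbers here are illustrative):

 mdadm --detail /dev/md0 | grep /dev/hd   # members should be e.g. hda1 and hdc1
 mdadm --detail /dev/md1 | grep /dev/hd   # members should be e.g. hda2 and hdc2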

Login as root, type console. Select item 5, "Manage disk redundancy".

  <nowiki>--------Disk Redundancy status as of Thursday Dec 22 -------
 Current RAID status:
 
 Personalities : [raid1]
 md2 : active raid1 hda2[0] <-- NOTICE hdb2[#] is missing. Means hdb2[#] failed.
                       38973568 blocks [2/1] [U_]
 
 md1 : active raid1 hda1[0] hdb1[1]
       104320 blocks [2/2] [UU]
 
 unused devices: <none>
 Only Some of the RAID devices are unclean.  <-- NOTICE This message and
 Manual intervention may be required.</nowiki> <-- this message.
 
Notice the last 2 sentences of the window above. You have some problems. <br>
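
At this point the failed member can be examined and, once the drive is known to be good (or has been replaced and partitioned identically), re-added to the array. A hedged sketch, using the md2/hdb2 names from the output above:

 mdadm --detail /dev/md2                   # confirm the array is degraded: [2/1] [U_]
 mdadm --manage /dev/md2 --add /dev/hdb2   # re-add the partition and start the rebuild
 cat /proc/mdstat                          # watch the resync progress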
 
If your system is healthy, however, the message you will see at the bottom of the RAID console window is: