===Hard Drives – Raid===

From SME Server 8 a new feature was introduced: automatic configuration of software RAID 1, 5 or 6. RAID is a way of storing data on more than one hard drive at once, so that if one drive fails, the system will still function.
{{Note box|As per the [http://lists.contribs.org/pipermail/updatesannounce/2014-June/000366.html '''release notes'''], a default install of SME Server 9 will only configure a RAID 1 array regardless of the number of hard drives; other RAID configurations are selectable from the install menu.}}
    
Your server will be automatically configured as follows:

* 1 Drive - Software RAID 1 (ready to accept a second drive)
* 2 Drives - Software RAID 1
* 3 Drives - Software RAID 1 + 1 Hot-spare
* 4-6 Drives - Software RAID 5 + 1 Hot-spare
* 7+ Drives - Software RAID 6 + 1 Hot-spare
As per the above note, on SME Server 9.0 the RAID 1 configuration will add the 3rd drive as a member of the RAID and not as a spare:

* 1 Drive - Software RAID 1 (degraded RAID 1 mirror ready to accept a second drive)
* 2 Drives - Software RAID 1
* 3 Drives - Software RAID 1
If you use a true hardware RAID controller to manage your hard drives and choose noraid during install, your system will still be configured with software RAID 1.
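To see what the installer actually created, the array status can be inspected from a root shell (a minimal check; as noted further down, the main data array is /dev/md2 on SME 8 and /dev/md1 on SME 9):

 cat /proc/mdstat
 mdadm --detail /dev/md2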
    
====Hard Drive Layout====
# Shut down and install one larger drive in the system in place of one old HD. Unplug any USB-connected drives.
# Boot up and login to the admin console and use option 5 to add the new (larger) drive to the system.
 
# Wait for the RAID to fully sync.
# Repeat steps 1-3 until all drives in the system are upgraded to larger capacity.
 
# Ensure all drives have been replaced with larger drives and the array is in sync and redundant!
 
# Issue the following commands:
{{Note box|SME9 uses /dev/md1 not /dev/md2.}}
    
 mdadm --grow /dev/md2 --size=max
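Growing the md device does not by itself enlarge the filesystem; on a default LVM-based install the new space still has to be passed up the stack. A minimal sketch (the volume group and logical volume names main and root are assumptions, verify yours with lvdisplay; remember SME9 uses /dev/md1):

 pvresize /dev/md2
 lvresize -l +100%FREE /dev/main/root
 resize2fs /dev/main/root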
# Reboot and ensure all drives have been replaced with larger drives and the array is in sync and redundant!
 
# Issue the following commands:
{{Note box|SME9 uses /dev/md1 not /dev/md2.}}
    
 mdadm --grow /dev/md2 --size=max
A system with 6 physically present hard drives will thus be formatted RAID 6, _not_ RAID 5. The resulting capacity will of course be "n-2" (for example, six 1 TB drives yield 4 TB of usable space).
 
'''Note:''' with the release of versions 7.6 and 8.0, the command-line parameter "sme nospare" was changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1, or 2].
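For example, a no-spare install would be requested at the installer boot prompt like this (the prompt shown is illustrative):

 boot: sme spares=0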
====Remove the Degraded RAID====
When you install SME Server on a single drive, the degraded RAID will show a 'U_' state in /proc/mdstat, but without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded RAID state, then:

 mdadm --grow /dev/md0 --force --raid-devices=1
 mdadm --grow /dev/md1 --force --raid-devices=1
After that you will see this:

 # cat /proc/mdstat
 Personalities : [raid1]
 md0 : active raid1 sda1[0]
       255936 blocks super 1.0 [1/1] [U]
 
 md1 : active raid1 sda2[0]
       268047168 blocks super 1.1 [1/1] [U]
       bitmap: 2/2 pages [8KB], 65536KB chunk
 
 unused devices: <none>
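If a second drive is installed later, the arrays can be brought back to a normal two-member mirror. A sketch, assuming the new drive is partitioned to match and appears as /dev/sdb:

 mdadm --add /dev/md0 /dev/sdb1
 mdadm --grow /dev/md0 --raid-devices=2
 mdadm --add /dev/md1 /dev/sdb2
 mdadm --grow /dev/md1 --raid-devices=2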
    
====Resynchronising a Failed RAID====
Also check your disk controller cards, since a faulty card can destroy the data on a full RAID set as easily as it can a single disk.
{{Tip box|A shortcut for the RAID rebuild is to fail, remove and re-add the drive in a single command:

 mdadm -f /dev/md2 /dev/hda2 -r /dev/hda2 -a /dev/hda2}}
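The same operation with long option names, for readability (equivalent, not an extra step):

 mdadm --manage /dev/md2 --fail /dev/hda2 --remove /dev/hda2 --add /dev/hda2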
    
====Convert Software RAID1 to RAID5====
{{Note box|msg=These instructions are only applicable if you have SME8 or greater and a RAID1 system with 2 hard drives in sync; new drive(s) must be of the same size or larger than the current drive(s)}}
 
{{Warning box|msg=Please make a full backup before proceeding}}
 
{{Warning box|msg=Newer versions of mdadm use the v1.x superblocks stored at the beginning of the block device, which could overwrite the filesystem metadata. You’ll need to be starting with a v0.9 metadata device for these instructions to work (v0.9 was the default for years). First, check the existing superblock version with:
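One way to read the superblock version (a sketch; the device names follow the examples below) is from mdadm's detail and examine output:

 mdadm --detail /dev/md2 | grep -i version
 mdadm --examine /dev/sda2 | grep -i version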
 sfdisk /dev/sdc < tmp.out
</li><li>Repeat the last step for each new hd (sdd, sde etc.).
 
</li><li>Create the new array
 
 mdadm --create /dev/md2 -c 256 --level=5 --raid-devices=2 /dev/sda2 /dev/sdb2

(A two-drive RAID 5 is effectively a mirror, since the parity of a single data block equals the block itself; the array can then be grown onto the new drives.)
 mdadm --add /dev/md2 /dev/sdc2
</li><li>Repeat the last step for each new hd (sdd2, sde2 etc.)
    
</li><li>Grow the array
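The grow command itself is the final step; a sketch assuming three member partitions after the additions above:

 mdadm --grow /dev/md2 --raid-devices=3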