===Hard Drives – Raid===
 
SME Server 8 introduced a new feature: automatic configuration of software RAID 1, 5 or 6. RAID is a way of storing data on more than one hard drive at once, so that if one drive fails, the system will still function.
 
{{Note box| As per the [http://lists.contribs.org/pipermail/updatesannounce/2014-June/000366.html '''release notes'''], the SME Server 9 default install will only configure a RAID 1 array regardless of the number of hard drives; other RAID configurations are selectable from the install menu}}
    
Your server will be automatically configured as follows:
 
* 4-6 Drives - Software RAID 5 + 1 Hot-spare
* 7+ Drives - Software RAID 6 + 1 Hot-spare

As per the above note, on SME Server 9.0 the RAID 1 configuration will add the third drive as a member of the RAID and not as a spare:
* 1 Drive - Software RAID 1 (degraded RAID1 mirror ready to accept a second drive)
* 2 Drives - Software RAID 1
* 3 Drives - Software RAID 1

If you use a true hardware RAID controller to manage your hard drives and choose the noraid option during install, your system will still be configured with software RAID 1.
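
To check which raid arrays were actually created after installation, you can inspect /proc/mdstat and query a specific array. This is only a quick sanity check, not part of the install procedure; /dev/md1 is used here as an example (on older layouts the main array may be /dev/md2, as noted further down this page):

  cat /proc/mdstat
  mdadm --detail /dev/md1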
    
====Hard Drive Layout====
 
# Shut down and replace one old hard drive with one larger drive. Unplug any USB-connected drives.
# Boot up, log in to the admin console and use option 5 to add the new (larger) drive to the system.
# Wait for the raid to fully sync.
# Repeat steps 1-3 until all drives in the system are upgraded to larger capacity.
# Ensure all drives have been replaced with larger drives and the array is in sync and redundant!
# Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}

  mdadm --grow /dev/md2 --size=max
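
To confirm that the array has actually grown, you can watch the resync and check the reported size afterwards. These are the same tools used further down this page; remember that on SME9 the device is /dev/md1, as noted above:

  watch -n .1 cat /proc/mdstat
  mdadm --detail /dev/md2 | grep -i 'array size'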
 
# Reboot and ensure all drives have been replaced with larger drives and the array is in sync and redundant!
# Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}

  mdadm --grow /dev/md2 --size=max
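
Growing the md device only enlarges the raid array itself; on a default install the LVM physical volume, logical volume and filesystem sitting on top of it usually still have to be enlarged before the extra space becomes usable. The following is a minimal sketch assuming the stock LVM layout with a volume group called main and a root logical volume; these names are an assumption, so verify yours with pvs and lvs before running anything:

  pvresize /dev/md2
  lvresize -l +100%FREE /dev/main/root
  resize2fs /dev/main/root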
 
These operations are logged; however, no emails will be sent to admin as of the release of the packages associated with Bug #6160 or the release of the 8.1 ISO.
 
==== Receive periodic check of Raid by email ====
There are routines in SME Server that check the raid and send email to the admin user when the raid is degraded or when it is resynchronizing. But the admin user receives a lot of emails, and sometimes messages can be missed.
The purpose here is to have a routine that sends an email to the user of your choice each week.
  nano /etc/cron.weekly/raid-status.sh
Change the variable '''DEST="stephane@your-domaine-name.org"''' to the email address you want to use.
    
  #!/bin/sh
  ...
  '''DEST="stephane@your-domaine-name.org"'''
  exec $MDADM --detail $(ls /dev/md*) | mail -s "RAID status SME Server" $DEST
    
Save with Ctrl+X, then make the script executable:
 
  chmod +x /etc/cron.weekly/raid-status.sh
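
The listing above is only an excerpt of the script. As a point of reference, a minimal complete version could look like the sketch below; the MDADM path is an assumption (check it with 'which mdadm'), and DEST must be changed to your own address, exactly as described above:

  #!/bin/sh
  # weekly raid status report - a minimal sketch, not necessarily the full script from this page
  MDADM=/sbin/mdadm
  DEST="stephane@your-domaine-name.org"
  # mail the details of every md device to $DEST
  exec $MDADM --detail $(ls /dev/md*) | mail -s "RAID status SME Server" $DEST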
    
Each Sunday at 4:00 AM you will receive an email which looks like this:
 
       0      8        2        0      active sync  /dev/sda2
       1      8      18        1      active sync  /dev/sdb2

If you want to test the message without waiting for next Sunday, you can run the script by hand:

  /etc/cron.weekly/raid-status.sh
    
====nospare====
 
A system with 6 physically present hard drives will thus be formatted as RAID 6, _not_ RAID 5. The resulting capacity will of course be "n-2".
 
'''Note:''' With the release of versions 7.6 and 8.0, the command line parameter "sme nospare" has been changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1, or 2].
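
For example, to install without any hot-spare you would pass the parameter at the installer boot prompt; the prompt shown here is illustrative, the option itself is the one named in the note above:

  boot: sme spares=0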
 
====Remove the degraded raid====
When you install SME Server with only one drive, you get a degraded raid and will see a 'U_' state in /proc/mdstat, but without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded raid state, then run:
  mdadm --grow /dev/md0 --force --raid-devices=1
  mdadm --grow /dev/md1 --force --raid-devices=1

After that you will see this:
  # cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sda1[0]
        255936 blocks super 1.0 [1/1] [U]
  
  md1 : active raid1 sda2[0]
        268047168 blocks super 1.1 [1/1] [U]
        bitmap: 2/2 pages [8KB], 65536KB chunk
  
  unused devices: <none>
    
====Resynchronising a Failed RAID====
 
  [root@sme]# mdadm --add /dev/md2 /dev/hda2
 
Once you type the command, the following message will appear (adjusted for your device):
  [root@sme]  mdadm: hot added /dev/hda2

It is important to know that your devices are likely to be different, e.g. your device could be /dev/sda2, or you may have more than two disks, including a hot standby. These details can always be determined from the mdstat file. Once the raid resync has been started, the progress will be noted in mdstat. You can watch this in real time with:
    
  [root@sme]# watch -n .1 cat /proc/mdstat
 
Also check your disk controller cards, since a faulty card can destroy the data on a full RAID set as easily as it can a single disk.
 
{{Tip box|You can use a shortcut for the raid rebuild, which marks the partition as faulty (-f), removes it (-r) and adds it back (-a) in one command:

mdadm -f /dev/md2 /dev/hda2 -r /dev/hda2 -a /dev/hda2}}
    
====Convert Software RAID1 to RAID5====
 
{{Note box|msg=these instructions are only applicable if you have SME8 or greater and a RAID1 system with two hard drives in sync; the new drive(s) must be the same size as or larger than the current drive(s)}}
 
{{Warning box|msg=Please make a full backup before proceeding}}
 
 
{{Warning box|msg=Newer versions of mdadm use the v1.x superblocks stored at the beginning of the block device, which could overwrite the filesystem metadata. You’ll need to be starting with a v0.9 metadata device for these instructions to work (which was the default for years). First, check the existing superblock version with the command shown below.}}
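
One way to check the metadata version (assuming /dev/md2, the device used elsewhere on this page) is:

  mdadm --detail /dev/md2 | grep -i version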
 
  sfdisk /dev/sdc < tmp.out
 
</li><li>Repeat the last step for each new hd (sdd, sde etc.).
 
</li><li>Create the new array
 
  mdadm --create /dev/md2 -c 256 --level=5 --raid-devices=2 /dev/sda2 /dev/sdb2
 
  mdadm --add /dev/md2 /dev/sdc2
 
</li><li>Repeat the last step for each new hd (sdd2, sde2 etc.).
    
</li><li>Grow the array
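
Assuming the array now has three member partitions (sda2, sdb2 and sdc2, as in the steps above), growing it onto all members would typically be done with:

  mdadm --grow /dev/md2 --raid-devices=3
  watch -n .1 cat /proc/mdstat

The reshape can take several hours; the second command simply watches its progress, as shown earlier on this page.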
 