* 3 Drives - Software RAID 1

If you use a true hardware RAID controller to manage your hard drives and choose noraid during install, your system will still be configured with RAID1.

====Hard Drive Layout====
# Shut down and install one larger drive in the system in place of one old HD. Unplug any USB-connected drives.
# Boot up, log in to the admin console, and use option 5 to add the new (larger) drive to the system.
# Wait for the raid to fully sync.
# Repeat steps 1-3 until all drives in the system are upgraded to larger capacity.
# Ensure all drives have been replaced with larger drives and the array is in sync and redundant!
# Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}

 mdadm --grow /dev/md2 --size=max
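Before growing, it is worth confirming from /proc/mdstat that no member is failed or still rebuilding. The following is only an illustrative sketch, not part of SME Server: a healthy array shows a status field such as [UU], while a failed or absent member appears as an underscore ([U_]). The `check_raid` function name and the /tmp sample file are assumptions made for the example; on a live system pass /proc/mdstat.

```shell
#!/bin/sh
# Hedged sketch: refuse to grow while any raid member is missing or rebuilding.
# An underscore in the [UU] status field of /proc/mdstat marks a failed or
# absent member.
check_raid() {
    # $1: path to an mdstat-style file (normally /proc/mdstat)
    if grep -q '_' "$1"; then
        echo "array degraded or rebuilding - do not grow yet"
    else
        echo "all members up - safe to run: mdadm --grow /dev/md2 --size=max"
    fi
}

# Example against sample text standing in for a healthy /proc/mdstat:
printf 'md2 : active raid1 sdb2[1] sda2[0]\n      1000 blocks [2/2] [UU]\n' > /tmp/mdstat.sample
check_raid /tmp/mdstat.sample
```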
# Reboot and ensure all drives have been replaced with larger drives and the array is in sync and redundant!
# Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}

 mdadm --grow /dev/md2 --size=max
A system with 6 physically present hard drives will thus be formatted RAID6, _not_ RAID5. The resulting capacity will of course be "n-2".
'''Note:''' with the release of versions 7.6 and 8.0, the command-line parameter "sme nospare" has been changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1, or 2].

====Remove the Degraded RAID====
When you install SME Server with one drive, the raid is created degraded: you will see a 'U_' state, but without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded raid state, then:
 mdadm --grow /dev/md0 --force --raid-devices=1
 mdadm --grow /dev/md1 --force --raid-devices=1

After that you will see this:

 # cat /proc/mdstat
 Personalities : [raid1]
 md0 : active raid1 sda1[0]
       255936 blocks super 1.0 [1/1] [U]
 
 md1 : active raid1 sda2[0]
       268047168 blocks super 1.1 [1/1] [U]
       bitmap: 2/2 pages [8KB], 65536KB chunk
 
 unused devices: <none>
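The [n/m] and [U] fields in that output can be read mechanically. As an illustration only (the `summarise_mdstat` function and the /tmp sample file are assumptions made for this sketch, not part of SME Server), a small awk script that reports whether each array has its full complement of devices:

```shell
#!/bin/sh
# Hedged sketch: summarise each md array in an mdstat-style file.
# The [n/m] field lists configured vs. active devices; n == m means the
# array is complete. Pass /proc/mdstat on a live system.
summarise_mdstat() {
    awk '/^md/ { dev = $1 }
         / blocks / {
             if (match($0, /\[[0-9]+\/[0-9]+\]/)) {
                 split(substr($0, RSTART + 1, RLENGTH - 2), n, "/")
                 state = (n[1] == n[2]) ? "complete" : "degraded"
                 print dev ": " n[2] " of " n[1] " devices active, " state
             }
         }' "$1"
}

# Example with the single-drive output shown above:
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sda1[0]
      255936 blocks super 1.0 [1/1] [U]

md1 : active raid1 sda2[0]
      268047168 blocks super 1.1 [1/1] [U]
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>
EOF
summarise_mdstat /tmp/mdstat.sample
```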

====Resynchronising a Failed RAID====
====Convert Software RAID1 to RAID5====
{{Note box|msg=these instructions are only applicable if you have SME8 or greater and a RAID1 system with 2 hd in sync; new drive(s) must be of the same size or larger than the current drive(s)}}
{{Warning box|msg=Please make a full backup before proceeding}}
{{Warning box|msg=Newer versions of mdadm use the v1.x superblocks stored at the beginning of the block device, which could overwrite the filesystem metadata. You'll need to be starting with a v0.9 metadata device for the above instructions to work (which was the default for years). First, check the existing superblock version with:
 sfdisk /dev/sdc < tmp.out

</li><li>Repeat the last step for each new hd (sdd, sde etc.).
</li><li>Create the new array
 mdadm --create /dev/md2 -c 256 --level=5 --raid-devices=2 /dev/sda2 /dev/sdb2
 mdadm --add /dev/md2 /dev/sdc2

</li><li>Repeat the last step for each new hd (sdd2, sde2 etc.)

</li><li>Grow the array