{{Note box|SME Server's RAID options are largely automated, but even the best laid plans don't always go according to plan. See also: [[Raid:Manual Rebuild]], [[Raid:Growing]] and [[Hard Disk Partitioning]]. There is also a wiki on Linux software RAID where you will find many [https://raid.wiki.kernel.org/index.php/Linux_Raid useful tips].}}

===Hard Drives===
A software RAID array will be automatically configured as part of the installation process for servers which contain multiple hard drives. This is to ensure redundancy, so that if one disk fails the system will still function.

{{Note box|As per the release notes, SME Server 10 RAID configuration is slightly different from previous versions. See Default RAID Rationale below for more details.}}

The specifics of the RAID setup depend on the number of drives available, balancing redundancy and capacity.

The root and swap volumes are configured using LVM on the RAID device /dev/md1, as follows:

* 1 drive - no RAID
* 2 drives - RAID 1
* 3 drives - RAID 1 + hot spare
* 4 drives - RAID 6
* 5+ drives - RAID 6 + hot spare

The /boot volume (and the EFI partition, if necessary) is always a non-LVM RAID 1 array on the device /dev/md0.
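To see how this maps onto a running system, you can inspect the arrays and LVM volumes with the standard mdadm and LVM tools (a sketch only; device and volume group names may differ on your install):

 cat /proc/mdstat          # lists the md arrays, e.g. md0 (/boot) and md1 (LVM)
 lsblk                     # shows how partitions, md devices and LVM volumes nest
 pvs && vgs && lvs         # shows the LVM physical volume, volume group and logical volumes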

If you use a hardware RAID controller to manage your drives, it should be configured to present a single volume, which SME Server will then configure without software RAID.

<br />
===Default RAID Rationale===
The differences in RAID layout between SME Server 10 and previous versions are summarised below:
{| class="wikitable"
!Number of Drives
!SME Server 10
!Previous Versions
|-
|1
|No software RAID
|Degraded RAID 1
|-
|2
| colspan="2" |Software RAID 1
|-
|3
| colspan="2" |RAID 1 + hot spare
|-
|4
|RAID 6
| rowspan="3" |RAID 5 + hot spare
|-
|5
| rowspan="3" |RAID 6 + hot spare
|-
|6
|-
|7+
|RAID 6 + hot spare
|}
The main differences are: no degraded RAID 1 for a single-disk install, which better supports virtualised and hardware RAID use cases, and a preference for RAID 6 over RAID 5.

The preference for RAID 6 is to reduce the risk of a single disk failure bringing down the array. While consumer hard drives have grown significantly larger over time, their unrecoverable read error (URE) rate has remained at 1 per 10^14 bits, or roughly one per 12TB read.

As an example, imagine a server with 5 x 4TB drives. Under previous versions of SME Server this would have been configured as a 4 disk RAID 5 array with 1 hot spare. If one drive failed, the hot spare would become active and the array would begin to rebuild. This requires reading all 3 remaining disks and, at some point during that 12TB operation, it is very likely that an unrecoverable read error would be encountered. At this point, the whole array would fail.

In comparison, a RAID 6 array is tolerant to two disk failures. While this does not entirely remove the risk of a URE during a rebuild, it significantly reduces the likelihood of one taking down the array.
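As a rough back-of-the-envelope check (an illustration only, not part of any SME Server tooling), the chance of hitting at least one URE while reading 12TB at a rate of 1 error per 10^14 bits can be estimated from the command line:

 awk 'BEGIN { p = 1 - exp(-12e12 * 8 * 1e-14); printf "Chance of at least one URE during a 12TB read: about %.0f%%\n", p * 100 }'

which works out at roughly 60%, i.e. the rebuild is more likely than not to hit a read error.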

'''Note:''' RAID is a convenient method of protecting server availability from a drive failure. It does not remove the need for regular backups, which can be configured using the Server Manager.

===Disk Layout===
Mirroring drives on the same IDE channel (e.g. hda and hdb) is not desirable. If that channel goes out, you may lose both drives. Performance will also suffer slightly.

The preferred method is to use the primary location on each IDE channel (e.g. hda and hdc). This will ensure that if you lose one channel, the other will still operate. It will also give you the best performance.

In a 2 drive setup, put each drive on a different IDE channel:

IDE 1 Primary - Drive 1 <br />
IDE 1 Secondary - CDROM <br />
IDE 2 Primary - Drive 2

'''Obviously this section is obsolete with SATA hard drives because each disk has its own channel.'''

<br />
===Identifying Hard Drives===
It may not always be obvious which physical hard drive maps to which logical device. The simplest way to verify this, if you have a drive with S.M.A.R.T. capability, is to match the serial number on the physical drive with the one displayed by smartctl. Assuming the device of interest is '''sda''' (a SCSI drive), you would issue the following command as root:
 smartctl -i /dev/sda

Or, if it is an IDE drive:
 smartctl -i /dev/hda
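To see the serial numbers of all drives at a glance, you can also use lsblk (assuming a reasonably recent util-linux, as shipped with SME Server 10; column support may vary on older releases):

 lsblk -o NAME,SIZE,MODEL,SERIAL
 smartctl -i /dev/sda | grep -i serial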
<br />
===Adding Additional Drives===

For servers which were installed with 2+ drives and have a working RAID array, it is possible to add an additional drive, which will become a hot spare ready to be activated in case of a drive failure.

'''Ensure that any new drives are the same size or larger than your existing drives.'''

*Shut down the machine
*Install one additional drive at a time
*Boot up
*At the login prompt log on as admin with the root password to get to the admin console
*Go to #5 Manage disk redundancy
*Accept the option to add an additional drive

The console will show the status and progress while the drives are syncing. Don't turn off the server until the sync is complete, or it will start syncing again from the beginning.
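If you prefer to follow the resync from a shell rather than the admin console, the standard mdadm tools (not SME-specific) can be used, for example:

 watch -n 30 cat /proc/mdstat
 mdadm --detail /dev/md1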

If the Manage disk redundancy page displays the messages "The free disk count must equal one" and "Manual intervention may be required", then you probably have additional hard drives that need to be disconnected while the RAID is set up. An external USB drive will have this effect, and should be unplugged.

====Manually Adding a Spare Drive====

{{Note box|On older releases (SME Server 8 and 9), the admin console only recognised the addition of another drive to a degraded RAID1, i.e. when the system had been installed with a single drive. The addition of a third drive as a '''spare''' was not recognised; to add a spare on those releases you need to use the management tool '''mdadm''' at the command line.}}

{{Note box|The following assumes the system is installed with a RAID1 array functioning with two disks, sda and sdb, and that you want to add another disk, sdc, as a spare (to be added to the array automatically if one disk of the array fails). This HowTo can be adapted to other types of RAID as long as you want to add a spare disk.}}

First we need to write the partition table from sda (or sdb) to sdc:

 sfdisk -d /dev/sda > sfdisk_sda.output
 sfdisk /dev/sdc < sfdisk_sda.output

Then we need to add the new partitions to the existing arrays:

 mdadm --add /dev/md1 /dev/sdc1
 mdadm --add /dev/md2 /dev/sdc2

Verify this with:

 mdadm --detail /dev/md1
 mdadm --detail /dev/md2

 /dev/md1:
         Version : 0.90
   Creation Time : Sat Feb  2 22:24:38 2013
      Raid Level : raid1
      Array Size : 104320 (101.89 MiB 106.82 MB)
   Used Dev Size : 104320 (101.89 MiB 106.82 MB)
    Raid Devices : 2
   Total Devices : 3
 Preferred Minor : 1
     Persistence : Superblock is persistent
 
     Update Time : Mon Feb  4 13:28:43 2013
           State : clean
  Active Devices : 2
 Working Devices : 3
  Failed Devices : 0
   Spare Devices : 1
 
            UUID : f97a86c5:8bb46daa:6854855e:558a3e16
          Events : 0.6
 
     Number   Major   Minor   RaidDevice State
        0       8        1        0      active sync   /dev/sda1
        1       8       17        1      active sync   /dev/sdb1
 
        2       8       33        -      spare   /dev/sdc1

Alternatively you can check with:

 cat /proc/mdstat
 
 Personalities : [raid1]
 md1 : active raid1 sdc1[2](S) sdb1[1] sda1[0]
       104320 blocks [2/2] [UU]
 
 md2 : active raid1 sdc2[2](S) sdb2[1] sda2[0]
       52323584 blocks [2/2] [UU]

 (S) = Spare
 (F) = Fail
 [0] = number of the disk

You should ensure that grub has been written correctly to the spare disk so that it will boot correctly.

{{Warning box|Grub is unable to install itself on an empty disk or empty partitions; to have the spare fully working and booting after a sync, the boot partition on the spare drive needs to be duplicated.}}

{{Warning box|As the dd command is nicknamed "data destroyer", you need to be extremely careful and sure of the names of the source and destination partitions. At first you should skip the dd command (Step 1 below) and attempt to install grub without it (Step 2 below). If grub can be installed without using dd, then Step 1 can be discarded.}}

To copy the boot partition (sda = a disk already in the array, sdc = the spare), from a terminal with administrator privileges:

Step 1
 dd if=/dev/sda1 of=/dev/sdc1

Step 2
 grub
 device (hd2) /dev/sdc
 root (hd2,0)
 setup (hd2)

Last of all, try forcing a failure of one of the original two drives and ensure that the server boots and the RAID rebuilds correctly. You may then have to repeat this exercise to get the drives in the correct order (i.e. sda/sdb in the array with sdc as the spare).
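One way to force such a test failure and then restore the array, using standard mdadm commands (a sketch only; make sure you have a verified backup first), is:

 mdadm --fail /dev/md2 /dev/sdb2        # mark one member as failed; the spare should start rebuilding
 cat /proc/mdstat                       # watch the rebuild
 mdadm --remove /dev/md2 /dev/sdb2      # remove the 'failed' member
 mdadm --add /dev/md2 /dev/sdb2         # re-add it once you are happy the spare works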

====Reusing Hard Drives====
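If a disk has been used before (for example in another machine or RAID set), the usual approach is to blank its partition table before letting SME Server take it over. A minimal sketch, assuming the disk being reused is /dev/sdb ('''destructive — double-check the device name before running it'''):

 dd if=/dev/zero of=/dev/sdb bs=512 count=1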
You MUST reboot so that the empty partition table gets read correctly.

For more information, check: http://bugs.contribs.org/show_bug.cgi?id=2154

<br />

====Upgrading the Hard Drive Size====
Note: these instructions are only applicable if you have a RAID system with more than one drive. They are not applicable to a single-drive RAID 1 system, and increasing the usable space on such a system by cloning the existing single drive to a larger drive is not supported. See http://bugs.contribs.org/show_bug.cgi?id=5311

*CAUTION MAKE A FULL BACKUP!
*Ensure you have e-smith-base-4.16.0-33 or newer installed. [or Update to at least 7.1.3]

HD Scenario - Current 250GB drives, new larger 500GB drives

#Shut down and install one larger drive in the system in place of one old HD. Unplug any USB-connected drives.
#Boot up and login to the admin console and use option 5 to add the new (larger) drive to the system.
#Wait for the raid to fully sync.
#Repeat steps 1-3 until all drives in the system are upgraded to the larger capacity.
#Ensure all drives have been replaced with larger drives and the array is in sync and redundant!
#Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}
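On a default LVM install the growing sequence is generally along these lines (a sketch only, not necessarily the exact commands for your release — substitute /dev/md1 on SME9 and check your volume group name with vgs):

 mdadm --grow /dev/md2 --size=max
 pvresize /dev/md2
 lvresize -l +100%FREE main/root
 ext2online -C0 /dev/main/root    # resize2fs /dev/main/root on newer releases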

Notes:

*All of this can be done while the server is up and running with the exception of #1.
*These instructions should work for any raid level you have as long as you have >= 2 drives
*If you have disabled lvm, you don't need the pvresize or lvresize command, therefore the final line becomes

 ext2online -C0 /dev/md2 <nowiki>#</nowiki>(or whatever / is mounted to)
or if you receive a "command not found" error, try this:
Note: These instructions are applicable if you have a faulty HD on a RAID system with more than one drive and intend to upgrade the sizes as well as replacing the failed HD. They are not applicable to a single-drive RAID 1 system, and increasing the usable space on such a system by cloning the existing single drive to a larger drive is not supported. See http://bugs.contribs.org/show_bug.cgi?id=5311

*CAUTION MAKE A FULL BACKUP!
*Ensure you have e-smith-base-4.16.0-33 or newer installed. [or Update to at least 7.1.3]

HD Scenario - Current 250GB drives, new larger 500GB drives

#Remove the failed HDD from the system, ensure the remaining drive is on sda on its own, and boot up.
#Shutdown, connect one new 500GB drive as sdb and boot up.
#Login to the admin panel and manage raid to add the new (larger) drive to the system.
#Wait for the raid to fully sync.
#Do a full reboot with those 2 drives in place (1 original, 1 new).
#Shutdown again, disconnect the original drive, and connect the new drive just sync'd as sda (in place of the original).
#Boot up again with just the one new drive in place, and confirm it boots OK.
#Shutdown, and connect the other 500GB drive as sdb.
#Boot up, login to the admin panel and add sdb to the array, and wait for the raid to fully sync.
#Reboot and ensure all drives have been replaced with larger drives and the array is in sync and redundant!
#Issue the following commands:

{{Note box|SME9 uses /dev/md1 not /dev/md2.}}

Notes:

*These instructions should work for any raid level you have as long as you have >= 2 drives
*If you have disabled lvm, you don't need the pvresize or lvresize command, therefore the final line becomes

 ext2online -C0 /dev/md2 <nowiki>#</nowiki>(or whatever / is mounted to)
or if you receive a "command not found" error, try this:

These operations are logged, however, no emails will be sent to admin as of the release of packages associated with Bug #6160 or the release of the 8.1 ISO.

====Receive periodic check of Raid by email====

There are routines in SME Server to check the RAID and send mail to the admin user when the RAID is degraded or when the RAID is resynchronizing. But the admin user receives a lot of emails, and sometimes messages can be forgotten.
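If you want an additional periodic summary regardless, a simple cron job can mail the current RAID status to admin. This is a hypothetical example, not the stock SME template-based method; adjust the md devices to match your system:

 #!/bin/sh
 # /etc/cron.weekly/raid-status -- mail a weekly RAID summary to admin
 {
     cat /proc/mdstat
     /sbin/mdadm --detail /dev/md1
 } 2>&1 | mail -s "RAID status for $(hostname)" admin

Remember to make the script executable (chmod +x /etc/cron.weekly/raid-status).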
'''Note:''' with the release of versions 7.6 and 8.0, the commandline parameter "sme nospare" has been changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1, or 2].
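For example, to install with no spare you would type the following at the installer boot prompt (the exact prompt wording varies between releases):

 sme spares=0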

====Remove the degraded raid====
When you install SME Server with one drive, giving a degraded raid, you will see a 'U_' state but without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded raid state, then:
 mdadm --grow /dev/md0 --force --raid-devices=1
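If your install has more than one md array (for example /dev/md1 holding the LVM volume, as described earlier on this page), repeat the command for each one; a sketch, assuming that layout:

 cat /proc/mdstat                                 # list the arrays present
 mdadm --grow /dev/md1 --force --raid-devices=1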
Login as root, type console. Select item 5, "Manage disk redundancy".
 <nowiki>--------Disk Redundancy status as of Thursday Dec 22 -------
 Current RAID status:
 
 Personalities : [raid1]
 md2 : active raid1 hda2[0]            <-- NOTICE hdb2[#] is missing. Means hdb2[#] failed.
       38973568 blocks [2/1] [U_]
 
 md1 : active raid1 hda1[0] hdb1[1]
       104320 blocks [2/2] [UU]
 
 unused devices: <none>
 Only Some of the RAID devices are unclean.       <-- NOTICE This message and
 Manual intervention may be required.</nowiki>    <-- this message.
Notice the last 2 sentences of the window above. You have some problems. <br />
If your system is healthy, however, the message you will see at the bottom of the Raid Console window is:
Then, when re-creating the RAID 5 array, make sure you add the --metadata=0.9 tag so the superblock is recreated in the right place.
Unfortunately, v1.0 gives a new size for the md device (smaller than the original array), and v1.1 and v1.2 corrupt the filesystem outright, so it is best to avoid these cases entirely. Creating a new array with v1.x superblocks when the original was v0.9 is likewise outright destructive.}}
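For example, the array creation command used later in this procedure would gain the metadata flag like this (a sketch; adjust the devices and --raid-devices count to your layout):

 mdadm --create /dev/md2 --metadata=0.9 -c 256 --level=5 --raid-devices=2 /dev/sda2 /dev/sdb2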
<ol><li>Login as root
</li><li>Move to /boot (we must create a new initrd image to load the raid5 driver).
 cd /boot
</li><li>Now create the correct partition table on the new drive(s).
 sfdisk -d /dev/sda > tmp.out
 sfdisk /dev/sdc < tmp.out

</li><li>Repeat the last step for each new hd (sdd, sde etc.).
</li><li>Create the new array
 mdadm --create /dev/md2 -c 256 --level=5 --raid-devices=2 /dev/sda2 /dev/sdb2
 mdadm: /dev/sda2 appears to be part of a raid array:
     level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
 mdadm: /dev/sdb2 appears to be part of a raid array:
     level=raid1 devices=2 ctime=Fri Dec 18 13:17:49 2009
 Continue creating array? y
 mdadm: array /dev/md2 started.

</li><li>Wait for resync; monitor the status with
 cat /proc/mdstat
 
 root# cat /proc/mdstat
 Personalities : [raid0] [raid1] [raid5]
 md2 : active raid5 sdb1[2] sda1[0]
       1048512 blocks level 5, 256k chunk, algorithm 2 [2/1] [U_]
       [==>..................]  recovery = 12.5% (132096/1048512) finish=0.8min speed=18870K/sec
</li><li>Reboot
 exit
</li><li>Wait for array reshaping. This part can take a substantial amount of time; monitor it with
 cat /proc/mdstat
 
 root# cat /proc/mdstat
 Personalities : [raid0] [raid1] [raid5]
 md2 : active raid5 sdc1[2] sdb1[1] sda1[0]
       1048512 blocks super 0.91 level 5, 256k chunk, algorithm 2 [3/3] [UUU]
       [==>..................]  reshape = 12.5% (131520/1048512) finish=2.5min speed=5978K/sec

</li><li>Issue the following commands:
 pvresize /dev/md2
 lvresize -l +100%FREE main/root
 resize2fs /dev/main/root
</li></ol>
Notes:

*If you have disabled lvm:

#you don't need the pvresize or lvresize command
#the final line becomes resize2fs /dev/md2 (or whatever / is mounted to)
#More info: http://www.arkf.net/blog/?p=47

----
<noinclude>
[[Category:Howto]]
[[Category:Administration:Storage]]
</noinclude>