{{level|Advanced}}
Source of this page is the [https://raid.wiki.kernel.org/index.php/Growing raid wiki]. This is the [http://forums.contribs.org/index.php/topic,50311.0 initial forum post] that prompted this HOWTO.

The purpose of this HOWTO is to add a new drive to an existing RAID5 with LVM, which is the standard installation of SME Server. Please back up your data before starting this HOWTO, '''or you may lose the lot'''.

==Growing an existing Array==

{{Note box|Due to a bug in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you cannot grow a RAID6.}}
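
You can confirm which kernel your server is running before attempting a grow; a quick check:

 [root@smeraid5 ~]# uname -r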

When new disks are added, existing raid partitions can be grown to use them. After the new disk has been partitioned, the RAID level 1/4/5 array can be grown. Assume that before growing, the machine holds four drives: a RAID5 array of 3 active drives (3*10G) plus 1 spare drive (10G). See this [[Raid#Hard_Drives_.E2.80.93_Raid|HowTo]] to understand the automatic raid construction of SME Server.

This is how your array looks before growing:

 [root@smeraid5 ~]# cat /proc/mdstat
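
The file sfdisk_sda.output used below can be created by dumping the partition table of an existing member disk; a minimal sketch, assuming /dev/sda is already part of the array and /dev/sde is the new drive:

 sfdisk -d /dev/sda > sfdisk_sda.output

The saved layout is then written to the new drive: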
 sfdisk -f /dev/sde < sfdisk_sda.output

If you get errors from the sfdisk command, you can clean the drive with the dd command.
{{Warning box|Be aware that dd is nicknamed the data-destroyer; be certain of the partition you want zeroed.}}
 #dd if=/dev/zero of=/dev/sdX bs=512 count=1

===Adding partitions===
{{Note box|msg=The process can take many hours or even days. There is a critical section at the start, which cannot be backed up. To allow recovery after an unexpected power failure, the additional option <code>--backup-file=</code> can be specified. Make sure this file is on a different disk, or it defeats the purpose.

 mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1
 mdadm --grow --raid-devices=4 --backup-file=/root/grow_md2.bak /dev/md2}}

Now we need to add the first partition /dev/sde1 to /dev/md1.
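
The add itself uses mdadm's --add option; a minimal sketch, assuming /dev/sde1 is the freshly created partition:

 [root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1

The array can then be grown onto the new device: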
 [root@smeraid5 ~]# mdadm --grow --raid-devices='''5''' /dev/md1

Here we use the option --raid-devices='''5''' because raid1 uses all the drives.

{{Warning box|Do NOT shut down your computer during the grow step, and guard against power failures; an interruption at this stage can leave the array in a bad state and you may lose your data.}}

You can see how the array looks with:
 [root@smeraid5 ~]# mdadm --detail /dev/md1
 /dev/md1:
 mdadm: ... critical section passed.

{{tip box|msg=You need to keep --raid-devices='''4''' if you want an array of 4 drives + 1 spare. However, if you do not want a spare drive, you should set --raid-devices='''5'''. This command can also be used to grow the array onto the spare drive; simply tell mdadm to use all of the disks connected to the computer.}}
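
As a sketch of those two choices for md2 (assuming the five-disk layout used throughout this HOWTO):

 # keep one spare: 4 active devices
 mdadm --grow --raid-devices=4 /dev/md2
 # no spare: all 5 disks become active
 mdadm --grow --raid-devices=5 /dev/md2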

We can take a look at the md2 array:
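
The listing below is produced by mdadm's detail view, matching the md1 example above; a sketch of the command:

 [root@smeraid5 ~]# mdadm --detail /dev/md2
    Number   Major   Minor   RaidDevice State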
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       3       8       34        2      active sync   /dev/sdc2
       4       8       50        3      active sync   /dev/sde2

       2       8      114        -      spare   /dev/sdd2

===LVM: Growing the PV===

{{Note box|Once the construction is complete, we have to set the LVM to use the whole space.}}

* In a root terminal, issue the following commands:
 [root@smeraid5 ~]# pvresize /dev/md2
   Physical volume "/dev/md2" changed
   1 physical volume(s) resized / 0 physical volume(s) not resized

* After that, we can resize the logical volume:
 [root@smeraid5 ~]# lvresize -l +100%FREE /dev/main/root
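
The on-line resize message below is printed by the filesystem grow step. On ext3/ext4-based installs (SME Server 8 and 9) this is done with resize2fs; a sketch, assuming the root logical volume path used above:

 [root@smeraid5 ~]# resize2fs /dev/main/root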
   Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.

On Koozali SME Server v10 you should use xfs_growfs instead of resize2fs:

 [root@smev10 ~]# xfs_growfs /dev/main/root
 meta-data=/dev/mapper/main-root isize=512    agcount=4, agsize=1854976 blks
          =                      sectsz=512   attr=2, projid32bit=1
          =                      crc=1        finobt=0 spinodes=0
 data     =                      bsize=4096   blocks=7419904, imaxpct=25
          =                      sunit=0      swidth=0 blks
 naming   =version 2             bsize=4096   ascii-ci=0 ftype=1
 log      =internal              bsize=4096   blocks=3623, version=2
          =                      sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none                  extsz=4096   blocks=0, rtextents=0
 data blocks changed from 7419904 to 11615232

* You should verify that your LVM uses the whole drive space with the command:
 [root@smeraid5 ~]# pvdisplay