Raid:Growing
The source of this page is the raid wiki (https://raid.wiki.kernel.org/index.php/Growing).
The purpose of this HOWTO is to add a new drive to an existing Raid5 array with LVM; LVM is the standard installation layout of SME Server. Please back up your data before starting this HOWTO, or you may lose the lot.
Growing an existing Array
Note: due to a bug in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you can not grow a RAID6 array.

When new disks are added, existing RAID partitions can be grown to use the new disks. After the new disk has been partitioned, the RAID 1/4/5 array may be grown. This HOWTO assumes that, before growing, the system contains four drives: a Raid5 array of 3 drives (3*10G) plus 1 spare drive (10G). See this HowTo to understand the automatic RAID construction of SME Server.
This is how your array should look before the change:
[root@smeraid5 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      104320 blocks [4/4] [UUUU]
md2 : active raid5 sdd2[8](S) sdc2[2] sdb2[1] sda2[0]
      72644096 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUU]
Partition the new drive
For example, use these commands to copy the partition layout from an existing drive (sda) to the new drive (sde):
sfdisk -d /dev/sda > sfdisk_sda.output
sfdisk -f /dev/sde < sfdisk_sda.output
If you have errors using the sfdisk command, you can clean the drive with the dd command. Be aware that dd is nicknamed the data-destroyer: be certain of the partition you want zeroed.
#dd if=/dev/zero of=/dev/sdX bs=512 count=1
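To double-check the result you can read back the partition table of the new drive and compare it with the original (a minimal check; this assumes the new drive is /dev/sde, as in this example):

sfdisk -l /dev/sde    # prints the partition table of /dev/sde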
Adding partitions
Now we need to add the first partition /dev/sde1 to /dev/md1:
[root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
mdadm: added /dev/sde1
[root@smeraid5 ~]# mdadm --grow --raid-devices=5 /dev/md1
Here we use the option --raid-devices=5 because the raid1 array /dev/md1 spans all drives. You can see how the array looks with:
[root@smeraid5 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:15 2013
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:00 2013
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

           UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
After that we have to do the same thing with /dev/md2, which is a raid5 array.
[root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
mdadm: added /dev/sde2
[root@smeraid5 ~]# mdadm --grow --raid-devices=4 /dev/md2
mdadm: Need to backup 14336K of critical section..
mdadm: ... critical section passed.
We can take a look at the md2 array:
[root@smeraid5 ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:28 2013
     Raid Level : raid5
     Array Size : 32644096 (30.28 GiB 31.39 GB)
  Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:29 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : d2c26bed:b5251648:509041c5:fab64ab4
         Events : 0.462

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       3       8       34        2      active sync   /dev/sdd2
       4       8       50        3      active sync   /dev/sde2

       2       8      114        -      spare   /dev/sdc2
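The grow command starts a reshape that runs in the background and can take a long time; it should finish before you touch LVM. One simple way to follow the progress (standard md status file, nothing SME-specific) is:

watch -n 5 cat /proc/mdstat    # refreshes every 5 seconds; the md2 line shows the reshape percentage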
LVM: Growing the PV
Once the reshape is complete, we have to grow the physical volume so that LVM can use the whole space:
[root@smeraid5 ~]# pvresize /dev/md2
Physical volume "/dev/md2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
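Before resizing the logical volume, you can check how much free space the volume group now reports (this assumes the volume group is named main, as on a standard SME Server install):

vgdisplay main    # look at the "Free  PE / Size" line of the output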
After that we can resize the logical volume:
[root@smeraid5 ~]# lvresize -l +100%FREE /dev/main/root
  Extending logical volume root to 30,25 GB
  Logical volume root successfully resized
Finally, grow the filesystem so it fills the resized logical volume:

[root@smeraid5 ~]# resize2fs /dev/main/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/main/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
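As an extra sanity check (not part of the original procedure), the mounted root filesystem should now report the larger size:

df -h /    # shows the size and free space of the root filesystem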
You should verify that LVM uses the whole drive space with the command:
[root@smeraid5 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               main
  PV Size               30.25 GB / not usable 8,81 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1533
  Free PE               0
  Allocated PE          1533
  PV UUID               a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo
If you can see that you have no more Free PE, you are the king of RAID. You can also check with the command:
[root@smeraid5 ~]# lvdisplay