Raid:Growing
The source of this page is the raid wiki: https://raid.wiki.kernel.org/index.php/Growing
Adding partitions
Note: due to a bug in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you cannot grow a RAID 6 array.

When new disks are added, existing RAID partitions can be grown to use the new disks. Once the new disk has been partitioned, the RAID level 1/4/5 array can be grown. In this example we assume that, before growing, the machine contains four drives in RAID 5: an array of 3 active drives (3 x 10G) plus 1 spare drive (10G). See the Raid HowTo (http://wiki.contribs.org/Raid#Hard_Drives_.E2.80.93_Raid) to understand the automatic RAID construction of SME Server.
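A quick capacity check helps set expectations: a RAID 5 array with N active members of size S provides roughly (N-1) x S of usable space, so this example grows from about 2 x 10G = 20G usable (3 active drives) to about 3 x 10G = 30G usable (4 active drives), which matches the roughly 30 GiB array size reported by mdadm further down.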
This is how your array looks before.
[root@smeraid5 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      104320 blocks [4/4] [UUUU]
md2 : active raid5 sdd2[8](S) sdc2[2] sdb2[1] sda2[0]
      72644096 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUU]
For example, you can partition the new drive by copying the partition layout from one of the existing drives:
sfdisk -d /dev/sda > sfdisk_sda.output
sfdisk -f /dev/sde < sfdisk_sda.output
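Before going further, it may be worth confirming that the partition table really was copied to the new drive; a minimal check, assuming the new drive is /dev/sde as in this example:

[root@smeraid5 ~]# sfdisk -l /dev/sde    # should show the same layout as /dev/sda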
If the sfdisk command reports errors, you can wipe the beginning of the drive with the dd command. Be aware that dd is nicknamed the data-destroyer: think carefully about which device you type.
[root@smeraid5 ~]# dd if=/dev/zero of=/dev/sdX bs=512 count=1
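If the drive has been used in a RAID set before, it may also still carry an old md superblock. An optional extra cleanup step, not part of the original HowTo, is to zero that superblock on the partitions you are about to reuse (again, double-check the device names):

[root@smeraid5 ~]# mdadm --zero-superblock /dev/sdX1
[root@smeraid5 ~]# mdadm --zero-superblock /dev/sdX2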
Now we need to add the first partition /dev/sde1 to /dev/md1
[root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
mdadm: added /dev/sde1
[root@smeraid5 ~]# mdadm --grow --raid-devices=5 /dev/md1
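Growing md1 starts a resync onto the new member. If you want to follow its progress before continuing, one simple way (assuming the watch utility is installed) is:

[root@smeraid5 ~]# watch -n 5 cat /proc/mdstat    # refreshes the RAID status every 5 seconds; press Ctrl+C to quit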
You can then check how the array looks:
[root@smeraid5 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:15 2013
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:00 2013
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

           UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
After that we have to do the same thing with md2, which is a RAID 5 array.
[root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
mdadm: added /dev/sde2
[root@smeraid5 ~]# mdadm --grow --raid-devices=4 /dev/md2
mdadm: Need to backup 14336K of critical section..
mdadm: ... critical section passed.

Tip: for md2 you have to keep --raid-devices=4 if you want to end up with an array of 4 active drives plus 1 spare; if you do not want a spare drive, set --raid-devices=5 instead.
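On some arrays mdadm refuses to start the reshape unless it is given somewhere outside the array to store the critical section. In that case you can point it at a backup file on another filesystem; a hedged sketch, where /root/md2-grow.backup is only an example path:

[root@smeraid5 ~]# mdadm --grow --raid-devices=4 --backup-file=/root/md2-grow.backup /dev/md2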
We can take a look at the md2 array:
[root@smeraid5 ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:28 2013
     Raid Level : raid5
     Array Size : 32644096 (30.28 GiB 31.39 GB)
  Used Dev Size : 10377728 (7.90 GiB 9.63 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:29 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : d2c26bed:b5251648:509041c5:fab64ab4
         Events : 0.462

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       3       8       34        2      active sync   /dev/sdd2
       4       8       50        3      active sync   /dev/sde2

       2       8      114        -      spare   /dev/sdc2
Once the reshape is done, we have to tell LVM to use the whole space. First grow the physical volume:
[root@smeraid5 ~]# pvresize /dev/md2
  Physical volume "/dev/md2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
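If you want to double-check that the volume group actually gained free space before resizing the logical volume, you can look at its free extents; a quick check, assuming the default volume group name main:

[root@smeraid5 ~]# vgdisplay main | grep Free    # a non-zero "Free  PE / Size" line means pvresize worked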
After that we can resize the logical volume:
[root@smeraid5 ~]# lvresize -l +100%FREE /dev/main/root
  Extending logical volume root to 30,25 GB
  Logical volume root successfully resized

Tip: /dev/main/root is the default name; if you have changed it, you can find the right name by typing the lvdisplay command.
[root@smeraid5 ~]# resize2fs /dev/main/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/main/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
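As an extra sanity check, not shown in the original output, you can confirm that the root filesystem now reports the larger size:

[root@smeraid5 ~]# df -h /    # the size of / should reflect the grown array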
You should verify that your LVM physical volume uses the whole drive space with the command:
[root@smeraid5 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               main
  PV Size               30.25 GB / not usable 8,81 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1533
  Free PE               0
  Allocated PE          1533
  PV UUID               a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo
If you see that you have no more Free PE, you are the king of RAID. You can also check with the command:
[root@smeraid5 ~]# lvdisplay