Raid:Growing

Work in Progress:
This page is a Work in Progress. The contents of this page may be in flux; please have a look at this page's history to see the list of changes.


Source of this page is the [https://raid.wiki.kernel.org/index.php/Growing raid wiki]

Adding partitions

Note:
Due to a bug in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you cannot grow a RAID6 array.
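
You can check which kernel you are running before attempting a grow; on CentOS 5 / SME Server 8.0 this will report a 2.6.18 kernel:

[root@smeraid5 ~]# uname -r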


When new disks are added, existing RAID partitions can be grown to use the new disks. After the new disk has been partitioned, the RAID level 1/4/5 array can be grown. Assume that before growing it contains four drives in RAID5: an array of 3 active drives plus 1 spare drive. See this [[1]] to understand the automatic RAID construction of SME Server.

This is how your array looks before.

[root@smeraid5 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1] 
md1 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
     104320 blocks [4/4] [UUUU]
     
md2 : active raid5 sdd2[3](S) sdc2[2] sdb2[1] sda2[0]
     72644096 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]

For example, use sfdisk to copy the partition layout of an existing drive to the new drive:


sfdisk -d /dev/sda > sfdisk_sda.output
sfdisk -f /dev/sde < sfdisk_sda.output
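
To check that the new drive now carries the same partition layout, you can list its partition table (this assumes, as above, that /dev/sde is the new drive):

[root@smeraid5 ~]# sfdisk -l /dev/sde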

If the sfdisk command fails with errors, you can clear the beginning of the drive with the dd command. Be aware that dd is nicknamed the data-destroyer: double-check which device you type before running it.

[root@smeraid5 ~]# dd if=/dev/zero of=/dev/sdX bs=512 count=1
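
If the drive was previously a member of another RAID array, it may also still carry an old md superblock; mdadm can clear it per partition (here /dev/sdX1 again stands for a partition on the new drive):

[root@smeraid5 ~]# mdadm --zero-superblock /dev/sdX1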


Now we need to add the first partition, /dev/sde1, to /dev/md1:

[root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
mdadm: added /dev/sde1
[root@smeraid5 ~]# mdadm --grow --raid-devices=5 /dev/md1
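
The resynchronization onto the new member runs in the background; you can follow its progress in /proc/mdstat (the watch command is assumed to be available, otherwise just re-run cat by hand):

[root@smeraid5 ~]# watch cat /proc/mdstat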

You can then check the state of the array:


[root@smeraid5 ~]# mdadm --detail /dev/md1
/dev/md1:
       Version : 0.90
 Creation Time : Tue Oct 29 21:04:15 2013
    Raid Level : raid1
    Array Size : 104320 (101.89 MiB 106.82 MB)
 Used Dev Size : 104320 (101.89 MiB 106.82 MB)
  Raid Devices : 5
 Total Devices : 5
Preferred Minor : 1
   Persistence : Superblock is persistent
   Update Time : Tue Oct 29 21:39:00 2013
         State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
 Spare Devices : 0
          UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
        Events : 0.4
   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync   /dev/sda1
      1       8       17        1      active sync   /dev/sdb1
      2       8       33        2      active sync   /dev/sdc1
      3       8       49        3      active sync   /dev/sdd1
      4       8       65        4      active sync   /dev/sde1

After that, we have to do the same thing with md2, which is a RAID5 array. Growing to --raid-devices=4 turns it into an array of 4 active drives, leaving the fifth member as a spare, in keeping with the SME Server layout described above.

[root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
mdadm: added /dev/sde2
[root@smeraid5 ~]# mdadm --grow --raid-devices=4 /dev/md2
mdadm: Need to backup 14336K of critical section..
mdadm: ... critical section passed.
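
Reshaping a RAID5 array takes much longer than growing a RAID1 mirror, since every stripe has to be rewritten; you can follow the reshape in /proc/mdstat:

[root@smeraid5 ~]# cat /proc/mdstat

Note that growing the array only enlarges the block device: once the reshape has finished, the filesystem (or LVM physical volume) on top of /dev/md2 still has to be grown separately.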

