{{Note box|Due to a bug in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you cannot grow a RAID6 array.}}
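Before starting you can double-check which kernel the server is actually running; on SME Server 8.0 this will report a 2.6.18 kernel:

 [root@smeraid5 ~]# uname -r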
When new disks are added, existing RAID partitions can be grown to use the new disks. Once the new disk has been partitioned, the RAID level 1/4/5 array can be grown. We assume that before growing, the machine contains four drives in RAID5, i.e. an array of 3 drives (3*10G) plus 1 spare drive (10G). See this [http://wiki.contribs.org/Raid#Hard_Drives_.E2.80.93_Raid HowTo] to understand the automatic RAID construction of SME Server.
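If the new disk still needs partitioning, a common approach is to copy the partition layout of an existing array member onto it. This is only a sketch: /dev/sda is assumed to be an existing member and /dev/sde the new, empty disk, so verify the device names on your own system before running anything.

 # dump the layout of an existing member, then write the same layout to the new disk
 [root@smeraid5 ~]# sfdisk -d /dev/sda > partition-layout.txt
 [root@smeraid5 ~]# sfdisk /dev/sde < partition-layout.txt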
    
This is how your array looks before.
 
  mdadm: ... critical section passed.
 
{{Tip box| For md2 you have to keep --raid-devices='''4''' if you want an array of 4 drives + 1 spare; if you do not want a spare drive, set --raid-devices='''5'''.}}
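To illustrate the tip, the add-and-grow step looks roughly like this; /dev/sde2 stands for the freshly partitioned new drive and is only an assumption, so adapt it to your layout:

 # add the new partition, then grow to 4 active members (the fifth device stays as a spare)
 [root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
 [root@smeraid5 ~]# mdadm --grow /dev/md2 --raid-devices=4
 # or, if you prefer no spare, make all 5 devices active members
 [root@smeraid5 ~]# mdadm --grow /dev/md2 --raid-devices=5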
We can take a look at the md2 array:

 [root@smeraid5 ~]# mdadm --detail /dev/md2
 /dev/md2:
         Version : 0.90
   Creation Time : Tue Oct 29 21:04:28 2013
      Raid Level : raid5
      Array Size : 32644096 (30.28 GiB 31.39 GB)
   Used Dev Size : 10377728 (7.90 GiB 9.63 GB)
    Raid Devices : 4
   Total Devices : 5
 Preferred Minor : 2
     Persistence : Superblock is persistent
 
     Update Time : Tue Oct 29 21:39:29 2013
           State : clean
  Active Devices : 4
 Working Devices : 5
  Failed Devices : 0
   Spare Devices : 1
 
          Layout : left-symmetric
      Chunk Size : 256K
 
            UUID : d2c26bed:b5251648:509041c5:fab64ab4
          Events : 0.462
 
     Number   Major   Minor   RaidDevice State
        0       8        2        0      active sync   /dev/sda2
        1       8       18        1      active sync   /dev/sdb2
        3       8       34        2      active sync   /dev/sdd2
        4       8       50        3      active sync   /dev/sde2
 
        2       8      114        -      spare   /dev/sdc2
Once the construction is done, we have to set LVM to use the whole space:
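The reshape can take a long time on large drives; you can follow its progress in /proc/mdstat and wait until it has finished before touching LVM:

 [root@smeraid5 ~]# cat /proc/mdstat
 # or refresh the view automatically every 10 seconds
 [root@smeraid5 ~]# watch -n 10 cat /proc/mdstat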
 [root@smeraid5 ~]# pvresize /dev/md2
   Physical volume "/dev/md2" changed
   1 physical volume(s) resized / 0 physical volume(s) not resized
 
After that we can resize the logical volume:

 [root@smeraid5 ~]# lvresize -l +100%FREE  /dev/main/root
   Extending logical volume root to 30,25 GB
   Logical volume root successfully resized
 
{{Tip box|/dev/main/root is the default name, but if you have changed it you can find the right name by typing the command: lvdisplay}}
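For example, to show only the volume names without the rest of the output (assuming the LVM2 tools shipped with CentOS 5, where lvdisplay prints an "LV Name" line for each logical volume):

 [root@smeraid5 ~]# lvdisplay | grep "LV Name"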
Finally we have to grow the filesystem itself so that it fills the resized logical volume:

 [root@smeraid5 ~]# resize2fs  /dev/main/root
 resize2fs 1.39 (29-May-2006)
 Filesystem at /dev/main/root is mounted on /; on-line resizing required
 Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
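A quick way to confirm that the root filesystem really picked up the new space is to look at its mounted size; the figures will of course differ on your system:

 [root@smeraid5 ~]# df -h /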
You should verify that your LVM physical volume uses the whole drive space with the command:

 [root@smeraid5 ~]# pvdisplay
   --- Physical volume ---
   PV Name               /dev/md2
   VG Name               main
   PV Size               30.25 GB / not usable 8,81 MB
   Allocatable           yes (but full)
   PE Size (KByte)       32768
   Total PE              1533
   '''Free PE               0'''
   Allocated PE          1533
   PV UUID               a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo
 
If you can see that you have no more '''Free PE''', you are the king of raid. You can also check with the command:
 [root@smeraid5 ~]# lvdisplay
