{{WIP box}}

{{level|Advanced}}

The source of this page is the [https://raid.wiki.kernel.org/index.php/Growing raid wiki].

The purpose of this HOWTO is to add a new drive to an existing Raid5 with LVM, which is the standard installation of SME Server. Please back up your data before you start: '''you may lose it'''.

==Growing an existing Array==

{{Note box|Due to a bug in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you cannot grow a RAID6.}}

When new disks are added, existing raid partitions can be grown to use them. After the new disk has been partitioned, the RAID level 1/4/5 array can be grown. In this example, before growing, the machine contains four drives in Raid5: an array of 3 drives (3*10G) plus 1 spare drive (10G). See this [[Raid#Hard_Drives_.E2.80.93_Raid|HowTo]] to understand the automatic raid construction of SME Server.
    
This is how your array looks before:
  md2 : '''active raid5''' sdd2[8](S) sdc2[2] sdb2[1] sda2[0]
       72644096 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]
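
This status comes from /proc/mdstat; you can display it at any time with:

  [root@smeraid5 ~]# cat /proc/mdstat
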
===Partition the new drive===
    
For example, use these commands to copy the partition layout of an existing drive (here /dev/sda) to the new drive (here /dev/sde):

  sfdisk -d /dev/sda > sfdisk_sda.output
  sfdisk -f /dev/sde < sfdisk_sda.output
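
To make sure the partition table was copied correctly before touching the arrays, you can list the partitions of the new drive; this assumes, as above, that the new drive is /dev/sde:

  sfdisk -l /dev/sde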

If you get errors from the sfdisk command, you can clean the drive with the dd command.

{{Warning box|Be aware that dd is nicknamed the data-destroyer: double-check which device you type.}}

  #dd if=/dev/zero of=/dev/sdX bs=512 count=1

===Adding partitions===

Now we need to add the first partition /dev/sde1 to /dev/md1.
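
The partition is added with mdadm; a minimal sketch, assuming the new drive is /dev/sde as in the rest of this example:

  [root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1

Once the partition has been added, the array can be grown:
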
  [root@smeraid5 ~]# mdadm --grow --raid-devices='''5''' /dev/md1

Here we use --raid-devices='''5''' because the raid1 array uses all the drives. You can then check how the array looks:
  [root@smeraid5 ~]# mdadm --detail /dev/md1

After that, we have to do the same thing with md2, which is the raid5 array:

  [root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
  mdadm: added /dev/sde2
    
  [root@smeraid5 ~]# mdadm --grow --raid-devices='''4''' /dev/md2
  mdadm: ... critical section passed.
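
The reshape then runs in the background and can take a long time on large drives; you can follow its progress in /proc/mdstat, for example:

  [root@smeraid5 ~]# watch cat /proc/mdstat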

{{tip box|msg=Keep --raid-devices='''4''' if you want an array of 4 drives plus 1 spare. If you do not want a spare drive, set --raid-devices='''5''' instead.}}
    
We can take a look at the md2 array:

  [root@smeraid5 ~]# mdadm --detail /dev/md2
  /dev/md2:
       Raid Level : raid5
       Array Size : 32644096 (30.28 GiB 31.39 GB)
    Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
     Raid Devices : 4
    Total Devices : 5
  Preferred Minor : 2
      Persistence : Superblock is persistent
      Update Time : Tue Oct 29 21:39:29 2013
            State : clean
   Failed Devices : 0
    Spare Devices : 1
           Layout : left-symmetric
       Chunk Size : 256K
             UUID : d2c26bed:b5251648:509041c5:fab64ab4
           Events : 0.462
      Number  Major  Minor  RaidDevice State
         0      8        2        0      active sync  /dev/sda2
         3      8       34        2      active sync  /dev/sdd2
         4      8       50        3      active sync  /dev/sde2
         2      8      114        -      spare  /dev/sdc2

===LVM: Growing the PV===

Once the reshape is done, we have to grow the LVM physical volume so that it uses the whole array.
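
This is typically done with pvresize; a minimal sketch, assuming the physical volume sits on /dev/md2 as in the standard SME Server layout used in this example:

  [root@smeraid5 ~]# pvresize /dev/md2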

You should verify that your LVM uses the whole drive space with the following commands:
  [root@smeraid5 ~]# pvdisplay
   --- Physical volume ---
  [root@smeraid5 ~]# lvdisplay
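
You can also run vgdisplay to see how many free physical extents the volume group gained from the resize; with no argument it reports all volume groups:

  [root@smeraid5 ~]# vgdisplay
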
<noinclude>[[Category:Howto]][[Category:Administration:Storage]]</noinclude>
