{{WIP box}}
{{level|Advanced}}
Source of this page is the [https://raid.wiki.kernel.org/index.php/Growing raid wiki]. This is the [http://forums.contribs.org/index.php/topic,50311.0 initial forum post] which explains the need for this howto.
==Adding partitions==
The purpose of this HOWTO is to add a new drive to an existing Raid5 with LVM, which is the standard installation of SME Server. Please back up your data before starting this HOWTO, '''or you may lose the lot'''.
==Growing an existing Array==
{{Note box|Due to a bug in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you cannot grow a RAID6.}}
When new disks are added, existing raid partitions can be grown to use the new disks. After the new disk has been partitioned, a RAID level 1/4/5 array can be grown. Assume that before growing, the machine contains four drives in Raid5, giving an array of 3 drives (3*10G) and 1 spare drive (10G). See this [[Raid#Hard_Drives_.E2.80.93_Raid|HowTo]] to understand the automatic raid construction of SME Server.
 
This is how your array should look before changing.
    
  [root@smeraid5 ~]# cat /proc/mdstat
  md2 : '''active raid5''' sdd2[8](S) sdc2[2] sdb2[1] sda2[0]
       72644096 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUU]
===Partition the new drive===
    
For example, use the following commands to copy the partition table from one of the existing drives (here /dev/sda) to the new drive (here /dev/sde):

  sfdisk -d /dev/sda > sfdisk_sda.output
  sfdisk -f /dev/sde < sfdisk_sda.output
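You can then, for example, list the partition table of the new drive to confirm it matches the existing ones (a suggested check, not part of the original howto):

  sfdisk -l /dev/sde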
If you have errors using the sfdisk command, you can clean the drive with the dd command.
{{Warning box|Be aware that dd is called the data-destroyer; be certain of the device you want zeroed.}}
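If you are unsure which device name belongs to the new drive, you can first list all disks, for example with fdisk (a suggested check, not part of the original howto):

  fdisk -l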
 
  # dd if=/dev/zero of=/dev/sdX bs=512 count=1
===Adding partitions===
{{Note box|msg=The process can take many hours or even days. There is a critical section at start, which cannot be backed up. To allow recovery after unexpected power failure, an additional option <code>--backup-file=</code> can be specified. Make sure this file is on a different disk or it defeats the purpose.

 mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1
 mdadm --grow --raid-devices=4 --backup-file=/root/grow_md2.bak /dev/md2}}
    
Now we need to add the first partition /dev/sde1 to /dev/md1
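The add command itself is not shown in this revision; assuming the same syntax as used for md2 further down, it would look like this:

  [root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1

Then the array is grown onto the new device: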
  [root@smeraid5 ~]# mdadm --grow --raid-devices='''5''' /dev/md1
 
Here we use the option --raid-devices='''5''' because the raid1 array uses all the drives. You can see how the array looks with:
 
{{Warning box|Do NOT shut down your computer during the raid growing step; a shutdown or an electrical failure at this stage can leave your array in a bad state and you may lose your data.}}
 
   
  [root@smeraid5 ~]# mdadm --detail /dev/md1
  /dev/md1:
  Preferred Minor : 1
      Persistence : Superblock is persistent
 
      Update Time : Tue Oct 29 21:39:00 2013
            State : clean
  Failed Devices : 0
   Spare Devices : 0
 
            UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
          Events : 0.4
 
      Number  Major  Minor  RaidDevice State
        0      8        1        0      active sync  /dev/sda1
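While the array is rebuilding or reshaping, you can follow the progress, for example with (a suggested check, not part of the original howto):

  [root@smeraid5 ~]# cat /proc/mdstat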
After that, we have to do the same thing with md2, which is a raid5 array.
  [root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
  mdadm: added /dev/sde2
    
  [root@smeraid5 ~]# mdadm --grow --raid-devices='''4''' /dev/md2
  mdadm: ... critical section passed.
{{tip box|msg=Keep --raid-devices='''4''' if you want an array of 4 drives + 1 spare. If you do not want a spare drive, set --raid-devices='''5''' instead. This command can also be used to grow the array onto the spare drive; just tell mdadm to use all the disks connected to the computer.}}
{{Warning box|Do NOT shut down your computer during the raid growing step; a shutdown or an electrical failure at this stage can leave your array in a bad state and you may lose your data.}}
We can take a look at the md2 array:
 
  [root@smeraid5 ~]# mdadm --detail /dev/md2
  /dev/md2:
      Raid Level : raid5
      Array Size : 32644096 (30.28 GiB 31.39 GB)
    Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
    Raid Devices : 4
   Total Devices : 5
  Preferred Minor : 2
      Persistence : Superblock is persistent
 
      Update Time : Tue Oct 29 21:39:29 2013
            State : clean
  Failed Devices : 0
   Spare Devices : 1
 
          Layout : left-symmetric
      Chunk Size : 256K
 
            UUID : d2c26bed:b5251648:509041c5:fab64ab4
          Events : 0.462
 
      Number  Major  Minor  RaidDevice State
        0      8        2        0      active sync  /dev/sda2
        1      8      18        1      active sync  /dev/sdb2
        3      8      34        2      active sync  /dev/sdc2
        4      8      50        3      active sync  /dev/sde2
 
       2      8      114        -      spare  /dev/sdd2
===LVM: Growing the PV===
    +
{{Note box|Once the array reshape is complete, we have to tell LVM to use the whole space}}
* In a root terminal, issue the following commands:
  [root@smeraid5 ~]# pvresize /dev/md2
   Physical volume "/dev/md2" changed
   1 physical volume(s) resized / 0 physical volume(s) not resized
* After that, we can resize the logical volume:
    
  [root@smeraid5 ~]# lvresize -l +100%FREE  /dev/main/root
   Logical volume root successfully resized
{{tip box|/dev/main/root is the default name, but if you have changed it you can find the correct name with the command: lvdisplay}}
    
  [root@smeraid5 ~]# resize2fs  /dev/main/root
  Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
On Koozali SME v10 you should use xfs_growfs instead of resize2fs:

 [root@smev10 ~]# xfs_growfs /dev/main/root
 meta-data=/dev/mapper/main-root  isize=512    agcount=4, agsize=1854976 blks
          =                       sectsz=512   attr=2, projid32bit=1
          =                       crc=1        finobt=0 spinodes=0
 data     =                       bsize=4096   blocks=7419904, imaxpct=25
          =                       sunit=0      swidth=0 blks
 naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
 log      =internal               bsize=4096   blocks=3623, version=2
          =                       sectsz=512   sunit=0 blks, lazy-count=1
 realtime =none                   extsz=4096   blocks=0, rtextents=0
 data blocks changed from 7419904 to 11615232

* You should verify that your LVM uses the whole drive space with the following commands:
 
  [root@smeraid5 ~]# pvdisplay
   --- Physical volume ---
  [root@smeraid5 ~]# lvdisplay
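As a final check (a suggestion, not part of the original howto), you can also confirm that the mounted root filesystem reports the new size:

  [root@smeraid5 ~]# df -h /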
<noinclude>[[Category:Howto]][[Category:Administration:Storage]]</noinclude>
