{{level|Advanced}}

Source of this page is the [https://raid.wiki.kernel.org/index.php/Growing raid wiki]. This is the [http://forums.contribs.org/index.php/topic,50311.0 initial forum post] that prompted this HOWTO.

The purpose of this HOWTO is to add a new drive to an existing RAID5 array with LVM; LVM is the standard installation of SME Server. Please back up your data before starting this HOWTO, '''or you may lose the lot'''.

===Adding partitions===

{{Note box|msg=The process can take many hours or even days. There is a critical section at the start, which cannot be backed up. To allow recovery after an unexpected power failure, the additional option <code>--backup-file=</code> can be specified. Make sure this file is on a different disk, or it defeats the purpose.

 mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1
 mdadm --grow --raid-devices=4 --backup-file=/root/grow_md2.bak /dev/md2}}
    
Now we need to add the first partition, /dev/sde1, to /dev/md1.
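
A minimal sketch of this step, assuming the new drive has already been partitioned so that /dev/sde1 exists as described above:

 [root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
 [root@smeraid5 ~]# mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1

<code>--add</code> first attaches the partition to md1 as a spare; <code>--grow</code> then raises the number of active devices so the new partition becomes a full member of the mirror.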
   
 
   
 
Here we use the option --raid-devices='''5''' because the RAID1 array uses all of the drives. You can see how the array looks with:
 
 [root@smeraid5 ~]# mdadm --detail /dev/md1
 
 /dev/md1:

 mdadm: ... critical section passed.

{{tip box|msg=You need to keep --raid-devices='''4''' if you want an array of 4 drives + 1 spare. However, if you do not want a spare drive, you should set --raid-devices='''5'''. The same command can be used to grow the array onto the spare drive: simply tell mdadm to use all of the disks connected to the computer.}}
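
For example, if you later decide you do not want a spare, a sketch of absorbing it into the md2 array (assuming five member partitions in total) would be:

 [root@smeraid5 ~]# mdadm --grow --raid-devices=5 --backup-file=/root/grow_md2.bak /dev/md2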

{{Warning box|During the RAID growing step, do NOT shut down your computer. An unexpected shutdown or electrical failure at this stage can leave the arrays in a bad state and you can lose your data}}
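
The reshape runs in the background and its progress can be monitored at any time, for example with:

 [root@smeraid5 ~]# cat /proc/mdstat

This shows the rebuild/reshape progress and an estimated time to completion for each array.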
    
We can take a look at the md2 array in the same way:
        0      8        2        0      active sync  /dev/sda2
        1      8      18        1      active sync  /dev/sdb2
        3      8      34        2      active sync  /dev/sdc2
        4      8      50        3      active sync  /dev/sde2

        2      8      114        -      spare  /dev/sdd2
    
===LVM: Growing the PV===

{{Note box|Once the construction is complete, we have to set the LVM to use the whole space.}}

* In a root terminal, issue the following commands:
    
 [root@smeraid5 ~]# pvresize /dev/md2
   1 physical volume(s) resized / 0 physical volume(s) not resized

* After that we can resize the logical volume:
    
 [root@smeraid5 ~]# lvresize -l +100%FREE  /dev/main/root

  Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.

* You should verify that your LVM uses the whole drive space with the commands below.

On Koozali SME v10 you should use <code>xfs_growfs</code> instead of <code>resize2fs</code>:

 [root@smev10 ~]# xfs_growfs /dev/main/root
 meta-data=/dev/mapper/main-root  isize=512    agcount=4, agsize=1854976 blks
         =                      sectsz=512  attr=2, projid32bit=1
         =                      crc=1        finobt=0 spinodes=0
 data    =                      bsize=4096  blocks=7419904, imaxpct=25
         =                      sunit=0      swidth=0 blks
 naming  =version 2              bsize=4096  ascii-ci=0 ftype=1
 log      =internal              bsize=4096  blocks=3623, version=2
         =                      sectsz=512  sunit=0 blks, lazy-count=1
 realtime =none                  extsz=4096  blocks=0, rtextents=0
 data blocks changed from 7419904 to 11615232
 
    
 [root@smeraid5 ~]# pvdisplay
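
Finally, to confirm that the filesystem itself sees the extra space, a minimal check (assuming the root filesystem is the <code>/dev/main/root</code> logical volume mounted on /) is:

 [root@smeraid5 ~]# df -h /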
