Raid:Growing
{{level|Advanced}}

The source of this page is the [https://raid.wiki.kernel.org/index.php/Growing raid wiki]; the [http://forums.contribs.org/index.php/topic,50311.0 initial forum post] explains why this HOWTO was written.

The purpose of this HOWTO is to add a new drive to an existing RAID5 array with LVM; LVM is the standard installation layout of SME Server. Please back up your data before starting this HOWTO, '''or you may lose the lot'''.
Growing an existing Array
When new disks are added, existing RAID partitions can be grown to use them. After the new disk has been partitioned, RAID level 1/4/5 arrays may be grown. This HOWTO assumes that, before growing, the server contains four drives: a RAID5 array of 3 drives (3 x 10 GB) plus 1 spare drive (10 GB). See this HowTo for understanding the automatic RAID construction of SME Server.
This is how your arrays should look before the change:
[root@smeraid5 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3]
      104320 blocks [4/4] [UUUU]

md2 : active raid5 sdd2[8](S) sdc2[2] sdb2[1] sda2[0]
      72644096 blocks level 5, 256k chunk, algorithm 2 [8/8] [UUU]
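Before partitioning, make sure you know which device name the kernel assigned to the new disk (this HOWTO assumes it is /dev/sde). A quick, generic way to list the drives and their partitions, not specific to SME Server, is:

# list all block devices with their sizes; the new disk is the one without partitions yet
lsblk -o NAME,SIZE,TYPE
# on older systems without lsblk, fdisk gives the same overview
fdisk -l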
Partition the new drive
For example, use these commands to copy the partition table of an existing drive (here /dev/sda) to the new drive (/dev/sde):
sfdisk -d /dev/sda > sfdisk_sda.output
sfdisk -f /dev/sde < sfdisk_sda.output
If you get errors from the sfdisk command, you can first zero out the new drive's partition table with the dd command (replace sdX with the new drive):
#dd if=/dev/zero of=/dev/sdX bs=512 count=1
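Before adding the new partitions to the arrays, you can optionally check that the partition table was copied correctly. This is only a sanity check, not part of the original procedure; compare the output for the new drive with that of an existing member:

# print the partition table of the new drive and of an existing array member
sfdisk -l /dev/sde
sfdisk -l /dev/sda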
Adding partitions
Note: the grow operation can take many hours or even days. There is a critical section at the start which cannot be backed up. To allow recovery after an unexpected power failure, the additional option --backup-file= can be specified. Make sure this file is on a different disk, or it defeats the purpose:

mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1
mdadm --grow --raid-devices=4 --backup-file=/root/grow_md2.bak /dev/md2

Now we need to add the first partition /dev/sde1 to /dev/md1:
[root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
mdadm: added /dev/sde1
[root@smeraid5 ~]# mdadm --grow --raid-devices=5 /dev/md1
Here we use the option --raid-devices=5 because raid1 uses all drives.

Warning: do not shut down the computer, and avoid any power failure, while the array is growing; an interruption during this step can leave the machine in a bad state and you can lose your data.

You can see how the array now looks with:
[root@smeraid5 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:15 2013
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:00 2013
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

           UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
After that we have to do the same thing with md2, which is a raid5 array.
[root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
mdadm: added /dev/sde2
[root@smeraid5 ~]# mdadm --grow --raid-devices=4 /dev/md2
mdadm: Need to backup 14336K of critical section..
mdadm: ... critical section passed.
Tip: keep --raid-devices=4 if you want an array of 4 drives plus 1 spare; if you do not want a spare drive, set --raid-devices=5 instead. The same command can be used later to grow the array onto the spare drive: simply tell mdadm to use all the disks connected to the computer.

The same warning applies here: do not shut down the computer or let it lose power while the array is growing.

We can take a look at the md2 array:
[root@smeraid5 ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:28 2013
     Raid Level : raid5
     Array Size : 32644096 (30.28 GiB 31.39 GB)
  Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:29 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : d2c26bed:b5251648:509041c5:fab64ab4
         Events : 0.462

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       3       8       34        2      active sync   /dev/sdc2
       4       8       50        3      active sync   /dev/sde2

       2       8      114        -      spare   /dev/sdd2
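The reshape of md2 runs in the background and is what takes the hours (or days) mentioned in the note above. A simple, generic way to follow its progress is to watch /proc/mdstat until the reshape/recovery line disappears; only then continue with the LVM steps below:

# re-read the RAID status every 60 seconds; press Ctrl-C to stop watching
watch -n 60 cat /proc/mdstat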
LVM: Growing the PV
Once the array reconstruction is complete, we have to tell LVM to use the whole space.

- In a root terminal, issue the following commands:
[root@smeraid5 ~]# pvresize /dev/md2
  Physical volume "/dev/md2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
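If you want to see how much free space the volume group gained before resizing the logical volume, you can use the standard LVM display command (the volume group is called main on a default SME Server install, as the pvdisplay output further down shows):

# show the volume group; the "Free  PE / Size" line is the space added by the grow
vgdisplay main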
- After that we can resize the logical volume:
[root@smeraid5 ~]# lvresize -l +100%FREE /dev/main/root
  Extending logical volume root to 30,25 GB
  Logical volume root successfully resized
[root@smeraid5 ~]# resize2fs /dev/main/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/main/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
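If you are not sure whether your root volume uses ext3/ext4 (grown with resize2fs) or XFS (grown with xfs_growfs), a quick generic check of the filesystem type is:

# print the filesystem type of the root volume
df -T /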
On Koozali SME Server v10, use xfs_growfs instead of resize2fs to grow the filesystem:
[root@smev10 ~]# xfs_growfs /dev/main/root
meta-data=/dev/mapper/main-root  isize=512    agcount=4, agsize=1854976 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0 spinodes=0
data     =                       bsize=4096   blocks=7419904, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=3623, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 7419904 to 11615232
- You should verify that your LVM now uses the whole drive space with the command:

[root@smeraid5 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               main
  PV Size               30.25 GB / not usable 8,81 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1533
  Free PE               0
  Allocated PE          1533
  PV UUID               a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo
If you can see that you have no more Free PE, you are the king of RAID. You can also check with the command:
[root@smeraid5 ~]# lvdisplay
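As a final check, independent of LVM, you can confirm that the root filesystem itself now sees the extra space (the exact figures will depend on your drives):

# show the mounted size and free space of the root filesystem
df -h /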