{{Note box|I have noticed these commands do take a while, so be patient.}}

===Format your new Partition and testing===
Run the following:

 mkfs.xfs /dev/vg_DATA/lv_DATA

If you want to be sure everything went OK, you can run a file system check on the new volume once the format is complete:

 xfs_check /dev/vg_DATA/lv_DATA
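
On newer systems xfs_check is no longer shipped with xfsprogs; the equivalent read-only check (an alternative to the command above, not part of the original howto) is xfs_repair in no-modify mode, run while the volume is unmounted:

 xfs_repair -n /dev/vg_DATA/lv_DATA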

{{Note box|I found that I could not use EXT3- or EXT4-based file systems with my 20TB setup, due to problems with the block sizes. There may be a work-around for this, but I didn't find anything solid, so instead I decided to use the XFS file system, as it does what I need it to.}}

===Mount your new partition to a directory===

Finally, open /etc/fstab and add a line at the bottom to mount the new volume. Be sure to leave a newline at the end of the file, and use proper spacing.

For example, in my file I entered:

 /dev/vg_DATA/lv_DATA /TESTFOLDER xfs defaults 0 0
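
Note that the mount point itself must exist before anything can be mounted on it; if it does not, create it first (using the directory from the example above):

 mkdir -p /TESTFOLDER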

You trigger a remount by using:

 mount -a

You can check whether it has been successfully mounted by running the command below; it should list your mount location and the space in use.

 df -h

*This setup in /etc/fstab should survive updates and upgrades; however, if you want a more robust solution, I would advise reading up on templates in SME Server (see the sketch below).
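
As a sketch of that template approach (assuming your SME Server version manages /etc/fstab through a template; check that /etc/e-smith/templates/etc/fstab exists before relying on this, and note that the fragment name 99lv_DATA is made up for this example):

 mkdir -p /etc/e-smith/templates-custom/etc/fstab
 echo '/dev/vg_DATA/lv_DATA /TESTFOLDER xfs defaults 0 0' > /etc/e-smith/templates-custom/etc/fstab/99lv_DATA
 expand-template /etc/fstab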

===Adding partitions===

{{Note box|msg=The process can take many hours or even days. There is a critical section at the start which cannot be backed up. To allow recovery after an unexpected power failure, the additional option <code>--backup-file=</code> can be specified. Make sure this file is on a different disk, or it defeats the purpose.

 mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1

 mdadm --grow --raid-devices=4 --backup-file=/root/grow_md2.bak /dev/md2}}

Now we need to add the first partition, /dev/sde1, to /dev/md1:

 [root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
 mdadm: added /dev/sde1

 [root@smeraid5 ~]# mdadm --grow --raid-devices=5 /dev/md1

Here we use the option --raid-devices='''5''' because RAID1 uses all of the drives.

{{Warning box|Do NOT shut down your computer during the RAID growing step. An unexpected shutdown or electrical failure at this stage can leave the array in a bad state and you can lose your data.}}

You can see how the array looks with:

 [root@smeraid5 ~]# mdadm --detail /dev/md1
 /dev/md1:
         Version : 0.90
   Creation Time : Tue Oct 29 21:04:15 2013
      Raid Level : raid1
      Array Size : 104320 (101.89 MiB 106.82 MB)
   Used Dev Size : 104320 (101.89 MiB 106.82 MB)
    Raid Devices : 5
   Total Devices : 5
 Preferred Minor : 1
     Persistence : Superblock is persistent
 
     Update Time : Tue Oct 29 21:39:00 2013
           State : clean
  Active Devices : 5
 Working Devices : 5
  Failed Devices : 0
   Spare Devices : 0
 
            UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
          Events : 0.4
 
     Number   Major   Minor   RaidDevice State
        0       8        1        0      active sync   /dev/sda1
        1       8       17        1      active sync   /dev/sdb1
        2       8       33        2      active sync   /dev/sdc1
        3       8       49        3      active sync   /dev/sdd1
        4       8       65        4      active sync   /dev/sde1
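
While a grow or reshape is running, you can follow its progress from /proc/mdstat (standard md practice, though not part of the original howto):

 watch cat /proc/mdstat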

After that, we have to do the same thing with /dev/md2, which is a RAID5 array.

 [root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
 mdadm: added /dev/sde2

 [root@smeraid5 ~]# mdadm --grow --raid-devices=4 /dev/md2
 mdadm: Need to backup 14336K of critical section..
 mdadm: ... critical section passed.

{{tip box|msg=You need to keep --raid-devices='''4''' if you want an array of 4 drives + 1 spare. If you do not want a spare drive, set --raid-devices='''5''' instead: the same grow command will then reshape the array onto the spare drive, simply by telling mdadm to use all of the disks connected to the computer.}}

{{Warning box|Do NOT shut down your computer during the RAID growing step. An unexpected shutdown or electrical failure at this stage can leave the array in a bad state and you can lose your data.}}

We can take a look at the md2 array:

 [root@smeraid5 ~]# mdadm --detail /dev/md2
 /dev/md2:
         Version : 0.90
   Creation Time : Tue Oct 29 21:04:28 2013
      Raid Level : raid5
      Array Size : 32644096 (30.28 GiB 31.39 GB)
   Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
    Raid Devices : 4
   Total Devices : 5
 Preferred Minor : 2
     Persistence : Superblock is persistent
 
     Update Time : Tue Oct 29 21:39:29 2013
           State : clean
  Active Devices : 4
 Working Devices : 5
  Failed Devices : 0
   Spare Devices : 1
 
          Layout : left-symmetric
      Chunk Size : 256K
 
            UUID : d2c26bed:b5251648:509041c5:fab64ab4
          Events : 0.462
 
     Number   Major   Minor   RaidDevice State
        0       8        2        0      active sync   /dev/sda2
        1       8       18        1      active sync   /dev/sdb2
        3       8       34        2      active sync   /dev/sdc2
        4       8       50        3      active sync   /dev/sde2
 
        2       8      114        -      spare   /dev/sdd2

===LVM: Growing the PV===

{{Note box|Once the reconstruction is complete, we have to tell LVM to use the whole of the new space.}}

* In a root terminal, issue the following commands:

 [root@smeraid5 ~]# pvresize /dev/md2
   Physical volume "/dev/md2" changed
   1 physical volume(s) resized / 0 physical volume(s) not resized

* After that, we can resize the logical volume:

 [root@smeraid5 ~]# lvresize -l +100%FREE /dev/main/root
   Extending logical volume root to 30,25 GB
   Logical volume root successfully resized

{{tip box|/dev/main/root is the default name, but if you have changed this you can find the right name by typing the command lvdisplay.}}

 [root@smeraid5 ~]# resize2fs /dev/main/root
 resize2fs 1.39 (29-May-2006)
 Filesystem at /dev/main/root is mounted on /; on-line resizing required
 Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
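
Note that resize2fs only works on ext2/ext3/ext4 file systems. If the logical volume you grew carries an XFS file system, as in the sections above, the equivalent step (not part of the original howto) is xfs_growfs, which is run against the mount point while the file system is mounted:

 xfs_growfs /TESTFOLDER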

* You should verify that your LVM uses the whole drive space with the command:

 [root@smeraid5 ~]# pvdisplay
   --- Physical volume ---
   PV Name               /dev/md2
   VG Name               main
   PV Size               30.25 GB / not usable 8,81 MB
   Allocatable           yes (but full)
   PE Size (KByte)       32768
   Total PE              1533
   Free PE               0
   Allocated PE          1533
   PV UUID               a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo

If you can see that you have no more '''Free PE''', you are the king of RAID. You can also check with the command:

 [root@smeraid5 ~]# lvdisplay
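
For a quicker overview than the full pvdisplay/lvdisplay listings, the compact LVM reporting commands (standard LVM2 tools, not used in the original howto) show the same numbers at a glance:

 pvs
 vgs
 lvs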
<noinclude>[[Category:Howto]][[Category:Administration:Storage]]</noinclude>