{{level|Advanced}}
This howto is based on the [http://forums.contribs.org/index.php/topic,50311.0 initial forum post]. It provides one solution to overcoming current CentOS 5 problems with building RAID arrays from hard disk drives with capacities over 2TB.

The purpose of this howto is to create a RAID5 array of greater than 7TB (19TB in this example) using SME Server 8.0. It is intended for a clean installation.
==Creating Large Raid5 Array using 4TB drives==

{{Note box|Due to a limitation in kernel 2.6.18, which is the default kernel of CentOS 5 and SME Server 8.0, you cannot create RAID5 arrays from drives with a capacity of more than 2TB. This means that the largest array size using the standard SME Server 8.0 install is limited to 7.2TB. You can overcome this by growing the array after you create it with the standard SME Server 8.0 install; please follow this [[Raid:Growing|Howto]]}}
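Before starting, you can confirm which kernel you are running and what capacity the kernel reports for each drive. This is just a sanity check with standard tools, not part of the original procedure:

 [root@smeraid5 ~]# uname -r
 [root@smeraid5 ~]# cat /proc/partitions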

When new disks are added, existing RAID partitions can be grown to use them. After the new disk has been partitioned, a RAID 1/4/5 array may be grown. Assume that before growing, the machine contains four drives: a RAID5 array of 3 drives (3*10G) plus 1 spare drive (10G). See this [[Raid#Hard_Drives_.E2.80.93_Raid|HowTo]] to understand the automatic RAID construction of SME Server.

This is how your array should look before the change.

[root@smeraid5 ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : '''active raid1''' sda1[0] sdb1[1] sdc1[2] sdd1[3]
104320 blocks [4/4] [UUUU]

md2 : '''active raid5''' sdd2[8](S) sdc2[2] sdb2[1] sda2[0]
72644096 blocks level 5, 256k chunk, algorithm 2 [3/3] [UUU]

===Partition the new drive===

For example, use the following commands to copy the partition layout from an existing drive (here /dev/sda) to the new drive (here /dev/sde):

sfdisk -d /dev/sda > sfdisk_sda.output
sfdisk -f /dev/sde < sfdisk_sda.output
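To verify the copy succeeded (assuming /dev/sde is the new drive, as above), you can list both partition tables and compare them:

 sfdisk -l /dev/sda
 sfdisk -l /dev/sde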

If you get errors using the sfdisk command, you can clean the drive's partition table with the dd command.
{{Warning box|Be aware that dd is nicknamed the data-destroyer; be certain of the drive you want zeroed.}}
 dd if=/dev/zero of=/dev/sdX bs=512 count=1
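After zeroing the first sector, the kernel may still have the old partition table cached. One way to make it re-read the table, assuming the parted package is available (otherwise a reboot achieves the same), is:

 partprobe /dev/sdX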

===Adding partitions===
{{Note box|msg=The process can take many hours or even days. There is a critical section at the start of the reshape during which data cannot be recovered if the process is interrupted. To allow recovery after an unexpected power failure, the additional option <code>--backup-file=</code> can be specified. Make sure this file is on a different disk, or it defeats the purpose.

mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1
mdadm --grow --raid-devices=4 --backup-file=/root/grow_md2.bak /dev/md2}}

Now we need to add the first partition, /dev/sde1, to /dev/md1:

[root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
mdadm: added /dev/sde1
[root@smeraid5 ~]# mdadm --grow --raid-devices='''5''' /dev/md1

Here we use the option --raid-devices='''5''' because a RAID1 array uses all the drives. You can see how the array looks with:
{{Warning box|Do NOT shut down your computer during the RAID growing step, and guard against electrical failure: an interruption at this point can leave your computer in a bad state and you can lose your data}}
[root@smeraid5 ~]# mdadm --detail /dev/md1
/dev/md1:
Version : 0.90
Creation Time : Tue Oct 29 21:04:15 2013
Raid Level : raid1
Array Size : 104320 (101.89 MiB 106.82 MB)
Used Dev Size : 104320 (101.89 MiB 106.82 MB)
Raid Devices : 5
Total Devices : 5
Preferred Minor : 1
Persistence : Superblock is persistent

Update Time : Tue Oct 29 21:39:00 2013
State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
Spare Devices : 0

UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
Events : 0.4

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 2 active sync /dev/sdc1
3 8 49 3 active sync /dev/sdd1
4 8 65 4 active sync /dev/sde1

After that we have to do the same thing with md2, which is a RAID5 array.

[root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
mdadm: added /dev/sde2

[root@smeraid5 ~]# mdadm --grow --raid-devices='''4''' /dev/md2
mdadm: Need to backup 14336K of critical section..
mdadm: ... critical section passed.
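The reshape now continues in the background and can take many hours. You can follow its progress with standard tools, for example:

 watch -n 10 cat /proc/mdstat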

{{tip box|msg=You need to keep --raid-devices='''4''' if you want to have an array of 4 drives + 1 spare. However, if you do not want a spare drive, you should set --raid-devices='''5'''. This command can also be used to grow the array onto the spare drive; simply tell mdadm that you want to use all the disks connected to the computer.}}
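For example, if you later decide to absorb the spare into the array instead of keeping it, the grow command would look like this (a sketch following the tip above; the same --backup-file precaution from the note box applies):

 [root@smeraid5 ~]# mdadm --grow --raid-devices=5 --backup-file=/root/grow_md2.bak /dev/md2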

{{Warning box|Do NOT shut down your computer during the RAID growing step, and guard against electrical failure: an interruption at this point can leave your computer in a bad state and you can lose your data}}

We can take a look at the md2 array:

[root@smeraid5 ~]# mdadm --detail /dev/md2
/dev/md2:
Version : 0.90
Creation Time : Tue Oct 29 21:04:28 2013
Raid Level : raid5
Array Size : 32644096 (30.28 GiB 31.39 GB)
Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
Raid Devices : 4
Total Devices : 5
Preferred Minor : 2
Persistence : Superblock is persistent

Update Time : Tue Oct 29 21:39:29 2013
State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 256K

UUID : d2c26bed:b5251648:509041c5:fab64ab4
Events : 0.462

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
3 8 34 2 active sync /dev/sdc2
4 8 50 3 active sync /dev/sde2

2 8 114 - spare /dev/sdd2
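Before moving on to the LVM step, make sure the reshape has really finished. One convenient way, an optional step not in the original procedure, is to let mdadm block until all rebuild activity on the array is done:

 [root@smeraid5 ~]# mdadm --wait /dev/md2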

===LVM: Growing the PV===

{{Note box|Once the reshape is complete, we have to set LVM to use the whole space}}

* In a root terminal, issue the following command lines

[root@smeraid5 ~]# pvresize /dev/md2
Physical volume "/dev/md2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
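As an optional check, you can display the volume group ("main" is the default volume group name on SME Server, as seen in the pvdisplay output below) and confirm that free extents have appeared:

 [root@smeraid5 ~]# vgdisplay main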

* After that we can resize the logical volume

[root@smeraid5 ~]# lvresize -l +100%FREE /dev/main/root
Extending logical volume root to 30.25 GB
Logical volume root successfully resized

{{tip box|/dev/main/root is the default name, but if you have changed it you can find the right name by typing the command: lvdisplay}}

[root@smeraid5 ~]# resize2fs /dev/main/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/main/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
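As an optional check, you can confirm that the root filesystem now reports the extra space:

 [root@smeraid5 ~]# df -h /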

* You should verify that your LVM uses the whole drive space with the command

[root@smeraid5 ~]# pvdisplay
--- Physical volume ---
PV Name /dev/md2
VG Name main
PV Size 30.25 GB / not usable 8.81 MB
Allocatable yes (but full)
PE Size (KByte) 32768
Total PE 1533
'''Free PE 0'''
Allocated PE 1533
PV UUID a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo

If you see that you have no more '''Free PE''', you are the king of RAID. You can also check with the command

[root@smeraid5 ~]# lvdisplay
<noinclude>[[Category:Howto]][[Category:Administration:Storage]]</noinclude>