Raid:Creating large raid5 array (over 2TB drive)
This howto grew out of the initial forum post and provides one solution to overcoming current CentOS 5 problems with creating RAID arrays on hard disk drives larger than 2TB.
The purpose of this HOWTO is to create a Raid5 array larger than 7TB (19TB in this example) using SME Server 8.0. It is intended for a clean installation.
Creating Large Raid5 Array using 4TB drives
In the howto below we will use 6x 4TB drives to create a new Raid5 array of 19TB in capacity. The basic hardware details of the computer used are listed below; they are only a guide.
- Large case capable of fitting 12 drives - Gigabyte 3D Aurora
- Main board with 6 SATA 3 ports on board - Gigabyte GA-970A-D3
- SATA controller with 4 SATA 3 ports on board - Rocket R640L
- 6x 4TB Hard disk drives - 19TB Array
- 1x 500MB Hard disk drive - SME Server 8.0 operating system
Before starting the howto below, you should have installed SME Server 8.0 on the computer and have run and installed all updates as per a normal server installation. Leave the 6x 4TB drives unplugged until you begin the steps below.
Installing required tools
You will need to install some additional tools:
- parted: Parted is a GNU utility used to manipulate hard disk partitions. Using parted, you can add, delete, and edit partitions and the file systems located on those partitions.
- xfsprogs: Utilities for managing the XFS filesystem; a set of commands to use the XFS filesystem, including mkfs.xfs. XFS is a high performance journaling filesystem which originated on the SGI IRIX platform. It is completely multi-threaded, supports large files and large filesystems, extended attributes and variable block sizes, is extent based, and makes extensive use of Btrees (directories, extents, free space) to aid both performance and scalability.
Install the required tools using the Yum commands below.
yum install parted
yum install xfsprogs
It's usually a good idea to run the following, but it is not required:
signal-event post-upgrade; signal-event reboot
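xfsprogs itself is not needed until the large array exists. As a rough sketch of how mkfs.xfs would typically be used once the array in this howto has been built (the device name /dev/md3 matches the array created below; the mount point /mnt/data is only an example):

mkfs.xfs /dev/md3                 # create an XFS filesystem on the finished array
mkdir -p /mnt/data                # example mount point, adjust to suit
mount -t xfs /dev/md3 /mnt/data   # mount the new filesystem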
Partitioning the drives
Now we need to create the partition on each of the drives using the parted partitioning tool. Run parted against the drive and enter the following commands at the (parted) prompt:
parted /dev/sdX
mklabel gpt
unit TB
mkpart primary 0.00TB 4.00TB
print
quit
Remember to set the max size (4.00TB above) to match the size of the hard disk drives you are using. You will also need to change the sdX device value for each drive in the chain.
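If you prefer to script this step rather than running parted interactively on each drive, a loop along the following lines can be used. This is only a sketch: the device names /dev/sdb through /dev/sdg are assumptions and must be changed to match your drives, and the 4.00TB end point must match your drive size.

# Assumed device names; adjust this list to the six 4TB drives in your system.
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
    parted --script "$d" mklabel gpt unit TB mkpart primary 0.00TB 4.00TB print
done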
Creating the array
Check which md numbers are already in use by running the command cat /proc/mdstat, then select the highest md number in use and add one; in this howto we will use md3.
mdadm --create /dev/md3 --level=raid5 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/hda1 /dev/hdb1
Change md3 to the number you have selected. You will also need to change the number of raid devices and list each of the devices to be used in the array.
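The initial build of a raid5 array across six 4TB drives can take many hours. You can monitor the progress at any time with /proc/mdstat, for example:

cat /proc/mdstat                 # shows all md arrays and the rebuild/resync progress
watch -n 60 cat /proc/mdstat     # refreshes the view every 60 seconds; press Ctrl+C to exit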
Saving the raid array structure
Next we need to save the array structure to the mdadm.conf file so that SME Server continues to work with the array after a reboot of the system.
mdadm --detail --scan | grep md3 >> /etc/mdadm.conf
Once that command has run it is a good idea to look inside /etc/mdadm.conf; you should see only one new line in that file.
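The exact line depends on your mdadm version and array, but it will typically look something like this (the UUID shown here is a placeholder; yours will differ):

ARRAY /dev/md3 level=raid5 num-devices=6 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx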
It is then advisable to create a copy of mdadm.conf by running the following:
cp /etc/mdadm.conf /etc/mdadm.conf.bak
Keep this file permanently; it is possible that in the future something may occur on your SME Server to cause the mdadm.conf file to be trashed or reset, in which case you can use this backup file to restore your raid information.
For example, you can partition a new drive by copying the partition table from an existing drive:
sfdisk -d /dev/sda > sfdisk_sda.output
sfdisk -f /dev/sde < sfdisk_sda.output
If you have errors using the sfdisk command, you can clean the drive with the dd command.
# dd if=/dev/zero of=/dev/sdX bs=512 count=1
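As a quick check after copying the partition table, you can list the partitions on both drives and confirm the layouts match, for example:

sfdisk -l /dev/sda    # partition table of the existing drive
sfdisk -l /dev/sde    # partition table of the new drive; it should match the one above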
Adding partitions
Now we need to add the first partition, /dev/sde1, to /dev/md1:
[root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
mdadm: added /dev/sde1
[root@smeraid5 ~]# mdadm --grow --raid-devices=5 /dev/md1
Here we use the option --raid-devices=5 because the raid1 array spans all of the drives. You can see how the array looks with:
[root@smeraid5 ~]# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:15 2013
     Raid Level : raid1
     Array Size : 104320 (101.89 MiB 106.82 MB)
  Used Dev Size : 104320 (101.89 MiB 106.82 MB)
   Raid Devices : 5
  Total Devices : 5
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:00 2013
          State : clean
 Active Devices : 5
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 0

           UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
         Events : 0.4

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       4       8       65        4      active sync   /dev/sde1
After that we have to do the same thing with md2, which is a raid5 array.
[root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
mdadm: added /dev/sde2
[root@smeraid5 ~]# mdadm --grow --raid-devices=4 /dev/md2
mdadm: Need to backup 14336K of critical section..
mdadm: ... critical section passed.
We can take a look at the md2 array:
[root@smeraid5 ~]# mdadm --detail /dev/md2
/dev/md2:
        Version : 0.90
  Creation Time : Tue Oct 29 21:04:28 2013
     Raid Level : raid5
     Array Size : 32644096 (30.28 GiB 31.39 GB)
  Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
   Raid Devices : 4
  Total Devices : 5
Preferred Minor : 2
    Persistence : Superblock is persistent

    Update Time : Tue Oct 29 21:39:29 2013
          State : clean
 Active Devices : 4
Working Devices : 5
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 256K

           UUID : d2c26bed:b5251648:509041c5:fab64ab4
         Events : 0.462

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       3       8       34        2      active sync   /dev/sdc2
       4       8       50        3      active sync   /dev/sde2

       2       8      114        -      spare        /dev/sdd2
LVM: Growing the PV
- In a root terminal, issue the following commands:
[root@smeraid5 ~]# pvresize /dev/md2
  Physical volume "/dev/md2" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
- After that we can resize the logical volume:
[root@smeraid5 ~]# lvresize -l +100%FREE /dev/main/root
  Extending logical volume root to 30,25 GB
  Logical volume root successfully resized
[root@smeraid5 ~]# resize2fs /dev/main/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/main/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
- You should verify that your LVM uses the whole drive space with the command:
[root@smeraid5 ~]# pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               main
  PV Size               30.25 GB / not usable 8,81 MB
  Allocatable           yes (but full)
  PE Size (KByte)       32768
  Total PE              1533
  Free PE               0
  Allocated PE          1533
  PV UUID               a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo
If you can see that you have no more Free PE, you are the king of raid. You can also check with the command:
[root@smeraid5 ~]# lvdisplay
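Once the logical volume and filesystem have been resized, a simple final check is to look at the mounted root filesystem, for example:

df -h /    # the reported size should now reflect the grown array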