Raid:Creating large raid5 array (over 2TB drive)

PythonIcon.png Skill level: Advanced
The instructions on this page may require deviations from standard procedures. A good understanding of Linux and Koozali SME Server is recommended.


This howto, which grew out of the initial forum post, provides one solution to the current CentOS 5 problem of creating RAID arrays from hard disk drives with capacities over 2TB.

The purpose of this HOWTO is to create a Raid5 array of greater than 7TB (19TB in this example) using SME Server 8.0. It is intended for a clean installation.

Creating Large Raid5 Array using 4TB drives

Important.png Note:
Due to a limitation in kernel 2.6.18, the default kernel of CentOS 5 and SME Server 8.0, you cannot create Raid5 arrays from drives with a capacity of more than 2TB. This means the largest array possible with a standard SME Server 8.0 install is limited to 7.2TB. If you have an existing 7.2TB array and need more space, you can grow your existing array by following the Howto.


In the howto below we will use 6x 4TB drives to create a new Raid5 array of 19TB in capacity. Below are the basic hardware details of the computer that was used; these are only a guide.

 - Large case capable of fitting 12 drives - Gigabyte 3D Aurora
 - Main board with 6 SATA 3 ports on board - Gigabyte GA-970A-D3
 - SATA Controller 4 SATA 3 ports on board - Rocket R640L
 - 6x 4TB Hard disk drives - 19TB Array
 - 1x 500GB Hard disk drive - SME Server 8.0 operating system

Before starting the howto below you should have installed SME Server 8.0 on the computer and have installed all updates as per a normal server installation (see the example update sequence below). Leave the 6x 4TB drives unplugged until you begin the howto below.
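If you have not already done so, the usual SME Server update sequence, run as root, looks roughly like this:

yum update
# then apply any configuration changes and reboot
signal-event post-upgrade; signal-event reboot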


Warning.png Warning:
This howto is intended as a guide for setting up a new server which has no existing data. Do NOT run this on a live server without a backup of that system.


Installing required tools

You will need to install some additional tools.

  • parted: Parted is a GNU utility used to manipulate hard disk partitions. With parted you can add, delete, and edit partitions and the file systems located on those partitions.
  • xfsprogs: Utilities for managing the XFS filesystem, a set of commands including mkfs.xfs. XFS is a high performance journaling filesystem which originated on the SGI IRIX platform. It is completely multi-threaded, supports large files and large filesystems, extended attributes and variable block sizes, is extent based, and makes extensive use of B-trees (directories, extents, free space) to aid both performance and scalability.

Install the required tools using the yum commands below.

yum install parted
yum install xfsprogs

It is usually a good idea, though not required, to then run:

signal-event post-upgrade; signal-event reboot

Partitioning the drives

Now we need to create the partition for the drives using the parted partitioning tool.

parted /dev/sdX
mklabel gpt
unit TB
mkpart primary 0.00TB 4.00TB
print

Remember to set the max size (4.00TB above) to whatever size hard disk drives you are using. You will also need to change the sdX device value for each drive in the chain, repeating the steps above for every drive (see the sketch below).
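As a sketch, the same partitioning can be scripted for all six drives. The device names below are assumptions taken from the array command further down; confirm yours first with cat /proc/partitions before running anything.

# Device names are examples only -- verify with "cat /proc/partitions" first
for d in sda sdb sdc sdd hda hdb; do
    parted --script /dev/$d mklabel gpt
    parted --script /dev/$d unit TB mkpart primary 0.00TB 4.00TB
    parted --script /dev/$d print
done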

Creating the array

Check which md numbers are already in use by running the command cat /proc/mdstat, select the highest md number and add one; in this howto we will use md3.

mdadm --create /dev/md3  --level=raid5 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/hda1 /dev/hdb1

Change the md3 number to the one you have selected. You will also need to change the number of raid devices and list each of the devices to use in the array.
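For instance, a purely illustrative 4-drive variant on different device names (adjust the md number, device count and partition list to your own layout) would look like:

# Illustrative only: a 4-drive Raid5 array on /dev/md4
mdadm --create /dev/md4 --level=raid5 --raid-devices=4 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1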


Important.png Note:
Be advised that this command returns quickly, but in the background it conducts a re-sync of the RAID array. You can monitor this by running cat /proc/mdstat. It is highly advisable to wait for the re-sync to complete before proceeding.
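For example, you can watch the re-sync progress refresh every few seconds:

# Refresh the RAID status every 5 seconds; press Ctrl+C to exit once the re-sync reaches 100%
watch -n 5 cat /proc/mdstat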


Save raid array structure

Next we need to save the array structure to the mdadm.conf file so that SME Server will continue to assemble the array when the system is rebooted.

mdadm --detail --scan | grep md3 >> /etc/mdadm.conf

Once that command has run it is a good idea to look inside the file; you should see only one new line in it (see below).
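To check, view the file. The exact fields and UUID will differ on your system, but the new line should be the only one referencing /dev/md3:

cat /etc/mdadm.conf
# expect a single new line roughly of the form:
#   ARRAY /dev/md3 level=raid5 num-devices=6 UUID=<your-array-uuid>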

It is then advisable to create a copy of mdadm.conf by running cp /etc/mdadm.conf /etc/mdadm.conf.bak. Keep this file permanently: it is possible that something in the future may cause the mdadm.conf file on your SME Server to be trashed or reset, in which case you can use this backup file to restore your RAID information.
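The xfsprogs package was installed earlier so that the new array can be formatted with XFS. A minimal sketch, assuming the array is /dev/md3 and using a hypothetical mount point /mnt/data, might look like this; adjust the names to your system:

# Format the finished array with XFS (this destroys any data on /dev/md3)
mkfs.xfs /dev/md3

# Create a mount point and mount the array (/mnt/data is only an example path)
mkdir -p /mnt/data
mount /dev/md3 /mnt/data

# Optionally make the mount persistent across reboots
echo "/dev/md3  /mnt/data  xfs  defaults  0 0" >> /etc/fstab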



For example, when adding a new drive to an existing array you can copy the partition layout from one of the existing drives to the new drive with sfdisk:

sfdisk -d /dev/sda > sfdisk_sda.output
sfdisk -f /dev/sde < sfdisk_sda.output

If you get errors from the sfdisk command, you can clean the drive with the dd command.

Warning.png Warning:
Be aware that dd is sometimes called the data destroyer; be certain of the device you want zeroed.


dd if=/dev/zero of=/dev/sdX bs=512 count=1


Adding partitions

Important.png Note:
The process can take many hours or even days. There is a critical section at the start which cannot be backed up. To allow recovery after an unexpected power failure, the additional option --backup-file= can be specified. Make sure this file is on a different disk, or it defeats the purpose.
mdadm --grow --raid-devices=5 --backup-file=/root/grow_md1.bak /dev/md1
mdadm --grow --raid-devices=4 --backup-file=/root/grow_md2.bak /dev/md2


Now we need to add the first partition /dev/sde1 to /dev/md1

[root@smeraid5 ~]# mdadm --add /dev/md1 /dev/sde1
mdadm: added /dev/sde1
[root@smeraid5 ~]# mdadm --grow --raid-devices=5 /dev/md1

Here we use the option --raid-devices=5 because the raid1 array uses all of the drives. You can see how the array looks with:

Warning.png Warning:
During the RAID growing step do NOT shut down your computer. A shutdown or an electrical failure at this point can leave your system in a bad state and you can lose your data.


[root@smeraid5 ~]# mdadm --detail /dev/md1
/dev/md1:
       Version : 0.90
 Creation Time : Tue Oct 29 21:04:15 2013
    Raid Level : raid1
    Array Size : 104320 (101.89 MiB 106.82 MB)
 Used Dev Size : 104320 (101.89 MiB 106.82 MB)
  Raid Devices : 5
 Total Devices : 5
Preferred Minor : 1
   Persistence : Superblock is persistent

   Update Time : Tue Oct 29 21:39:00 2013
         State : clean
Active Devices : 5
Working Devices : 5
Failed Devices : 0
 Spare Devices : 0

          UUID : 15eb70b1:3d0293bb:f3c49d70:6fc5aa4d
        Events : 0.4

   Number   Major   Minor   RaidDevice State
      0       8        1        0      active sync   /dev/sda1
      1       8       17        1      active sync   /dev/sdb1
      2       8       33        2      active sync   /dev/sdc1
      3       8       49        3      active sync   /dev/sdd1
      4       8       65        4      active sync   /dev/sde1

After that we have to do the same thing with md2, which is a raid5 array.

[root@smeraid5 ~]# mdadm --add /dev/md2 /dev/sde2
mdadm: added /dev/sde2
[root@smeraid5 ~]# mdadm --grow --raid-devices=4 /dev/md2
mdadm: Need to backup 14336K of critical section..
mdadm: ... critical section passed.


Information.png Tip:
You need to keep --raid-devices=4 if you want an array of 4 drives + 1 spare. However, if you do not want a spare drive, you should set --raid-devices=5. This command can also be used to grow a raid array onto the spare drive; simply tell mdadm that you want to use all the disks connected to the computer (see the example below).
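As a hedged example of the no-spare case, reusing the device and backup-file names from the earlier commands:

# Hypothetical: grow md2 across all five devices, leaving no spare
mdadm --grow --raid-devices=5 --backup-file=/root/grow_md2.bak /dev/md2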



Warning.png Warning:
During the RAID growing step do NOT shut down your computer. A shutdown or an electrical failure at this point can leave your system in a bad state and you can lose your data.


We can take a look at the md2 array:

[root@smeraid5 ~]# mdadm --detail /dev/md2
/dev/md2:
       Version : 0.90
 Creation Time : Tue Oct 29 21:04:28 2013
    Raid Level : raid5
    Array Size : 32644096 (30.28 GiB 31.39 GB)
 Used Dev Size : 7377728 (7.90 GiB 9.63 GB)
  Raid Devices : 4
 Total Devices : 5
Preferred Minor : 2
   Persistence : Superblock is persistent

   Update Time : Tue Oct 29 21:39:29 2013
         State : clean
Active Devices : 4
Working Devices : 5
Failed Devices : 0
 Spare Devices : 1

        Layout : left-symmetric
    Chunk Size : 256K

          UUID : d2c26bed:b5251648:509041c5:fab64ab4
        Events : 0.462

   Number   Major   Minor   RaidDevice State
      0       8        2        0      active sync   /dev/sda2
      1       8       18        1      active sync   /dev/sdb2
      3       8       34        2      active sync   /dev/sdc2
      4       8       50        3      active sync   /dev/sde2

      2       8      114        -      spare   /dev/sdd2

LVM: Growing the PV

Important.png Note:
Once the construction is complete, we have to tell LVM to use the whole space.


  • In a root terminal, issue the following command lines
[root@smeraid5 ~]# pvresize /dev/md2
 Physical volume "/dev/md2" changed
 1 physical volume(s) resized / 0 physical volume(s) not resized
  • After that we can resize the logical volume
[root@smeraid5 ~]# lvresize -l +100%FREE  /dev/main/root
 Extending logical volume root to 30,25 GB
 Logical volume root successfully resized


Information.png Tip:
/dev/main/root is the default name, but if you have changed this you can find it by typing the command : lvdisplay
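For example, to list just the logical volume names:

# Show the LV names so you can confirm the path to use in the lvresize and resize2fs commands
lvdisplay | grep "LV Name"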


[root@smeraid5 ~]# resize2fs  /dev/main/root
resize2fs 1.39 (29-May-2006)
Filesystem at /dev/main/root is mounted on /; on-line resizing required
Performing an on-line resize of /dev/main/root to 19726336 (4k) blocks.
  • You should verify that your LVM uses the whole drive space with the command
[root@smeraid5 ~]# pvdisplay
 --- Physical volume ---
 PV Name               /dev/md2
 VG Name               main
 PV Size               30.25 GB / not usable 8,81 MB
 Allocatable           yes (but full)
 PE Size (KByte)       32768
 Total PE              1533
 Free PE               0
 Allocated PE          1533
 PV UUID               a31UBW-2SN6-CXFk-qLOZ-qrsQ-BIYo-nZexXo

If you see that you have no more free PE, you are the king of raid. You can also check with the command:

[root@smeraid5 ~]# lvdisplay