Raid:Creating large raid5 array (over 2TB drive)



Skill level: Advanced
The instructions on this page may require deviations from standard procedures. A good understanding of Linux and Koozali SME Server is recommended.


This howto is based on the initial forum post (http://forums.contribs.org/index.php/topic,50311.0) and provides one solution to overcoming current CentOS 5 problems with creating RAID arrays from hard disk drives with capacities over 2TB in size.

The purpose of this HOWTO is to create a Raid5 array of greater than 7TB (19TB) using SME Server 8.0; it is intended for a clean installation.

Creating Large Raid5 Array using 4TB drives

Note:
Due to a limitation in kernel 2.6.18, the default kernel of CentOS 5 and SME Server 8.0, you cannot create Raid5 arrays from drives with a capacity of more than 2TB. This means that the largest array size using the standard SME Server 8.0 install is limited to 7.2TB. If you have an existing 7.2TB array and need more space, you can grow your existing array by following the Raid:Growing Howto.


In the howto below we will be using 6x 4TB drives to create a new Raid5 array of 19TB in capacity. Below are the basic hardware details of the computer that was used; these are only a guide.

 - Large case capable of fitting 12 drives - Gigabyte 3D Aurora
 - Main board with 6 SATA 3 ports on board - Gigabyte GA-970A-D3
 - SATA Controller 4 SATA 3 ports on board - Rocket R640L
 - 6x 4TB Hard disk drives - 19TB Array
 - 1x 500GB Hard disk drive - SME Server 8.0 operating system

Before starting the howto below, you should have installed SME Server 8.0 on the computer and have run and installed all updates as per a normal server installation. Leave the 6x 4TB drives unplugged until you begin the howto below.
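On SME Server 8.0 that update step is typically something like the following, run from a root shell (check the official SME documentation if you are unsure):

 yum update
 signal-event post-upgrade
 signal-event reboot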


Warning:
This howto is intended as a guide for setting up a new server which has no existing data. Do NOT run this on a live server without a backup of that system.


Installing required tools

You will need to install some additional tools:

  • parted: Parted is a GNU utility used to manipulate hard disk partitions. Using parted, you can add, delete, and edit partitions and the file systems located on those partitions.
  • xfsprogs: Utilities for managing the XFS filesystem, including mkfs.xfs. XFS is a high-performance journaling filesystem which originated on the SGI IRIX platform. It is completely multi-threaded, supports large files and large filesystems, extended attributes and variable block sizes, is extent based, and makes extensive use of B-trees (directories, extents, free space) to aid both performance and scalability.

Install the required tools using the yum commands below.

 yum install parted
 yum install xfsprogs

It is usually a good idea (but not required) to also run:

 signal-event post-upgrade
 signal-event reboot

Partitioning the drives

Now we need to create the partitions on the drives using the parted partitioning tool.

parted /dev/sdX
mklabel gpt
unit TB
mkpart primary 0.00TB 4.00TB
print

Remember to set the max size (4.00TB above) to match whatever size hard disk drives you are using. You will also need to change the sdX device value and repeat these steps for each drive in the array.
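If you prefer to script the partitioning rather than run parted interactively six times, a minimal sketch follows. The device names in the loop are placeholders only; substitute the devices that actually correspond to your 4TB data drives (check cat /proc/partitions first, and make sure the operating system drive is not in the list).

 # Partition each 4TB data drive with a GPT label and one full-size partition.
 # The device names below are placeholders - adjust them to match your system.
 for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
     parted -s "$dev" mklabel gpt
     parted -s "$dev" mkpart primary 0.00TB 4.00TB
     parted -s "$dev" print
 done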

Creating the array

Check which md numbers are in use by running the command cat /proc/mdstat, select the highest md number in use and add one; we will use md3.

mdadm --create /dev/md3  --level=raid5 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/hda1 /dev/hdb1

Change the md3 number to the one you have selected; you will also need to change the number of raid devices and list each of the devices to be used in the array.


Note:
Be advised that this command will return quickly but a re-sync of the RAID array will continue to run in the background. You can monitor this re-sync by running cat /proc/mdstat. It is highly advisable to wait for the re-sync to complete before proceeding.
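To keep an eye on the re-sync, something like the following can be used (this assumes the array was created as /dev/md3 as in the example above):

 # Refresh the raid status every 10 seconds; Ctrl-C to exit.
 watch -n 10 cat /proc/mdstat
 # Show the overall state and member devices of the new array.
 mdadm --detail /dev/md3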


Save raid array structure

Next we need to save the array structure to the mdadm.conf file in order for SME Server to continue working with the array after a reboot of the system.

mdadm --detail --scan | grep md3 >> /etc/mdadm.conf

Once that command has run it is advisable to view the file; you should see only one new line in it. It is then a good idea to create a copy of mdadm.conf by running the following command:

cp /etc/mdadm.conf /etc/mdadm.conf.bak

Keep this backup permanently, as it is possible that something may occur on your SME Server in the future that causes the mdadm.conf file to be trashed or reset; in that case you can use this backup file to restore your required raid information.
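Should that ever happen, recovery is simply a matter of restoring the backup, or re-running the scan from above, for example:

 cp /etc/mdadm.conf.bak /etc/mdadm.conf
 # or rebuild the entry for the data array again:
 mdadm --detail --scan | grep md3 >> /etc/mdadm.conf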

Create your lvm partition

When using the vgcreate and lvcreate commands, choose good names for your volume group and logical volume; I have just used vg_DATA and lv_DATA.

 pvcreate /dev/md3
 vgcreate vg_DATA /dev/md3
 lvcreate --name lv_DATA -l 100%FREE vg_DATA
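To confirm that the physical volume, volume group and logical volume were created as expected (using the vg_DATA and lv_DATA names chosen above), you can run:

 pvdisplay /dev/md3
 vgdisplay vg_DATA
 lvdisplay /dev/vg_DATA/lv_DATA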


Note:
I have noticed these commands do take a while; be patient.


Format your new Partition and testing

Run the following

mkfs.xfs /dev/vg_DATA/lv_DATA

If you want to be sure everything went OK, you can run a filesystem check on the new logical volume once the format is complete.

xfs_check /dev/vg_DATA/lv_DATA
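Note that on more recent versions of xfsprogs the xfs_check command has been removed; the equivalent read-only check there is xfs_repair in no-modify mode, run against the unmounted volume:

 xfs_repair -n /dev/vg_DATA/lv_DATA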


Note:
I found that I could not use EXT3 or EXT4 based file systems due to problems with the block sizes and my 20TB setup. There may be a workaround for this, but I didn't find anything solid, so instead I decided to use the XFS file system as it does what I need it to.


Mount your new partition to a directory

Finally, open /etc/fstab and add a line at the bottom to mount the new area. Be sure to leave a newline at the end of the file, and use proper spacing.

For example, in my file I entered:

/dev/vg_DATA/lv_DATA    /TESTFOLDER    xfs    defaults    0 0
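The mount point directory must exist before the filesystem can be mounted; using the example path above, create it first:

 mkdir -p /TESTFOLDER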

You can then trigger the mount by running:

mount -a

You can also check whether it has been successfully mounted by running the command below; it should list your mount location and the size in use.

df -h
  • This setup in /etc/fstab should be maintained when updates or upgrades are conducted; however, if you want a more robust solution I would advise reading up on templates in SME Server.