Creating large raid5 array (over 2TB drive)
This howto grew out of the initial forum post; it provides one solution to overcoming current CentOS 5 problems with raiding hard disk drives with capacities over 2TB in size.
The purpose of this HOWTO is to create a RAID5 array of greater than 7TB (19TB in this case) using SME Server 8.0; this is intended as a clean installation.
Creating a large RAID5 array using 4TB drives
In the howto below we will combine 6x 4TB drives to create a new RAID5 array of 19TB in capacity. Below are the basic hardware details of the computer that was used; these are only a guide.
- Large case capable of fitting 12 drives - Gigabyte 3D Aurora
- Main board with 6 SATA 3 ports on board - Gigabyte GA-970A-D3
- SATA controller with 4 SATA 3 ports on board - Rocket R640L
- 6x 4TB hard disk drives - 19TB array
- 1x 500GB hard disk drive - SME Server 8.0 operating system
Before starting the howto below you should have installed SME Server 8.0 on the computer and have run and installed all updates, as per a normal server installation. Leave the 6x 4TB drives unplugged until you begin.
Installing required tools
You will need to install some additional tools:

- parted: Parted is a GNU utility used to manipulate hard disk partitions. Using parted, you can add, delete, and edit partitions and the file systems located on those partitions.

- xfsprogs: utilities for managing the XFS filesystem, including mkfs.xfs. XFS is a high-performance journaling filesystem which originated on the SGI IRIX platform. It is completely multi-threaded, can support large files and large filesystems, extended attributes and variable block sizes, is extent based, and makes extensive use of B-trees (directories, extents, free space) to aid both performance and scalability.
Install the required tools using the Yum commands below.
yum install parted
yum install xfsprogs

It's usually a good idea, though not required, to run the following afterwards:

signal-event post-upgrade
signal-event reboot
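If you prefer, both packages can be installed in a single step; this is simply the equivalent of the two commands above, with -y pre-answering yum's confirmation prompt:

yum -y install parted xfsprogs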
Partitioning the drives
Now we need to create a partition on each drive using the parted partitioning tool.
parted /dev/sdX
mklabel gpt
unit TB
mkpart primary 0.00TB 4.00TB
print
Remember to set the max size (4.00TB above) to whatever size hard disk drives you are using; you will also need to change the sdX device value for each drive in the chain.
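The commands above are typed at parted's interactive prompt, once per drive. With many identical drives the same steps can be scripted using parted's -s (script) flag; below is a minimal sketch assuming the six drives used in this example (adjust the device list to your own hardware; 0% / 100% simply tells parted to span the whole disk with automatic alignment):

# Create a GPT label and one full-disk partition on each drive.
for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/hda /dev/hdb; do
    parted -s "$disk" mklabel gpt
    parted -s "$disk" mkpart primary 0% 100%
done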
Creating the array
Check which md numbers are in use by running the command cat /proc/mdstat; select the highest md number in use and add one. We will use md3.
mdadm --create /dev/md3 --level=raid5 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/hda1 /dev/hdb1
Change the md3 number to the one you have selected; you will also need to change the number of raid devices and list each of the devices to use in the array.

Note: this command will return quickly but will continue to run a re-sync of the RAID array in the background. You can monitor this re-sync by running cat /proc/mdstat; it is highly advisable to wait for the re-sync to be completed before proceeding.
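If you would rather not re-run cat /proc/mdstat by hand while waiting, the standard watch utility can poll it for you (purely a convenience):

# Re-display the RAID status every 5 seconds; press Ctrl+C to exit.
watch -n 5 cat /proc/mdstat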
Save raid array structure
Next we need to save the array structure to the mdadm.conf file so that the SME Server will continue working with the array after a reboot of the system.
mdadm --detail --scan | grep md3 >> /etc/mdadm.conf
Once that command has run it is advisable to view the file; you should see only one new line in it. It is then a good idea to create a copy of mdadm.conf by running the following command:
cp /etc/mdadm.conf /etc/mdadm.conf.bak
Keep this file permanently: it is possible that something may occur on your SME Server in the future that causes the mdadm.conf file to be trashed or reset, in which case you can use this backup file to restore your required raid information.
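For reference, the line appended by the mdadm command above should look roughly like this (the exact fields depend on your mdadm version, and the UUID here is only a placeholder):

cat /etc/mdadm.conf
ARRAY /dev/md3 level=raid5 num-devices=6 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx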
Create your LVM partition

When using the vgcreate and lvcreate commands below, choose a sensible name for your volume group and logical volume; I have just used vg_DATA and lv_DATA.
pvcreate /dev/md3
vgcreate vg_DATA /dev/md3
lvcreate --name lv_DATA -l 100%FREE vg_DATA

Note: I have noticed these commands take a while; be patient.
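To confirm each LVM layer was created as expected, you can run the standard read-only reporting commands at this point:

# Display the physical volume, volume group and logical volume just created.
pvdisplay /dev/md3
vgdisplay vg_DATA
lvdisplay /dev/vg_DATA/lv_DATA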
Format and test your new partition
Run the following
mkfs.xfs /dev/vg_DATA/lv_DATA
If you want to be sure everything went OK, you can run a filesystem check on the new LVM volume once the format is complete.

xfs_check /dev/vg_DATA/lv_DATA

Note: I found that I could not use EXT3- or EXT4-based file systems due to problems with the block sizes and my 20TB setup. There may be a work-around for this, but I didn't find anything solid, so instead I decided to use the XFS file system as it does what I need it to.
Mount your new partition to a directory
Finally, open /etc/fstab and add a line at the bottom to mount the new volume; be sure to leave a newline at the end of the file, and use proper spacing.

For example, in my file I entered:

/dev/vg_DATA/lv_DATA /TESTFOLDER xfs defaults 0 0
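The mount point itself must exist before the mount will succeed; create it first, substituting whatever directory you used in your own fstab line:

# Create the directory the fstab entry mounts onto.
mkdir -p /TESTFOLDER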
You can trigger a mount of everything listed in fstab by running:

mount -a
You can also easily check whether it has been successfully mounted by running the command below; it should list your mount location and the size in use.
df -h
- This setup in /etc/fstab should be maintained when updates or upgrades are conducted; however, if you want a more robust solution I would advise reading up on templates in SME Server, as sketched below.
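As a purely hypothetical sketch of that template approach (verify the paths, and whether /etc/fstab is actually template-managed, against the SME Server documentation before relying on this): SME Server's e-smith system rebuilds many configuration files from template fragments, and custom fragments placed under /etc/e-smith/templates-custom/ survive those rebuilds.

# Hypothetical: add a custom fstab template fragment, then regenerate the file.
mkdir -p /etc/e-smith/templates-custom/etc/fstab
echo '/dev/vg_DATA/lv_DATA /TESTFOLDER xfs defaults 0 0' > /etc/e-smith/templates-custom/etc/fstab/90lv_DATA
expand-template /etc/fstab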