{{Warning box| Please read this article before buying and deploying drives: https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

The new SMR type of drive is NOT suitable for RAID arrays. Beware of WD Red NAS drives, though recently the manufacturer has made it clearer which models use SMR.

'''A drive failure can corrupt an entire array: RAID does not replace backup!'''}}

{{Note box| SME Server RAID options are largely automated, but even with the best laid plans things don't always go according to plan. See also: [[Raid:Manual Rebuild]], [[Raid:Growing]] and [[Hard Disk Partitioning]]. There is a wiki on Linux software RAID; you will find many [https://raid.wiki.kernel.org/index.php/Linux_Raid cool tips here]. }}
    
===Hard Drives===
The root and swap volumes are configured using LVM on the RAID device /dev/md1 as follows:
*1 drive - no RAID
*2 drives - RAID 1
*3 drives - RAID 1 + hot spare
*4 drives - RAID 6
*5+ drives - RAID 6 + hot spare
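To see which layout the installer actually created on a running system, you can inspect the arrays, for example:
 cat /proc/mdstat
 lsblk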
 
The /boot volume, and the EFI partition if needed, are always on a non-LVM RAID 1 array on the device /dev/md0.
'''Note:''' RAID is a convenient method of protecting server availability from a drive failure. It does not remove the need for regular backups, which can be configured using the Server Manager.

===Disk Layout===
Mirroring drives on the same IDE channel (e.g. hda and hdb) is not desirable. If that channel goes out, you may lose both drives. Performance will also suffer slightly.
    
The preferred method is to use the primary location on each IDE channel (e.g. hda and hdc). This will ensure that if you lose one channel, the other will still operate. It will also give you the best performance.
For servers which were installed with 2+ drives and have a working RAID array, it is possible to add an additional drive which will become a hot spare, ready to be activated in case of drive failure.

See also [[AddExtraHardDisk]]. It is an alternative for part of the data if you have only one drive and want to use RAID 1, but a better solution is to reinstall SME10 with 2 drives.
    
'''Ensure that any new drives are the same size or larger than your existing drives.'''
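On SME Server, the supported way to add the spare is the "Manage disk redundancy" option in the server console. For reference only, a rough sketch of the underlying operations on an MBR disk layout (the device names /dev/sda and /dev/sdc and the partition numbers are placeholders, not your actual layout):
 sfdisk -d /dev/sda | sfdisk /dev/sdc
 mdadm --add /dev/md0 /dev/sdc1
 mdadm --add /dev/md1 /dev/sdc2
Because the arrays are already complete, the newly added device becomes a hot spare.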
====Reusing Hard Drives====

*MBR formatted disks

If the drive was ever installed in a Windows machine, or in any of the *BSDs (or in some cases an old system with RAID and/or LVM), then you will need to clear the MBR first before installing it.
    
From the Linux command prompt, type the following:
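A typical way to zero out the MBR/partition table area is a short dd write (a sketch only; /dev/sdx is a placeholder, double-check the device name with lsblk first, as this is destructive):
 dd if=/dev/zero of=/dev/sdx bs=512 count=1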
For more information, check: http://bugs.contribs.org/show_bug.cgi?id=2154

*For disks previously formatted as GPT this is insufficient. It is probably best to use gdisk, parted or partx to delete the partitions; there are other tools that will work. Parted has limited support for LVM.
*To remove a (hardware) RAID configuration that is stored at the end of the drive, do:
 dd if=/dev/zero of=/dev/sdx bs=512 count=2048 seek=$((`blockdev --getsz /dev/sdx` - 2048))
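For example, one way to wipe all GPT and MBR structures in one go is gdisk's sgdisk tool (again, /dev/sdx is a placeholder and the command is destructive):
 sgdisk --zap-all /dev/sdx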
    
====Upgrading the Hard Drive Size====
  resize2fs /dev/md2 &

====Replacing and Upgrading a Hard Drive after an HD failure====
    
Note: See [[Bugzilla: 6632]] and [[Bugzilla:6630]]; a suggested sequence for upgrading a hard drive size is detailed there, following an issue when attempting to sync a new drive that was added first as sda.
  resize2fs /dev/md2 &

====RAID Notes====
Many on-board hardware RAID cards are in fact software RAID. Turn it off, as cheap "fakeraid" cards aren't good for Linux. You will generally get better performance and reliability with Linux software RAID (http://linux-ata.org/faq-sata-raid.html). Linux software RAID is fast and robust.

If you are insistent on getting hardware RAID, buy a well supported RAID card which has a proper RAID BIOS. This hides the disks and presents a single disk to Linux (http://linuxmafia.com/faq/Hardware/sata.html). Please check that it is supported by the kernel and has some form of management, and avoid anything which requires a driver. Try googling for the exact model of RAID controller before buying it. Please note that you won't get a real hardware RAID controller cheap.
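One quick way to see which storage controller the kernel actually detects (often enough to spot a "fakeraid" chip) is, for example:
 lspci | grep -i -E 'raid|sata'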
    
It rarely happens, but sometimes when a device has finished rebuilding,
These operations are logged; however, no emails will be sent to admin as of the release of the packages associated with Bug #6160 or the release of the 8.1 ISO.
====Receive periodic check of RAID by email====

There are routines in SME Server to check the RAID and send mail to the admin user when the RAID is degraded or when the RAID is resynchronizing. But the admin user may receive a lot of emails and sometimes messages can be forgotten.
So the purpose is to have a routine which sends an email to the user of your choice each week.
    
  nano /etc/cron.weekly/raid-status.sh
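A minimal sketch of such a script, assuming you just want the content of /proc/mdstat mailed once a week to a hypothetical address admin@example.com (adapt the recipient and add any extra checks you like):
 #!/bin/bash
 # weekly RAID status report - the recipient address is only an example
 /bin/cat /proc/mdstat | /bin/mail -s "RAID status on $(hostname)" admin@example.com
Remember to make the script executable with chmod +x /etc/cron.weekly/raid-status.sh.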
'''Note:''' with the release of versions 7.6 and 8.0, the command line parameter "sme nospare" has been changed to "sme spares=0". In addition, you may also select the number of spares implemented [0, 1, or 2].

====Remove the degraded RAID message====
When you install SME Server with only one drive, giving a degraded RAID, you will see a 'U_' state but without warnings. If you want to leave just one 'U' in /proc/mdstat and stop all future questions about your degraded RAID state, then:

  mdadm --grow /dev/md0 --force --raid-devices=1
  mdadm --grow /dev/md1 --force --raid-devices=1
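Afterwards, each array should report a single member; you can confirm this with, for example:
  cat /proc/mdstat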

Login as root, type console. Select Item 5, "Manage disk redundancy".
  <nowiki>--------Disk Redundancy status as of Thursday Dec 22 -------
  Current RAID status:
  
  Personalities : [raid1]
  md2 : active raid1 hda2[0]              <-- NOTICE hdb2[#] is missing. Means hdb2[#] failed.
        38973568 blocks [2/1] [U_]
  
  md1 : active raid1 hda1[0] hdb1[1]
        104320 blocks [2/2] [UU]
  
  unused devices: <none>
  Only Some of the RAID devices are unclean.  <-- NOTICE this message and
  Manual intervention may be required.</nowiki>  <-- this message.
 
Notice the last two sentences of the window above: you have some problems. <br>

If your system is healthy, however, the message you will see at the bottom of the RAID console window is:
#More info: http://www.arkf.net/blog/?p=47
== Add another RAID to mount to /home/e-smith/files ==
This is inspired by previous content of [[AddExtraHardDisk]], and particularly the part [[AddExtraHardDisk#Additional steps to create a raid array from multiple disks]], but updated to 2022 and SME10.

First, check which disks you want to use, using lsblk:<syntaxhighlight lang="bash">
# lsblk --fs
NAME  FSTYPE            LABEL                    UUID                                MOUNTPOINT
sda                                                                                   
├─sda1 vfat                                      B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                      64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb   
sdc   
</syntaxhighlight>then you can create the RAID array. We assume you only need one RAID partition, and hence do not need to partition the disks.<syntaxhighlight lang="bash">
#create array
mdadm --create --verbose /dev/md11 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# add to mdadm.conf
mdadm --detail --scan --verbose /dev/md11 >> /etc/mdadm.conf
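# optionally, watch the initial resync before continuing (read-only checks)
cat /proc/mdstat
mdadm --detail /dev/md11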
</syntaxhighlight>then format it and enable quotas (the quota options go into fstab later). If you want to add an LVM layer, do it just before this step!<syntaxhighlight lang="bash">
mkfs.xfs /dev/md11
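# note: if you would rather add an LVM layer on top of the array instead of formatting
# /dev/md11 directly, a rough sketch (the names 'vgfiles' and 'lvfiles' are examples only):
#   pvcreate /dev/md11
#   vgcreate vgfiles /dev/md11
#   lvcreate -n lvfiles -l 100%FREE vgfiles
#   mkfs.xfs /dev/vgfiles/lvfiles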
</syntaxhighlight>now you have<syntaxhighlight lang="bash">
# lsblk --fs
NAME  FSTYPE            LABEL                    UUID                                MOUNTPOINT
sda                                                                                   
├─sda1 vfat                                      B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                      64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8
sdc    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8
</syntaxhighlight>then you need to mount it temporarily to move your content<syntaxhighlight lang="bash">
mkdir /mnt/newdisk
mount /dev/md11 /mnt/newdisk
rsync -arv /home/e-smith/files/ /mnt/newdisk
</syntaxhighlight>When happy with the result, simply add an entry to your fstab; according to the last lsblk output above, in this case you should add<syntaxhighlight lang="bash">
UUID=0ab4fe2a-aa81-4728-90d8-2f96d4624af8 /home/e-smith/files            xfs    uquota,gquota        0 0
</syntaxhighlight>To have the disk mounted on reboot, you also need to alter GRUB<syntaxhighlight lang="bash">
vim /etc/default/grub
</syntaxhighlight>and alter the kernel command line to add either "rd.md=1 rd.md.conf=1 rd.auto=1" or specifically the UUID to mount (obviously, if you add an LVM layer you will instead need to add something like rd.lvm.lv=mylvm/video rd.lvm.lv=mylvm/files)<syntaxhighlight lang="bash" line="1">
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="gfxterm"
GRUB_CMDLINE_LINUX="rhgb quiet rootflags=uquota,pquota rd.md=1 rd.md.conf=1 rd.auto=1"
GRUB_DISABLE_RECOVERY="false"
GRUB_BACKGROUND="/boot/grub2/smeserver10.png"
GRUB_GFXMODE="1024x768"
GRUB_THEME="/boot/grub2/themes/koozali/theme.txt"
</syntaxhighlight>then rebuild grub.cfg; depending on whether your system boots via EFI or legacy BIOS, use the appropriate command:
 #EFI
 grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
 #Legacy
 grub2-mkconfig -o /boot/grub2/grub.cfg

Then you need to make sure dracut will add the drivers<syntaxhighlight lang="bash">
vim /etc/dracut.conf
</syntaxhighlight>and alter the lines needed (you will probably need to uncomment the add_dracutmodules line and add mdraid between the quotes)<syntaxhighlight lang="bash" line="1" start="19">
# dracut modules to add to the default
add_dracutmodules+="lvm mdraid"

# install local /etc/mdadm.conf
mdadmconf="yes"

# install local /etc/lvm/lvm.conf
lvmconf="yes"
</syntaxhighlight>Finally, rebuild the initramfs<syntaxhighlight lang="bash">
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
dracut --add="lvm mdraid" /boot/initramfs-$(uname -r).img $(uname -r) --force
</syntaxhighlight>
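You can check that the rebuilt image actually contains the mdraid module, for example:<syntaxhighlight lang="bash">
lsinitrd /boot/initramfs-$(uname -r).img | grep -i mdraid
</syntaxhighlight>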
== Copy data from one disk of an old RAID mirror ==
Let's say you excluded a huge amount of data from the backup used to migrate from SME9 to SME10, and you now want to copy that data to your new server.

This How-To assumes your current install is without LVM. An extra trick is needed if you have LVM and the previous SME9 had it too: you simply need to rename the VG group of either the old SME or the new one, using a rescue disk or another Linux distro; see [[UpgradeDisk#Moving from SME 8.x to SME 9.x]].

#put one of the old drives in the server, or in an external case, and connect it
#use lsblk to identify the drive
#adapt the following commands
<syntaxhighlight lang="bash">
# lsblk
sdd        8:48  0 931,5G  0 disk 
├─sdd1      8:49  0  250M  0 part 
└─sdd2      8:50  0 931,3G  0 part 
</syntaxhighlight>We assume that sdd1 was the boot partition and the data we want is on sdd2<syntaxhighlight lang="bash">
#assemble the array and run it degraded
mdadm -A /dev/md126 /dev/sdd2 --run
</syntaxhighlight>now let's try to mount it; this will only work if you had no LVM, otherwise it will return this<syntaxhighlight lang="bash">
# mkdir /mnt/olddisk/
# mount /dev/md126 /mnt/olddisk/
mount: unknown filesystem type 'LVM2_member'
</syntaxhighlight>You can skip this step if you did not get the LVM error. Otherwise we need to activate the LVM, and you may also need to install the LVM tools first...<syntaxhighlight lang="bash">
# yum install lvm2 -y
vgchange -a y main
mount /dev/mapper/main-root  /mnt/olddisk/
</syntaxhighlight>It is now time to copy your data.<syntaxhighlight lang="bash">
rsync -arvHAX  /mnt/olddisk/home/e-smith/files/ /home/e-smith/files
</syntaxhighlight>then, to safely remove your disk<syntaxhighlight lang="bash">
umount /dev/mapper/main-root
vgchange -a n main
mdadm --stop /dev/md126
</syntaxhighlight>
 
----

<noinclude>