For servers which were installed with 2+ drives and have a working RAID array, it is possible to add an additional drive which will become a hot spare, ready to be activated in case of drive failure.
 
See also [[AddExtraHardDisk]]. It's an alternative solution for part of the data if you have only one drive and want to use RAID1, but a better solution is to reinstall SME10 with 2 drives.
    
'''Ensure that any new drives are the same size or larger than your existing drives.'''
 
For more information, check: http://bugs.contribs.org/show_bug.cgi?id=2154
 
*For disks previously formatted as GPT this is insufficient. It's probably best to use gdisk, parted, or partx to delete the partitions; other tools will also work. Parted has limited support for LVM.
*To remove the (hardware) RAID configuration that is stored at the end of the drive, do (an alternative one-shot sketch follows below):
#dd if=/dev/zero of=/dev/sdx bs=512 count=2048 seek=$((`blockdev --getsz /dev/sdx` - 2048))
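As a one-shot alternative (a minimal sketch; wipefs is part of util-linux and is not mentioned in the original text), you can let wipefs detect and erase all known filesystem and RAID signatures on the drive:<syntaxhighlight lang="bash">
# print the signatures wipefs can see, without touching anything
wipefs /dev/sdx
# erase all detected signatures (destructive; double-check the device name!)
wipefs -a /dev/sdx
</syntaxhighlight>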
    
====Upgrading the Hard Drive Size====
 
#More info: http://www.arkf.net/blog/?p=47
 
== Add another Raid to mount to /home/e-smith/files ==
This is inspired by previous content of [[AddExtraHardDisk]], particularly [[AddExtraHardDisk#Additional steps to create a raid array from multiple disks]], but updated for 2022 and SME10.

First, you need to check which disks you want to use, using lsblk:<syntaxhighlight lang="bash">
# lsblk --fs
NAME   FSTYPE            LABEL                    UUID                                 MOUNTPOINT
sda
├─sda1 vfat                                       B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                       64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb
sdc
</syntaxhighlight>then you can create the Raid array. We assume you only need one Raid partition, and hence do not need to partition the disks.<syntaxhighlight lang="bash">
#create array
mdadm --create --verbose /dev/md11 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
# add to mdadm.conf
mdadm --detail --scan --verbose /dev/md11 >> /etc/mdadm.conf
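# optional check (not part of the original steps): watch the initial mirror sync
cat /proc/mdstat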
</syntaxhighlight>then format it; quotas are enabled later via the fstab mount options. If you want to add an LVM layer, do it just before this step (a hypothetical sketch is included in the next code block)!<syntaxhighlight lang="bash">
mkfs.xfs /dev/md11
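# (hypothetical sketch, not in the original how-to) if you prefer an LVM layer,
# create it on top of md11 before formatting, then format the LV instead:
#   pvcreate /dev/md11
#   vgcreate mylvm /dev/md11
#   lvcreate -n files -l 100%FREE mylvm
#   mkfs.xfs /dev/mylvm/files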
</syntaxhighlight>now you have<syntaxhighlight lang="bash">
# lsblk --fs
NAME   FSTYPE            LABEL                    UUID                                 MOUNTPOINT
sda
├─sda1 vfat                                       B93A-85A4                            /boot/efi
├─sda2 xfs                                        89e9cc9e-d3d2-4d02-bad5-2698aea0a510 /boot
├─sda3 swap                                       64d21f89-4d7c-417a-907e-34236f6cd0be [SWAP]
└─sda4 xfs                                        65bf712c-2186-4524-aae8-edd8151de1e7 /
sdb    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8
sdc    linux_raid_member localhost.localdomain:11 6c4b9640-7349-15fe-28b1-78843d9a149a
└─md11 xfs                                        0ab4fe2a-aa81-4728-90d8-2f96d4624af8
</syntaxhighlight>then you need to mount it temporarily to move your content:<syntaxhighlight lang="bash">
mkdir /mnt/newdisk
mount /dev/md11 /mnt/newdisk
rsync -arv /home/e-smith/files/ /mnt/newdisk
</syntaxhighlight>When happy with the result, simply add an entry to your fstab. According to the last lsblk output, in this case you should add:<syntaxhighlight lang="bash">
UUID=0ab4fe2a-aa81-4728-90d8-2f96d4624af8 /home/e-smith/files            xfs    uquota,gquota        0 0
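# optional check (not in the original steps): unmount the temporary mountpoint
# and make sure the new fstab entry mounts cleanly:
#   umount /mnt/newdisk && mount /home/e-smith/files && findmnt /home/e-smith/files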
</syntaxhighlight>To have the array assembled and the disk mounted on reboot, you need to alter grub:<syntaxhighlight lang="bash">
vim /etc/default/grub
</syntaxhighlight>and alter the kernel command line to add either "rd.md=1 rd.md.conf=1 rd.auto=1" or, specifically, the UUID to mount (if you add an LVM layer you will instead need to add something like rd.lvm.lv=mylvm/video rd.lvm.lv=mylvm/files)<syntaxhighlight lang="bash" line="1">
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="gfxterm"
GRUB_CMDLINE_LINUX="rhgb quiet rootflags=uquota,pquota rd.md=1 rd.md.conf=1 rd.auto=1"
GRUB_DISABLE_RECOVERY="false"
GRUB_BACKGROUND="/boot/grub2/smeserver10.png"
GRUB_GFXMODE="1024x768"
GRUB_THEME="/boot/grub2/themes/koozali/theme.txt"
</syntaxhighlight>then you need to rebuild the grub.cfg; depending on whether your system is EFI or legacy, use the appropriate command:<syntaxhighlight lang="bash">
#EFI
grub2-mkconfig -o /boot/efi/EFI/centos/grub.cfg
#Legacy
grub2-mkconfig -o /boot/grub2/grub.cfg
</syntaxhighlight>then you need to make sure dracut will add the drivers<syntaxhighlight lang="bash">
vim /etc/dracut.conf
</syntaxhighlight>and alter the lines needed (you will probably have to uncomment the add_dracutmodules line and add mdraid between the quotes)<syntaxhighlight lang="bash" line="1" start="19">
# dracut modules to add to the default
add_dracutmodules+="lvm mdraid"

# install local /etc/mdadm.conf
mdadmconf="yes"

# install local /etc/lvm/lvm.conf
lvmconf="yes"
</syntaxhighlight>Finally, rebuild the initramfs:<syntaxhighlight lang="bash">
cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.old
dracut --add="lvm mdraid" /boot/initramfs-$(uname -r).img $(uname -r) --force
</syntaxhighlight>
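As a quick sanity check (not part of the original steps; lsinitrd ships with dracut), you can confirm that the mdraid module and your mdadm.conf made it into the new image:<syntaxhighlight lang="bash">
# list the initramfs contents and grep for the raid bits
lsinitrd /boot/initramfs-$(uname -r).img | grep -iE 'mdraid|mdadm'
</syntaxhighlight>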
== Copy data from one disk of an old Raid mirror ==
Let's say you excluded a huge amount of data from the backup when migrating from SME9 to SME10, and now you want to copy that data to your new server.

This how-to assumes your current install is without LVM. An extra trick is needed if you have LVM on the new install and on the previous SME9 as well: you simply need to rename the volume group of either the old SME or the new one, using a rescue disk or another Linux distro; see [[UpgradeDisk#Moving from SME 8.x to SME 9.x]].
# put one of the old drives in the server or in an external case and connect it
# use lsblk to identify the drive
# adapt the following commands
<syntaxhighlight lang="bash">
# lsblk
sdd      8:48   0 931,5G  0 disk
├─sdd1   8:49   0   250M  0 part
└─sdd2   8:50   0 931,3G  0 part
</syntaxhighlight>We assume that sdd1 was the boot partition and the data we want is in sdd2.<syntaxhighlight lang="bash">
#assemble and run the degraded array
mdadm -A /dev/md126 /dev/sdd2 --run
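# optional check (not in the original steps): confirm the degraded array is running
mdadm --detail /dev/md126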
</syntaxhighlight>Now let's try to mount it. This will work only if you had no LVM; otherwise it will return this:<syntaxhighlight lang="bash">
# mkdir /mnt/olddisk/
# mount /dev/md126 /mnt/olddisk/
mount: unknown filesystem type 'LVM2_member'
</syntaxhighlight>You can skip the next step if you did not get the LVM error. Otherwise, we need to activate the LVM, and you might also need to install the LVM tools first:<syntaxhighlight lang="bash">
# yum install lvm2 -y
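# if unsure of the volume group name, list the groups first (vgs ships with lvm2)
vgs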
vgchange -a y main
mount /dev/mapper/main-root  /mnt/olddisk/
</syntaxhighlight>It is now time to copy your data:<syntaxhighlight lang="bash">
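# -H keeps hard links, -A ACLs, -X extended attributes; the copy is safe to re-run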
rsync -arvHAX  /mnt/olddisk/home/e-smith/files/ /home/e-smith/files
</syntaxhighlight>then, to safely remove your disk:<syntaxhighlight lang="bash">
umount /dev/mapper/main-root
vgchange -a n main
mdadm --stop /dev/md126
</syntaxhighlight>
 
----
 
 
<noinclude>
 