==Raid: Manual Rebuild==

{{Level|Advanced}}
{{Warning box|Get it right or you will lose data. '''Take a backup!''' Let the RAID sync; this can take quite a while.}}
SME Server's RAID options are largely automated: if you built your system with a single hard disk, simply log on as ''admin'' and select ''Disk Redundancy'' to add a new drive to your RAID1 array. The same procedure is used if a disk in a RAID array has failed and you have replaced it.
But the best laid plans don't always work out; the processes below show how to do it manually.
See also: [[Hard Disk Partitioning]] and [[Raid#Resynchronising_a_Failed_RAID]]
==HowTo: Manage/Check a RAID1 Array from the command Line==
 
===What is the Status of the Array===
 [root@ ~]# '''cat /proc/mdstat'''
 Personalities : [raid1]
 md2 : active raid1 sdb2[1] sda2[0]
       488279488 blocks [2/2] [UU]
 
 md1 : active raid1 sdb1[1] sda1[0]
       104320 blocks [2/2] [UU]
  unused devices: <none>
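
For more detail on an individual array than /proc/mdstat gives, mdadm can query the array directly. A minimal sketch, assuming one of your arrays is /dev/md2 as in the examples on this page:

 [root@ ~]# '''mdadm --detail /dev/md2'''

This reports the array state, the member partitions, and whether a resync is in progress.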
 
==HowTo: Reinstate a disk from the RAID1 Array with the command Line==

===Look at the mdstat===

First we must determine which drive has failed.

 [root@ ~]# '''cat /proc/mdstat'''
 Personalities : [raid1]
 md1 : active raid1 sdb1[1] sda1[0]
       104320 blocks [2/2] [UU]
 
 md2 : active raid1 sdb2[2](F) sda2[0]
       52323584 blocks [2/1] [U_]
 
 unused devices: <none>

(S) = Spare
(F) = Failed
[0] = number of the disk

{{note box|As we can see, partition sdb2 has failed: note the flag sdb2[2](F). We need to resynchronize disk sdb into the existing array md2.}}

===Fail and remove the disk, '''sdb''' in this case===

 [root@ ~]# '''mdadm --manage /dev/md2 --fail /dev/sdb2'''
 mdadm: set /dev/sdb2 faulty in /dev/md2
 [root@ ~]# '''mdadm --manage /dev/md2 --remove /dev/sdb2'''
 mdadm: hot removed /dev/sdb2
 [root@ ~]# '''mdadm --manage /dev/md1 --fail /dev/sdb1'''
 mdadm: set /dev/sdb1 faulty in /dev/md1
 [root@ ~]# '''mdadm --manage /dev/md1 --remove /dev/sdb1'''
 mdadm: hot removed /dev/sdb1
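
If your system has more md arrays than md1 and md2, each partition of the failed disk must be failed and removed from its own array in the same way. A quick sketch to list every array mdadm knows about (the exact output format depends on your mdadm version):

 [root@ ~]# '''mdadm --detail --scan'''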

===Do your Disk Maintenance here===

At this point the disk is idle.

 [root@ ~]# '''cat /proc/mdstat'''
 Personalities : [raid1]
 md1 : active raid1 sda1[0]
       104320 blocks [2/1] [U_]
 
 md2 : active raid1 sda2[0]
       52323584 blocks [2/1] [U_]
 
 unused devices: <none>

{{note box|You'll have to determine whether your disk can be reinstated in the array. A RAID can get out of sync after a power failure, but also because of physical faults in the hard disk. If this happens repeatedly, the disk must be tested. For this we will use '''smartctl'''.}}

To see all the details SMART holds on the disk:

 [root@ ~]# '''smartctl -a /dev/sdb'''

At least two types of tests are possible: short (~1 min) and long (~10 min to 90 min).

 [root@ ~]# '''smartctl -t short /dev/sdb''' #short test
 [root@ ~]# '''smartctl -t long  /dev/sdb''' #long test

To access the results/statistics of these tests:

 [root@ ~]# '''smartctl -l selftest /dev/sdb'''

You can refer to [[Monitor_Disk_Health]] for more information on how to activate and interpret Self-Monitoring, Analysis and Reporting Technology (SMART).
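
For a quick pass/fail summary of the drive's overall health, smartctl also offers a simple health check; a sketch, again assuming the suspect disk is /dev/sdb:

 [root@ ~]# '''smartctl -H /dev/sdb'''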

{{Note box|If you need to change the disk due to a physical failure found by smartctl, install a new disk of the same capacity (or more) and enter the following commands to recreate the partitions by copying them from the healthy disk sda.}}<!-- Do NOT try to use sfdisk on disks larger than 2 TiB, use gdisk or similar, see below. -->

 [root@ ~]# '''sfdisk -d /dev/sda > sfdisk_sda.output'''
 [root@ ~]# '''sfdisk /dev/sdb < sfdisk_sda.output'''
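
To confirm that the two disks are now partitioned identically, you can compare their partition tables; a sketch using fdisk in sector units:

 [root@ ~]# '''fdisk -lu /dev/sda; fdisk -lu /dev/sdb'''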

'''GPT Disks'''

Larger disks will be GPT disks, and sfdisk will not work on them - you will need to use gdisk and partx (parted) instead.

 [root@ ~]# '''yum install gdisk'''

Then copy the partition table from a good disk to the new disk. The first command copies the partition table from disk sda to sdd, the second randomizes the GUID:

 [root@ ~]# '''sgdisk /dev/sda -R /dev/sdd'''
 [root@ ~]# '''sgdisk -G /dev/sdd'''

To view the partitions, use partx:

 [root@ ~]# '''partx -l /dev/sdd'''
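
If the kernel has not yet picked up the new partition table, it can be told to re-read it without a reboot; a sketch using partx, assuming the new disk is /dev/sdd as above (check man partx, as options vary between versions):

 [root@ ~]# '''partx -a /dev/sdd'''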

If you want to reinstate the same disk without replacing it, go to the next step.

===Add the partitions back===

 [root@ ~]# '''mdadm --manage /dev/md1 --add /dev/sdb1'''
 mdadm: hot added /dev/sdb1
 [root@ ~]# '''mdadm --manage /dev/md2 --add /dev/sdb2'''
 mdadm: hot added /dev/sdb2

===Another Look at the mdstat===

 [root@ ~]# '''cat /proc/mdstat'''
 Personalities : [raid1]
 md1 : active raid1 sdb1[1] sda1[0]
       104320 blocks [2/2] [UU]
 
 md2 : active raid1 sdb2[2] sda2[0]
       52323584 blocks [2/1] [U_]
       [>....................]  recovery =  1.9% (1041600/52323584) finish=14.7min speed=57866K/sec
 
 unused devices: <none>
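
The resync can be followed while it runs; a small sketch using watch to refresh the status every few seconds:

 [root@ ~]# '''watch -n 5 cat /proc/mdstat'''

Press Ctrl-C to exit watch once both arrays show [UU].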

{{note box|With a new disk it may be worthwhile to reinstall GRUB to avoid errors on startup. GRUB is the boot loader that launches the operating system. Please enter the following commands.}}

==HowTo: Write the GRUB boot sector==

{{Warning box|The dd command is nicknamed the "data destroyer": you need to be extremely careful and sure of the names of the source partition and/or destination. At first you should skip the dd command (Step 1 below) and attempt to install GRUB without it (see Step 2 below). If GRUB can be installed without using dd, then Step 1 can be discarded.}}

*1. dd

  [root@ ~]# '''dd if=/dev/sda1 of=/dev/sdb1'''
*2. grub

 [root@ ~]# '''grub'''
 
 grub> '''device (hd0) /dev/sdb'''
 grub> '''root (hd0,0)'''
  Filesystem type is ext2fs, partition type 0xfd
 grub> '''setup (hd0)'''
  Checking if "/boot/grub/stage1" exists... no
  Checking if "/grub/stage1" exists... yes
  Checking if "/grub/stage2" exists... yes
  Checking if "/grub/e2fs_stage1_5" exists... yes
   Running "embed /grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
 
   Running "embed /grub/e2fs_stage1_5 (hd0)"...  16 sectors are embedded.
 
  succeeded
 
  succeeded
   Running "install /grub/stage1 (hd0) (hd0)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
+
   Running "install /grub/stage1 (hd0) (hd1)1+16 p (hd0,0)/grub/stage2 /grub/grub.conf"... succeeded
 
 Done.
 
 grub> '''quit'''
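
As a rough sanity check that a boot sector has now been written to the second disk, you can inspect its first sector; a sketch (not part of the original procedure) using file:

 [root@ ~]# '''file -s /dev/sdb'''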

<noinclude>
[[Category:Howto]]
[[Category:Administration:Storage]]
</noinclude>