Talk:Raid


From bug 5665: the newer e2fsprogs are smart enough that if you use the resize2fs tool to do the resizing on a mounted filesystem, it will do the same thing ext2online used to do. Instead of two tools there is now one smart tool.
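As a quick illustration, on SME 8 a single resize2fs call will grow a mounted filesystem online (a minimal sketch, assuming the default layout where / is the /dev/main/root logical volume; adjust the device name to your system):

# grow the mounted root filesystem to fill its underlying device (no ext2online needed)
resize2fs /dev/main/root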

That being said, the documentation may need to be updated to handle the SME 8 case. However, there are a lot of enhanced things you can do with the new tools in 8 that you can't do in 7. I'd like to see an entire new page explaining how to expand/grow your filesystem on 8, instead of just adding notes where things are different.

2011-03-06: added a warning box to the RAID1 -> RAID5 conversion section; note taken from http://www.arkf.net/blog/?p=47


From bugs 6632 and 6630: a suggested sequence for upgrading the hard drive size is detailed below, following an issue that arose when attempting to sync a new drive that had been added first as sda.

Upgrading the Hard Drive Size after HD failure

Note: these instructions are applicable if you have a faulty HD on a RAID system with more than one drive and intend to upgrade the drive sizes as well as replace the failed HD. They are not applicable to a single-drive RAID 1 system, and increasing the usable space on such a system by cloning the existing single drive to a larger drive is not supported. See http://bugs.contribs.org/show_bug.cgi?id=5311

  • CAUTION MAKE A FULL BACKUP!
  • Ensure you have e-smith-base-4.16.0-33 or newer installed [or update to at least 7.1.3], as shown below.
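To confirm the installed version before you start, you can query the RPM database (a small sketch; the exact release string on your system will differ):

# check which e-smith-base release is installed
rpm -q e-smith-base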

HD Scenario: current 250GB drives, new larger 500GB drives

  1. Remove the old 250GB HDD from sdb, leave the old 250GB drive as sda on its own, and boot up.
  2. Shut down, connect one new 500GB drive as sdb, and boot up.
  3. Log in to the admin panel and use the RAID management panel to add the new (larger) drive to the system.
  4. Wait for the RAID to fully sync.
  5. Do a full reboot with those 2 drives in place (1 original, 1 new).
  6. Shut down again, disconnect the original drive, and connect the newly synced drive as sda (in place of the original).
  7. Boot up again with just the one new drive in place, and confirm it boots OK.
  8. Shut down, and connect the other 500GB drive as sdb.
  9. Boot up, log in to the admin panel, add sdb to the array, and wait for the RAID to fully sync.
  10. Reboot with both drives in place, and check that the RAID health is OK (see the commands after this list).
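For steps 4, 9 and 10 you can watch the resync and confirm the array is clean from the command line (a minimal sketch; the md device names below assume the usual SME layout and may differ on your install):

# show resync progress for all md arrays
cat /proc/mdstat
# show the detailed state of the main data array (typically /dev/md2 on SME)
mdadm --detail /dev/md2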


# grow the RAID array to use all of the space on the new drives
mdadm --grow /dev/md2 --size=max
# grow the LVM physical volume to match the enlarged array
pvresize /dev/md2
# extend the root logical volume over the newly available space
lvresize -l +100%FREE main/root
# grow the filesystem online to fill the logical volume
ext2online -C0 /dev/main/root

In the last command above, the -C0 option is: dash, capital C, zero.

If you receive a "command not found" error (on SME 8, ext2online has been replaced by resize2fs), try this instead:

resize2fs /dev/mapper/main-root &

TIP: I put an "&" at the end to allow it to run in the background even if I close the ssh session.
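Whichever resize tool applies to your install, you can confirm the extra space is visible once the resize finishes (a hedged sketch assuming the default main/root LVM layout):

# the physical and logical volumes should now report the larger size
pvdisplay /dev/md2
lvdisplay /dev/main/root
# free space on / as seen by the filesystem
df -h /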


Notes:

  • All of this can be done while the server is up and running, with the exception of step 1.
  • These instructions should work for any RAID level, as long as you have >= 2 drives.
  • If you have disabled LVM:
  1. you don't need the pvresize or lvresize commands
  2. the final line becomes ext2online -C0 /dev/md2 (or whatever / is mounted to); see the sketch after this list
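
For the no-LVM case the whole grow sequence collapses to two commands (a sketch only, assuming / sits directly on /dev/md2 as described above):

# grow the RAID array to use all available space on the new drives
mdadm --grow /dev/md2 --size=max
# grow the mounted filesystem to fill the array (dash, capital C, zero)
ext2online -C0 /dev/md2

On SME 8, substitute resize2fs /dev/md2 for the ext2online line, as noted at the top of this page.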