Subject: practical RAIDframe questions
To: None <netbsd-users@netbsd.org>
From: Geert Hendrickx <ghen@netbsd.org>
List: netbsd-users
Date: 01/26/2006 12:20:21
Hello, 

I'm planning to move our mail+file server to a software RAID-1.  I've been
reading about RAIDframe, and even toyed with it in qemu[*], but I have no
"real life" experience with it, so I still have a few questions: 

- Partitions.  Some people divide their physical disks (wd0, wd1, ...) into
  multiple partitions, create multiple raid* devices on them, and then put
  one (or more) filesystem partition(s) on each.  Others just create one
  big partition on each physical drive, build one big raid0 device from
  those, and put all their filesystem partitions on that (so raid0a,
  raid0b, raid0e, ...).  Are there any specific advantages to either
  setup?  The only thing I can think of is that recovery is more work in
  the former situation (more raid sets to rebuild).  
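
  To make that concrete, here's roughly how I picture the two layouts in
  /etc/fstab (mount points, partition letters and the extra sets are only
  examples, not our actual layout):

    # one big RAID-1 set, sliced up with disklabel on raid0:
    /dev/raid0a   /       ffs    rw   1 1
    /dev/raid0b   none    swap   sw   0 0
    /dev/raid0e   /var    ffs    rw   1 2
    /dev/raid0f   /home   ffs    rw   1 2

    # versus one RAID-1 set per filesystem, each built from its own
    # pair of wd0/wd1 partitions:
    /dev/raid0a   /       ffs    rw   1 1
    /dev/raid1a   /var    ffs    rw   1 2
    /dev/raid2a   /home   ffs    rw   1 2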

- Swap.  Should I swap onto raid0b, or onto wd0b and wd1b?  In case of a
  disk failure, swap on raid0b will keep working, whereas swap on wd?b will
  not.  But I've read about problems with swap-on-raid in the past.  
  (I know I should set swapoff=YES when swapping on raid, and I know how to
  set up crash dumps onto a physical partition.)  
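
  Concretely, I had something like this in mind -- assuming I read
  rc.conf(5) and fstab(5) correctly; in particular the "dp" option for the
  dump partition is my reading of the man page, please correct me if
  that's wrong:

    # /etc/rc.conf
    swapoff=YES    # remove block-type swap at shutdown, for swap-on-raid

    # /etc/fstab
    /dev/raid0b   none   swap   sw   0 0   # swap survives a disk failure
    /dev/wd0b     none   swap   dp   0 0   # crash dumps go to the raw disk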

- Configuration.  I've been using the configuration from the NetBSD guide: 
  > START array
  > 1 2 0
  > 
  > START disks
  > /dev/wd0a
  > /dev/wd1a
  > 
  > START layout
  > 128 1 1 1
  > 
  > START queue
  > fifo 100

  Is this ok?  I'm not sure whether/how the "layout" or "queue" sections
  could be optimized.  
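
  For what it's worth, this is how I've been bringing the set up in my
  qemu tests (the serial number is arbitrary; and as I read raidctl(8),
  the layout line is <sectors per stripe unit> <SUs per parity unit>
  <SUs per recon unit> <RAID level>, so "128 1 1 1" is RAID-1 with 64 KB
  stripe units):

    # raidctl -C /etc/raid0.conf raid0   # initial (forced) configuration
    # raidctl -I 2006012601 raid0        # write the component labels
    # raidctl -iv raid0                  # initialise parity (sync the mirror)
    # raidctl -A yes raid0               # autoconfigure the set at boot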

- Any other things I should be careful about?  (no, I am not considering
  raid as a backup method -- this machine is dump(8)ing to tape daily.)

Thanks for any hints, 

	Geert


[*] This was fun!  Just feed the guest OS two (or more) hard disks using
    qemu -hda disk0.img -hdb disk1.img and you can set up RAIDframe.  Also,
    with qemu it's very easy to damage/remove/add/swap disks in the array.
    Good for practicing failure recovery! :-)
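
    For anyone who wants to try the same, roughly what I did (image names,
    sizes and the install ISO are just examples):

      $ qemu-img create disk0.img 512M
      $ qemu-img create disk1.img 512M
      $ qemu -m 64 -hda disk0.img -hdb disk1.img -cdrom netbsd.iso -boot d

    "Removing" a disk afterwards is simply a matter of starting qemu
    without the corresponding -hda/-hdb option.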