Subject: Re: RaidFrame Partitioning
To: Greg Oster <oster@cs.usask.ca>
From: Louis Guillaume <lguillaume@berklee.edu>
List: netbsd-users
Date: 02/10/2004 15:07:25
Greg Oster writes:
> "Louis Guillaume" writes:
>
>>Hello,
>>
>>I've seen some conflicting information regarding partitioning of
>>RAIDframe components and partitions and was hoping for some
>>clarification or even just opinions. This is mainly with regard to a
>>mirrored set (RAID 1).
>>
>>The NetBSD Guide suggests creating one large "RAID" partition on each
>>drive (i.e. one component per drive) then partitioning this raid device
>>into the desired filesystems.
>>
>>Elsewhere (and I can't remember where, sorry) there was a suggestion of
>>creating several RAID partitions on each drive, resulting in several
>>components per drive, each of which would house a single filesystem.
>
>
> 'man raidctl' suggests that, among other places. (My personal
> preference is for one filesystem per RAID set. Search the mail
> archives or on Google for more info..)
>
I've gone ahead and done this anyway, creating several RAID sets, one
for each filesystem. This should give more flexibility for filesystem
re-organization in the future.
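In case it helps, bringing each extra set up amounted to something like
the following (raid1 and the serial number are only placeholders here;
raidctl(8) has the full details):

  # raidctl -C /etc/raid1.conf raid1   # configure the set from its config file
  # raidctl -I 2004021001 raid1        # write component labels (arbitrary serial)
  # raidctl -iv raid1                  # initialize parity, i.e. sync the mirror
  # raidctl -A yes raid1               # have the set autoconfigure at boot
  # disklabel -i -I raid1              # label the new raid device
  # newfs /dev/rraid1a                 # create the filesystem

(For the root set, "raidctl -A root raid0" additionally makes it
eligible to be used as the root device.)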
>
>>I initially did things the former way as it seemed simple. But
>>unfortunately I did a poor job of partitioning, so now I must
>>re-configure the entire array (this alone may be an argument for the
>>latter).
>>
>>Also I've noticed some filesystem corruption popping up sporadically on
>>the root filesystem such as...
>>
>>find: /usr/share/man/cat3/getnetgrent.0: Bad file descriptor
>>
>>... in my daily insecurity output. I've only ever seen this with
>>RAIDframe. It has been happening for some time
>
>
> For how long, and from what kernel rev(s)?
>
I've only seen this since 1.6ZG, but I had only been running RAIDframe
for a few days on 1.6ZF and not at all before that.
>
>> in small, subtle and as-yet non-critical ways. Lucky me! This, of
>>course, only gets fixed by fsck-ing. Any idea what's causing this
>
>
> My guess would be bad RAM, but I might be biased... (There are NO
> bugs (at least that I'm aware of) in RAIDframe that would be causing
> this sort of lossage.)
>
>
>>or if it could be avoided by configuring RAID differently?
>
>
> You haven't given any config files, but you shouldn't see filesystem
> lossage from any valid RAIDframe configuration (and if a
> configuration isn't valid, RAIDframe shouldn't allow it).
>
Perhaps the problem wasn't with RAIDframe itself but with my
configuration. Here are the old and new configs. Please let me know if
you see anything funky.
The old configuration was a single RAID-1 array holding three
partitions: /, /home, and swap.
The new one is five RAID-1 arrays, one each for /, /home, /usr, /var,
and swap.
#######################
## OLD CONFIGURATION ##
#######################
==> /etc/fstab <==
/dev/raid0a / ffs rw 1 1
/dev/raid0b none swap sw 0 0
/dev/raid0e /home ffs rw 1 1
/dev/wd1b none swap dp 0 0
/dev/cd0a /cdrom cd9660 ro,noauto 0 0
kernfs /kern kernfs rw
procfs /proc procfs rw,noauto
==> /etc/raid0.conf <==
START array
1 2 0
START disks
/dev/wd9a <- became /dev/wd0a
/dev/wd1a
START layout
128 1 1 1
START queue
fifo 100
==> disklabel.raid0 <==
5 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 16777216 0 4.2BSD 2048 16384 27672  # (Cyl. 0 - 16383)
b: 1048576 16777216 swap  # (Cyl. 16384 - 17407)
d: 39102208 0 unused 0 0  # (Cyl. 0 - 38185*)
e: 21276416 17825792 4.2BSD 2048 16384 27784  # (Cyl. 17408 - 38185*)
==> disklabel.wd0 <==
16 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 39102273 63 RAID  # (Cyl. 0* - 38791)
b: 1048576 16777344 swap  # (Cyl. 16644* - 17684*)
c: 40020561 63 unused 0 0  # (Cyl. 0* - 39702)
d: 40020624 0 unused 0 0  # (Cyl. 0 - 39702)
e: 918288 39102336 4.2BSD 0 0 0  # (Cyl. 38792 - 39702)
==> disklabel.wd1 <==
8 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 39102273 63 RAID  # (Cyl. 0* - 38791)
b: 1048576 16777344 swap  # (Cyl. 16644* - 17684*)
c: 39102273 63 unused 0 0  # (Cyl. 0* - 38791)
d: 39102336 0 unused 0 0  # (Cyl. 0 - 38791)
#######################
## NEW CONFIGURATION ##
#######################
==> /etc/fstab <==
/dev/raid0a / ffs rw 1 1
/dev/raid4a none swap sw 0 0
/dev/wd0b none swap dp 0 0
/dev/raid1a /usr ffs rw 1 1
/dev/raid2a /var ffs rw 1 1
/dev/raid3a /home ffs rw 1 1
kernfs /kern kernfs rw
procfs /proc procfs rw,noauto
==> /etc/raid0.conf <==
START array
1 2 0
START disks
/dev/wd0a
/dev/wd9a
START layout
128 1 1 1
START queue
fifo 100
==> /etc/raid1.conf <==
START array
1 2 0
START disks
/dev/wd0e
/dev/wd9e
START layout
128 1 1 1
START queue
fifo 100
==> /etc/raid2.conf <==
START array
1 2 0
START disks
/dev/wd0f
/dev/wd9f
START layout
128 1 1 1
START queue
fifo 100
==> /etc/raid3.conf <==
START array
1 2 0
START disks
/dev/wd0g
/dev/wd9g
START layout
128 1 1 1
START queue
fifo 100
==> disklabel.raid0 <==
4 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 1056256 0 4.2BSD 1024 8192 44016  # (Cyl. 0 - 1031*)
d: 1056256 0 unused 0 0  # (Cyl. 0 - 1031*)
==> disklabel.raid1 <==
4 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 8450944 0 4.2BSD 2048 16384 26328  # (Cyl. 0 - 8252*)
d: 8450944 0 unused 0 0  # (Cyl. 0 - 8252*)
==> disklabel.raid2 <==
4 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 15845632 0 4.2BSD 2048 16384 28784  # (Cyl. 0 - 15474*)
d: 15845632 0 unused 0 0  # (Cyl. 0 - 15474*)
==> disklabel.raid3 <==
4 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 12692608 0 4.2BSD 2048 16384 27792  # (Cyl. 0 - 12395*)
d: 12692608 0 unused 0 0  # (Cyl. 0 - 12395*)
==> disklabel.raid4 <==
4 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 1056256 0 swap  # (Cyl. 0 - 1031*)
d: 1056256 0 unused 0 0  # (Cyl. 0 - 1031*)
==> disklabel.wd0 <==
8 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 1056321 63 RAID  # (Cyl. 0* - 1047)
b: 1056384 1056384 RAID  # (Cyl. 1048 - 2095)
c: 40020561 63 unused 0 0  # (Cyl. 0* - 39702)
d: 40020624 0 unused 0 0  # (Cyl. 0 - 39702)
e: 8451072 2112768 RAID  # (Cyl. 2096 - 10479)
f: 15845760 10563840 RAID  # (Cyl. 10480 - 26199)
g: 12692736 26409600 RAID  # (Cyl. 26200 - 38791)
h: 918288 39102336 4.2BSD 1024 8192 45920  # (Cyl. 38792 - 39702)
==> disklabel.wd1 <==
7 partitions:
# size offset fstype [fsize bsize cpg/sgs]
a: 1056321 63 RAID  # (Cyl. 0* - 1047)
b: 1056384 1056384 RAID  # (Cyl. 1048 - 2095)
c: 39102273 63 unused 0 0  # (Cyl. 0* - 38791)
d: 39102336 0 unused 0 0  # (Cyl. 0 - 38791)
e: 8451072 2112768 RAID  # (Cyl. 2096 - 10479)
f: 15845760 10563840 RAID  # (Cyl. 10480 - 26199)
g: 12692736 26409600 RAID  # (Cyl. 26200 - 38791)
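For what it's worth, each set's health can be double-checked with
raidctl's status and parity options, e.g.:

  # raidctl -s raid0   # component status, plus whether parity is clean
  # raidctl -p raid0   # check the parity (mirror) status only
  # raidctl -P raid0   # check parity and rewrite it if it is dirty

(and likewise for raid1 through raid4).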
>
>>What is the better way to partition our RAID sets?
>>
>>I'm using -current (1.6ZG) at this time and will probably upgrade to
>>1.6ZI or higher after re-partitioning.
>>
>>Any advice would be most appreciated. Thanks,
>>
>>Louis
>>
>
>
Louis