Subject: Re: RAID-1 and crash dumps
To: Manuel Bouyer <bouyer@antioche.lip6.fr>
From: Greg Oster <oster@cs.usask.ca>
List: current-users
Date: 02/04/2004 13:41:51
Manuel Bouyer writes:
> On Wed, Feb 04, 2004 at 07:40:52AM -0600, Greg Oster wrote:
> > [sorry I missed this discussion at the start, but to say I've "been
> > experiencing email problems" the last few days is a bit of an
> > understatement.]
> >
> > Martti Kuparinen writes:
> > > Manuel Bouyer wrote:
> > >
> > > >># disklabel raid0
> > > >> a: 16777216 0 4.2BSD 2048 16384 27672
> > > >> b: 2097152 16777216 swap
> > > >> d: 241254528 0 unused 0 0
> > > >> e: 222380160 18874368 4.2BSD 2048 16384 28856
> > >
> > > > Obviously something changed. The value "192" for the offset looks
> > > > suspicious to me, on 2 i386 hosts here it's 96. Can you try with 96?
> > >
> > > I found the correct offset: it is 63+96=159 for raid0a's
> > > real offset on wd0.
> > >
> > > But this is weird, as according to my calculations this should
> > > be 63+129, since 129 is the number I get for the RAID internal
> > > structures. Isn't this the way to calculate the size?
> >
> > There is no RAID internal structure that is 129 blocks.
> >
> > Let me backup to one of your previous emails where you said:
> >
> > > # disklabel wd0
> > > a: 241254657 63 RAID
> > > c: 241254657 63 unused 0 0
> > > d: 241254720 0 unused 0 0
> > > e: 16777216 192 4.2BSD 2048 16384 27672
> > > f: 222380160 18874560 4.2BSD 2048 16384 28856
> >
> > You're wanting to make wd0e "line up" with 'raid0a' (which starts at
> > block 0 of raid0), right? The "magic value" you're looking for here
> > is "RF_PROTECTED_SECTORS" (defined in <dev/raidframe/raidframevar.h>)
> > which is the number of sectors before the "data" part of the RAID set
> > (i.e. before where "block 0" of raid0 would be.) The value is "64",
> > which means the start of "e" above should be 63+64=127.
> >
> > So to get your swap partition lined up, you'll want:
> >
> > 16777216+64+63 = 16777343
>
> Hum, experiments show that the real value is 96, not 64. This is true
> for older (1.6.x) systems too.
Hmmmm... I'm not aware of any system running RAIDframe that would
use a value other than RF_PROTECTED_SECTORS. And that value has been
64 since day one.
> The question is why the old way of computing this doesn't work any more.
> It looks like, in addition to some space at the start, raidframe is now
> using (or at least hiding) some space at the end too.
Here's how things are set up for a partition marked as "RAID" in a
disklabel:
- 64 blocks are "reserved" via RF_PROTECTED_SECTORS.
- at block 32 in that reserved space, RAIDframe hides the single block
  that holds the component label.
- the size of the data portion that RAIDframe will report as the size
  of raid0d (or raid0c) is the largest multiple of the stripe size that
  fits in the component after the reserved blocks.
Say wd0e is marked as type RAID, and it's for a RAID 1 set with a
stripe width of 128. If the size of wd0e is 1024 blocks, then:
the RAID data will start at block 64 of wd0e.
(1024-64)/128 = 7.5
the 7.5 will get rounded down to 7, and thus the data portion
of that component would be just 7*128=896 blocks.
If the size of wd0e is 16777216 blocks, then:
the RAID data will start at block 64 of wd0e
(16777216-64)/128 = 131071.5
Total data portion is 131071*128 = 16777088 blocks
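In C, the rounding above looks roughly like this (a minimal sketch, not
RAIDframe source; component_data_size() is just a name made up for the
illustration, while RF_PROTECTED_SECTORS and its value of 64 come from
<dev/raidframe/raidframevar.h>):

/*
 * Minimal sketch (not RAIDframe code) of the arithmetic above: the
 * usable data portion of a component is the largest multiple of the
 * stripe width that fits after the RF_PROTECTED_SECTORS reserve.
 */
#include <stdio.h>

#define RF_PROTECTED_SECTORS 64UL  /* from <dev/raidframe/raidframevar.h> */

/* component_data_size() is a made-up name, only for illustration. */
static unsigned long
component_data_size(unsigned long part_size, unsigned long stripe_width)
{
    unsigned long usable = part_size - RF_PROTECTED_SECTORS;

    /* Integer division rounds down to a whole number of stripes. */
    return (usable / stripe_width) * stripe_width;
}

int
main(void)
{
    printf("%lu\n", component_data_size(1024UL, 128UL));     /* 896 */
    printf("%lu\n", component_data_size(16777216UL, 128UL)); /* 16777088 */
    return 0;
}

Run against the two sizes above, it prints 896 and 16777088.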
I'm not sure where the "96" is coming from, but it's likely related
to the size of the partitions...
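And, to make the earlier line-up arithmetic explicit, here's a similar
sketch (again, not RAIDframe code) that computes where raid0's block 0
and the start of raid0b land on wd0, using the 63-block offset of the
RAID partition and the 16777216-block raid0a from the disklabels quoted
above:

/*
 * Rough sketch showing where the raid0 partitions land on the
 * underlying disk, using the numbers from the disklabels quoted above.
 */
#include <stdio.h>

#define RF_PROTECTED_SECTORS 64UL  /* from <dev/raidframe/raidframevar.h> */

int
main(void)
{
    unsigned long raid_part_offset = 63;   /* offset of the RAID partition */
    unsigned long raid0a_size = 16777216;  /* size of raid0a */

    /* Block 0 of raid0 on wd0: 63 + 64 = 127. */
    unsigned long raid0_start = raid_part_offset + RF_PROTECTED_SECTORS;

    /* Start of raid0b (swap) on wd0: 63 + 64 + 16777216 = 16777343. */
    unsigned long swap_start = raid0_start + raid0a_size;

    printf("raid0 block 0 is at wd0 block %lu\n", raid0_start);
    printf("raid0b (swap) starts at wd0 block %lu\n", swap_start);
    return 0;
}

That prints 127 and 16777343, matching the 63+64=127 and
16777216+64+63=16777343 figures above.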
> This is because the
> size of raid0d is smaller than the size of sd0a + RF_PROTECTED_SECTORS.
> Or maybe raidframe is wrong when computing the size of the virtual drive.
>
> Could one of your recent changes explain this ?
No. See above :)
Later...
Greg Oster