Subject: Re: RAID-1 and crash dumps
To: Greg Oster <oster@cs.usask.ca>
From: Manuel Bouyer <bouyer@antioche.lip6.fr>
List: current-users
Date: 02/04/2004 18:01:40
On Wed, Feb 04, 2004 at 07:40:52AM -0600, Greg Oster wrote:
> [sorry I missed this discussion at the start, but to say I've "been 
> experiencing email problems" the last few days is a bit of an 
> understatement.]
> 
> Martti Kuparinen writes:
> > Manuel Bouyer wrote:
> > 
> > >># disklabel raid0
> > >> a:  16777216         0     4.2BSD   2048 16384 27672
> > >> b:   2097152  16777216       swap
> > >> d: 241254528         0     unused      0     0
> > >> e: 222380160  18874368     4.2BSD   2048 16384 28856
> > 
> > > Obviously something changed. The value "192" for the offset looks
> > > suspicious to me, on 2 i386 hosts here it's 96. Can you try with 96?
> > 
> > I found the correct offset, it is 63+96=159 for raid0a's
> > real offset on wd0.
> >
> > But this is weird as according to my calculations this should
> > be 63+129 as 129 is the number I get for the RAID internal
> > structures. Isn't this the way to calculate the size?
> 
> There is no RAID internal structure that is 129 blocks.
> 
> Let me back up to one of your previous emails where you said:
> 
> > # disklabel wd0
> >  a: 241254657        63       RAID
> >  c: 241254657        63     unused      0     0
> >  d: 241254720         0     unused      0     0
> >  e:  16777216       192     4.2BSD   2048 16384 27672
> >  f: 222380160  18874560     4.2BSD   2048 16384 28856
> 
> You're wanting to make wd0e "line up" with 'raid0a' (which starts at 
> block 0 of raid0), right?  The "magic value" you're looking for here 
> is "RF_PROTECTED_SECTORS" (defined in <dev/raidframe/raidframevar.h>)
> which is the number of sectors before the "data" part of the RAID set 
> (i.e. before where "block 0" of raid0 would be.)  The value is "64", 
> which means the start of "e" above should be 63+64=127.
> 
> So to get your swap partition lined up, you'll want:
> 
> 16777216+64+63 = 16777343

Hum, experiments show that the real value is 96, not 64. This is true
for older (1.6.x) systems too.
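
For illustration only, here is a minimal sketch of one way to check where
raid0's block 0 really lands on the component (the device names, the
512-byte sector size and the probe sector below are assumptions, not
necessarily the exact test that was run):

/*
 * Sketch: compare a sector read through the raw raid device against the
 * same sector read from the component disk at the two candidate offsets,
 * 63+64 (RF_PROTECTED_SECTORS) and 63+96 (what experiments suggest).
 */
#include <sys/types.h>

#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SECSIZE		512	/* assuming DEV_BSIZE-sized sectors */
#define PROBE_SECTOR	16	/* FFS superblock area of raid0a, unlikely to be all zeros */

/* Read one sector from a raw device. */
static void
read_sector(const char *dev, off_t sector, unsigned char *buf)
{
	int fd;

	if ((fd = open(dev, O_RDONLY)) == -1)
		err(1, "open %s", dev);
	if (pread(fd, buf, SECSIZE, sector * SECSIZE) != SECSIZE)
		err(1, "pread %s sector %lld", dev, (long long)sector);
	close(fd);
}

int
main(void)
{
	unsigned char raid[SECSIZE], comp[SECSIZE];
	const off_t candidates[] = { 63 + 64, 63 + 96 };
	size_t i;

	/* Sector PROBE_SECTOR of the RAID set (i.e. of raid0a). */
	read_sector("/dev/rraid0d", PROBE_SECTOR, raid);

	for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
		/* The same sector read directly from the component disk. */
		read_sector("/dev/rwd0d", candidates[i] + PROBE_SECTOR, comp);
		printf("component offset %lld: %s\n",
		    (long long)candidates[i],
		    memcmp(raid, comp, SECSIZE) == 0 ? "match" : "no match");
	}
	return 0;
}

Whichever candidate offset reports a match is the real start of the RAID
data on the component.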

The question is why the old way of computing this doesn't work any more.
It looks like, in addition to some space at the start, raidframe is now
using (or at least hiding) some space at the end too. This is because the
size of raid0d + RF_PROTECTED_SECTORS is smaller than the size of sd0a.
Or maybe raidframe is wrong when computing the size of the virtual drive.
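
To make the mismatch concrete, here is a trivial sketch of the arithmetic
with the sizes from the labels quoted earlier (taking RF_PROTECTED_SECTORS
as 64 from raidframevar.h):

/*
 * Sketch of the size arithmetic, using the sizes from the disklabels
 * quoted above and RF_PROTECTED_SECTORS = 64.
 */
#include <stdio.h>

int
main(void)
{
	const long long component = 241254657;	/* wd0a, from the label above */
	const long long raid0d    = 241254528;	/* whole raid0, from the label above */
	const long long prot      = 64;		/* RF_PROTECTED_SECTORS */

	printf("component - RF_PROTECTED_SECTORS: %lld\n", component - prot);
	printf("raid0d:                           %lld\n", raid0d);
	/* Sectors that are neither in raid0d nor in the protected area. */
	printf("unaccounted for:                  %lld\n",
	    component - prot - raid0d);
	return 0;
}

This reports 65 sectors that are neither visible through raid0d nor part
of the protected area at the front; even counting 96 hidden sectors at
the front, 33 would remain unaccounted for.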

Could one of your recent changes explain this?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference