Subject: Re: chap-rf.xml overhaul -- testers & proof reading?
To: Greg Oster <oster@cs.usask.ca>
From: Brian A. Seklecki <lavalamp@spiritual-machines.org>
List: netbsd-docs
Date: 09/10/2004 00:25:06
> > Although NetBSD is the primary platform for RAIDFrame development,
> > it can naturally be found in OpenBSD and FreeBSD, however another
> > in-kernel RAID system is being developed: Vinum
> 
> Although NetBSD is the primary platform for RAIDFrame development, it
> can also be found in OpenBSD and FreeBSD.  NetBSD also has another
> in-kernel RAID system called Vinum, but it will not be discussed here.
> 

Right, this is better wording; I just didn't want to exclude OBSD users
looking for a procedural reference, and I don't think this wording does.

> > Secondly, depending on the RAID level used, RAIDFrame does provide
> > redundancy in the event of a hardware failure, however, it is NOT a
> > replacement for reliable backups!
> 
> Secondly, depending on the RAID level used, RAIDFrame does provide
> redundancy in the event of a hardware failure.  However: it is NOT a
> replacement for reliable backups!

Corrected.

> > Unfortunately, there is no list dedicated to RAIDFrame support. 
> 
> Actually... there is... raidframe@cs.cmu.edu or maybe I should say
> "was"...  the list has been horribly inactive, and I'm not sure if
> anything I sent there in March of this year actually made it out :( 

So should I bother mentioning it?  I don't see a searchable list archive
anywhere.

Perhaps, "There is no *NetBSD* list dedicated..."

> 
> > The kernel must also contain static mappings between bus addresses 
> > and device nodes in /dev. 
> 
> No.  If one doesn't plan on using 'raidctl -A yes', then this is true,
> but everyone should be using that now, and so hard-wiring any disk
> devices shouldn't be necessary.
> 
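Good call -- I'll assume autoconfiguration throughout, then.  For
anyone playing along at home, turning it on is just (assuming the set
is raid0):

  # raidctl -A yes raid0

which marks the component labels auto-configurable, so the kernel
assembles the set at boot without any hard-wired device mappings.
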
> > Table 23.1. Example i386 Hardware Quick Reference
> 
> This table shouldn't be needed -- the idea of creating a drive chart
> is probably useful though.

See my other mail for comments about this.

> 
> > With RAID-1, components are mirrored, therefore the server can be
> > fully functional in the event of a single component failure.
> 
> With RAID-1 components are mirrored and therefore the server can be
> fully functional in the event of a single component failure.
> 

Fixed.

> > redundancy and negligible performance improvements,
> 
> Not quite -- multiple reads can see an effective 2x performance
> boost over a single disk.
> 
> 
> > it's most practical application 
> 
> its
> 
> > other RAID levels should be considered
> 
> This implies RAID 1 wouldn't be appropriate... s/should/might/.
> 

I have changed this to:

"Because RAID-1 provides both redundancy and performance improvements,
its most practical application is use on critical "system" partitions
such as /, /usr, /var, swap, etc., where read operations are more
frequent than write operations. For other file systems, such as /home or
/var/{application}, other RAID levels might be considered (see the
references above). If one were simply creating a generic RAID-1 volume
for a non-root file system, the cookie-cutter examples from the man page
could be followed, but because the root volume must be bootable, certain
special steps must be taken during initial setup."
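
For the record, the "cookie-cutter" non-root case from the man page
boils down to a config along these lines (a sketch only; wd0a/wd1a are
placeholder components):

  START array
  # numRow numCol numSpare
  1 2 0

  START disks
  /dev/wd0a
  /dev/wd1a

  START layout
  # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
  128 1 1 1

  START queue
  fifo 100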


> > Mirror / re-sync Disk0/wd0 back into the RAID set.
> 
> s/Mirror/Add/
> 

Fixed.
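
While I'm at it, the actual rebuild step, once the replacement disk is
labeled, is (assuming raid0 and wd0a):

  # raidctl -R /dev/wd0a raid0

Reconstruction progress can then be watched with 'raidctl -S raid0'.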


> Figure 23.5 still says "Volume w/bogus Component0"
> 

I'll fix this in the image and re-upload it.

> > loader to understand both 4.2BSD/FFS and RAID file systems.
> 
> s/file systems/partitions/
> 

Fixed.
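
For completeness, making each disk bootable on i386 is the usual
installboot run against the raw FFS-within-RAID partition -- a sketch,
assuming wd0 and an FFSv1 root:

  # installboot -v /dev/rwd0a /usr/mdec/bootxx_ffsv1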

> > know enough about the file system to be able to read the 2nd stage
> > boot blocks. 
> 
> 
> know enough about the disk partitions and file systems to be able to
> read the 2nd stage boot blocks.
> 

Fixed.

> > You would never want to have both disks on
> 
> "In an ideal world you would never want to have both disks on"
> 
> (we don't live in an ideal world :) )
> 
> > disks if a component fails irrecoverable.
> 
> "disks if a component suffers a critical hardware failure."


Fixed.

> 
> 
> > Note that wd9 is a non-existing disk. 
> 
> If this document is expected to be used for NetBSD 2.0+, then you
> might want to talk about the special disk name: "absent"
> Rather than saying "wd9", you can just use "absent" instead.
> 
> See "Initialization and Configuration" in 'man raidctl' on a 2.0_BETA
> box.

That's pretty cheeky!  <tip>-worthy, given the dependency on NetBSD 2.0+.

When the time comes, I'll re-run through the process with that and
capture the command output accordingly.

"Tip
On systems running NetBSD 2.0+, you may substitute a "bogus" component
such as /dev/wd9a for a special disk name "absent""
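
In config-file terms, the disks section would then read something like
this (a sketch; /dev/wd1a stands in for the real second component):

  START disks
  absent
  /dev/wd1a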


> > The format you choose is entirely at your discretion.
> 
> "Sort of."  What needs to be clear here is that whatever serial number
> you choose should be completely different from any other serial number
> you ever expect the system to encounter at the same time. 
> 
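
Fair enough -- I'll spell that out.  Something date-based is an easy
convention, e.g. (assuming raid0; the number itself is arbitrary):

  # raidctl -I 2004100901 raid0
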
> > This can be done using or .
> 
> "" and "" are missing :)

This is a &man macro that was lackage.  &man.pax.1; && &man.dump.1;

> 
> > effectively brining Disk0/wd0 
> 
> "bringing"

Mmmm, fixed. >:}

> 
> > the disklabels of Disk0/wd0 match Disk1/wd0.
> 
> s,Disk1/wd0,Disk1/wd1,
> 
> Oh... one other thing I usually do is save disklabel.wd0, etc. into
> /root, rather than having them disappear from /tmp after reboot.

I mentioned this in my original document, but omitted it here for
whatever reason.  I've re-added it.
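
Concretely, something along these lines (wd0/wd1 assumed):

  # disklabel wd0 > /root/disklabel.wd0
  # disklabel wd1 > /root/disklabel.wd1

so the saved labels survive a reboot instead of evaporating from /tmp.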

> 
> > Once you are certain that both disks are bootable, verify the RAID
> > parity is clean after each reboot: 
> 
> The example for this doesn't actually show if the parity is clean or
> not :)

Right.  I'm still trying to find a way to highlight specific lines of
output within a <screen> or <computeroutput>; I'll probably use
<emphasis> in combination with <command> for now.  In the meantime, I
tried to cut down on output duplication.  I've re-added this.
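
For the curious, the quick check after a reboot is (assuming raid0):

  # raidctl -p raid0

or the full listing with 'raidctl -s raid0', which should report the
parity status as clean.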

> Looks good!! :)

Good; I'm glad you approve so far.  I had to re-write all of this with
the advent of RAIDFrame in the install kernels, and with eventual
integration into sysinst looming overhead.

Better late than never.  Plus there's plenty of room for improvement
in a document that's structured to accommodate it.

A RAID-5 example and a FAQ section would probably be nice.

~lava
