NetBSD-Users archive


Re: Beating a dead horse



    Date:        Wed, 25 Nov 2015 08:10:50 -0553.75
    From:        "William A. Mahaffey III" <wam%hiwaay.net@localhost>
    Message-ID:  <5655C020.5090708%hiwaay.net@localhost>

In addition to what I said in the previous message ...

  | Hmmmm .... I thought that the RAID5 would write 1 parity byte & 4 data 
  | bytes in parallel, i.e. no '1 drive bottleneck'. AFAIUT, parity data is 
  | calculated for N data drives, 1 byte of parity data per N bytes written 
  | to the N data drives, then the N+1 bytes are written to the N+1 drives, no ?

As I understand it, that's how the mathematics works ... but (like many
things) the theory and the practice don't always cooperate quite like
that.  To see what really happens you have to imagine writing a single
disk block (because that's what raidframe gets from the filesystem layer)
in isolation.  Because of striping, you could imagine it in your model
as if you wanted to write just one byte.   You finish that, and then you
write one more byte, and finish that...

In reality it isn't done byte by byte, which would be far too inefficient,
but in stripes; the principle is the same, though.
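Purely as an illustration of that arithmetic (the byte values here are
invented, not from anything in this thread): RAID5 parity is the XOR of
the data blocks, and updating one block in isolation forces a
read-modify-write of the parity, which is where the single-block write
cost comes from.

```shell
# One byte per "drive", four data drives plus parity (values made up).
d0=$((0x5a)); d1=$((0x3c)); d2=$((0x0f)); d3=$((0xf0))
p=$(( d0 ^ d1 ^ d2 ^ d3 ))          # parity byte for the stripe
printf 'parity       = 0x%02x\n' "$p"

# Why a small write hurts: rewriting d2 alone still needs the old data
# and old parity (reads), then writes the new data and the new parity.
new_d2=$((0xaa))
new_p=$(( p ^ d2 ^ new_d2 ))        # read-modify-write of the parity
printf 'new parity   = 0x%02x\n' "$new_p"

# Reconstruction after losing a drive: XOR the survivors with parity.
r=$(( new_p ^ d0 ^ d1 ^ d3 ))       # recovers new_d2
printf 'recovered d2 = 0x%02x\n' "$r"
```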


  | 4256EE1 # raidctl -s  raid2
  | Components:
  |             /dev/wd0f: optimal
  |             /dev/wd1f: optimal
  |             /dev/wd2f: optimal
  |             /dev/wd3f: optimal
  |             /dev/wd4f: optimal
  | No spares.

The real reason I wanted to reply to this message is that last line.

wd5 is not being used as a spare.  I kind of suspected that might be the case.
(Parts of it might be used for raid0 or raid1, that's a whole different
question and not material here).

Raidframe autoconfigures the in-use components (assuming autoconfig is
enabled for the raid array, which it is for you ...)

  | Component label for /dev/wd0f:
  |     Autoconfig: Yes

(same for the other components.)   But spares are not autoconfigured.
If you want a "hot" spare (one that will automatically be used if one
of the other components fails, so you get back the reliability as soon
as possible, in case a second component also fails) rather than a "cold"
spare (one waiting to be used, but which needs human intervention to
actually make it happen - which is what you have now), then you need
to arrange to add the spare after every reboot.

There is no current standard way to make that happen (most of us tend to
be counting pennies, and have no spare drives ready at all - we wait for one
to fail, then go buy its replacement only when required...), so I'd just add

	raidctl -a /dev/wd5f raid2

in /etc/rc.local.   You might want to defer doing that, though, until after
having everything else sorted out - at the minute wd5f is spare scratch
space, used by nothing, so you could make an ffs filesystem on it to
measure the speed you can get from a single-drive filesystem.  You would
need to alter its partition type from RAID to 4.2BSD in the disklabel first
(and then put it back again after you are done testing with the filesystem
and are ready to make it a spare again.)
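In concrete terms that detour might look roughly like this (device names
taken from your raidctl output above; treat it as a sketch and check each
step yourself, since newfs destroys whatever is on the partition):

```shell
disklabel -e wd5                # edit wd5f: fstype RAID -> 4.2BSD
newfs /dev/rwd5f                # scratch ffs filesystem (destroys data)
mount /dev/wd5f /mnt
dd if=/dev/zero of=/mnt/zeros bs=64k count=16384   # crude write-speed test
umount /mnt
disklabel -e wd5                # put wd5f back to fstype RAID
raidctl -a /dev/wd5f raid2      # and re-add it as a spare
```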

Sometime or other we ought either to arrange for spares to autoconfigure
(but I suspect that would be a job for Greg, and that probably means not
anytime soon...) or at least to provide a standard rc.d script that would
turn on any configured (and unused) spares for autoconfigured raid sets,
without needing evil hacks like the one I just suggested sticking in
rc.local...

Doing the second of those is probably within my abilities, so I might take
a crack at that one.
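A rough sketch of what such an rc.d script might look like - nothing like
this exists in the tree today, and the name, the rc ordering, and the
hard-wired device are all assumptions of mine:

```shell
#!/bin/sh
#
# PROVIDE: raidspares
# REQUIRE: DISKS
#
# Hypothetical sketch only - no such script ships with NetBSD.

. /etc/rc.subr

name="raidspares"
rcvar=$name
start_cmd="${name}_start"
stop_cmd=":"

raidspares_start()
{
	# The raid set / spare pairs would really have to come from
	# rc.conf (e.g. raidspares="raid2:/dev/wd5f"); hard-wired here
	# just to show the shape of the thing.
	raidctl -a /dev/wd5f raid2
}

load_rc_config $name
run_rc_command "$1"
```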

kre


