Re: saving RAIDframe parity across a panic
Edgar Fuß writes:
> > If the system now panics, the component labels will say that the
> > parity is dirty, but since a component is missing it will be forced
> > to use whatever parity is there.
> Ah, thanks! Reassuring to know.
> Another RAIDframe question: Is there a decent way to replace a component on
> the fly that one is afraid of failing?
> Say I get recoverable errors or squeaky noises from a component. Of course, I
> can add a hot spare, fail that component and begin a reconstruction. But
> then I have a three-hour window where, if another component fails, I have
> shot myself in the foot.
Can you schedule downtime of the RAID sets for those 3 hours? If you
can't, then you're rather stuck with that 3-hour window of vulnerability.
If you can, then the best thing to do would be to unmount all active
filesystems from the RAID set, and then do the hot-spare addition and
reconstruction while nothing is writing to the set.
Let's say wd0e, wd1e, and wd2e are in your RAID set, and that wd2 is
dying. So you hot-add wd3e to the set, and start rebuilding wd2e to
wd3e. Now if wd0e fails, you could always do a 'raidctl -u' and
then 'raidctl -C' to force the configuration with just wd0e, wd1e,
and wd2e in the RAID set. Because nothing was writing to the disk
during the reconstruction, you know that all the data and parity are
still self-consistent, and so you can still trust wd2e to be in-sync
with the contents of wd0e and wd1e. If the filesystem from the RAID
set is active you can't do this, as the instant any of the data on
wd0e or wd1e changes, wd2e becomes invalid. (Yes, it's only invalid
for the stripes that changed, but RAIDframe doesn't keep track of
what's changed on a stripe-by-stripe basis...)
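To sketch the procedure concretely (assuming the set is raid0, the
mount point and config-file path are illustrative, and the components
are wd0e, wd1e, and wd2e as above):

    # Quiesce the set so nothing writes during the rebuild.
    umount /mnt/raidfs                 # hypothetical mount point

    # Hot-add the replacement and rebuild onto it.
    raidctl -a /dev/wd3e raid0         # add wd3e as a hot spare
    raidctl -F /dev/wd2e raid0         # fail wd2e, reconstruct to the spare
    raidctl -S raid0                   # check reconstruction progress

    # Emergency path: if wd0e dies mid-rebuild, fall back to the
    # old, still self-consistent set of wd0e, wd1e, and wd2e.
    raidctl -u raid0
    raidctl -C /etc/raid0.conf raid0   # force configuration from the config file

Check raidctl(8) on your release for the exact flags; the point is
that -C forces the configuration even though the component labels
disagree, which is only safe because nothing wrote to the set.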
Of course, you do have backups of all the data, right? ;) (A
verified set or two of backups is still the best guarantee against
data loss in these situations...)