tech-kern archive
raidframe rebuild question
I've got a bit of a practical issue with raidframe.
The machine is running NetBSD 4.0.1. The RAID devices are:
raid0: L5 /dev/raid5e /dev/raid6e /dev/raid7e /dev/raid4e /dev/raid9e
/dev/raid10e /dev/raid11e[failed] /dev/raid12e /dev/raid8e
raid1: L1 /dev/raid2e /dev/raid3e
raid2: L1 /dev/ld0e
raid3: L1 /dev/ld5e /dev/wd3e
raid4: L1 /dev/ld8e
raid5: L1 /dev/ld2e
raid6: L1 /dev/ld4e
raid7: L1 /dev/ld3e
raid8: L1 /dev/wd4e
raid9: L1 /dev/ld1e
raid10: L1 /dev/ld7e
raid11: L1 /dev/ld6e
raid12: L1 /dev/wd2e
Just recently, /dev/ld6e decided it didn't like us any longer.
(Actually, I think it is probably the twe it's connected to, not ld6
itself.) I manually failed /dev/wd3e in raid3 and added it as a spare
to raid11, but now I find myself stymied as to how to get it to
rebuild. raid11 is of course failed in raid0; I could raidctl -R it,
but that won't help until raid11 is back in operational shape. I can't
reconstruct raid11, because it has no operational members. I can't
unconfigure it (preparatory to reconfiguring it), because it's held
open by raid0.
What's the right way to do this? Am I stuck needing a reboot?
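For reference, here is roughly what the steps above look like in raidctl terms, including the two operations that are currently stuck (a sketch from memory, with the same device names as above; see raidctl(8) for the exact syntax):

```shell
# Manually fail wd3e out of raid3, freeing it for use elsewhere:
raidctl -f /dev/wd3e raid3

# Add it to raid11 as a hot spare:
raidctl -a /dev/wd3e raid11

# Stuck step 1: reconstructing raid11 onto the spare fails,
# since raid11 has no surviving components to rebuild from:
raidctl -F /dev/ld6e raid11

# Stuck step 2: rebuilding raid11's slot in raid0 in place is
# pointless until raid11 itself is operational again:
raidctl -R /dev/raid11e raid0
```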
/~\ The ASCII Mouse
\ / Ribbon Campaign
X Against HTML mouse%rodents-montreal.org@localhost
/ \ Email! 7D C8 61 52 5D E7 2D 39 4E F1 31 3E E8 B3 27 4B