tech-kern archive
Re: raidframe oddity
>>> [RAIDframe parity rebuild finished at 87%]
>> Any ideas why such a severe mismatch between estimate and reality,
>> and whether it indicates anything that could actually cause trouble?
> It smells like a 32-bit integer overflow. What "sectPerSU" value do
> you use?
16. There are under 2e+9 sectors in each member, and about 122e+6
stripes in the RAID array (or, equivalently, that many SUs per member).
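Counts that size fit in 32 bits on their own, so any overflow would
have to be in an intermediate product. Here's a minimal sketch of the
failure mode being suggested - the sector figures and the *100
progress arithmetic are my assumptions for illustration, not
RAIDframe's actual code:

#include <stdio.h>
#include <stdint.h>

int
main(void)
{
	/* Assumed figures, roughly matching the array above: just
	 * under 2e+9 sectors per member, 87% of them rewritten. */
	uint32_t total = 1952000000;	/* sectors in a member */
	uint32_t done = 1700000000;	/* sectors rewritten so far */

	/* Naive 32-bit progress: done * 100 wraps modulo 2^32
	 * long before the rewrite is anywhere near complete. */
	uint32_t pct32 = done * 100 / total;

	/* Widening before the multiply gives the true figure. */
	uint64_t pct64 = (uint64_t)done * 100 / total;

	printf("32-bit: %u%%  64-bit: %llu%%\n",
	    pct32, (unsigned long long)pct64);
	return 0;
}

That prints "32-bit: 1%  64-bit: 87%"; a wrapped multiply of that sort
could just as easily make a progress estimate run ahead of or behind
reality.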
But never mind; I ran raidctl -s on it and saw something unexpected.
It turns out one of the underlying disks threw an I/O error, causing
the RAID5 to go degraded, which presumably is what killed the parity
rebuild. The timestamp in the logs is close enough to right. Perhaps
raidctl should print a message in this case, instead of just exiting
apparently successfully?
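Something like this is the shape of check I mean, run once the rewrite
loop ends. The types and the status query below are stand-ins invented
for illustration; real code would ask the raid device through the
existing RAIDframe ioctls rather than fake it up:

#include <stdio.h>

/* Stand-in types; the real definitions live in the RAIDframe headers. */
enum comp_status { COMP_OPTIMAL, COMP_FAILED, COMP_RECONSTRUCTING };

struct raid_status {
	int ncomponents;
	enum comp_status comp[8];
};

/* Pretend status query; real code would ask the raid device itself. */
static int
get_raid_status(struct raid_status *rs)
{
	rs->ncomponents = 3;
	rs->comp[0] = COMP_OPTIMAL;
	rs->comp[1] = COMP_FAILED;	/* the disk that threw the error */
	rs->comp[2] = COMP_OPTIMAL;
	return 0;
}

int
main(void)
{
	struct raid_status rs;
	int i, failed = 0;

	/* ... parity rewrite loop runs to completion here ... */

	if (get_raid_status(&rs) == 0) {
		for (i = 0; i < rs.ncomponents; i++) {
			if (rs.comp[i] == COMP_FAILED)
				failed++;
		}
	}
	if (failed > 0) {
		fprintf(stderr, "warning: parity rewrite ended with %d "
		    "failed component(s); the set is degraded\n", failed);
		return 1;
	}
	return 0;
}

Exiting nonzero there, with one line on stderr, would also give
scripts a fighting chance of noticing.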
I'm annoyed. Those are brand new disks. And, while I've seen errors
from that hardware with good disks, the errors went away when I took it
apart and cleaned off the connectors with one of those cans of
compressed "air" - which wasn't very long ago and thus shouldn't need
doing again now.
/~\ The ASCII                             Mouse
\ / Ribbon Campaign
 X  Against HTML                mouse%rodents-montreal.org@localhost
/ \ Email!           7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B