Port-sparc64 archive


Re: G'damn raidframe... (was: RaidFrame with unequal size replacements)




On Aug 1, 2009, at 04:39, Manuel Bouyer wrote:
# raidctl -v -a /dev/wd0a raid0
# raidctl -F component0 raid0

 After many hours, this completed.  Then, I followed the instruction:
And finally, reboot the machine one last time before proceeding.
This is required to migrate Disk0/wd0 from status "used_spare" as
"Component0" to state "optimal". Refer to notes in the next section
regarding verification of clean parity after each reboot.

 Unfortunately, when the machine rebooted, I now have a failed
"component1", and no mention of the sd1a that I rebuilt onto as a
spare as per the documentation.

 Anyone have any idea (1) What I did wrong,

I think you forgot to
raidctl -A yes raid0

I don't think that was the problem. At least, the raid array did configure itself, because I booted off of it.

Do I need to execute raidctl -A yes (or in my case, -A root) *after* the reconstruction to the spare completes, so that it effectively writes out new configuration information to the members of the array? I suppose that makes sense, but it's not in the RAIDframe guide that I was following.
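(Concretely, I'm picturing something like the following once the rebuild finishes, with raid0 and -A root matching my setup; I'm not certain the -A step is actually needed at that point:)

# raidctl -A root raid0
# raidctl -v -s raid0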

or (2) what I need to do
to reconfigure the raid array so that my drive is the secondary,
rather than the imaginary drive I used when constructing the raid?

You can try fixing your /etc/raid0.conf with the right device names.

Right. I've done that numerous times. I've booted off of cdrom after "raidctl -A no"'ing the raid. Then, I can raidctl -c /tmp/raid0.conf and have the device names correct, and the second component listed as failed. However, if I reboot at that point, the component name gets lost again. This is consistent with comments made in the documentation. If the drive isn't up, presumably fully up as part of the raid, it will lose its identity on reboot.
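(For what it's worth, the file I've been feeding to raidctl -c is the usual RAID-1 layout, roughly as below; sd0a/sd1a here stand in for my actual partitions, so I don't think the file format itself is the problem:)

START array
# numRow numCol numSpare
1 2 0

START disks
/dev/sd0a
/dev/sd1a

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 1

START queue
fifo 100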

So, I'm once again starting at the beginning, having added a second disk as a spare, and invoking a "-F component1" to rebuild onto it.
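(Roughly, the sequence this time around, again with sd1a as the freshly added second disk, looks like:)

# raidctl -a /dev/sd1a raid0
# raidctl -F component1 raid0
# raidctl -S raid0

The last command is just to keep an eye on the reconstruction progress.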

When this completes, should I be able to do as I did before, and just reboot? Or, does there need to be a raidctl -A in there somewhere? (It's already set to -A auto, but of course that was only when there was one drive, so. (*shrug*))

How does one promote a spare back into the main array? The documentation/guide shows the state as:
# raidctl -v -s raid0
Components:
        component0: spared
        /dev/wd1a: optimal
Spares:
        /dev/wd0a: used_spare
[...snip...]

And it states that I should be able to reboot, and it will be optimal/optimal. But that certainly wasn't what I saw the first time...
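(For comparison, what I'd expect after that reboot, if the spare really does get promoted, is something along these lines, with my real device names in place of wd0a/wd1a, and a clean parity check to go with it:)

# raidctl -v -s raid0
Components:
        /dev/wd0a: optimal
        /dev/wd1a: optimal
No spares.
[...snip...]
# raidctl -p raid0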

  Thanks...

                          - Chris


