tech-kern archive


Re: patch for raidframe and non 512 byte sector devices

        Hello.  Following up on my earlier post, see below.

On Nov 8,  4:38pm, Brian Buhrow wrote:
}       Hello.  Below is a patch which implements this idea.  I've tested it
} on systems with raids configured on raw disks, where autoconfigure didn't
} work before this patch, on systems with existing raid sets inside BSD
} disklabels, and on systems without any raid configured at all.  I have not
} yet booted on a system with raid components configured inside gpt wedges.
}       All works as expected, except that there is a side effect on the
} systems with raid components configured on raw disks.  After the raid set
} is autoconfigured, opendisk() fails with EBUSY, which is expected.  This
} doesn't seem to have any bad side effects, except that it creates a lot of
} noise in the dmesg output.  Also, one thought I have is that in the event of a
} component failure, it might not be possible to re-open the disk to replace
} it without rebooting.  This depends, I guess, on whether raidframe closes a
} device when it fails, and whether the system will notice that the disk has
} been closed and can be re-opened.
}       Do you have any thoughts about this patch, and ways to mitigate the
} noise it generates?  One thought I had was that if you could tell the
} difference between a real label, i.e. one that came from the disk, versus
} one that was faked up by the system when you asked for disk parameters, you
} could fail the configuration test in the first case, and use the faked up
} 'a' partition in the second case.
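        The test suggested in that last quoted paragraph could be sketched
roughly as below.  This is a hypothetical heuristic using simplified stand-in
types (MOCK_*, struct mock_part), not NetBSD's actual struct disklabel or any
existing kernel check; the assumption it leans on is that a label faked up by
the kernel normally populates only the raw partition.

```c
/*
 * Self-contained sketch: decide whether a disklabel looks like one the
 * kernel faked up when asked for disk parameters (only the raw partition
 * populated), or a real label that came from the disk.  All names here
 * are illustrative stand-ins, not <sys/disklabel.h>.
 */
#define MOCK_NPART	16	/* stands in for MAXPARTITIONS */
#define MOCK_RAW_PART	3	/* raw partition index on most ports */
#define MOCK_FS_UNUSED	0
#define MOCK_FS_RAID	22	/* illustrative; the real value lives in <sys/disklabel.h> */

struct mock_part {
	unsigned p_size;	/* partition size in sectors; 0 == absent */
	int	 p_fstype;
};

/*
 * Return nonzero if the label looks fictitious: nothing but the raw
 * partition is defined, so it was probably synthesized by the kernel
 * rather than read off the disk.
 */
static int
label_looks_fictitious(const struct mock_part *pp, int npart)
{
	int i;

	for (i = 0; i < npart; i++) {
		if (i == MOCK_RAW_PART)
			continue;
		if (pp[i].p_size != 0 && pp[i].p_fstype != MOCK_FS_UNUSED)
			return 0;	/* looks like a real on-disk label */
	}
	return 1;
}
```

With a check along these lines, autoconfiguration could fail the test when a
genuine label is present and fall back to the faked-up 'a' partition only when
the label is fictitious, as the quoted message proposes.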

        I've tested the failure case by failing a drive in my test raid set
and initiating a reconstruction.  All works as it should.  I'm not sure
who's calling opendisk() after the raid set gets configured, but it seems
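        As for quieting the dmesg noise: since the post-autoconfigure EBUSY
from opendisk() is expected, whatever is doing those opens could classify that
errno as benign instead of logging it.  A minimal sketch, assuming a
hypothetical helper (open_error_is_expected is not an existing RAIDframe or
libutil function):

```c
#include <errno.h>

/*
 * Hypothetical helper: once a raid set has autoconfigured on the raw
 * disk, later opens of a component fail with EBUSY.  Treat that one
 * errno as expected so it can be logged quietly (or not at all) rather
 * than as a configuration error.
 */
static int
open_error_is_expected(int err)
{
	return err == EBUSY;
}
```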

