tech-kern archive


Re: nanosecond debiting cv_timedwait



On Sat, Mar 21, 2015 at 10:34:34AM +0000, Taylor R Campbell wrote:
>    Date: Fri, 20 Mar 2015 18:37:50 +0100
>    From: Joerg Sonnenberger <joerg%britannica.bec.de@localhost>
> 
>    On Fri, Mar 20, 2015 at 01:37:59PM +0000, Taylor R Campbell wrote:
>    > Objections?
> 
>    Only thing to consider is whether we want to hardwire timespec here or
>    not switch to bintime. It makes for nicer computations in the rest of
>    the code later.
> 
> True, although since this happens only when the LWP is about to sleep,
> I'm inclined to suspect the cost of nanotime computations over bintime
> computations is negligible.  We could also add another cv_timedwaitbt
> or something.

My consideration is for the future, when the callout handling itself
uses precise times. In that case, bintime does simplify quite a few
computations, since, e.g., for differences, the overflow/underflow of
the sub-second field needs only binary ops to handle, no conditionals.
I'd expect most timeouts in drivers to be constants, so the conversion
from ms to bintime can be done at compile time.

Joerg

