tech-kern archive
Re: nanosecond debiting cv_timedwait
On Mon, Mar 23, 2015 at 07:36:52AM +0000, Taylor R Campbell wrote:
> Date: Sat, 21 Mar 2015 16:20:32 +0100
> From: Joerg Sonnenberger <joerg%britannica.bec.de@localhost>
>
> My consideration is for the future, when the callout handling itself
> uses precise times. In that case, bintime simplifies quite a few
> computations: for differences, for example, the overflow/underflow of
> the sub-second field needs only binary ops to handle, no conditionals.
> I'd expect most timeouts in drivers to be constants, so the conversion
> of ms to bintime can be done at compile time.
>
> Is it easier for high-resolution timer hardware drivers to deal in
> nanoseconds or bintime? I'm not familiar enough with the hardware to
> say.
The timer hardware is pretty much irrelevant; as you said, it seldom runs
at a nice frequency. The difference is for the kernel, where all the
arithmetic is just plain shift, add, and bitwise AND (for overflow).
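To make that concrete, here is a minimal sketch (not from the thread; the
struct layout matches <sys/time.h>, and the branchless borrow is just one
way to write it; the real bintime_sub() in <sys/time.h> uses an explicit
conditional for the same carry):

	#include <stdint.h>
	#include <time.h>

	/* Same layout as struct bintime in <sys/time.h>:
	 * whole seconds plus a fraction in units of 2^-64 s. */
	struct bintime {
		time_t   sec;
		uint64_t frac;
	};

	/*
	 * Sketch: bt -= bt2.  The fractional parts subtract modulo
	 * 2^64, which is exactly the right wraparound; the borrow out
	 * of the fraction is a single compare folded into the seconds
	 * field, so no branch is required.
	 */
	static inline void
	bintime_sub_sketch(struct bintime *bt, const struct bintime *bt2)
	{
		uint64_t ofrac = bt->frac;

		bt->frac -= bt2->frac;
		bt->sec  -= bt2->sec + (bt->frac > ofrac); /* borrow */
	}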
> In that case, we'd better also add ns2bintime, us2bintime, ms2bintime:
>
> struct bintime timeout = us2bintime(100);
> int error;
>
> while (!condition) {
>         error = cv_bintimedwait(&sc->sc_cv, &sc->sc_lock, &timeout);
>         if (error)
>                 goto fail;
> }
>
> I'm OK with this, or with cv_timedwaitbt, if nobody else objects.
We have timespec and timeval conversions; direct ms/us/ns conversions
would be easy to add.
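For illustration, a sketch of what such a conversion could look like
(hypothetical helper, not an existing API; the constant (1 << 63) / 500
is just 2^64 / 1000 written so the intermediate value fits in 64 bits):

	#include <stdint.h>
	#include <time.h>

	struct bintime {		/* layout as in <sys/time.h> */
		time_t   sec;
		uint64_t frac;
	};

	/*
	 * Hypothetical ms2bintime(): split off whole seconds, then
	 * scale the leftover milliseconds by 2^64/1000 to get the
	 * 2^-64 fractional part.  The division truncates, so the
	 * result is a hair low, which is negligible at this scale.
	 */
	static inline struct bintime
	ms2bintime(uint64_t ms)
	{
		struct bintime bt;

		bt.sec  = (time_t)(ms / 1000);
		bt.frac = (ms % 1000) * (((uint64_t)1 << 63) / 500);
		return bt;
	}

With a constant argument this folds to a compile-time constant, matching
the expectation above that most driver timeouts are constants.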
Joerg