tech-kern archive
Subject: Re: nanosecond debiting cv_timedwait
Date: Sat, 21 Mar 2015 16:20:32 +0100
From: Joerg Sonnenberger <joerg%britannica.bec.de@localhost>
My consideration is for the future, when the callout handling itself
uses precise times. In that case, bintime does simplify quite a few
computations: for differences, for example, the overflow/underflow of
the sub-second field needs only binary ops to handle, no conditionals.
I'd expect most timeouts in drivers to be constants, so the conversion
of ms to bintime can be done at compile time.
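To make the "no conditionals" point concrete, here is a minimal
sketch of a bintime difference (illustrative names, not the kernel's
exact code; struct bintime itself follows the <sys/time.h> layout of
whole seconds plus a 64-bit binary fraction):

#include <sys/types.h>
#include <stdint.h>

struct bintime_sketch {
	time_t		sec;	/* whole seconds */
	uint64_t	frac;	/* fraction of a second, units of 2^-64 s */
};

static inline void
bintime_sub_sketch(struct bintime_sketch *bt,
    const struct bintime_sketch *bt2)
{
	uint64_t borrow = (bt->frac < bt2->frac);  /* 0 or 1, branch-free */

	bt->frac -= bt2->frac;	/* wraps mod 2^64, i.e. exactly one second */
	bt->sec -= bt2->sec + borrow;
}

Contrast with timespec, where underflow takes a conditional
re-normalization by the decimal constant 10^9:

	ts->tv_nsec -= ts2->tv_nsec;
	if (ts->tv_nsec < 0) {
		ts->tv_nsec += 1000000000;
		ts->tv_sec--;
	}
	ts->tv_sec -= ts2->tv_sec;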
Is it easier for high-resolution timer hardware drivers to deal in
nanoseconds or bintime? I'm not familiar enough with the hardware to
say.
Judging by a cursory glance at a few struct timecounters, including
the only one I wrote (TI ARM dmtimer), it looks like they usually
don't run at frequencies that divide 2^64 Hz or 10^9 Hz evenly, so
perhaps the reduction in bookkeeping arithmetic is still a win.
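That fits how the timecounter layer is already structured: tc_windup()
precomputes a scale of roughly 2^64 / frequency once, so converting a
counter delta to bintime is a single 64x64 multiply no matter how odd
the frequency is. A rough sketch with illustrative names (the real
logic lives in kern_tc.c, and the whole-seconds overflow is elided):

#include <stdint.h>

/* Precompute once per clock: approximately 2^64 / freq_hz. */
static uint64_t
tc_scale_sketch(uint64_t freq_hz)
{
	return UINT64_MAX / freq_hz;	/* the kernel refines this further */
}

/* Per read: one multiply, no division, no decimal carries.  Assumes
 * delta_ticks spans less than one second, which the timecounter code
 * guarantees between windups. */
static uint64_t
delta_to_frac_sketch(uint64_t scale, uint64_t delta_ticks)
{
	return scale * delta_ticks;	/* bintime fraction, 2^-64 s units */
}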
In that case, we'd better also add ns2bintime, us2bintime, ms2bintime:
struct bintime timeout = us2bintime(100);
int error;

while (!condition) {
	error = cv_bintimedwait(&sc->sc_cv, &sc->sc_lock, &timeout);
	if (error)
		goto fail;
}
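Those converters could use the same trick as FreeBSD's
timespec2bintime(): multiply the sub-second remainder by
floor(2^64 / unit). A sketch, assuming 64-bit count arguments and the
struct bintime from <sys/time.h>:

#include <sys/time.h>
#include <stdint.h>

static inline struct bintime
ns2bintime(uint64_t ns)
{
	struct bintime bt;

	bt.sec = ns / 1000000000;
	bt.frac = (ns % 1000000000) * 18446744073ULL;	/* floor(2^64/10^9) */
	return bt;
}

static inline struct bintime
us2bintime(uint64_t us)
{
	struct bintime bt;

	bt.sec = us / 1000000;
	bt.frac = (us % 1000000) * 18446744073709ULL;	/* floor(2^64/10^6) */
	return bt;
}

static inline struct bintime
ms2bintime(uint64_t ms)
{
	struct bintime bt;

	bt.sec = ms / 1000;
	bt.frac = (ms % 1000) * 18446744073709551ULL;	/* floor(2^64/10^3) */
	return bt;
}

With constant arguments these fold away entirely at compile time,
which is the compile-time ms-to-bintime conversion mentioned above.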
I'm OK with this, or with cv_timedwaitbt, if nobody else objects.