Subject: Re: todr changes to improve clock accuracy across sleeps & reboots
To: Chapman Flack <nblists@anastigmatix.net>
From: Garrett D'Amore <garrett_damore@tadpole.com>
List: tech-kern
Date: 09/08/2006 14:56:01
Chapman Flack wrote:
> Perry E. Metzger wrote:
>> Garrett D'Amore <garrett_damore@tadpole.com> writes:
>>> Yes, but again, you don't want to do that if the rtc actually has
>>> subsecond precision.  Several do.
>
> Just in the interest of generality, oughtn't we be thinking about
> the test as "rtc has granularity >> sec/hz"? If it's subsecond but
> still coarser than hz, we can still win by sleeping or scheduling
> a callout for the transition. (And if finer than hz, we could still
> spin for the transition.)
>
>> that we have a few such, perhaps a device property should tell us
>> whether we have only 1 second precision or not, and the test in
>> todr_gettime and todr_settime could be based on that...
>
> So more generally, maybe the device property should tell us /what/
> the precision /is/. Based on that, the code can make a variety of
> reasonable choices.
>
> -Chap

Geez.  Who cares if you have some initial clock setup that is off by 100
milliseconds?

I would be surprised if any other OS even does the 1-second granularity test.

Here's my take on this:

    1) for clocks that use the clock_ymdhms structure (which is most of
them), we'll watch for a one-second granularity change, i.e. the
seconds-edge transition (rough sketch below).  That's easy to do.

    2) for clocks that have better resolution than 1 second, they should
use the struct timeval (soon to be struct timespec, I hope, though I've
not seen much discussion around _that_) API, and set the clock to
whatever resolution they can.  If they want to poll the clock to get
better resolution, then that can be handled in the clock driver. 
Forcing the MI framework to carry this baggage is getting silly.
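Just to make the seconds-edge idea in (1) concrete, here's a rough,
userland-flavored sketch.  rtc_read_seconds() is a made-up stand-in for
whatever register reads a real driver would do; the point is only the
spin-until-the-seconds-field-ticks technique, not actual todr code.

	#include <time.h>

	/* Hypothetical stand-in for the driver's read of the RTC
	 * seconds register (0-59); a real driver reads its own chip. */
	extern int rtc_read_seconds(void);

	/*
	 * Spin until the RTC's seconds field ticks over, and record a
	 * fine-grained local timestamp at that instant.  Right after
	 * this returns, the RTC's whole-second reading lines up with
	 * the captured timestamp to within one polling interval, even
	 * though the chip itself only reports whole seconds.
	 */
	static void
	rtc_wait_for_seconds_edge(struct timespec *local_at_edge)
	{
		int prev = rtc_read_seconds();

		while (rtc_read_seconds() == prev)
			continue;	/* wait for the register to change */

		clock_gettime(CLOCK_MONOTONIC, local_at_edge);
	}

A driver (or the MI code, for the ymdhms case) would read the full
date/time immediately after the edge and apply the offset from the
captured timestamp when it finally sets or compares the system clock.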

Finally, apart from the theoretical case, has _anyone_ ever actually
been bitten by this, or complained about it?   I.e. is this even a
real-life problem, or are we just tilting at windmills?

I mean seriously, if your clock drifts slightly while offline, do you
care?  And when you get back on-line, don't you want to resync your
clock anyway (with NTP)?

This could be a case study in when "good enough" is good enough, and
"perfect" is just not worth reaching for. :-)

-- 
Garrett D'Amore, Principal Software Engineer
Tadpole Computer / Computing Technologies Division,
General Dynamics C4 Systems
http://www.tadpolecomputer.com/
Phone: 951 325-2134  Fax: 951 325-2191