Subject: Re: Clockticks lost, why ?
To: Scott Reynolds <scottr@og.org>
From: Christopher R. Bowman <crb@glue.umd.edu>
List: port-mac68k
Date: 01/28/1997 23:08:30
>On Tue, 28 Jan 1997, Christoph Ewering wrote:
>
>> Well, I don't think that this is silly.  I think it is silly to count
>> the interrupts and, when I reach 60, add a second to the system clock.
>
>That's not an accurate representation of how it works.  For a more
>in-depth explanation, I refer you to `The Design and Implementation of
>4.4BSD' sections 3.4 and 3.6.  I'll summarize, but please understand that
>it's just that:  a summary.
>
> - On each clock `tick', we increment the system time by 1000000/HZ
>   microseconds.[*]
>
> - Scheduling and other kernel tasks (e.g. timeouts), as well
>   as per-process timers (real, profiling, virtual) need resolution finer
>   than a per-second tick.
>
> - Some user programs require a monotonically increasing time of day.
>
>[*] Actually, we add a variable that is initially set to 1000000/HZ, but
>this is close enough for this part of the discussion.  See the reference
>to adjtime() in the last paragraph.
>
>> So why don't I take 60 interrupts and then look at what is in the RTC?
>> You can share process time with this interrupt, but I don't understand
>> why it should be used to calculate the time.
>
>The system time is a side effect of the other kernel activities going on.
>It's not `calculated', but rather just a counter that gets incremented by
>1000000/HZ microseconds on each tick.  In addition to the RTC providing
>only 1 second resolution, which is clearly insufficient for several
>user-level programs, your suggestion has two other problems:
>
> - Access to the RTC is expensive, and
>
> - We will see significant `jitter' in the time-of-day clock if we happen
>   to miss interrupts.
>
>The latter will manifest itself as a clock that suddenly jumps forward as
>the RTC time is used to update the system time (assuming that's even
>practical, i.e. assuming RTC access takes an insignificant amount of
>time).  The adjtime() call is mentioned in the previously cited text as a
>solution to this problem, but understand that adjtime() _depends_ on being
>able to modify the amount of time that the clock is incremented by at each
>`tick,' something that is not possible with the RTC.
>
>Hope this makes the issue a little more clear.
>
>--scott
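
For concreteness, the per-tick scheme Scott summarizes boils down to
something like the following sketch (illustrative names, not the actual
kernel source):

    #define HZ 60

    static long tick_us = 1000000 / HZ;     /* increment per tick */
    static struct { long sec; long usec; } systime;

    void
    hardclock_sketch(void)
    {
            systime.usec += tick_us;
            if (systime.usec >= 1000000) {
                    systime.usec -= 1000000;
                    systime.sec++;
            }
            /* scheduling, timeouts, profiling also hang off here */
    }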

Maybe I am just stupidly missing something, but the solution seems obvious
to me:

Maybe RTC access is expensive, maybe it isn't, but I'll bet that doing
it every N minutes or so (N in the 5 to 20 minute range) isn't an undue
burden.  Since we aren't incrementing the system time by a constant but
instead by a variable, we can change this variable.  If we increase the
variable, we in effect run the system (not the RTC) clock faster; if we
decrease this variable (but keep it positive), then we in effect run the
system clock slow.  But note that slow or fast, the system time is
monotonically increasing.  (My guess is that this is how adjtime works,
and reading sec 3.6 of the 4.4BSD book seems to bear this out.)
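
Something like this, say (again just a sketch; the names are made up):

    #define HZ      60
    #define TICK_US (1000000 / HZ)

    static long tick_us = TICK_US;  /* amount added at each tick */

    /*
     * Skew the clock by delta_us microseconds per tick.  The
     * increment is kept positive, so the system time is
     * monotonically increasing whether we run fast or slow.
     */
    void
    skew_clock(long delta_us)
    {
            long inc = TICK_US + delta_us;

            if (inc < 1)            /* never stop or run backward */
                    inc = 1;
            tick_us = inc;
    }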

So what we want to do is monkey with the variable that we add to the
system clock so that the RTC time, which is relatively stable but of
poor resolution, and the system time, which is of better resolution but
not so good stability, converge.  Kinda like a phase-locked loop.  We
could use a weighted, windowed averaging function to estimate our
expected interrupt loss in the next N minutes and adjust our system
clock increment accordingly.  So, for instance, we can calculate how
many interrupts we have lost in each of the last X N-minute periods and
compute a weighted average of these to use as our estimate of what we
expect to lose in the next N minutes.  Now it just becomes a question
of what the weights are and how big a window is necessary.
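
A sketch of such an estimator (the window size and weights here are
pulled out of thin air, just to show the shape of it):

    #define NWIN 4  /* how many past N-minute periods we remember */

    /* heavier weights on more recent periods; sums to 10 */
    static const int weight[NWIN] = { 4, 3, 2, 1 };

    /*
     * lost[0] is the number of interrupts missed in the most recent
     * N-minute period, lost[1] the period before that, and so on.
     * Returns the weighted average: our guess at how many we'll
     * lose in the next period.
     */
    long
    estimate_lost(const long lost[NWIN])
    {
            long sum = 0;
            int i, wsum = 0;

            for (i = 0; i < NWIN; i++) {
                    sum += weight[i] * lost[i];
                    wsum += weight[i];
            }
            return sum / wsum;
    }

The estimate, times 1000000/HZ microseconds per missed tick, would then
be spread across the ticks of the next period by bumping the increment
as above.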

---------
Christopher R. Bowman
crb@eng.umd.edu
My home page