Subject: Re: settimeofday() versus interval tim{ers,ing}
To: der Mouse <mouse@holo.rodents.montreal.qc.ca>
From: Dennis Ferguson <dennis@jnx.com>
List: tech-kern
Date: 10/04/1996 10:00:22
> How much does adjtime() slew the clock by, default? .1%? Then if you
> sleep for a thousand ticks - ten seconds on most machines - you can be
> as much as a whole tick out thanks to adjtime. I know _I_'d certainly
> be annoyed if I requested a timer tick once a second and then,
> according to gettimeofday(), actually got it every 1.001 second,
> thereby slowly drifting with respect to second boundaries.
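(For reference, that arithmetic works out: at a 0.1% slew rate, ten
seconds of slewing adds 0.001 * 10 s = 10 ms, exactly one 10 ms tick
on a 100 Hz machine, i.e. the tick's worth of drift described above.)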
But who was calling adjtime() to adjust your clock in this example, and
why were they doing this? If it was xntpd (or even timed), the only
reason they would have been calling adjtime() in this way is that your
system clock frequency is 0.1% too slow and they're correcting it. That
is, without adjtime() you would be getting interrupts every 0.999
seconds and slowly drifting with respect to second boundaries; with
adjtime()-adjusted time you're more likely to get interrupts every
1.000 seconds, and far more likely to get really excellent long-term
stability (if you leave the timer running for a day you'll almost
certainly have drifted a second or so if you rely on your system's
crystal; if you use the time ntp is disciplining with adjtime() you'll
still be hitting the intervals to within milliseconds).
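To make the frequency-correction case concrete, here is a rough sketch
of the kind of thing such a daemon does. This is not how xntpd is
actually structured, and the 1000 ppm error and 10-second cycle are
just the hypothetical numbers from above:

    #include <sys/time.h>
    #include <unistd.h>

    int
    main(void)
    {
        struct timeval delta;

        /*
         * Hypothetical: the crystal is assumed to run 0.1% (1000 ppm)
         * slow, so a 10-second interval loses about 10 ms.  Feed that
         * 10 ms back in each time around; adjtime() slews the clock
         * smoothly toward the correction instead of stepping it.
         */
        for (;;) {
            delta.tv_sec = 0;
            delta.tv_usec = 10 * 1000;
            if (adjtime(&delta, NULL) < 0)
                return 1;        /* needs superuser privilege */
            sleep(10);
        }
    }

A real NTP daemon measures the offset and frequency error itself and
recomputes the correction each cycle rather than using a fixed
constant, but the effect on the system clock is the same kind of slew.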
If your system's clock were actually accurate there'd be no reason for
anyone to call adjtime() (well, they could use adjtime() to do a one-time
phase correction to get the time-of-day right, I guess, but this error
would be non-cumulative). So if someone is calling adjtime() repeatedly,
the only reason I can think of for them to be doing it is that your
system's clock is off and they're using adjtime() to correct it. I can't
think of a real situation where adjtime()-adjusted time would be inferior
over the long term to what your system clock provides.
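For the one-time phase correction case, a minimal sketch looks like the
following; the 250 ms offset and the function name are made up for
illustration:

    #include <sys/time.h>

    /*
     * Slew out a fixed, already-known offset instead of stepping the
     * clock with settimeofday().  Once the offset has been slewed out
     * the correction is finished, so the error is non-cumulative.
     */
    int
    fix_phase_error(void)
    {
        struct timeval delta;

        delta.tv_sec = 0;
        delta.tv_usec = 250 * 1000;    /* clock assumed 250 ms behind */
        return adjtime(&delta, NULL);
    }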
Dennis Ferguson