Port-vax archive


Re: PSA: Clock drift and pkgin



> How is the timer socket working?  Is it defined to do exactly a
> number of events per second on its own, or do you need to retrigger
> things?

The former.  Setting a timer socket uses a struct itimerval, same as
setitimer().  They are defined to - and I think they do - agree with
ITIMER_REAL timers.
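
For reference, that struct itimerval is the standard one from
<sys/time.h>; the 100Hz numbers below are just an example, not anything
specific to timer sockets:

struct itimerval {
	struct timeval it_interval;	/* reload period - 10000us for 100Hz */
	struct timeval it_value;	/* delay until the first expiration */
};

so a repeating 100Hz timer gets { 0, 10000 } in both fields.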

> Also, what does it do when time gets tweaked by something like ntp?

It is supposed to work just like ITIMER_REAL timers.

ntp.drift on the relevant hardware is 3.320.  I'm not sure what the
units of that are; PPM, I would guess.  (Another machine here has
165.505 in its drift file, which is way too large for parts per
thousand, the major alternative, to be plausible.)
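
(For scale, if those are PPM, 3.320 PPM over the roughly two minutes of
a test run below works out to about 0.4 ms, and even 165.505 PPM is only
about 20 ms - well under the errors measured below.)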

I have since done a bit more testing.  I wrote a tiny program that just
requests 100Hz signals with setitimer(), recording gettimeofday() for
each one and dumping them out at the end.  Here's the source,
compressed with bzip2 and then base64 encoded:

QlpoOTFBWSZTWdjtYB4AAFHfgEAQW33hDyKnHA6/79/qQAH9V1gAaICTNTQGTQeo
AAGQ0yaBgNAAANAAAAAAO96QoZpGJ6CekDTE0aMRoZACRIJPUyZNJpPEGkaAaMmh
iG1MZLM28oDAauYpMDk3KHSbglRTiqCaXs6+7y6O7sLCcifrCb1TUhCudjU6RRdl
ojT8Bk1gSkKOlTAsKXo4GHBhokraLMdJxkpbg9umQdvGnbUbOjK85fqKDGiBIvHC
FPyvdIo4lX60Vmg7ocIcnee1I/2f3DY9QCyIgJ3bRpMwrQK1m3gyUrbpp1HDYQdh
8nQieBLx6FIUHtjCzIEdhePNOan3VLUGLiKevhtt2m9S2XhggDpuUcK5hxT6CJQR
8APE6+ygfZh1KVyoaMKrXiNMss0aicycRsovO+qv0qkpLB0yCY65Hrdcy5VCnhNa
qaV3Y57a4phDWbPFS/leUtz9GOOJryWJXYI4TQY7lvqeV8LFX8LGYOmbRqwAvxNi
su+5cPx+Va8oZWS3l/rnE+pqllEia6SSQ3TiEGKz2d/EtoMufswhEw1efNcN71t0
DdCtBfL3EG2vUqTRWKm+0EA8BnOofMGmYXo9ddTBWLTsiLnTXcjwJKMwjnfFwzab
zoQw5KpzZM2a6TQF2SarJadDIQ+lGdpm4oNt/8XckU4UJDY7WAeA
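
For anyone who doesn't care to decode that, here is a rough sketch of
what the program does (not the decoded source; the sample count and
output details are illustrative): install a SIGALRM handler, request a
100Hz ITIMER_REAL timer with setitimer(), stash a gettimeofday() sample
on each signal, and print them all at the end.

#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <sys/time.h>

#define NSAMP 6000	/* about a minute's worth at 100Hz - illustrative */

static struct timeval samp[NSAMP];
static volatile sig_atomic_t nsamp = 0;

static void onalrm(int sig)
{
 (void)sig;
 if (nsamp < NSAMP) {
  gettimeofday(&samp[nsamp],0);
  nsamp ++;
 }
}

int main(void)
{
 struct sigaction sa;
 struct itimerval itv;
 int i;

 sa.sa_handler = onalrm;
 sigemptyset(&sa.sa_mask);
 sa.sa_flags = 0;
 sigaction(SIGALRM,&sa,0);
 itv.it_interval.tv_sec = 0;
 itv.it_interval.tv_usec = 10000;	/* 10ms = 100Hz */
 itv.it_value = itv.it_interval;
 setitimer(ITIMER_REAL,&itv,0);
 while (nsamp < NSAMP) pause();	/* wait for the samples to accumulate */
 for (i = 0; i < nsamp; i++)
  printf("%lu.%06lu\n",(unsigned long)samp[i].tv_sec,
	 (unsigned long)samp[i].tv_usec);
 return(0);
}

Run it for a minute or two and compare the first and last timestamps
against what 100Hz would predict.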

On the same system I did the tests on, the first and last times
recorded (which should differ by 59.99) are

1702565520.073835
1702565640.879493

The measured difference is 120.805658.  In view of the 50Hz-signal bug,
I would expect 119.98, so it is running slow by a factor of
120.805658/119.98, or about 0.688%.

On the "another machine here" I mentioned above, the one with 165.505
in its NTP drift file (and also running my mutant 5.2), the same
program produces first and last samples

1702565946.913334
1702566067.032438

for a difference of 120.119104, or about 0.1159% slow - significantly
more accurate, even though its
NTP drift figure is larger by a factor of nearly 50.

I then copied the same program to a 9.1 machine at work.  It has the
50Hz-signal bug; its time difference, which should be 119.98 in view of
the 50Hz bug, is 120.071271, or about 0.076% slow.

I then tried a different work 9.1 machine, one whose kernel has HZ set
to 8000.  There, the result is _horrible_.  The delay, which should be
59.99, is 76.674529; stripping off the microseconds from the timestamps
and running through uniq -c, I get

< test-alrm.out sed -e 's/[.].*//' | uniq -c | sed -e 's/ [0-9]*$//' | sort -n | uniq -c
   1   21
   1   38
  17   77
  39   78
  19   79
   1   89

To help understand that command: the lines the test program prints look like

1702566154.796588
1702566154.806705
1702566154.815833

just seconds and microseconds, straight from gettimeofday.  So the
pipeline counts how many samples landed in each whole second and then
how many whole seconds saw each per-second count; most seconds got 77-79
samples instead of the 100 the requested rate calls for.

I'm going to build a similar program for timer sockets, but I'm at work
at the moment, so that will have to wait.

					Mouse

