Port-vax archive


Re: PSA: Clock drift and pkgin



On Sun, Dec 24, 2023 at 11:45:17AM +0100, Johnny Billquist wrote:
> On 2023-12-24 09:22, Jan-Benedict Glaw wrote:
> > On Sat, 2023-12-23 12:49:16 +0100, Jan-Benedict Glaw <jbglaw%lug-owl.de@localhost> wrote:
> > > After swapping terminators and cables, the conclusion is that one of
> > > the plugs on the system's SCSI cable doesn't make proper contact with
> > > the cable. Using a different plug on the same cable (which isn't at
> > > the "perfect" location, though) makes it work.
> > > 
> > > So I'm now all set with a 4000/60 :)
> > 
> > With the HDD image I used on the /90, that 4000/60 has now been running
> > for some hours. No network connection, no ntpd. The image is a few months
> > old, but for finding a misbehaving clock or lost interrupts, that
> > should be good enough.
> > 
> >    While running idle (~3 h), I didn't notice any loss of time. Maybe a
> > second, but no more. (And that's over 9k6 serial...)
> > 
> >    Then I let it call gcc on a simple C file in a loop for another six
> > to seven hours, and now I seem to have an offset of some 1.5 to 2
> > seconds. No further (negative runtime) messages in `dmesg`. And a
> > total of 2 seconds over a timespan of 9 h would be totally fine for ntpd.
> > 
> >    I'll now run a fresh install from a newly built install ISO (which
> > already contains the recent page invalidation patch) and give it another
> > try. But I wonder why that box keeps reasonable time. Maybe there's
> > actually an issue with the code behind adjtime()?
> 
> Interesting thought. There could definitely be something in there. I have
> ntp active on my systems. Maybe I should try running without it and see if
> time is more stable then. Thanks for that idea...
> Has anyone else observed whether having ntp active or not correlates with
> how well time is kept?

I also agree. Yesterday I went back to do some more testing on a
MicroVAX II, and things ran rather smoothly for about 8 hours until
I found this. Could this have been a step adjustment that went off
the rails?

ntp_gettime() returns code 0 (OK)
  time e931e440.4c447c30  Sat, Dec 23 2023 17:57:04.297, (.297920551),
  maximum error 167084 us, estimated error 8326 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset -9317.210 us, frequency -8.800 ppm, interval 1 s,
  maximum error 167584 us, estimated error 8326 us,
  status 0x2001 (PLL,NANO),
  time constant 7, precision 0.001 us, tolerance 496 ppm,

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 2.netbsd.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   7.813
     [...]            [...]       2 u   82  128  377   88.828   -8.559   7.813
     [...]            [...]       3 u  131  128  377   49.321   -7.954   7.813
     [...]            [...]       2 u   80  128  377   89.987  -11.474   7.813
     [...]            [...]       2 u  126  128  377  109.728   -7.052   7.813
     [...]            [...]       3 u   57  128  377   69.019   -8.512   7.813
     [...]            [...]       2 u   48  128  377  109.300   -7.135   7.813
     [...]            [...]       3 u  104  128  377   38.836  -11.113   7.813

ntp_gettime() returns code 0 (OK)
  time e931f64b.494dd724  Sat, Dec 23 2023 19:14:03.286, (.286344194),
  maximum error 1525218 us, estimated error 7813 us, TAI offset 0
ntp_adjtime() returns code 0 (OK)
  modes 0x0 (),
  offset 0.000 us, frequency -8.886 ppm, interval 1 s,
  maximum error 1525718 us, estimated error 7813 us,
  status 0x2001 (PLL,NANO),
  time constant 8, precision 0.001 us, tolerance 496 ppm,

     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 2.netbsd.pool.n .POOL.          16 p    -   64    0    0.000   +0.000   7.813
     [...]            [...]       2 u   89  256   37   78.555  +583.73   7.813
     [...]            [...]       2 u  155  256   17   49.535  +583.77  23.630
     [...]            [...]       2 u   88  256   37   79.084  +584.67   7.813
     [...]            [...]       2 u  102  256   37   99.090  +585.49   7.813
     [...]            [...]       3 u   92  256   37   71.547  +587.44   7.813
     [...]            [...]       2 u   27  256   37  110.689  +584.64   7.813
     [...]            [...]       3 u  103  256   37   38.874  +588.29   7.813
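
Between the two snapshots, the peer offsets jump from around -8 ms to
roughly +584 ms while reach drops from 377 to 37, which does look
consistent with the local clock having been stepped (or having jumped)
by a bit more than half a second in between. For watching the kernel
PLL directly rather than through ntptime(8), here is a minimal
read-only query sketch; with modes left at zero, ntp_adjtime(2)
changes nothing, and the return value is the clock state (0 == TIME_OK):

#include <sys/timex.h>
#include <stdio.h>

int
main(void)
{
	struct timex tx = { .modes = 0 };	/* query only */
	int state = ntp_adjtime(&tx);

	if (state == -1) {
		perror("ntp_adjtime");
		return 1;
	}
	/* offset is in ns if STA_NANO is set in status, else in us;
	 * freq is ppm with a 16-bit fractional part. */
	printf("state %d, offset %ld, freq %.3f ppm, "
	    "maxerror %ld us, status 0x%x\n",
	    state, tx.offset, (double)tx.freq / 65536.0,
	    tx.maxerror, (unsigned)tx.status);
	return 0;
}

Running that in a loop (say, once a minute) alongside the load test
might show whether the offset walks away smoothly or jumps all at once.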


