Subject: Re: NTP pulse-per-second timestamp diff
To: Jukka Marin <jmarin@pyy.jmp.fi>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 03/27/1998 00:25:40
Jukka Marin replies:

>Doesn't it cause problems at high bit rates?  Now that the serial ports
>are working well again (on all ports, I hope), I don't want to see a new
>bunch of problems on all modem lines ;-)

For timekeeping applications, the duty cycle is very low: a couple of
dozen chars once per second from GPS, or twelve chars ten times a
second from CHU.  IIRC, CHU is at 300 baud, even.  (The `signal' we
want is effectively in the bit clock, not the data.)

My take is that the key part of this idea is adding a hook (ioctl)
which tells the hardware-level tty drivers to switch from optimizing
for throughput to minimizing latency and jitter.

This has two parts: 

1) disable use of the hardware FIFOs.  Set the UART into
interrupt-per-character mode and actually take the interrupts.  This
avoids having the timecode from the clock jammed into one burst,
serviced by one (or at most two) interrupts, which is a *bad* thing
for this application.

2) call the line discipline immediately, from within the
interrupt handler, rather than deferring to soft-interrupt time.

I think that, so far, is exactly what Ken Hornstein is proposing.  I'm
just explicitly saying make it an ioctl(), so that the line discipline
can turn it on.  Obviously, it gets turned off when the tty gets closed.

Now, if we do that, and on top add a `TIOTIMESTAMP' ioctl (same idea
as the TIODCDTIMESTAMP patch: FreeBSD has both), but which timestamps
every input char, and (brand new) add a new tty callback hook, via
which a line discipline can call down into the uart-level driver to
get the most recent timestamp -- we're done.  Everyone on the NTP side
of the house is happy, and we don't affect `normal' serial-port usage
at all (except for a few failed flag tests and a few cycles of branch
penalty).  Those can't be _too_ horrendous, since FreeBSD already has
the `timestamp every char' ioctl, and on the whole they're much more
oriented to low-level performance tuning than NetBSD.

With this change, the NTP CHU and ttyclk line disciplines can issue
the `go into low-latency' mode ioctl() when they get attached, and
issue the ioctl to tell the driver to start timestamping each char.
At worst, the ioctl is a no-op and we are no worse off than before.

The ttyclk (or CHU) discipline should keep track of whether the kernel
is doing timestamping; if not, it does microtime() timestamping itself,
as it does now.  (PS: we don't have a ttyclk or chu ldisc in the tree
yet; if we do these mods we should pull them in from the NTP source.)


If the low-level driver does implement the ioctl, it will start to DTRT
for NTP.  It upcalls into the ldisc for each received char, from the
low-level interrupt handler; the ldisc downcalls to get the timestamp.
The synchronous calls, the per-char processing, and the low data rate
guarantee that the timestamp we get is the `right' one, with minimal
extra latency and jitter from kernel processing.

And again: for the special case of x86 and a GPS with PPS, just using
the parallel port for PPS (or the DCD timestamp patch) is good enough.
And though this is cool stuff to timevultures, I'm still not convinced
the population of radio-clock users is big enough to justify this.
Unfortunately, till now NetBSD hasn't had adequate support for radio
clocks, so the interested people are all off using some other OS, so
there's really no way to tell.