Subject: Re: High serial port (output) speeds
To: None <port-sparc@netbsd.org>
From: der Mouse <mouse@Rodents.Montreal.QC.CA>
List: port-sparc
Date: 10/23/1999 21:54:32
> That's the whole problem with 1x; the receiver is not using the same
> clock as the transmitter.  Even if both chips are running at the same
> clock rate, it's not the same clock - there's no circuitry in these
> computers to synchronize them.

That's equally true of X16; if nothing else, there are variations in
crystals.

In each case, there is no reason the chip can't hold the divisor chain
cleared until it sees the beginning of the start bit, then always do a
divide-by-2 as the last stage (note that even at X1 the PCLK frequency
is always divided by 2*(tconst+2), an even number), so that it has an
edge available at the middle of its nominal bit time to sample on.
Depending on where in the PCLK cycle the start bit's edge falls, this
may be off by as much as one PCLK cycle, which is not more than a
quarter of a bit time, since the smallest possible divisor (tconst
zero) is 4.  (If PCLK is a square wave and the chip is *really* smart,
it could be off by no more than half a PCLK cycle, but I'm not sure
that wouldn't violate the 8530 interface spec.)
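
For concreteness, here's that arithmetic as a back-of-the-envelope C
sketch (untested; the PCLK figure is only an assumed example -
substitute whatever actually feeds the chip on your machine):

/* 8530 BRG arithmetic in 1X mode: bit rate is PCLK/(2*(tconst+2)),
 * so a bit time is always an even number of PCLK cycles and there is
 * an edge at mid-bit.  Latching the start-bit edge to a PCLK cycle
 * costs at most one PCLK cycle, i.e. at most 1/4 bit, since the
 * smallest divisor is 2*(0+2) = 4.
 */
#include <stdio.h>

int main(void)
{
    double pclk = 4915200.0;        /* assumed example PCLK, in Hz */
    unsigned int tconst;

    for (tconst = 0; tconst <= 6; tconst++) {
        unsigned int div = 2 * (tconst + 2);
        double baud = pclk / div;
        double offset = 1.0 / div;  /* one PCLK, in bit times */

        printf("tconst %u: divide by %u, %.0f bps, "
            "max sampling offset %.3f bit times\n",
            tconst, div, baud, offset);
    }
    return 0;
}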

And once the stop bit has been sampled, the divisor chain can be frozen
again - half a bit time before the nominal end of the stop bit - to
await the next character's start bit.  This allows the sender to be up
to about 2.5% faster than the receiver - a quarter of a bit time after
ten bits - without ever seeing the "drifting out of sync" problem Bill
outlined.  (The sender can be slower by the same amount; the problem,
if the sender is slower by more than that, is that the supposed
stop-bit sample can hit what the sender thinks is a data bit instead;
while this may produce a framing error, it won't miss the next
character's start bit unless it's so slow that the receiver thinks the
start bit ends before the sender thinks the last data bit begins, a
difference of about 20%, and even then, only with certain data
patterns.)
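
The 2.5% is just a quarter of a bit spread over the ten bit times of
start, eight data bits, and stop (that character format is an
assumption; adjust to taste):

/* Rate-mismatch tolerance sketch: with up to a quarter bit of initial
 * sampling offset already spent, the sender may gain about another
 * quarter bit over the whole character before the stop-bit sample
 * slides off the stop bit.
 */
#include <stdio.h>

int main(void)
{
    double margin_bits = 0.25;  /* slack left after the initial offset */
    double char_bits = 10.0;    /* start + 8 data + stop */

    printf("tolerated rate mismatch: about %.1f%%\n",
        100.0 * margin_bits / char_bits);
    return 0;
}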

> [...] In this case, once every eleven seconds, one clock slips ahead
> of the other by a cycle.  At some point in this slippage, the
> receiver will be sampling right when the transmitter is changing its
> output.  Who knows what will get through at that point. :-)

More important, the bitstream will get shifted by a bit - a bit will
get inserted or deleted, depending on which is faster.  This will
almost certainly cause a burst of framing errors as the receiver hunts
for the correct timing of the start and stop bits in the data stream;
how long this takes to settle depends on the data.
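
To put a number on the slip rate: two nominally-equal clocks that
actually differ by delta-f pass each other by one full cycle every
1/delta-f seconds.  A trivial sketch (the rates here are invented
purely for illustration):

/* One-bit slip interval between two nominally-equal bit clocks:
 * one bit every 1/|f1 - f2| seconds.
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double f1 = 38400.0;            /* one side's actual bit rate, Hz */
    double f2 = 38400.0 * 1.0001;   /* other side, assumed 100ppm fast */
    double slip = 1.0 / fabs(f2 - f1);

    printf("one-bit slip every %.2f seconds\n", slip);
    return 0;
}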

If the sender ever idles the line, of course, the receiver will resync
within at most one character time.  But we can't count on that.

> To quote the _SCC_User's_Manual_, Zilog document DC-8293-02, page 4-6,
> "[...] The 1X mode is used when bit synchronization external to the
> received clock is present (i.e., the clock recovery circuit, or
> active receive clock from the sender side)."

Without seeing more context, I'd read that as "here's how to do it if
you want to use an external clock" rather than "this is the only thing
1X mode is good for".  If you believe the context supports the latter,
I'll happily accept that...but would still want to know whether that's
equally true of clones like the AMD chips I found in an IPX I just now
opened up. :-)

> From looking at a web page on Sun serial ports [...], all the listed
> serial ports support an external receive clock [some do a transmit
> clock as well]

> If we could figure out how to turn that on, when linking one sparc to
> another, we set one to feed out a serial clock, and the other would
> use it as its data clock.  Then whatever rate you choose would be
> rock solid (modulo interrupt issues..).

That would be Pretty Cool.

					der Mouse

			       mouse@rodents.montreal.qc.ca
		     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B