Subject: Re: ASUS-SP3G slow Clock
To: None <port-i386@NetBSD.ORG>
From: John F. Woods <jfw@jfwhome.funhouse.com>
List: port-i386
Date: 01/04/1996 11:17:10
And since there was someone else (I deleted the mail) who said that he didn't
understand how a computer could gain or lose a whole minute per day, I'll
pontificate briefly.

This is one of those ugly analog things hiding under the covers of these
so-called "digital" computers.  (If circuitry weren't really analog, there
would be no such thing as timing requirements, and the words "transmission
line" would not induce near panic in motherboard designers...)  The
timekeeping component in computers is a little slab of quartz crystal,
carefully cut so that if you tickle it JUST RIGHT in a circuit with JUST the
right amount of parasitic capacitance, it will vibrate at almost exactly some
handy frequency.  However, like any manufacturing process, there's going to be
some variability in just how thick they cut any given slab of quartz, and just
how stiff any given slab of quartz will be in the first place.  When you buy
custom crystals (as I do occasionally in my secret identity as Amateur Radio
Station WB7EEL), you specify what tolerance you want; typical tolerances are
100 parts per million, 50 ppm, and 10 ppm, depending on just how much you want
to spend.  At the 100 ppm level, they generally just cut and test; at 10 ppm,
they actually spend time carefully shaving the crystal blank until it meets
the tolerance.  For mass produced computer applications, you can pretty
easily guess which of those tolerances is more likely.  (In fact, I think
mass-produced computer crystals are often as bad as a 200 ppm spec, and they
test by sampling rather than testing each crystal.)

So how bad is 100 parts per million?  There are about ten pi million seconds
per year (amusing number fact for trivia fans), so multiply that by 100 and
divide by a million and you get 3142 seconds per year -- a crystal that just
meets spec for computer oscillators could be off by nearly an hour after a
year.  *Then* you get to add to this the fact that the temperature inside a
computer varies a lot, and crystals are (at most temperatures) exquisitely
sensitive to temperature variation.  (Tidbit:  quartz watches in metal cases
generally keep much better time than one would expect, because they are bolted
to a very complicated constant-temperature oven (i.e. your arm), and the
manufacturers carefully choose a crystal cut which happens to have a flat
temperature response near body temperature.)

Some computer timebases are built around wristwatch 32768Hz crystals, and
those generally have decent tolerances even though they're cheap:  wristwatches
are expected to be at least REASONABLY accurate, and they're sold in bulk,
so the manufacturers *can* and *must* improve the process to achieve good
tolerance at reasonable cost.  Computer timebases built around a 14.xxx MHz
crystal, on the other hand, are almost guaranteed to suck, since such crystals
are generally aimed at television sets, which use a phase-lock-loop scheme to
ensure synchronization with the incoming signal, hence the local timebase
needn't be terribly accurate.

That reminds me, I gotta finish gathering parts for that CHU radio clock...