Subject: Re: microtime
To: Wolfgang Rupprecht <wolfgang+gnus20020821T160903@wsrcc.com>
From: Ken Hornstein <kenh@cmf.nrl.navy.mil>
List: tech-kern
Date: 08/22/2002 00:30:15
>I recall reading Bernstein's proposals to keep the kernel time in
>purely monotonically increasing seconds and simply using the Olson
>time-printing code in libc in its other mode, where it would add the
>leap seconds in only at display time.  Are there any technical
>gotchas that would bite one in the butt if the leap seconds were
>moved out of the kernel's time?

I always thought it was a bad idea, because you never know when a leap
second is going to take place, so it makes it hard to calculate times
in the future (and as an ex-sysadmin, I wonder how that leap second
information will get propagated to machines).  I could see that adding
a whole lot of complexity to application code that is assuming that
there are 86400 seconds in a day.
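To make the difficulty concrete, here is a minimal sketch (the leap table values are made up for illustration, not real IERS data): with the system clock counting pure TAI seconds, even "the same wall-clock time tomorrow" requires consulting a leap-second table, and the answer changes whenever a new leap second is announced.

```python
# Illustrative sketch: scheduling "same UTC wall-clock time tomorrow"
# when the system clock counts pure TAI seconds.  The table entries
# below are invented for the example; a real system would need the
# current IERS leap-second list, kept up to date on every machine.

# (TAI second at which the offset takes effect, TAI-UTC offset from then on)
LEAP_TABLE = [(0, 10), (1000, 11)]  # pretend a leap second lands at TAI=1000

def tai_utc_offset(tai):
    """Return the TAI-UTC offset in force at a given TAI second count."""
    offset = LEAP_TABLE[0][1]
    for boundary, off in LEAP_TABLE:
        if tai >= boundary:
            offset = off
    return offset

def same_utc_time_tomorrow(tai_now):
    """TAI second count that lands on the same UTC time, one day later.

    Naively adding 86400 TAI seconds is wrong if a leap second
    intervenes; we must add back the change in the TAI-UTC offset.
    """
    naive = tai_now + 86400
    return naive + (tai_utc_offset(naive) - tai_utc_offset(tai_now))

# Crossing the (invented) leap second shifts the answer by one:
print(same_utc_time_tomorrow(500))   # crosses it -> 86901, not 86900
print(same_utc_time_tomorrow(2000))  # no leap second crossed -> 88400
```

Note that the correction is only computable at all for leap seconds that have already been announced; a timestamp far enough in the future is simply ambiguous under this scheme.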

Right now (IIRC) we simply pretend the leap second doesn't exist, and
we don't increment the clock during it.  I think that while that sucks,
it's probably the best solution (given that no matter what you end up
doing, it's going to suck ... it's just a question of how MUCH you want
it to suck).  As I see it, the options are:

- Keep system time in TAI, make future time computations difficult,
  have systems potentially disagree about the UTC<->TAI offset,
  and break some number of existing applications.

- Have one second every couple of years that you just pretend doesn't
  exist.  So far, seems to work fine.
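One way to picture the second option (my reading of the "don't increment the clock during it" behavior above, as a toy model rather than the actual kernel code) is a mapping from real elapsed SI seconds to a count that simply never includes the leap second:

```python
# Toy model of the "pretend it doesn't exist" approach: the clock
# holds still for the one leap second, then resumes, so every day
# contains exactly 86400 counted seconds.  `leap_at` is a made-up
# parameter marking where the leap second falls in real elapsed time.

def pretend_clock(elapsed_si_seconds, leap_at):
    """Map real elapsed SI seconds to a clock that skips the leap second."""
    if elapsed_si_seconds < leap_at:
        return elapsed_si_seconds
    if elapsed_si_seconds < leap_at + 1:
        return leap_at  # clock does not increment during the leap second
    return elapsed_si_seconds - 1

print(pretend_clock(5, 10))   # before the leap second -> 5
print(pretend_clock(10, 10))  # during it, clock is pinned -> 10
print(pretend_clock(12, 10))  # afterwards, one second behind -> 11
```

The payoff is that day arithmetic (now + 86400) stays exact by construction; the cost is the one second during which time appears to stand still.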

--Ken