Subject: Re: CVS commit: syssrc/sys/dev/ic
To: gabriel rosenkoetter <gr@eclipsed.net>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 11/09/2001 17:38:36
I'm happy with printing a warning when NICs are enabled as an entropy
source. I'd be unhappy if it were flatly forbidden, particularly in
environments where NICs are likely the best available entropy sources --
say, boxes acting as routers, or TLS's flash-based bastion hosts.

That said..


>On Thu, Nov 08, 2001 at 04:52:14PM -0800, Jonathan Stone wrote:
>> It's not dangerous; it's just not *shown* to be safe. But then, what is?
>
>Seems to me that Bill and Perry just pointed out some very easy ways
>it *is* dangerous on a public network (or, really, any machine unwanted
>packets can reach in any way). 

What on earth makes you say that? Neither Perry nor Bill has shown any
such thing.  They say they don't *know* that it's safe and they *worry*
that an exploit may be found -- a very different thing.

We all understand that (under typical circumstances -- not Michael
Graff's box) it's easy to modulate the volume of traffic arriving at a
particular box.  So take it as given that an attacker can modulate the
*frequency* at which packets arrive.

But that has never been the relevant issue.  An inexact but useful
description is whether an attacker can modulate the *phase* skew
between packet arrivals on the one hand, and, on the other,

    (a) the inherent jitter of the system clock, and
    (b) the jitter in interrupt latency.


Here's an example: suppose that, a year from now, you have a machine
with a two-gigahertz cycle counter.  You can timestamp packet arrivals
to sub-nanosecond resolution.  You can, in principle, use some of the
low-order bits of those timestamps as input to Yarrow or to the
compression/cryptographic hash of your choice.
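
To make that concrete, here's a minimal sketch of the idea, assuming an
x86-style rdtsc cycle counter.  pool_add_sample() is a hypothetical
stand-in for whatever front end the entropy pool actually exposes
(Yarrow or otherwise), not an existing kernel interface, and this is
illustrative rather than real driver code:

    /*
     * Illustrative only: read the CPU cycle counter in the NIC's
     * receive interrupt and hand the low-order bits to the pool.
     */
    #include <stdint.h>

    extern void pool_add_sample(uint32_t bits);  /* hypothetical pool hook */

    static inline uint64_t
    read_cycle_counter(void)
    {
        uint32_t lo, hi;

        __asm volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
    }

    void
    nic_rx_interrupt(void)
    {
        uint64_t now = read_cycle_counter();

        /*
         * Keep only the low-order bits; the high-order bits are
         * trivially predictable from wall-clock time.
         */
        pool_add_sample((uint32_t)(now & 0xffff));

        /* ... normal receive processing ... */
    }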

The question is: How *accurately* can a would-be attacker predict, to
sub-nanosecond scales, the *exact* value of the low-order bits of your
CPU cycle counter, at *precisely* the point when the CPU services the
packet-arrival interrupt and reads out that timestamp?

Dave Mills has some 20-odd years of very good published work in the
research literature showing that it's very difficult to achieve
phase-locks better than the rough order of microseconds using just
packet-network traffic or network-carried information (i.e., NTP).

Even if you attach an atomic clock or a GPS clock to a serial port and
take an interrupt, there's still on the order of *tens of nanoseconds*
of unavoidable jitter.  (Check with Dave Mills or Judah Levine at NIST.)
The gap between those two scales -- the microseconds an attacker can
manage remotely versus the nanosecond-scale timing on the box itself --
should give you on the order of ten or so bits with highly
unpredictable contents.
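
To make the arithmetic explicit -- this is just my back-of-the-envelope
restatement of the rough, order-of-magnitude figures above, not a
measurement:

    /*
     * Order-of-magnitude only: how many low-order timestamp bits fall
     * inside the attacker's window of uncertainty?
     */
    #include <math.h>
    #include <stdio.h>

    int
    main(void)
    {
        double attacker_window = 1e-6;  /* ~1 us: best phase-lock over a packet network */
        double local_jitter    = 1e-9;  /* ~1 ns: order of the local timestamp jitter/resolution */

        double ratio = attacker_window / local_jitter;
        printf("ratio %.0f => about %.0f unpredictable bits\n",
            ratio, log2(ratio));
        /* Prints: ratio 1000 => about 10 unpredictable bits */
        return 0;
    }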

One set of Dave Mills's graphs I found at random is:
http://www.eecis.udel.edu/~mills/database/brief/gnss/gnss_files/v3_document.htm

Or see figures 3 and 4 of
http://people.freebsd.org/~phk/interruptlatency.ps.gz



>If you can push on the network
>device, you can push on the entropy. I hope we all see why that's
>bad at this point.

It has *not* been shown. I don't know any good reason to conclude it
actually *is* bad: all we have is skepticism (good in security) and
caution (again, good in security). But once you understand the
difference between modulating traffic volumes, and modulating
packet-arrival times and interrupt latency to a precision of
nanoseconds, you will understand why ``pushing on the network''
hasn't been shown to be "bad" in the context of this thread.