
Re: Changes to make /dev/*random better sooner



On 10 Apr, 2014, at 05:34, Thor Lancelot Simon <tls%panix.com@localhost> wrote:

> On Wed, Apr 09, 2014 at 04:36:26PM -0700, Dennis Ferguson wrote:
>> 
>> I'd really like to understand what problem is fixed by this.  It
>> seems to make the code more expensive (significantly so since it
>> precludes using timestamps in their cheapest-to-obtain form) but
>> I'm missing the benefit this cost pays for.
> 
> It's no more expensive to sample a 64-bit than a 32-bit cycle counter,
> if you have both.  Where do we have access to only a 32-bit cycle
> counter?  I admit that the problem exists in theory.  I am not so sure
> at all that it exists in practice.

32-bit ARM processors have a 32-bit CPU cycle counter, when they
have one at all.  PowerPC processors have a 64-bit counter, but the
32-bit instruction set provides no way to get an atomic sample of
all 64 bits.  It takes three "special" instructions, followed by a
check and a possible repeat of all three, to get a consistent
sample, which makes the full counter significantly less useful for
accurate event timing than the single atomic instruction that
obtains the low-order 32 bits alone.  I know i386, and 32-bit sparc
running on a 64-bit processor, can get atomic samples of 64 bits of
cycle counter from the 32-bit instruction set, but I think those
are the exceptions rather than the rule.
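
For concreteness, here's a minimal sketch of that PowerPC sequence
(illustrative C with inline assembly, not code from our tree): read
the upper half, then the lower, then the upper again, and retry if
the upper half changed underneath you.

    #include <stdint.h>

    /*
     * Read the 64-bit PowerPC time base from the 32-bit instruction
     * set.  If the upper word changed between the two mftbu reads,
     * the low word wrapped mid-sample, so try again.
     */
    static inline uint64_t
    ppc_read_timebase(void)
    {
            uint32_t hi, lo, hi2;

            do {
                    __asm volatile("mftbu %0" : "=r"(hi));
                    __asm volatile("mftb  %0" : "=r"(lo));
                    __asm volatile("mftbu %0" : "=r"(hi2));
            } while (hi != hi2);

            return ((uint64_t)hi << 32) | lo;
    }

Compare that loop with the single mftb that fetches the low 32 bits
atomically; the difference is what makes the truncated counter the
cheap form to obtain.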

I should make clear the reason I'm interested in this.  I would
like to add packet arrival timestamping to the system for purposes
other than random number generation.  This could be done while
ignoring the random number sampling, treating the two as ships in
the night, but beyond the annoyance of taking two packet arrival
timestamps when one would do, there is a correlation problem: when
both consumers sample the same counter (the one providing the
system clock), the timestamps I take will be strongly correlated
with the timestamps you take.  Mine will sometimes be delivered
directly to users at the same time you are assuming yours are
somehow a bit difficult to guess.

This is unlikely to matter in practice, I guess, but it smells
very bad.  I therefore think the packet timestamping implementations
need to be integrated, so that values being told to users aren't
also used as random number fodder (or, at least, aren't counted as
entropy).  This would, incidentally, eliminate the extra timestamps
too.  I'm hence interested in impediments to doing this well, and
the 64-bit thing is one of those.

> The problem that is fixed is multiple wraparound of the counter.  I
> saw this effect with many sources on many systems.  Some samples really
> are very low rate, but they nonetheless are worth collecting.
> The multiple-wrap problem is one of several problems with the delta
> estimator, which I am beginning to suspect is not really well suited
> for entropy estimation of timestamps (it does work well on sequences
> like disk block or memory page addresses).

I understand there may be a problem associated with counter
wraparound, but I still have no idea what it is.  I was hoping
you'd explain it, or at least describe what made you think it is a
problem.  I can't even guess; my reading of the actual code, and of
the changes you made to it for this, comes up with bupkis.  I also
spent a while running 64-bit values (both random values and samples
of an LFSR clocked a small random number of cycles between samples)
through the function, along with the same values truncated to 32
bits, and found the truncation increased the rate of 0 returns from
the function from about none to about 1 in 10^9 (which is in fact
the predictable result).  I'm at a loss as to why this would be
important, so I'm just lost.
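
For reference, the shape of that experiment, with a delta estimator
loosely modeled on the rnd(9) one (a sketch, not the actual kernel
code):

    #include <stdint.h>

    /*
     * Delta-style estimator sketch: credit one bit of entropy only
     * when the first-, second- and third-order deltas between
     * consecutive samples are all nonzero.  With samples truncated
     * to 32 random bits, a zero first-order delta occurs with
     * probability about 2^-32, hence a roughly 1-in-10^9 rate of
     * 0 returns.
     */
    static int
    delta_estimate(uint64_t t)
    {
            static uint64_t prev, prevd, prevd2;
            uint64_t d, d2, d3;

            d  = t - prev;
            d2 = d - prevd;
            d3 = d2 - prevd2;

            prev = t;
            prevd = d;
            prevd2 = d2;

            return (d != 0 && d2 != 0 && d3 != 0) ? 1 : 0;
    }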

> The delta estimator seems to almost _always_ attribute entropy to timestamps,
> on modern machines, so it is a good thing it attributes only 1 bit.  I am
> more than open to suggestions of a better entropy estimator for timestamps;
> I had some hopes for LZ but the data seem to show clearly that's not it.

I don't know.  Modern machines have extremely high-speed counters,
and those, sampled asynchronously, should always have some
unguessable bits at the low-order end (and probably more than one).

> It is noteworthy though that this work is not done when the samples are
> enqueued; it is done in batches when they are dequeued and processed,
> and without locks held.  So though there is a cost, it may not be happening
> in the context in which you're worried about it.

No, the cost I'm worried about is the cost of calling nanouptime()
for timestamps in this context on the machines where you do this.
I can give you a value which is significantly cheaper to obtain
than the nanouptime() result, which is guaranteed to have all the
entropy that the call to nanouptime() would provide, but which may
carry only 32 bits (or fewer).  I'm wondering why you couldn't use
this instead, but the answer to that would require me to understand
what it is about counter wrap that is a problem.
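
What I have in mind is essentially the raw timecounter read that
nanouptime() itself begins with, minus the scaling and bookkeeping
that turn it into a struct timespec.  A sketch using the
timecounter(9) fields (illustrative, not a patch):

    #include <sys/timetc.h>

    /*
     * Return the raw hardware counter value underlying the system
     * clock.  This is the input nanouptime() starts from, so it
     * carries all the entropy the timespec would, while skipping
     * the conversion cost -- but it may be only 32 bits wide (or
     * less), depending on tc_counter_mask.
     */
    static u_int
    raw_counter_sample(void)
    {
            struct timecounter *tc = timecounter;

            return tc->tc_get_timecount(tc) & tc->tc_counter_mask;
    }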

Dennis Ferguson


