tech-userlevel archive


Re: getrandom and getentropy

On Wed, May 13, 2020 at 12:19:06PM +0300, Andreas Gustafsson wrote:
> Joerg Sonnenberger wrote:
> > On Tue, May 12, 2020 at 04:59:52PM +0300, Andreas Gustafsson wrote:
> > > I don't particularly care if we require 100 or 384 bits of estimated
> > > entropy, nor do I particularly care if the entropy estimate of a
> > > keystroke timestamp is 0 or 1 bit.  But I do very much care that if I
> > > accidentally try to generate an ssh key on a system that has no
> > > entropy at all, it must not succeed.
> > 
> > Once more and alone and maybe it will sink in:
> > 
> >     There is no reasonable way to estimate system entropy. 
> > 
> > Please think what that statement means. Consider for fun emulating a 20
> > year old computer with a deterministic high precision model keeping all
> > storage in memory. There is no source of entropy in such a system and no
> > way for the emulation to tell.
> What exactly are you objecting to here - the general idea of calculating
> an entropy estimate and comparing it against a "full entropy" threshold,
> or my willingness to consider a 1-bit estimate for a keystroke timestamp?

The general idea of entropy estimation as done by older NetBSD is
pseudo-scientific voodoo and gives a misleading impression at best.
Estimating a bit for a keystroke for example is completely unjustified
in the context of a virtual machine monitor.

> There's nothing wrong with the general idea of entropy estimation as
> implemented in NetBSD-current.  If you run -current on your hypothetical
> emulator, it will calculate an entropy estimate of zero, and
> /dev/random will block, as it should.  The question we are trying to
> decide in this thread is whether getentropy() (and consequently, based
> on nia's list, things like openssl) should also block when this
> happens, and I'm saying they should.

How should it know that it is not running on real physical hardware
with random timing, as opposed to a deterministic environment with a
programmable timing pattern? Hint: it can't. Estimating entropy
correctly is hard even from a perspective external to the system. It
can be done somewhat reasonably if there is no whitening step involved
and the actual physical process can be modelled (e.g. for microphone
noise or a lava lamp), but it is still dangerous. See the mess of the
Intel ICH random number generator, where the floating diode
accidentally got connected during the die shrink. So yes, the idea
that the kernel can do more than guess blindly at the entropy of
processes that are designed to provide entropy is extremely
questionable, and so is the idea that it therefore makes sense to
block in random places. "The operator is responsible for providing
enough entropy during the boot process" is not really different from
what we did before, just without all the voodoo. Few programs need to
be aware of it at all.

