Re: getrandom and getentropy
> > For the OpenBSD strategy to work, the system needs to actually refuse
> > to run if the seed can't be loaded (or full entropy achieved in some
> > other way). NetBSD doesn't do that. As long as there is any way
> > userland can start before full entropy has been achieved, all APIs
> > that provide randomness for security purposes must support blocking
> > (or returning errors).
> How are you measuring "full entropy"?
I'm referring to the NetBSD kernel's definition of "full entropy"
(since Taylor's entropy overhaul), which boils down to having gathered
256 bits of estimated entropy.
> We no longer attempt to estimate the value of the majority of samples
> fed into the pool - it was deemed unscientific.
> As Taylor mentioned, OpenBSD operates under the premise
> that either there's a HWRNG, saved seed, or there's enough entropy in
> timing &c. measurements early at boot (or that the kernel using the
> RNG a lot is enough to put it into an unpredictable state, as Theo
> is insisting). All of that depends on assumptions and trust - it
> does no measurement of the value of the entropy being provided.
> Do you believe that timing samples are enough? The NetBSD kernel doesn't -
> that puts us into an "interesting" situation on hardware with no HWRNG.
> This hardware can reasonably block forever on first boot, due to
> the large number of sources of entropy that are no longer measured.
This unfortunate situation should be addressed by providing more
entropy sources, not by burying our heads in the sand and pretending
we have entropy when we don't. Adding more sources could mean
reintroducing some timing-based sources after careful analysis, but
also things like having the installer install an initial random seed
on the target machine (and if the installer itself lacks entropy,
asking the poor user to pound on the keyboard until it does). But
that's all outside the scope of this thread.
As long as NetBSD does decide to block, it should do so consistently
for all interfaces that claim unpredictability, meaning both
/dev/random and getentropy().
> Even then, can you fully trust HWRNG? There is no magic guarantee of
> safety unless you have a fully audited atomic decay measurement device
> at your disposal. Most people don't, so we rely on good-as-unpredictable.
Of course, but this has little to do with the issue at hand.
> Following the logic that even with consolidation of available entropy,
> never blocking is inherently unsafe due to the early-in-first-boot
> characteristics of old and low end hardware, you'd have to also assume
> every single key ever generated on a NetBSD machine is unsafe, because
> every significant security library targeting NetBSD either uses
> kern.arandom or /dev/urandom.
I would not go quite as far as saying "every single one" is unsafe,
but many probably are. One of the reasons why I'm asking for
getentropy() to work like /dev/random rather than /dev/urandom is so
that we don't make this problem even worse by adding the keys
generated by software using getentropy() to the list of suspect ones.
> Since so many projects have recently happily accepted my patches to use
> kern.arandom with full understanding of how it works,
Specifically using kern.arandom for getentropy()? Which other
projects are these?
> we already have a nonblocking, sandbox-safe source of randomness in
> wide use for security critical applications on NetBSD. I only want
> to make it easier for developers to access, and not harder.
kern.arandom may be nonblocking and sandbox-safe, but it is not suitable
for security critical applications.
Andreas Gustafsson, gson%gson.org@localhost