tech-crypto archive


Re: Initial entropy with no HWRNG



I fully agree with Andreas that all currently defined or proposed
randomness interfaces should either have the new /dev/random behavior or
fail where that behavior would block; silently handing out data that is
not all that random should never happen if there is any way to prevent
it, or at least to raise an alarm about the situation.  Since some
software assumes these interfaces always work, the system should require
an adequate source of randomness (or manual intervention) or refuse to
fully boot.  This is what "just use /dev/urandom" has been assuming all
along, even though it generally doesn't hold for anyone without a
hardware randomness source.  At the very least, the system should print
a message early in boot if sufficient randomness is not available and
refuse to continue without manual intervention.

"All" that is needed is a tiny file with randomness that is only used
once.  Almost all end-user situations where that is difficult would at
least have some audio source that could be used in various ways (recording
or amplifier noise) to provide randomness (not necessarily automatically,
however options could be documented).  Almost everyone these days has
access to some system with adequate randomness, the only exception I can
think of would be if you only have an older system that doesn't currently
have adequate randomness and are receiving a generic installer via
physical means or downloading via a shared system that you don't trust to
generate randomness.  Most of the time, the easiest way would be to write
the file from a different system.  However, the installer could be read
only media with no other non-network transfer possible (or convienient) or
a disk image that is not understood at all by the system with adequate
randomness (this would still work if the exact location the random data
needs to be written to the image is provided).  While remote networking
can't be secured without randomness, local networking in some cases can be
secure without encryption, although some care is needed on both systems
and the network to make sure that is the case.  A remote or less secure
local network random source could be used as an additional source to
convert "hopefully good enough" into "good as long as either the starting
randomness was actually ok or that one connection was not observed (and
the remote system not compromised)".
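The "additional source" idea above can be sketched in a few lines.  This
is just an illustration of the mixing property, not NetBSD's actual
entropy-pool code; the function name and the choice of SHA-256 are my
own.  Hashing the concatenation of both inputs means the result is at
least as unpredictable as the better of the two, so a bad or observed
network source cannot make things worse:

```python
import hashlib
import os

def combine_seeds(local_seed: bytes, network_seed: bytes) -> bytes:
    """Condense two candidate seeds into one 32-byte seed.

    If either input was unpredictable to an attacker, the SHA-256
    output is too; mixing in a weak or observed input never reduces
    the unpredictability contributed by the other input.
    """
    h = hashlib.sha256()
    # Length-prefix the first input so boundary ambiguity can't
    # let two different input pairs produce the same hash input.
    h.update(len(local_seed).to_bytes(8, "big"))
    h.update(local_seed)
    h.update(network_seed)
    return h.digest()

# Example: "hopefully good enough" local randomness plus bytes
# fetched from a remote (possibly observed) source.
local = os.urandom(32)
remote = b"\x00" * 32   # stand-in for network-supplied bytes
seed = combine_seeds(local, remote)
print(len(seed))        # 32
```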

A few other issues:  The current seed file layout is an entropy
estimate, the data, and a SHA1 hash of the data.  It could be helpful to
have a way for non-NetBSD systems to generate a usable file
automatically with widely available utilities (say, a text SHA1 hash in
a separate file instead of a binary hash after the data, with full
entropy assumed unless the file is read only).
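The two-file variant proposed above could look like the following
sketch.  The file names, the 512-byte size, and the sha1sum-style text
format are assumptions of mine, not an existing NetBSD format; the point
is only that any system with Python (or dd plus sha1sum) could produce
the pair:

```python
import hashlib
import os

def write_seed_pair(seed_path: str, nbytes: int = 512) -> None:
    """Write a raw seed file plus a sha1sum-style text digest beside it.

    This is the proposed two-file layout (raw data, and a separate
    "<hex>  <name>" text hash), not NetBSD's current binary format
    with the digest appended after the data.
    """
    data = os.urandom(nbytes)
    with open(seed_path, "wb") as f:
        f.write(data)
    digest = hashlib.sha1(data).hexdigest()
    with open(seed_path + ".sha1", "w") as f:
        f.write("%s  %s\n" % (digest, os.path.basename(seed_path)))

write_seed_pair("entropy-seed")
```

A loader could then verify the text hash with sha1sum -c before
crediting the data with any entropy.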
Strictly speaking, it would be a bit better to load the random data,
overwrite the file with new data, and only when that succeeds assign an
entropy value to the loaded data.  Currently it is assumed that being
able to open the file with write permission means it will actually be
possible to write it.  I'm not sure whether there are relevant cases
where that assumption fails (the more likely way it causes trouble is a
disk image duplicated behind the scenes, which can be impossible to
detect from the running OS).  There is also a race condition between
reading the data and writing the new file, although it seems difficult
both to trigger it and to use the randomness in a way that could cause
trouble if the same value is used twice.

Considering the importance of the file, and the inconvenience on some
systems if something happens to it, there should be a backup that is not
written at the same time.  To help with read-only root, it might make
sense to support a tiny randomness partition (or at least to mount some
partition before rndctl runs).
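The load-then-overwrite-then-credit ordering can be sketched as follows.
This is a userland illustration of the proposed ordering, not rndctl's
actual logic, and the function name and temp-file scheme are mine.
Writing the replacement to a temporary file, fsyncing it, and renaming
it into place narrows the read/write race and means a failed rewrite is
detected before any entropy is credited:

```python
import os
from typing import Optional

def load_and_replace_seed(path: str, new_seed: bytes) -> Optional[bytes]:
    """Load the seed, persist a replacement, and only then return the
    old seed so the caller can credit it with entropy.

    If the rewrite fails, None is returned and the caller should
    credit nothing, since the old value may be loaded again later.
    """
    with open(path, "rb") as f:
        old = f.read()
    tmp = path + ".tmp"
    try:
        with open(tmp, "wb") as f:
            f.write(new_seed)
            f.flush()
            os.fsync(f.fileno())    # make sure it is really on disk
        os.replace(tmp, path)       # atomic rename on POSIX filesystems
    except OSError:
        return None                 # rewrite failed: do not credit entropy
    return old                      # replacement persisted: safe to credit

# Example: seed the file, then load and replace it.
with open("seed", "wb") as f:
    f.write(b"\x01" * 32)
old = load_and_replace_seed("seed", b"\x02" * 32)
```

Note this still does not help against a disk image being duplicated
behind the OS's back; nothing done from inside the running system can.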

-M



