tech-crypto archive


Re: Initial entropy with no HWRNG

> Date: Tue, 12 May 2020 14:05:14 -0400
> From: Thor Lancelot Simon <>
> On Tue, May 12, 2020 at 04:39:50PM +0000, Taylor R Campbell wrote:
> > What is the model you're using to justify this claim that actually
> > bears some connection to the physical devices involved?
> The fan's the easy example, right?  I mean, it is quite literally a physical
> object in rotational motion, driven by an electric motor which should
> produce smooth changes in speed and little change in acceleration.  Taking
> higher-order derivatives of its position is a reasonable way to extract
> the changes that are in fact due to turbulence, or so it seems to me.

That sounds like a good start.  I'm looking for literature like

which has a reasonably physically justified probabilistic model for
the class of devices, and which considers different sources of
variation some of which are dependent on external factors -- including
some that are very dependent on activity elsewhere on the die and some
that are reasonably independent.
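The fan example above could be sketched as follows. This is a hypothetical illustration, not NetBSD kernel code; every name in it is invented. The idea is that successive differencing cancels the smooth, motor-driven component of the fan's motion, leaving the irregular residue attributed to turbulence:

```python
# Hypothetical sketch (all names invented): extract timing jitter from
# fan position/counter readings by taking higher-order differences, so
# the smooth motor-driven component cancels and only irregular,
# turbulence-like variation remains.
import hashlib

def fan_jitter_samples(positions):
    """Return third differences of raw fan position readings.
    Constant speed and smooth acceleration difference away; what
    survives is the irregular residue."""
    d1 = [b - a for a, b in zip(positions, positions[1:])]  # speed
    d2 = [b - a for a, b in zip(d1, d1[1:])]                # acceleration
    d3 = [b - a for a, b in zip(d2, d2[1:])]                # jerk: mostly noise
    return d3

def mix_into_pool(pool, samples):
    """Fold the low bits of each sample into a hash-based pool."""
    h = hashlib.sha256(pool)
    for s in samples:
        h.update((s & 0xff).to_bytes(1, "little"))
    return h.digest()
```

A perfectly steady fan yields all-zero third differences, which is the point: only deviations from smooth motion contribute anything to the pool.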

> > > B) One thing we *could* do to help out such systems would be to actually run
> > >    a service to bootstrap them with entropy ourselves, from the installer,
> > >    across the network.  Should a user trust such a service?  I will argue
> > >    "yes".  Why?
> > > 
> > > B1) Because they already got the binaries or the sources from us; we could
> > >     simply tamper those to do the wrong thing instead.
> > 
> > Tampering is loud, but eavesdropping is quiet.  There is no way to do
> > this that is resistant to eavesdropping without a secret on the client
> > side.
> > 
> > (This would also make TNF's infrastructure a much juicier target,
> > because it would grant access to the keys on anything running a new
> > NetBSD installation without requiring tampering.)
> You snipped the entire discussion of mitigations for this which was in my
> original message, with no indication that you'd done so...

Sorry, I thought that all I had to add concerned the part I quoted, and
I generally trim quotations to the parts I'm replying to (and I didn't
want to turn this into a sprawling thread whose topic is hard to keep
track of).  But if you like, here are some longer thoughts on the
other two mitigations you mentioned:

> Date: Tue, 12 May 2020 11:45:58 -0400
> From: Thor Lancelot Simon <>
> B2) Because we have already arranged to mix in a whole pile of stuff whose
>     entropy is hard to estimate but which would be awfully hard for us, the
>     OS developers, to predict or recover with respect to an arbitrary system
>     being installed (all the sources we used to count but now don't, plus
>     the ones we never counted).  If you trust the core kernel RNG mixing
>     machinery, you should have some level of confidence this protects you
>     against subversion by an entropy server that initially gets the ball
>     rolling.
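The mixing property B2 appeals to can be sketched as follows (function names are illustrative, not the actual kernel interfaces): because the pool transition is one-way and depends on every input, material the entropy server cannot predict keeps the final state unpredictable to it, even if the server knows its own contribution exactly.

```python
# Hedged sketch of the mixing property B2 relies on (names are
# illustrative, not the real kernel machinery): hashing a known
# contribution into a pool that already holds unpredictable material
# cannot reduce the pool's unpredictability.
import hashlib

def mix(pool, contribution):
    """One-way mix: the new pool state depends on both inputs."""
    return hashlib.sha256(pool + contribution).digest()

pool = b"\x00" * 32
pool = mix(pool, b"interrupt timings the server never sees")
pool = mix(pool, b"disk seek jitter")
pool = mix(pool, b"bytes served by the entropy server")  # even if known...
# ...the final state still depends on the earlier, locally gathered
# inputs, so it remains unpredictable to the server.
```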

Here's a threat model that this could defend against:

. client <---adversary---> other servers
. client <---no adversary---> entropy server

However, if the adversary is _also_ between the client and the entropy
server, it's not clear it adds much.

In particular, consider two scenarios:

(a) client talks, with adversary listening, to some other server
(b) client talks, with adversary listening, to the entropy server,
    and then client talks, with adversary listening, to that other
    server

If the adversary could break scenario (a), say by guessing the
client's DH secret for TLS to the other server, then they could _also_
break scenario (b) by guessing the client's DH secret for TLS to the
entropy server and learning whatever additional secrets were delivered
to the client.

(Again, no subversion required -- only eavesdropping.)

If the adversary _couldn't_ break scenario (a), it's not clear that
putting the entropy server into the mix added anything, except a more
complex system with more failure modes.

That said, if there is a substantial number of clients for which the
threat model that this defends against is relevant, then the entropy
server becomes a juicy target -- either for breaking into the server
itself, or just for hijacking BGP enough to eavesdrop on the
exchanges.

So my concerns are:

1. It's not clear there's a substantial number of endpoints that this
   would _help_.

   (It could even _hurt_ if we weren't careful with execution: compare
   locally generating ssh keys, which doesn't expose anything on the
   network but may leave a warning on the console for the operator,
   with talking TLS over the internet, which reveals a hash of the
   state of the entropy pool that could be used for a brute-force
   search of the state.)

2. If it _were_ necessary to help a substantial number of hosts, I'd
   be nervous about being responsible for operating the infrastructure
   that keeps it working, since as you mentioned verification would be
   extremely difficult.
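The brute-force concern in point 1 above can be made concrete with a toy example (parameters are assumed for illustration): if the pool's unknown state amounts to only a few bits, an eavesdropper who observes a hash derived from it can recover the state by exhaustive search.

```python
# Toy illustration (assumed parameters, not real pool sizes): a pool
# whose unknown state is only 16 bits falls to exhaustive search once
# a hash of that state is visible on the wire.
import hashlib

def output_from_pool(state: int) -> bytes:
    # stand-in for "a hash of the pool state" leaking in a handshake
    return hashlib.sha256(state.to_bytes(4, "big")).digest()

secret_state = 0xBEEF                      # only 16 bits of real entropy
observed = output_from_pool(secret_state)  # what the eavesdropper sees

recovered = next(s for s in range(1 << 16)
                 if output_from_pool(s) == observed)
# the "secret" falls to at most 2^16 hash computations
```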

> Date: Tue, 12 May 2020 11:45:58 -0400
> From: Thor Lancelot Simon <>
> B3) Because we can easily arrange for you to mix the input we give
>     you with an additional secret we don't and can't know, which you
>     may make as strong as you like: we can prompt you at install
>     time to enter a passphrase, and use that to encrypt the entropy
>     we serve you, using a strong cipher, before using it to
>     initially seed the RNG.
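A minimal sketch of the combination B3 describes, with illustrative names (this is not sysinst code, and the KDF choice is an assumption): stretch the passphrase with a slow KDF, then bind in the server-supplied bytes, so that neither the server (which never sees the passphrase) nor a passphrase-only guesser knows the resulting seed.

```python
# Minimal sketch of the B3 idea (interfaces and parameters are
# illustrative assumptions, not sysinst code): combine server-supplied
# entropy with a user passphrase the server never sees.
import hashlib

def derive_seed(server_entropy: bytes, passphrase: str, salt: bytes) -> bytes:
    # Slow KDF on the passphrase, then bind in the served bytes.
    pw_key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000)
    return hashlib.sha256(pw_key + server_entropy).digest()

seed = derive_seed(b"32 bytes from the entropy server", "correct horse", b"salt")
# 'seed' would then be fed to the RNG, subject to the usual care.
```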

If we are already configuring cgd with a password, it seems reasonable
to me that sysinst could feed the password into /dev/random (subject
to some care).

However, if we're not already configuring cgd with a password, I'm
extremely leery of just adding any new user prompts to sysinst like
this.  Every time I try to use sysinst, I'm already overwhelmed by the
dizzying array of questions that it asks me to make decisions about --
it's bad enough that every time I try it, once every couple years, I
invariably just give up and do it by hand with gpt/newfs/tar/MAKEDEV.

So adding more steps that aren't necessary for the installation to
function strikes me as magnifying a user experience disaster.  The
best way to make users hate security and circumvent it is to thrust it
in their faces with decisions, barriers, and warnings.
