tech-security archive


Re: Lightweight support for instruction RNGs



I think there's some disconnect here, since we're obviously talking
past each other.

My concern is the output from the random devices into userland. I
couldn't care less about the in-kernel sources, except as they operate
together to produce bits in that output. I'm talking about running the
tests on entropy as collected from the kernel, not about sampling the
individual sources. I need to be convinced that the collective output
is not predictable in some way; that the output from random/urandom is
as unpredictable as it can be; that the interactions of the various
sources do not produce something that is more predictable rather
than less. How do you know that certain inputs are not going to skew
the output toward certain patterns when run through the output
mechanism?
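
To make concrete the kind of check I mean, here's a minimal monobit
(frequency) sketch in Python against the random device. This is a toy
stand-in for what dieharder automates, nothing more:

```python
# A monobit (frequency) check on bytes drawn from the kernel RNG:
# count the set bits and see how far the fraction strays from 1/2.
import os

data = os.urandom(1 << 16)              # 64 KiB from the random device
ones = sum(bin(b).count("1") for b in data)
total = len(data) * 8
bias = abs(ones / total - 0.5)
print(f"ones fraction: {ones / total:.4f} (bias {bias:.4f})")
# A healthy generator keeps the bias tiny; a large bias here would be
# exactly the kind of skew I want an automated test to flag.
```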

What I'm trying to get at is that some people seem to think "more
entropy is good", and I don't believe it's as simple as that.

As an aside, I think we all need to remember that, on some platforms,
our entropy pool is not as full as we'd like it to be. Take this VM,
for example, hosted on a machine with an SSD as storage:

% sudo rndctl -l
Password:
Source                 Bits Type      Flags
cd0                       5 disk estimate, collect, v, t, dt
wd0                 4266255 disk estimate, collect, v, t, dt
fd0                       0 disk estimate, collect, v, t, dt
cpu0                 266677 vm   estimate, collect, v, t, dv
wm0                       0 net  v, t, dt
pms0                      0 tty  estimate, collect, v, t, dt
pckbd0                    0 tty  estimate, collect, v, t, dt
system-power              0 power estimate, collect, v, t, dt
autoconf                104 ???  estimate, collect, t, dt
printf                    0 ???  collect
callout                 577 skew estimate, collect, v, dv
%

The entropy collected from wd0 is predictable, as is that from cd0
(it's an ISO image in the host file system, on the same SSD). That
leaves us with cpu0, autoconf, and callout. For DSA signatures we need
good entropy, as we do for any ephemeral HTTPS connection. And this is
where I'm starting to grow concerned.
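
As a reminder of why nonce entropy matters for DSA: if the
per-signature nonce k ever repeats, two signatures are enough to
recover the private key with simple modular arithmetic. A toy sketch
in Python, with made-up small numbers standing in for real parameters:

```python
# Toy DSA nonce-reuse break.  q is a small prime standing in for the
# real group order; x, k, r are made-up values for illustration.
q = 2**61 - 1                      # a Mersenne prime, toy group order
x = 123456789                      # the "secret" signing key
k = 987654321                      # nonce reused across two signatures
r = 1111111111                     # in real DSA r = (g^k mod p) mod q;
                                   # reusing k means r repeats too

def sign(h):                       # s = k^-1 * (h + x*r) mod q
    return (pow(k, -1, q) * (h + x * r)) % q

h1, h2 = 555, 777                  # two message hashes
s1, s2 = sign(h1), sign(h2)

# An attacker with both signatures recovers k, then x:
k_rec = ((h1 - h2) * pow(s1 - s2, -1, q)) % q
x_rec = ((s1 * k_rec - h1) * pow(r, -1, q)) % q
print(x_rec == x)                  # True: the private key falls out
```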

Now to get to the tin-foil-hat bit: since the implementation of RDRAND
is not available, we can't review it. How does adding that into the
mix, on an otherwise entropy-free system, make me more secure? Only by
trusting various digests and linear feedback shifts to obscure it. How
do we know that there is no bias when the inputs are run through
these? We don't; we just assume there is none.
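
To illustrate what I mean by "obscure it" (a toy model, not our actual
kernel code): XOR an untrusted sample into a pool and hash the pool
for output. Even a constant, zero-entropy sample yields output a
statistical test will happily pass, because the hash hides whatever
bias went in -- we are trusting the construction, not verifying it.

```python
# Toy model of the mixing pipeline: XOR samples into a fixed-size pool,
# then hash the pool with SHA-1 to produce output.
import hashlib

pool = bytearray(64)                 # stand-in for the kernel's LFSR pool

def mix(sample: bytes) -> None:
    for i, b in enumerate(sample):
        pool[i % len(pool)] ^= b

def output() -> bytes:
    return hashlib.sha1(bytes(pool)).digest()

mix(b"\x00" * 32)                    # a fully predictable, biased input
out = output()
ones = sum(bin(b).count("1") for b in out)
print(ones, "of", len(out) * 8, "output bits set")
```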

I want to get rid of all these assumptions, hopes, and calls for me to
get checked in to various rest cures -- by automating the testing of
the output from the random device. The best way I've found so far is
by running dieharder; if there are other ways, or similar packages,
I'd love to hear about them.



On 19 December 2015 at 18:33, Thor Lancelot Simon <tls%panix.com@localhost> wrote:
> On Sat, Dec 19, 2015 at 05:23:58PM -0800, Alistair Crooks wrote:
>> On 19 December 2015 at 17:10, Thor Lancelot Simon <tls%panix.com@localhost> wrote:
>> > On Sat, Dec 19, 2015 at 04:54:20PM -0800, Alistair Crooks wrote:
>> >> The point is to see whether RDRAND plus other inputs regresses to
>> >> produce an output that is, in some way, "predictable". And while
>> >> running dieharder does not guarantee this, it may show up something
>> >> unusual. Given that there's previous history in this area, I'd
>> >> consider it a prudent thing to do.
>> >
>> > You understand, I hope, how the relevant machinery works.  Samples are
>> > mixed together in a very large state pool which is a collection of LFSRs,
>> > then hashed with SHA1 before output.
>> >
>> > We then use _that_ output to key the NIST CTR_DRBG.
>>
>> And to just expect that everything is mixed in as hoped, with nothing
>> being missed out because of coding errors, is something I should be
>> embracing, and feel better "just because"? Thanks, but we've been
>> bitten that way once before. I want to make sure we're not bitten that
>> way once again. I can't believe there's pushback on this. Lessons
>> learned, etc.
>
> There's pushback because you're suggesting doing something that's pure
> security theater, with no value.  We've had several bugs in the RNG; we've
> never had a bug in the RNG that would have actually caused Dieharder
> failures!
>
> The construction is not amenable to being tested in the way you suggest
> unless one just wants to *say* one tested it without doing any test that
> is meaningful -- in other words, engage in security theater.  I will not
> do that.
>
> So, I'm asking you what I asked you before: where exactly do you want
> the test rig hooked up to this thing?  I'll remind you:
>
>         * The RDRAND values should be expected to *pass* the tests even
>           if the generator is untrustworthy.
>
>         * The raw LFSR output should be expected to *fail* these tests.
>
>         * The SHA1 output should be expected to *pass* the tests even if
>           it is in fact not safe, due to bugs new or old, properties of
>           its input, or any other reason.
>
>         * The CTR_DRBG output has the same properties in this regard as
>           the SHA1 output -- only more so.
>
> I am not going to do security theater, and you are the one proposing
> the test, so since you think it has value, please, tell me where
> exactly to extract the output for testing, and I'll give it a shot.
>
> Please appreciate that this means quite a bit of work to dump output
> from the kernel in places where it is deliberately *not* made available
> to userspace, and that the test you propose, as far as I can tell:
>
>         * Would not have caught any of the RNG bugs we or anyone else
>           have had in the past
>
>         * Would not, according to any hypothesis you've been willing
>           to put forward, catch any bug we or anyone else are likely
>           to have in the future
>
> I'll do it, but all the rhetoric you're spitting out looks like pure
> marketing to me, because what you're proposing looks like pure
> theater; no actual security value.  That's not the you I know, so
> I must be missing something -- what is it?
>
> Thor
>

