tech-userlevel archive


Re: getrandom and getentropy



> Date: Mon, 11 May 2020 12:42:13 -0700 (PDT)
> From: Paul Goyette <paul%whooppee.com@localhost>
> 
> Why can't we allow the user to configure/enable estimation on a
> per-source basis?  The default can certainly be "disabled", but
> why not override?  Just like any other super-user thing, there's
> no reason not to enable shoot-my-random-foot mode.

You can shoot yourself in the foot with wild abandon by doing:

dd if=/dev/urandom of=/dev/random

The `estimation' -- as in, feeding samples through an extremely dumb
statistical model that bears no relation whatsoever to the physical
process that produced them, and deciding based on that how much
entropy must be in that physical process -- is no longer in the kernel
at all, because it is not merely a foot-gun; it is total nonsense.
Having it around at all is -- and always has been -- essentially
dishonest.

When each driver feeds a sample in, it specifies a number which
represents a reasonable lower bound on the number of bits of entropy
in the process that generated the sample; usually this number is
either zero, or determined by reading the manufacturer's data sheet
and/or studying the physical device that the driver handles.
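
For concreteness, here is what that looks like with the rndsource(9)
API -- a minimal sketch, not any real driver; the foo_* names and the
figure of 16 bits are invented for illustration:

#include <sys/param.h>
#include <sys/device.h>
#include <sys/rndsource.h>

struct foo_softc {
        device_t        sc_dev;
        krndsource_t    sc_rndsource;
};

static void
foo_attach_rnd(struct foo_softc *sc)
{

        /* Register the source; RND_TYPE_RNG marks a hardware RNG. */
        rnd_attach_source(&sc->sc_rndsource, device_xname(sc->sc_dev),
            RND_TYPE_RNG, RND_FLAG_DEFAULT);
}

static void
foo_intr(struct foo_softc *sc, uint32_t sample)
{

        /*
         * 32 bits of data, credited with 16 bits of entropy: a fixed
         * lower bound taken from the data sheet at compile time, not
         * a run-time estimate.
         */
        rnd_add_data(&sc->sc_rndsource, &sample, sizeof(sample), 16);
}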

There's no way for the operator to override the number that is passed
alongside every sample at run-time because that would add substantial
complexity to an already complex subsystem -- you would, effectively,
be changing the driver logic at run-time -- for essentially zero
benefit.

It would be comparable to inventing a general mechanism by which to
make every DELAY issued by a driver configurable at run-time, even
though the delays are documented in the data sheet.  (If appropriate
for a particular driver, that driver could have a sysctl knob to
adjust it, of course.)
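
To make that parenthetical concrete, such a knob might look like the
following sketch -- all names are invented, and there is no such knob
in the tree:

#include <sys/param.h>
#include <sys/device.h>
#include <sys/sysctl.h>

struct foo_softc {
        device_t          sc_dev;
        struct sysctllog *sc_sysctllog;
        int               sc_delay_us;  /* consulted where the driver DELAYs */
};

static void
foo_sysctl_attach(struct foo_softc *sc)
{
        const struct sysctlnode *rnode;

        /* hw.<dev>: node for this device instance */
        sysctl_createv(&sc->sc_sysctllog, 0, NULL, &rnode,
            0, CTLTYPE_NODE, device_xname(sc->sc_dev),
            SYSCTL_DESCR("foo driver controls"),
            NULL, 0, NULL, 0,
            CTL_HW, CTL_CREATE, CTL_EOL);

        /* hw.<dev>.delay-us: writable int backed by sc_delay_us */
        sysctl_createv(&sc->sc_sysctllog, 0, &rnode, NULL,
            CTLFLAG_READWRITE, CTLTYPE_INT, "delay-us",
            SYSCTL_DESCR("reset delay in microseconds"),
            NULL, 0, &sc->sc_delay_us, 0,
            CTL_CREATE, CTL_EOL);
}

The driver's delay path would then use DELAY(sc->sc_delay_us) in place
of a hard-coded constant, and the operator could tune it with, say,
`sysctl -w hw.foo0.delay-us=100'.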

