Current-Users archive


Re: regarding the changes to kernel entropy gathering



Btw, I have been tracking

  https://github.com/smuellerDD/jitterentropy-library.git

for about two years now, and I have never (that is, for a couple
of years at least) understood why something like this isn't simply
used.  For example, in the myriad times the scheduler runs each
second, a little bit of that jitter measurement could be done on
pretty compact local data.
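The idea can be sketched roughly like this (a toy Python sketch of the general technique, not the actual jitterentropy-library code, which is C; the loop size, round count, and SHA-256 conditioning are my assumptions, not the library's parameters): time a tiny memory-touching loop, collect the unpredictable duration deltas, and only ever hand out bytes that went through a digest.

```python
import hashlib
import time

def jitter_sample(pool: bytearray) -> int:
    """Time a tiny memory-touching loop; the duration jitters
    with CPU and cache state.  Return the delta in nanoseconds."""
    t0 = time.perf_counter_ns()
    for i in range(len(pool)):          # pretty compact local data
        pool[i] = (pool[i] + i) & 0xFF  # touch memory to perturb timing
    return time.perf_counter_ns() - t0

def gather_jitter_entropy(nbytes: int = 32, rounds: int = 256) -> bytes:
    """Collect many timing deltas and condition them through a
    cryptographic digest, so raw timer values are never exposed."""
    pool = bytearray(64)
    h = hashlib.sha256()
    for _ in range(rounds):
        delta = jitter_sample(pool)
        h.update(delta.to_bytes(8, "little"))
    return h.digest()[:nbytes]

seed = gather_jitter_entropy()
```

The real library additionally tests the timer for sufficient resolution and discards patterns; this sketch just shows why the raw state never leaves the collector.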

Quite honestly, this whole randomness and bits-of-entropy
accounting always annoyed me, not being a mathematician, as mostly
terrible pseudo-scientific chatter.  Incorporating things like
rdtsc with some intermixing applied etc. pumps up an entropy pool
whose internal state is never revealed in the first place, not to
mention that the bytes generated from it are only served through
cryptographically vetted digest algorithms.  I, at least, always
mixed the low-order and high-order bits.  Wow.
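The low-order/high-order mixing mentioned above might look like this folding step (a hypothetical helper of mine, not code from any particular kernel or library): XOR the slow-moving high half of a counter into its jittery low half before feeding it to the pool.

```python
def fold_counter(ts: int, bits: int = 64) -> int:
    """Fold the high-order half of a counter value into the
    low-order half, so the result carries both the noisy low bits
    and whatever variation the high bits contribute."""
    half = bits // 2
    mask = (1 << half) - 1
    return ((ts >> half) ^ ts) & mask

# A 64-bit counter folds down to a 32-bit mixed value:
mixed = fold_counter(0x0123456789ABCDEF)  # → 0x88888888
```

This halves the width but discards no information outright; every output bit now depends on two input bits.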

Yes, it is unscientific.  But whereas the new OpenSSL RNG can
mysteriously fail (which it never did in the past), the Linux
kernel now uses a pretty simple (last I looked) wait-and-mix
scheme of just this kind to overcome the blocking-at-seed-time
problem.  So I (who still run the entropy-devouring Python Mailman
to serve some minor MLs) have to use haveged: whereas the kernel,
with all its I/O, the network, process starts, mapping addresses,
the (VM-host-served) timers etc., gathers only a bit of randomness
per second (or something like that, last I looked), haveged
generates thousands and thousands of bits of entropy in an
instant.  That is sick.

Hopping off,

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
