On Tue, Dec 05, 2006 at 01:48:28PM -0500, Thor Lancelot Simon wrote:

> 2) Some RNGs do not actually guarantee that successive reads give
>    data that actually represent different samples from whatever
>    means is used to digitize the underlying source (for example,
>    the hifn 7[79]xx do not, because of a design mistake).

Can you link me/us to any block diagrams or other documentation?  I
think I wrote off to Soekris about their Hifn-based VPN board that has
a HWRNG, but never received a response.

> 3) With design documentation it is usually possible to determine a
>    worst-case estimate for bits of underlying source entropy per
>    output bit of the hardware generator.  In this case, you want
>    to be sure that you feed the random pool enough generator
>    output at once to ensure that you can give it a valid estimate
>    of at least one bit.

I seem to recall a compression technique that used fractional bits,
which IIRC it called jots.  Three bytes seems like an awful lot for
one bit of entropy.

On the cryptography mailing list there was some discussion of entropy
estimation, and it seemed clear that what is really desired is a
min-entropy estimate for the samples.  Obviously that calculation is
a floating-point one, but it should be possible to make an integer
algorithm out of it so we can do it in-kernel, right?

However, entropy is not necessarily the same thing as
unpredictability, which is what we really want; it is just an upper
bound on it.

I'm fond of Fortuna as a PRNG, since its reseeding technique
explicitly avoids attempting to measure unpredictability.  Basically
it has N pools, with the "fastest" pool being used on every reseed
and the "slowest" pool being used only once every 2^(N-1) reseeds.
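The min-entropy calculation above can indeed be done without floating
point.  Here is a rough sketch (function names are mine, and this is
ordinary userland Python, not kernel code): since H_inf = -log2(p_max)
= log2(n) - log2(c_max) for n samples whose most common value occurs
c_max times, all we need is an integer log2 with a few fractional
bits, which the usual square-and-compare trick gives us in Q8 fixed
point.

```python
from collections import Counter

def ilog2_q8(x: int) -> int:
    """Integer log2(x) in Q8 fixed point (8 fractional bits)."""
    assert x > 0
    ip = x.bit_length() - 1      # integer part of log2(x)
    y = (x << 32) >> ip          # normalize x into [1, 2) as a Q32 value
    frac = 0
    for _ in range(8):           # extract 8 fractional bits
        y = (y * y) >> 32        # square, staying in Q32
        frac <<= 1
        if y >= (2 << 32):       # y >= 2.0: emit a 1 bit and renormalize
            frac |= 1
            y >>= 1
    return (ip << 8) | frac

def min_entropy_q8(samples) -> int:
    """Min-entropy per sample in Q8 bits:
    H_inf = -log2(p_max) = log2(n) - log2(c_max)."""
    n = len(samples)
    c_max = max(Counter(samples).values())
    return ilog2_q8(n) - ilog2_q8(c_max)
```

For 256 samples of 256 distinct byte values this reports 2048/256 =
8.0 bits per sample, as expected; a real in-kernel version would of
course use fixed-width types and guard against overflow.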
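Fortuna's reseed schedule is simple enough to show in a few lines.  A
toy sketch (my naming, not any particular implementation): pool i
contributes to a reseed exactly when 2^i divides the reseed counter,
which is what gives the fastest pool every reseed and the slowest
pool only one reseed in 2^(N-1).

```python
def pools_to_drain(reseed_count: int, num_pools: int = 32):
    """Indices of the Fortuna pools drained at this reseed.

    Pool i is included when 2^i divides the reseed counter, so pool 0
    (the "fastest") is used every time, while the slowest pool is
    used only once every 2^(num_pools - 1) reseeds.
    """
    assert reseed_count >= 1
    return [i for i in range(num_pools)
            if reseed_count % (1 << i) == 0]
```

So the first reseed drains only pool 0, the eighth drains pools 0-3,
and so on; an attacker who can predict the fast pools still faces the
slow ones eventually.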
> 4) Be careful, with regard to some cryptographic hardware now on
>    the market at least, that what you are feeding the random pool
>    is not actually the *output* of a cryptographic PRNG keyed from
>    a source of dubious quality (this is the case for a number of
>    'FIPS compliant' RNGs integrated into currently available
>    crypto chips).

I spoke with the Quantis engineers a while back, and they told me
that they don't even use any bias correction; the output of the
module should be exactly the photodetection of the photon at one
location or the other, as latched for that time quantum.

This is great news, because we can look for predictability in the
distribution of the raw source rather than in the post-processed
output, which greatly simplifies the matter.  I would hope that we
can get the data out without passing it through the kernel pool, so
that it can be graphed in various ways (a picture is worth ten
statistics); if it _must_ pass through a pool and a hash function, it
is much more difficult to look for patterns, bias, or
problems/failures of the source.
-- 
"Cryptography is nothing more than a mathematical framework for
discussing various paranoid delusions." -- Don Alvarez
<URL:http://www.subspacefield.org/~travis/> -><-
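P.S.  For a first look at raw (uncorrected) output of the kind Quantis
exposes, even before graphing anything, two trivial statistics go a
long way.  A sketch (my function names; the 293 critical value is the
chi-squared 5% point for 255 degrees of freedom):

```python
from collections import Counter

def monobit_fraction(raw: bytes) -> float:
    """Fraction of one-bits; an unbiased source should sit near 0.5."""
    ones = sum(bin(b).count("1") for b in raw)
    return ones / (8 * len(raw))

def byte_chi_squared(raw: bytes) -> float:
    """Chi-squared statistic of the byte histogram against uniform.

    For a healthy source this hovers around 255 (the degrees of
    freedom); values well above ~293 (p = 0.05) suggest bias.
    """
    expected = len(raw) / 256
    counts = Counter(raw)
    return sum((counts.get(v, 0) - expected) ** 2 / expected
               for v in range(256))
```

Neither test proves unpredictability, of course, but either failing on
the raw stream is an immediate red flag, and both are cheap enough to
run continuously.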