Subject: Re: FUD about CGD and GBDE
To: None <>
From: Steven M. Bellovin <>
List: tech-security
Date: 03/03/2005 11:57:30
In message <>, Thor Lancelot Simon writes:
>On Thu, Mar 03, 2005 at 05:31:34PM +0100, Poul-Henning Kamp wrote:
>> In message <>, "ALeine" writes:
>> >Not necessarily, if one were to implement the ideas I proposed
>> >I believe the performance could be kept at the same level as now.
>> I gave up on journalling myself because IMO it complicates
>> things a lot and the problem it solves is very very small.
>> The impact in disk seeks is non-trivial to predict, but it is
>> very hard to argue that it will not lead to an increase in
>> disk seeks.  (This is really a variant of the age old argument
>> between journaling filesystems and "traditional" filesystems)
>> I can only recommend that you try :-)
>> We need more ideas and more people trying out ideas.
>I could not disagree more.  When it comes to nonstandard homebrewed
>cryptosystems foisted off on unsuspecting users with a bundle of
>claims of algorithm strength that they're not competent to evaluate
>for themselves, we do not need more ideas, nor more people trying
>out ideas; we need less.
>Standard, widely analyzed cryptographic algorithms are good.
What Thor said.

It's instructive to quote from Vol. 2 of Knuth:

	With all the precautions taken in Algorithm K, doesn't it seem
	plausible that it would produce at least an infinite supply of
	unbelievably random numbers?  No!  In fact, when this algorithm
	was first put onto a computer, it almost immediately converged to
	the 10-digit value 6065038420, which---by extraordinary
	coincidence---is transformed into itself by the algorithm (see
	Table 1).  With another starting number, the sequence began to
	repeat after 7401 values, in a cyclic period of length 3178.

	The moral to this story is that *random numbers should not be
	generated with a method chosen at random*.  Some theory should be
	used.

And Knuth was talking about a situation without an adversary.
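The failure Knuth describes is easy to reproduce. Here's a small sketch (my own illustration, not Knuth's Algorithm K) using von Neumann's middle-square method, which degenerates the same way -- some values map straight to themselves, just as 6065038420 did:

```python
# Von Neumann's "middle-square" generator: square the current value and
# keep the middle digits.  It looks random, but it collapses into fixed
# points and short cycles -- the same failure mode Knuth describes.

def middle_square(x, width=4):
    """One step of the middle-square method on width-digit values."""
    s = str(x * x).zfill(2 * width)
    start = width // 2          # the middle `width` digits of a 2*width-digit square
    return int(s[start:start + width])

def find_cycle(seed, width=4, limit=20000):
    """Iterate until a value repeats; return (tail length, cycle length)."""
    seen = {}
    x = seed
    for i in range(limit):
        if x in seen:
            return seen[x], i - seen[x]
        seen[x] = i
        x = middle_square(x, width)
    return None   # cannot happen for limit > 10**width (pigeonhole)

# Like Knuth's 6065038420, some values are transformed into themselves:
assert middle_square(2500) == 2500
assert middle_square(3792) == 3792

tail, cycle = find_cycle(1234)
print(f"seed 1234 degenerates after {tail} steps into a cycle of length {cycle}")
```

A method chosen at random, with no theory behind it, and it can't even survive a casual test -- let alone an adversary.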

I don't claim that there's a flaw.  I do assert that I haven't seen a
threat model that would justify the extra complexity.

Let me go one step further.  The cryptographic literature is full of
examples of broken protocols.  My favorite is the flaw in the original
Needham-Schroeder protocol, from 1978, that went unnoticed until 1996,
when an automated tool found it.  I should add that once pointed out, the
flaw is blindingly obvious -- but it went unnoticed for 18 years, in the
oldest protocol in the open literature.  Btw, in modern terms this
protocol is 3 lines long.
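Those 3 lines, and the man-in-the-middle attack Lowe found, can be sketched symbolically. This is my own toy model, not a protocol verifier: "encryption" is just a labeled tuple that only the matching key opens.

```python
# The Needham-Schroeder public-key protocol (1978):
#   1. A -> B : {Na, A}_pkB
#   2. B -> A : {Na, Nb}_pkA
#   3. A -> B : {Nb}_pkB
# Lowe's attack: A innocently runs the protocol with the intruder C,
# and C reuses A's messages to impersonate A to B.

def enc(pk, payload):
    return ("enc", pk, payload)

def dec(sk, ct):
    tag, pk, payload = ct
    assert pk == sk, "cannot decrypt: wrong key"  # keys identified symbolically
    return payload

Na, Nb = "Na", "Nb"

# A starts an honest run with C (A believes C is a legitimate peer):
msg1 = enc("pkC", (Na, "A"))                  # A -> C    : {Na, A}_pkC

# C opens it and replays A's nonce to B, impersonating A:
msg1_forged = enc("pkB", dec("pkC", msg1))    # C(A) -> B : {Na, A}_pkB

na, claimed_sender = dec("pkB", msg1_forged)
msg2 = enc("pk" + claimed_sender, (na, Nb))   # B -> A    : {Na, Nb}_pkA

# C cannot open msg2, but simply forwards it; A answers its run with C:
na_echo, nb = dec("pkA", msg2)
msg3 = enc("pkC", nb)                         # A -> C    : {Nb}_pkC

# C now knows Nb and completes B's run:
stolen_nb = dec("pkC", msg3)
msg3_forged = enc("pkB", stolen_nb)           # C(A) -> B : {Nb}_pkB

b_believes_peer = claimed_sender              # "A"
assert dec("pkB", msg3_forged) == Nb
print("B thinks it shares", Nb, "with", b_believes_peer,
      "-- but the intruder has it")
```

The fix -- putting B's identity into message 2 -- is one token. Eighteen years to notice it.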

One more quote, this time a remarkably prescient one from Needham
and Schroeder:

	Finally, protocols such as those developed here are prone
	to extremely subtle errors that are unlikely to be detected
	in normal operation. The need for techniques to verify the
	correctness of such protocols is great, and we encourage
	those interested in such problems to consider this area.