Subject: Re: FUD about CGD and GBDE
To: Poul-Henning Kamp <>
From: Thor Lancelot Simon <>
List: tech-security
Date: 03/03/2005 13:10:44
On Thu, Mar 03, 2005 at 06:48:51PM +0100, Poul-Henning Kamp wrote:
> In message <>, "Steven M. Bellovin" writes:
> >And Knuth was talking about a situation without an adversary.
> If the component (well respected etc etc) algorithms I have used
> in GBDE contains flaws so that they become individually less
> intrinsicly safe because their input is the output of another such
> algorithm, then the crypto-world has problems they need to work on.

The algorithms in question are evaluated _for a particular purpose_.
It is absurd to claim that because those algorithms are generally
considered suitable for one (or more) particular purposes, it is
reasonable to consider them suitable for all purposes.

It is plainly possible to use even well-understood algorithms that
are indeed acceptably secure for their design purposes in ways that
in fact do not provide the security one naively claims.  Steve's
Needham-Schroeder example is a good case: should we claim that the
flaw in question cannot exist unless there is some fundamental flaw
in DES?

I note that GBDE uses a number of algorithms in ways that are not
consistent with their design purposes.  For instance, it truncates
a non-keyed hash (SHA512); the fact that this is not necessarily a
good idea is one of the major motivators for the design of HMAC.
It also uses MD5 in a way that I would characterize as not exactly
ordinary -- leaving aside that using MD5 at all is a questionable
proposition these days.  Indeed, the large number of algorithms
used in the keying and encryption process for any block in GBDE
does not necessarily increase its security: certain kinds of flaws
in any one of those algorithms could make the decryption of any
particular block _more_ likely -- and Roland has pointed out how
the design of GBDE allows such failures to cascade through the
entire set of encrypted data.
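To make the truncated-hash point concrete: HMAC keys the hash with a
construction that was specifically designed and analyzed as a MAC,
whereas truncating an unkeyed digest carries no such analysis.  A
minimal sketch (Python standard library; the key and data values are
purely illustrative, not anything GBDE actually uses):

```python
import hashlib
import hmac

key = b"example key material"   # illustrative placeholder
data = b"sector 42 contents"    # illustrative placeholder

# GBDE-style use: truncate an unkeyed SHA-512 digest.  SHA-512 was
# designed as a 64-byte collision-resistant hash; the security of a
# 16-byte truncation of hash(key || data) was not separately analyzed.
truncated = hashlib.sha512(key + data).digest()[:16]

# HMAC: the keyed construction, with an analysis relating its MAC
# security to properties of the underlying compression function.
mac = hmac.new(key, data, hashlib.sha512).digest()

print(truncated.hex())
print(mac.hex())
```

The difference is not that one line of code is longer than the other;
it is that only the second construction comes with an argument for why
it is safe as a keyed primitive.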

In other words, unless you are very, very careful about design
(the TLS PRF is one example of such care), using more algorithms
may well make you less secure, not more secure: you may inherit
vulnerability to flaws in _any_ of the algorithms you use.
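The TLS PRF (RFC 2246) shows what "careful" looks like: it splits the
secret in half, expands each half with a different hash, and XORs the
results, so the output remains pseudorandom as long as _either_ hash
holds up -- the opposite of inheriting every algorithm's weaknesses.
A sketch of that construction:

```python
import hashlib
import hmac

def p_hash(hash_fn, secret, seed, length):
    """P_hash from RFC 2246: iterated HMAC expansion."""
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hash_fn).digest()
        out += hmac.new(secret, a + seed, hash_fn).digest()
    return out[:length]

def tls10_prf(secret, seed, length):
    """TLS 1.0/1.1 PRF: P_MD5(S1) XOR P_SHA1(S2).  Breaking the
    combined output requires breaking BOTH hash-based expansions,
    rather than any one of them."""
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]
    md5_part = p_hash(hashlib.md5, s1, seed, length)
    sha1_part = p_hash(hashlib.sha1, s2, seed, length)
    return bytes(x ^ y for x, y in zip(md5_part, sha1_part))
```

Contrast that with feeding one algorithm's output into the next in
series, where a flaw anywhere in the chain can poison everything
downstream.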

> Considering the protection periods people asked for, I could convince
> neither myself nor any of the, (often very clued) people I talked
> to, that just taking a current standard algorithm and applying
> it using the same keymaterial to each sector of the media would be
> safe for a sufficient amount of time.

Fine.  You don't believe that AES256 will be sufficiently resistant to
known- (or perhaps chosen-) plaintext attacks for the next several
decades.  The question is, what rational warrant do you have for
believing that your cryptosystem will be more secure than AES256
would be?  The very complexity of your system makes it very, very
difficult to evaluate just how secure it is, and you seem to think
that that is a benefit: comparing the incommensurables "I don't
believe" and "I don't know, but I suspect", you land on the side of
"I suspect".

Somehow I do not find that terribly persuasive.