Subject: Re: FUD about CGD and GBDE
To: None <>
From: ALeine <>
List: tech-security
Date: 03/04/2005 10:55:48

wrote:

> There are at least two ways to determine this information fairly easily:

As easily as one can get accepted into the crypto community? :->

> 1) If you're doing analysis of a cold disk, it is ~trivial to tell
> the difference between a sector that has been written only once and
> a sector that has been rewritten.

This is hardly trivial. You are basing your statement on the false
assumption that one cannot or will not do anything to protect the
encrypted image after initialization. One can do a lot.

For example, one can regularly scrub the unused areas around the
encrypted image (padding) with dd(1) using if=/dev/{u,}random and
similar. This can be fully automated with a cron job.
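As a minimal sketch of that scrubbing idea, assuming the padding
around the encrypted image is reachable as a plain file or device
node (the path and size below are illustrative placeholders, not
anything GBDE-specific):

```shell
# Hypothetical scrub script; PAD and SIZE_BYTES are assumptions.
PAD="/tmp/pad.img"                 # stand-in for the padding area
SIZE_BYTES=1048576                 # 1 MiB, purely for illustration

# Overwrite the whole padding area with fresh random data.
dd if=/dev/urandom of="$PAD" bs=4096 count=$((SIZE_BYTES / 4096)) 2>/dev/null
```

A crontab(5) entry such as `0 * * * * /usr/local/sbin/scrub_padding.sh`
(a hypothetical script name) would then rerun it every hour.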

One can also regularly scatter files with misleading names and
contents. These files would be of various sizes; some would be
filled with random garbage, and others would be specially
constructed so as to cause a false negative upon possible decryption
by brute force or other means. This is easy to implement and very
effective: for example, one can encrypt a file full of random
garbage with the same algorithm as the underlying mechanism
(AES-128 in this case). One could also construct a file containing
fake metadata and fragments (even from other filesystems, like
EXT2), which would send an attacker on a wild goose chase if they
somehow managed to decrypt part of its contents. Such a fake
metadata file would have to be padded to the appropriate length with
other valid-looking data in such a way that it is contained entirely
in a single zone, otherwise it would expose the remaining sectors in
the zone.
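The random-garbage variant of such a decoy could be generated along
these lines; this is a hypothetical sketch, not part of GBDE, and
the directory, filename, and sizes are all made up for illustration:

```shell
# Hypothetical decoy generator: fill a file with random bytes and
# encrypt it with AES-128 (the same cipher family GBDE uses), so that
# even a "successful" brute-force decryption yields nothing but noise.
DECOY_DIR="/tmp/decoys"            # assumed location
mkdir -p "$DECOY_DIR"

# a misleading, ordinary-looking name
PLAIN="$DECOY_DIR/payroll_2005.dbf"
head -c 8192 /dev/urandom > "$PLAIN"

# throwaway random passphrase; nobody ever needs to decrypt a decoy
KEY=$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')
openssl enc -aes-128-cbc -salt -pass pass:"$KEY" \
    -in "$PLAIN" -out "$PLAIN.enc"
rm -f "$PLAIN"
```

Since the passphrase is discarded, the resulting file is
indistinguishable from a genuinely valuable encrypted file.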

Perhaps GBDE could be extended to make such a thwarting and
subterfuge mechanism possible by means of a utility which would
guarantee the exclusive placement of a file in a separate zone (or
zones, if it were bigger than a single zone). It could be called
gezcp or something like that. Poul-Henning, what do you think of
this feature? :-)

One could then have a cron job periodically (say, every 5 minutes)
launch a script which would scatter the misleading files in specified
locations, always making sure not to use more than the specified
amount of disk space. These files would be recycled, of course, with
older ones being randomly replaced by new ones. This would be fully
automated and would introduce a good level of dispersion between the
misleading content and the regular data written to disk by the
user(s). It would also obscure access and write patterns to the
point of making this attack vector very unattractive.
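A sketch of the recycling script such a cron job might run; the
cap, directory, and naming scheme here are assumptions chosen purely
for illustration:

```shell
# Hypothetical decoy recycler: cap the number of decoys, replacing
# the oldest with a fresh one on each run.
DECOY_DIR="/tmp/decoys"            # assumed location
MAX_DECOYS=20                      # assumed disk-space cap
mkdir -p "$DECOY_DIR"

# once the cap is reached, retire the oldest decoy
if [ "$(ls "$DECOY_DIR" | wc -l)" -ge "$MAX_DECOYS" ]; then
    oldest=$(ls -t "$DECOY_DIR" | tail -n 1)
    rm -f "$DECOY_DIR/$oldest"
fi

# drop in a fresh decoy with a plausible name and random contents
head -c 4096 /dev/urandom > "$DECOY_DIR/notes_$(date +%s).txt"
```

The five-minute schedule would then be a crontab(5) entry like
`*/5 * * * * /usr/local/sbin/recycle_decoys.sh` (script name assumed).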

> 2) When used in a SAN environment, or an environment where
> multiple accesses to the drive can be done over time, it is
> possible to determine this fairly quickly using traffic analysis.
> The GBDE paper even touches on this in section 10.3.  Have you
> read it?

First of all, protection against traffic analysis on a SAN falls
into the territory of hot disk protection, and GBDE, as you must
surely have read, is designed for cold disk protection. SANs are by
definition high-availability environments and as such carry
high-volume traffic. If someone has the access needed to monitor
that traffic, can analyze such high volumes of it, and can also
clone your entire SAN storage unnoticed without causing a service
disruption, then you have much bigger problems, and GBDE should be
the least of your concerns. :-)

Second of all, the cleaning lady copy attack (described in section
10.3), where someone regularly makes bit-wise copies of the entire
disk containing the encrypted image and determines the location of
sensitive structures by means of differential analysis, is not very
practical. If someone has that kind of access to your computer, they
are more likely to use a hardware keylogger to intercept the
passphrase. Such keyloggers can be had for less than $100; take a
look at Key Katcher, for example.

Also, the journaling mechanism I mentioned in my previous posts
combined with the thwarting and subterfuging approaches I described
above would make any kind of differential analysis very difficult
and no longer practical.

Who is clutching at straws now? :->
