Subject: Re: Compressed filesystem (was Re: CryptoGraphic Disk.)
To: Todd Vierling <tv@pobox.com>
From: Daniel Carosone <dan@geek.com.au>
List: tech-security
Date: 10/11/2002 06:58:49
On Thu, Oct 10, 2002 at 04:35:07PM -0400, Todd Vierling wrote:
> Now, this could be accomplished with gzip/zlib compression, provided you
> have some layer to intercept lseek(), through crafty finagling of zlib
> internals (by stashing the decompressor's state machine and Lempel-Ziv
> window periodically as "smart restore points").  That's memory intensive
> depending on the compression level (-z9 is a 32KB LZ window, I believe), and
> how your backoff algorithm chooses when to throw away saved state.
> 
> The other alternative, which has been used in some compression schemes, is
> to partition the uncompressed blocks so that there's a "clean" state machine
> at known points throughout the stream.  Typically this consists of
> independently compressed blocks of a fixed uncompressed size, with some sort
> of periodic index of the blocks (such as an N-block table between each N data
> blocks, or a pointer to the next block at the start of each, or similar).
> Obviously, the compression achieved here is far less than you'd get from a
> complete-stream gzip pass, but it doesn't have nearly the requirements of
> changing horses^W^Wrestarting gzip decompression in the middle of a stream.
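
For concreteness, a rough sketch of that second scheme, using nothing but
the plain zlib one-shot calls: deflate each fixed-size chunk independently
with compress2() and keep a table of compressed offsets, so a read at any
uncompressed offset only ever inflates one chunk.  (The 64KB chunk size
and the table layout here are made up for illustration, not anybody's
actual on-disk format.)

/* Sketch only: fixed-size chunks, each deflated independently, plus a
   table of compressed offsets.  CHUNK and the layout are invented for
   illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

#define CHUNK  65536UL                /* uncompressed bytes per block */
#define CBOUND (CHUNK + CHUNK / 1000 + 64)   /* worst-case deflate size */

struct blocktab {
    unsigned long nblocks;
    unsigned long *coff;              /* compressed offset of block i */
    unsigned long *clen;              /* compressed length of block i */
};

/* Compress src[0..srclen) as independent blocks, filling in *tab. */
static int write_blocks(FILE *out, const unsigned char *src,
                        unsigned long srclen, struct blocktab *tab)
{
    unsigned long nb = (srclen + CHUNK - 1) / CHUNK, i, off = 0;
    unsigned char *buf = malloc(CBOUND);

    tab->nblocks = nb;
    tab->coff = malloc(nb * sizeof(*tab->coff));
    tab->clen = malloc(nb * sizeof(*tab->clen));
    if (buf == NULL || tab->coff == NULL || tab->clen == NULL)
        return -1;
    for (i = 0; i < nb; i++) {
        unsigned long in = srclen - i * CHUNK;
        uLongf outlen = CBOUND;
        if (in > CHUNK)
            in = CHUNK;
        if (compress2(buf, &outlen, src + i * CHUNK, in, 9) != Z_OK ||
            fwrite(buf, 1, outlen, out) != outlen)
            return -1;
        tab->coff[i] = off;
        tab->clen[i] = outlen;
        off += outlen;
    }
    free(buf);
    return 0;
}

/* Inflate block 'i' back into out[CHUNK]; a pread()-style wrapper on
   top of this gives random access at CHUNK granularity. */
static long read_block(FILE *in, const struct blocktab *tab,
                       unsigned long i, unsigned char *out)
{
    unsigned char *buf = malloc(tab->clen[i]);
    uLongf outlen = CHUNK;

    if (buf == NULL ||
        fseek(in, (long)tab->coff[i], SEEK_SET) != 0 ||
        fread(buf, 1, tab->clen[i], in) != tab->clen[i] ||
        uncompress(out, &outlen, buf, tab->clen[i]) != Z_OK) {
        free(buf);
        return -1;
    }
    free(buf);
    return (long)outlen;
}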

How big is the state machine?

You could combine the ideas, at least for read-only media prepared
for the purpose.  Save the state machine at each of the partition
points, rather than resetting it.  This could go in an adjunct
helper file, so the actual files remain standard gzip format.
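
To put rough numbers on my own question: zlib's full inflate state is,
if I remember right, a few to ten kilobytes of Huffman and bookkeeping
data on top of the 32KB LZ window -- and if the restore points are cut
at deflate block boundaries, the Huffman tables don't need saving at
all, since the next block redefines them.  So an adjunct index entry
boils down to two offsets, a leftover-bit count, and the 32KB window.
Something like the sketch below (names and record layout invented here;
actually resuming from a point needs a way to hand zlib the saved
window and leftover bits, via inflateSetDictionary() or equivalent):

/* Sketch of an adjunct "restore point" index kept alongside an
   untouched .gz file.  Record layout is invented for illustration
   (a real one would worry about endianness and struct padding before
   fwrite'ing raw structs). */
#include <stdio.h>

#define WINSIZE 32768U                /* deflate's maximum LZ window */

struct restore_point {
    unsigned long cofs;               /* offset in the compressed file */
    unsigned long uofs;               /* matching uncompressed offset */
    int bits;                         /* 0..7 leftover bits before cofs */
    unsigned char window[WINSIZE];    /* last 32KB of uncompressed data */
};

/* Append one restore point to the index file. */
static int save_point(FILE *idx, const struct restore_point *pt)
{
    return fwrite(pt, sizeof(*pt), 1, idx) == 1 ? 0 : -1;
}

/* Find the last restore point at or before uncompressed offset 'want';
   decompression resumes there and discards output up to 'want'. */
static int find_point(FILE *idx, unsigned long want,
                      struct restore_point *pt)
{
    struct restore_point cur;
    int found = -1;

    rewind(idx);
    while (fread(&cur, sizeof(cur), 1, idx) == 1 && cur.uofs <= want) {
        *pt = cur;
        found = 0;
    }
    return found;
}

Generating the points is a single sequential pass over the .gz when the
medium is mastered, which fits the read-only case nicely.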

If it's too big, you could still play the tradeoff game between file
size, state size, and spacing of restore points.
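
Rough numbers, just to size that tradeoff: if a restore point is
dominated by its 32KB window, then points every 1MB of uncompressed
data cost roughly 3% of the uncompressed size in index, every 256KB
about 12%, and every 64KB about 50% -- so the spacing largely picks
itself once you decide how much decompress-and-discard latency a
random read can tolerate.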

--
Dan.