Subject: Re: CryptoGraphic Disk.
To: Hubert Feyrer <email@example.com>
From: Todd Vierling <firstname.lastname@example.org>
Date: 10/06/2002 11:30:17
On Sat, 5 Oct 2002, Hubert Feyrer wrote:
: Indeed, cool thing!
: While there, I have heard repeated requests for a compressed filesystem.
: How hard would it be to do a similar thing for compression?
: I guess the problem is that you don't have fixed-sized blocks (after
: compression).
Partly. The main issue is that all modern compression algorithms use a
sliding window over previously seen data to find repeated patterns (LZ77
or similar). This means that you'll need to restrict compression
operations to one of the following:
* sequential access only (requires a single sliding window buffer)
* random access using "restart points" (requires a sliding window buffer per
decompression restart point; still doesn't allow random access writing)
* block-based compression, which gives you the variable-sized blocks
  mentioned above (and typically doesn't achieve much compression relative
  to the overhead)
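To make the block-based trade-off concrete, here is a minimal sketch in
Python (the helper names and the 4 KB block size are illustrative choices,
not from any real filesystem): each fixed-size logical block is compressed
independently, which permits random reads, at the cost of per-block
overhead and variable-sized on-disk blocks.

```python
import zlib

BLOCK = 4096  # logical block size; an illustrative choice

def compress_blocks(data):
    """Compress each fixed-size logical block independently.

    Returns (blobs, sizes). Each block can be decompressed on its
    own, but every block pays zlib's header/trailer overhead and
    loses cross-block pattern matches, so the ratio suffers.
    """
    blobs = [zlib.compress(data[i:i + BLOCK])
             for i in range(0, len(data), BLOCK)]
    return blobs, [len(b) for b in blobs]  # variable-sized blocks

def read_block(blobs, n):
    """Random-access read: only block n is decompressed."""
    return zlib.decompress(blobs[n])
```

Because each compressed block comes out a different length, the on-disk
layout also needs an index mapping logical block numbers to (offset,
length) pairs -- exactly the variable-size bookkeeping problem above.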
Something similar to a portalfs would probably work better, mind you,
because it can all be emulated in userland. In fact, sequential read-only
access is possible with the current portal scheme. However, to implement
writing or random-access emulation, you'll need to intercept seek(), etc.
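Sequential read-only access of the kind a portal-style userland server
could provide amounts to streaming decompression with a single window
state. A rough sketch, assuming a plain zlib stream rather than any
particular portalfs interface:

```python
import zlib

def sequential_reader(compressed, chunk=512):
    """Yield decompressed data strictly in order.

    A single decompressobj holds the one sliding-window state, so
    seeking backwards (or jumping ahead) is impossible without
    re-decompressing from the start of the stream.
    """
    d = zlib.decompressobj()
    for i in range(0, len(compressed), chunk):
        out = d.decompress(compressed[i:i + chunk])
        if out:
            yield out
    tail = d.flush()
    if tail:
        yield tail
```

Supporting seek() on top of this means either restarting from offset 0 on
every backward seek, or checkpointing window state at restart points, which
is the trade-off listed above.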
-- Todd Vierling <email@example.com>