Subject: Re: Compressed cache system [Re: Google Summer of Code project
To: None <tech-kern@netbsd.org>
From: Jed Davis <jdev@panix.com>
List: tech-kern
Date: 04/23/2006 18:07:51
"Volker A. Brandt" <vab@bb-c.de> writes:

>> On Sun, Apr 23, 2006 at 01:31:44AM +0200, Hubert Feyrer wrote:
>> > "Input" data is fixed size (== pagesize), but "output" size depends on the
>> > compression factor, and thus you'll need some way to arrange things with a
>> > non-fixed offset. Worse, if you replace a page with one that
>> > compresses worse, it may no longer fit in the space it had before.
>>
>> Anyone who wants to work on this should really take a look at the
>> design of Stacker or Doublespace/Drivespace. The German "PC Intern" in
>> version 5 or so described the latter. Good reading for an introduction
>> to this problem.
>
> Note that OpenSolaris ZFS already contains compression, and they
> have just set up a project to add pluggable encryption to ZFS.
> It might be worthwhile checking out the ZFS sources to see how
> they solved the variable compressed block size problem:

AIUI, they solve it by having a general-purpose heap allocator as one
of the lower layers of their filesystem; writing compressed data
blocks is then just a matter of "malloc"ing fewer bytes, and any other
subsystem that wants variable-sized blocks can share the same
anti-fragmentation measures.
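
Not that I've read their code, but to make the idea concrete, here's a
rough user-space sketch: each page's compressed image lives in a chunk
allocated at exactly the compressed length, and re-storing a page that
now compresses worse just means freeing the old chunk and allocating a
new one, so the allocator absorbs the fragmentation problem.  malloc()
stands in for the filesystem's heap allocator and compress_page() is a
placeholder compressor; this is the shape of the approach, not what ZFS
actually does.

#include <stdlib.h>
#include <string.h>
#include <stdio.h>

#define PAGE_SIZE 4096

struct cslot {
	void   *data;	/* chunk sized to the compressed length */
	size_t  len;	/* compressed length, <= PAGE_SIZE */
};

/* Placeholder compressor: writes the "compressed" page into out. */
static size_t
compress_page(const void *page, void *out)
{
	/* pretend everything compresses 2:1; a real compressor goes here */
	memcpy(out, page, PAGE_SIZE / 2);
	return PAGE_SIZE / 2;
}

/* Store (or replace) the compressed image of a page in a slot. */
static int
cslot_store(struct cslot *slot, const void *page)
{
	unsigned char buf[PAGE_SIZE];	/* worst case: incompressible */
	size_t clen = compress_page(page, buf);
	void *chunk = malloc(clen);	/* ask for exactly what we need */

	if (chunk == NULL)
		return -1;
	memcpy(chunk, buf, clen);
	free(slot->data);		/* old chunk, possibly another size */
	slot->data = chunk;
	slot->len = clen;
	return 0;
}

int
main(void)
{
	unsigned char page[PAGE_SIZE] = { 0 };
	struct cslot slot = { NULL, 0 };

	if (cslot_store(&slot, page) == 0)
		printf("stored %zu of %d bytes\n", slot.len, PAGE_SIZE);
	free(slot.data);
	return 0;
}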

-- 
(let ((C call-with-current-continuation)) (apply (lambda (x y) (x y)) (map
((lambda (r) ((C C) (lambda (s) (r (lambda l (apply (s s) l))))))  (lambda
(f) (lambda (l) (if (null? l) C (lambda (k) (display (car l)) ((f (cdr l))
(C k)))))))    '((#\J #\d #\D #\v #\s) (#\e #\space #\a #\i #\newline)))))