Subject: Re: compressed virtual memory - interested?
To: None <tech-kern@NetBSD.ORG>
From: Aleksi Suhonen <ams@lenkkari.cs.tut.fi>
List: current-users
Date: 05/31/1995 15:13:24
In message <20bff35e.8d9a1-is@Beverly.Rhein.DE>, Ignatios Souvatzis wrote:
}-Hi VaX#n8 <vax@ccwf.cc.utexas.edu>,

He's

}-> ... contemplating working on a compressed virtual memory system for NetBSD.
}-> This would be for 3 hours of credit over the summer semester (i.e. expected
}-> target date around August).

}-3:1, with an algorithm worse than LZW? I don't believe that for most of
}-my VM memory space. LZW and friends do about 40-50% on normal text, C
}-source text, or binaries... a bit more on News batches, of course, due
}-to lots of repetitive information in the headers (and bodies :-).
}-gzip is a bit better, but uses LOTS of cycles.

}-As you will probably only compress small amounts at a time (the 8k
}-pages), the ratio will be even worse.

When I was thinking about this, I came to the conclusion that the
(possible) Huffman trees would either be static, or there would be a
few of them that get recycled: as an old tree falls out of use, it is
rebuilt as a new one. Static trees could be platform specific (build a
tree from the kernel binary, for example, and then use that). You
could of course try several packing algorithms at the same time:
compress the first 1K (for example) with Huffman, run-length encoding,
and what have you, and whichever performed best on that first 1K gets
used for the rest of the page as well. You will need flags for each
packed block in any case, and those flags can easily record which
packing method was used plus its method-specific flags (such as which
tree was used for packing [ == 3 bits ]). You can take these ideas
and take them a lot further. (For example: all the way to the garbage
bin ;-)
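
To make the flag idea concrete, here is a rough C sketch of what a
per-block header and the first-1K method trial could look like. Every
name, size, and constant in it is my own invention for illustration;
nothing like this exists in the tree.

/*
 * Hypothetical per-block header and packer selection.
 * All identifiers here are made up; this is a sketch, not
 * an actual NetBSD interface.
 */
#include <stddef.h>

#define CVM_TRIAL_LEN	1024	/* sample the first 1K of the page */

enum cvm_method {		/* 3 bits: room for 8 methods/trees */
	CVM_HUFF_TREE0 = 0,	/* static Huffman, tree built from kernel */
	CVM_HUFF_TREE1,
	CVM_RLE,
	CVM_RAW			/* stored whole; nothing packed smaller */
};

struct cvm_blockhdr {
	unsigned method : 3;	/* which packer (and which tree) */
	unsigned len    : 13;	/* packed length; unused for CVM_RAW */
};

/* A packer returns the packed size, or 0 if the output didn't fit. */
typedef size_t (*cvm_pack_fn)(const void *src, size_t srclen,
			      void *dst, size_t dstlen);

/*
 * Run every candidate packer over the first CVM_TRIAL_LEN bytes and
 * return the method that packed that sample smallest; the caller then
 * uses that method for the whole page.
 */
static enum cvm_method
cvm_pick_method(const void *page, size_t pagelen,
		cvm_pack_fn packers[], size_t npackers)
{
	char scratch[CVM_TRIAL_LEN];	/* fine for a sketch; a real
					   kernel would not burn this
					   much stack */
	size_t trial = pagelen < CVM_TRIAL_LEN ? pagelen : CVM_TRIAL_LEN;
	size_t best = trial;
	size_t i, out;
	enum cvm_method m = CVM_RAW;

	for (i = 0; i < npackers; i++) {
		out = packers[i](page, trial, scratch, sizeof(scratch));
		if (out != 0 && out < best) {
			best = out;
			m = (enum cvm_method)i;
		}
	}
	return m;
}

The 3-bit method field leaves room for eight packers or static trees,
which is where the [ == 3 bits ] guess above comes from; CVM_RAW acts
as the fallback for blocks that no packer managed to shrink.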

--
	Aleksi Suhonen