Subject: Re: Netpliance Iopener booted with NetBSD...
To: Todd Whitesel <firstname.lastname@example.org>
From: Bill Studenmund <email@example.com>
Date: 03/17/2000 11:02:35
On Fri, 17 Mar 2000, Todd Whitesel wrote:
> > The problem is that our file system layering system really hasn't dealt
> > with data caching issues when a file layer generates data. While things
> > won't be better under UBC at first, UBC will be the long-term win.
> Why mess around with the buffer cache??
Because that's where recently accessed data blocks read from devices live,
and performance will be dismal without it. :-)
The problem in the general case is that if you write things in the
top layer, you might want to delay re-compressing the new block info, just
like you might want to do a delayed write on a file system. If you
then read the underlying file, that read needs to somehow force the
not-yet-compressed data to get compressed and written down. Also, if you
(for whatever foolish reason) write the underlying layer, you have to tell
the upper layer to invalidate its blocks.
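As a minimal sketch of those two coherency rules (this is illustrative Python,
not NetBSD kernel code; the class and method names are made up, and a dict
stands in for the backing store and the buffer cache):

```python
import zlib

class CompressLayer:
    """Toy model of a compressing layer stacked over a backing file."""

    def __init__(self, backing):
        self.backing = backing   # block number -> compressed bytes
        self.dirty = {}          # uncompressed blocks not yet written down

    def write_upper(self, blkno, data):
        # Delay recompression, like a delayed write in a normal FS.
        self.dirty[blkno] = data

    def read_upper(self, blkno):
        if blkno in self.dirty:
            return self.dirty[blkno]
        return zlib.decompress(self.backing[blkno])

    def read_lower(self, blkno):
        # Rule 1: reading the underlying file must force any
        # not-yet-compressed data to be compressed and written down first.
        if blkno in self.dirty:
            self.backing[blkno] = zlib.compress(self.dirty.pop(blkno))
        return self.backing[blkno]

    def write_lower(self, blkno, cdata):
        # Rule 2: writing the underlying layer must invalidate the
        # upper layer's cached copy of that block.
        self.dirty.pop(blkno, None)
        self.backing[blkno] = cdata
```

The hard part in real life is that both rules have to fire from inside the
buffer cache / UBC machinery, not from a tidy wrapper object like this.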
> I think what we really want here is a compressed vnode device. I'd be
> happy with a read-only one because for quite a few applications we can
> just union mount over it either MFS or a small uncompressed flash FFS.
By deciding it's read-only, you've simplified the problem tremendously. :-)
If you make whatever it is (a special file system or a modified vnd) be
something where you use a tool to make the image that this thing looks at,
it's not so hard. Mainly because the admin can mark the file and the layer
read-only, and the data caching issues go away.
> The naive version is just like vnd except you mount ramdisk.fs.gz instead
> of ramdisk.fs; it rewinds and gunzip's as needed to get the blocks. This
> would come in handy for INSTALL_TINY on low-memory machine installs.
> A faster version would gzip every 8K or so of the filesystem image, so it
> didn't have to rewind the entire file because somebody asked for a block
> number smaller than the last one.
Sounds like a tuning parameter. :-) You'll definitely want some sort of
tunable chunk size there.
> Depending on how much RAM you have and how much performance you want,
> allocate multiple buffers to cache uncompressed bits in. Now it starts
> smelling like the i386-translator hardware in non-intel x86's.
These buffers are the buffer cache. :-) Using anything else for caching
wouldn't make sense. :-)
Also, don't forget that most of the programs on the install media are
crunched into one file. So that will give different access patterns than
a file system full of random files would.
> It should be possible to work out how much scratch memory the device
> will use, and pre-allocate that during initialization, so we aren't
> generating data that has to be managed dynamically.
While we might tune things some, this is the job of our current buffer
cache. :-) We just need to watch how it does in low-memory situations.