Subject: Re: LKM support
To: None <perry@piermont.com>
From: John S. Dyson <toor@dyson.iquest.net>
List: current-users
Date: 11/10/1996 10:46:40
> 
> 
> "John S. Dyson" writes:
> > One minor clarification -- MFS allocates VIRTUAL MEMORY, or normal
> > process memory space.  So, that space is backed by swap, and can
> > (with dubious effects) be paged out.  You really want to avoid
> > paging your MFS, but it can/will be under tight memory conditions.
> > 
> > Actually, I think of the paged MFS as having an advantage over a pure
> > ramdisk (that takes up wired memory all of the time.)  When memory
> > does get tight, that allocated space for metadata, and unused files
> > is pushed out to swap space...  The key is to NOT take advantage
> > of its pageability to excess.
> 
> I agree that it is good to have an MFS paged out.
> 
> I also agree with Chris, however, that the design is suboptimal.
> 
Right, I certainly was not disagreeing with him, but Chris did state
that the MFS uses MEMORY, and it was unclear what kind of memory that
meant.  There is a really easy (good) improvement that can be made to
the MFS code (I haven't gotten around to it.)  I suggest that using
process address space is silly, since you can simply keep a VM object
around as a container for the space.  Further, there are unnecessary
copies in/out to/from the backing object.  (I can remedy these first
two deficiencies quickly, but simply have not had the time; DG and I
have discussed this off and on for a few years.)  The harder, but
better, thing to do is to eliminate the on-disk file structure from
memory (it is really silly to keep such a structure in memory, since
memory has none of the rotational characteristics of a disk drive
:-)).  In order to fix that "problem", one solution I would propose
is to use a VM object per file as backing store.  Copying in/out of
that object would be done by judicious frobbing of pages between the
buffer cache and the backing VM object.  Simply using a VM object
doesn't properly address the issue of file growth (NO *BSD O/S can
handle growth of swap-backed objects once paging has occurred on
them.)  Therefore, instead of using a VM object per file, we could
use a VM map per file.  Of course, we could contrive another data
structure scheme instead of using the VM code, but there are some
advantages to using the VM code (a sketch of the per-file backing
idea follows this list):

1) The code is already being used, so the static size of the
   kernel will not increase.
2) The code cache footprint of the kernel will be minimized (cache footprint
   and system performance are inversely related on modern architectures.)
3) The code is already written, and adapting it to this use is trivial.
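
To make the per-file backing idea a little more concrete, here is a
tiny user-space sketch (all of the types and names below are made up
for illustration; they are not the real VM interfaces).  It models a
swap-backed object whose size is fixed once created, and a per-file
"map" that grows a file by appending more fixed-size objects rather
than resizing one of them:

/*
 * Toy model of the "VM map per file" idea: a backing object cannot
 * grow once paging has touched it, so a file is represented as a map
 * (ordered list) of fixed-size backing objects and grows by
 * appending new objects rather than resizing an existing one.
 *
 * All types and names here are hypothetical illustrations, not the
 * actual BSD VM interfaces.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct backing_object {          /* stands in for a swap-backed VM object */
    size_t npages;               /* size is fixed at creation time */
    unsigned char *pages;        /* npages * PAGE_SIZE of storage */
};

struct file_map {                /* stands in for a per-file VM map */
    struct backing_object **objs;
    size_t nobjs;
    size_t size;                 /* logical file size in bytes */
};

static struct backing_object *object_create(size_t npages)
{
    struct backing_object *o = malloc(sizeof(*o));
    o->npages = npages;
    o->pages = calloc(npages, PAGE_SIZE);
    return o;
}

/* Grow the file by appending whole backing objects; nothing is copied. */
static void file_grow(struct file_map *fm, size_t newsize)
{
    size_t have = fm->nobjs * PAGE_SIZE;          /* one page per object */
    while (have < newsize) {
        fm->objs = realloc(fm->objs, (fm->nobjs + 1) * sizeof(*fm->objs));
        fm->objs[fm->nobjs++] = object_create(1);
        have += PAGE_SIZE;
    }
    fm->size = newsize;
}

int main(void)
{
    struct file_map fm = { NULL, 0, 0 };

    file_grow(&fm, 100);                 /* small file: one page */
    printf("size %zu -> %zu backing objects\n", fm.size, fm.nobjs);

    file_grow(&fm, 3 * PAGE_SIZE + 1);   /* growth adds objects, no copies */
    printf("size %zu -> %zu backing objects\n", fm.size, fm.nobjs);
    return 0;
}

The point is only that growth never requires copying or resizing an
existing backing object, which is exactly the limitation noted above.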

In general, the scheme that DG and I have been discussing addresses
the following issues:

1) Minimal copying (all page moves would be virtual; see the sketch
   after this list.)
2) Minimal metadata/directory overhead.
3) Almost totally dynamic sizing.
4) Re-use of existing kernel code, for the purpose it was
   originally designed for: memory management.
5) Potential total removal of MFS from any association
   with a user-mode process.
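
To illustrate point 1 (again with made-up names, not the real kernel
interfaces): a "page move" between the buffer cache and a file's
backing object would just relink a page descriptor, so the data
behind it is never copied:

/*
 * Toy model of a "virtual page move": transferring a page between
 * the buffer cache and a file's backing object relinks a page
 * descriptor; the 4K of data behind it is never copied.
 * Hypothetical names only.
 */
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096

struct page {                    /* stands in for a vm_page-like descriptor */
    void *data;                  /* the actual page of memory */
    struct page *next;
};

struct page_queue {              /* buffer cache or a backing object */
    const char *name;
    struct page *head;
};

/* "Move" a page: unlink from src, link into dst.  No data is copied. */
static void page_move(struct page_queue *src, struct page_queue *dst)
{
    struct page *p = src->head;
    if (p == NULL)
        return;
    src->head = p->next;
    p->next = dst->head;
    dst->head = p;
    printf("moved page %p from %s to %s (0 bytes copied)\n",
           p->data, src->name, dst->name);
}

int main(void)
{
    struct page_queue bufcache = { "buffer cache", NULL };
    struct page_queue backing  = { "backing object", NULL };

    struct page p = { malloc(PAGE_SIZE), NULL };
    bufcache.head = &p;

    page_move(&bufcache, &backing);   /* write path: hand page to backing */
    page_move(&backing, &bufcache);   /* read path: hand page back */
    free(p.data);
    return 0;
}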

The most significant disadvantage of our scheme is that
directories and files would have a one-page granularity,
unless the code did some very careful memory management.
I could imagine that a follow-up design would be able
to address the issue easily, if it became bothersome.
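
To put a rough number on that (purely illustrative, assuming a 4K
page size):

/*
 * Illustrative arithmetic only: with a one-page granularity, every
 * file or directory occupies a whole number of pages, so small files
 * waste most of their last page.  Assumes a 4K page size.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

int main(void)
{
    unsigned long sizes[] = { 1, 100, 512, 4096, 4097, 10000 };
    for (int i = 0; i < 6; i++) {
        unsigned long sz = sizes[i];
        unsigned long pages = (sz + PAGE_SIZE - 1) / PAGE_SIZE;
        unsigned long waste = pages * PAGE_SIZE - sz;
        printf("file of %5lu bytes uses %lu page(s), wasting %4lu bytes\n",
               sz, pages, waste);
    }
    return 0;
}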

John
dyson@freebsd.org


> 
> 1) No fixed size, just an ability to set the maximum.
> 2) Data structures for "file" storage that take advantage of the fact
>    that you are in RAM, not on disk.
> 3) pageable if necessary
> 4) Efficient storage; minimal number of memory copies needed to get to
>    the bits.
> 
My comments above address these issues (we have been talking about
such efficiency issues for years :-)).