Subject: Re: MFS over ISO-9660 union mounted with no swap space?
To: Mike Cheponis <mac@Wireless.Com>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-kern
Date: 05/14/1999 15:25:02
In <Pine.BSI.4.05L.9905141441290.16629-100000@NameServer.Culver.Net>,
Mike Cheponis writes:

>In fact, it seems a better and better idea the more I answer the concerns.

The fundamental issue here is whether it makes sense to have a single
pool of free space which can be completely consumed by disk files --
leaving no room at all for backing store -- or by dynamically-grown
backing store, leaving no room at all for files.
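
To make the contrast concrete, here's a toy sketch of the two
policies -- my own illustration, with made-up names, not any real
kernel's allocator:

/*
 * Toy model of the two allocation policies.  Everything here is
 * invented for illustration.
 */
#include <stdio.h>

/* One pool: file blocks and backing store compete for the same space. */
struct shared_pool {
	long freeblks;
};

static int
alloc_shared(struct shared_pool *p, long n)
{
	if (p->freeblks < n)
		return -1;	/* either consumer can starve the other */
	p->freeblks -= n;
	return 0;
}

/* Two pools: a pig in one pool can't eat the other's space. */
struct split_pools {
	long file_free;
	long swap_free;
};

static int
alloc_swap(struct split_pools *p, long n)
{
	if (p->swap_free < n)
		return -1;	/* paging fails early; file space survives */
	p->swap_free -= n;
	return 0;
}

int
main(void)
{
	struct shared_pool one = { 1000 };
	struct split_pools two = { 800, 200 };

	/* A runaway pager grabs all the backing store it can get. */
	while (alloc_shared(&one, 10) == 0)
		continue;
	while (alloc_swap(&two, 10) == 0)
		continue;

	printf("shared pool:    %ld blocks left for files\n", one.freeblks);
	printf("separate pools: %ld blocks left for files\n", two.file_free);
	return 0;
}

With a single pool, whichever consumer runs away first starves the
other; with separate pools, the runaway fails early and leaves the
file space untouched.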

A number of people have said ``In my environment, that doesn't make sense''.

As the sole mechanism for backing-store allocation, this just
won't fly, okay?


>>The fundamental reason is that the `flexibility' you want actually has
>>negative utility in many environments.
>
>Please explain exactly why this is so, or give me pointers so I can read
>the research papers explaining why this is so.

HP-UX has behaviour rather like what you describe.  Did you miss
people saying why that was bad in their environments?



>>VM/CMS used a very similar policy for each VM's `virtual memory' on
>>their godforsaken minidisks: VMs `paged' to unallocated filespace on
>>their minidisk.  I can't recall it being anything but an unmitigated pain.
>
>Could you please be more specific than "unmitigated pain"? 

Heavy VM activity fills up the disk, or nearly so, leaving no room
for the output files of the very job whose paging consumed the space.
So you lose both the time put into that compute job and its output.
It would've been better if the VM system had used its own pool: it
would have failed earlier, and the user could have grabbed more
temporary space to let the job complete *and* write its output files.

The principle at stake is very simple.  You don't want separate
pools.  But some people *do* want the pools to be separate -- for
much the same reason some sites put, say, maildrops and user files
on separate partitions: so that a pig in one pool doesn't disrupt
service for all users in the other pool.
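
To make the analogy concrete, such a site's /etc/fstab might look
something like this (device names and layout purely hypothetical):

/dev/sd0a	/		ffs	rw	1 1
/dev/sd0b	none		swap	sw	0 0
/dev/sd0e	/var/mail	ffs	rw	1 2	# maildrops: a mail flood fills only this
/dev/sd0g	/home		ffs	rw	1 2	# user files: unaffected by the flood

A mail flood can fill /var/mail solid without costing users in /home
a single block -- exactly the isolation a fixed swap partition buys
for backing store.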


>Also, let's consider the "typical" piece of h/w that I think will be common
>when I finish writing this code: An Alpha or Merced at >= 500 MHz
>with at least 1/4 GB of DRAM and around 50 GB of disk.

But all the old machines that NetBSD runs on will still have the same
memory and bandwidth constraints they have now.  And, heck, I've been
using systems with half a gig for about a decade now, and I still
wouldn't want this :).