Subject: Re: sw0 performance...
To: Mike Frisch <mfrisch@saturn.tlug.org>
From: David Gilbert <dgilbert@jaywon.pci.on.ca>
List: port-sparc
Date: 03/05/1996 19:37:54
>>>>> "Mike" == Mike Frisch <mfrisch@saturn.tlug.org> writes:

Mike> At 08:05 AM 3/5/96 -0500, David Jones wrote:
>> The other big difference: NetBSD has a fixed size buffer cache that
>> is separate from the VM system.  SunOS merges the VM and buffer
>> cache so the buffer cache could grow to fill all physical memory.
>> This might give SunOS better performance in some applications.

Mike>         Is the NetBSD method of buffering better or worse than
Mike> SunOS?  (I'd suspect better since NetBSD is supposed to be
Mike> "better" than SunOS in terms of performance).  I am familiar
Mike> with the caching scheme in Linux in which (I believe) it's
Mike> managed like SunOS.  Is this correct?

Mike>         I thought being able to utilize all "free" RAM as a
Mike> cache would be beneficial to performance.

	Well, for total throughput, I'm sure that you can manufacture
numbers that show that the dynamic buffer-cache will be a win.
However, I have found a number of implementations lacking.  It is
difficult to tell when to move the 'line' (between buffer cache and VM
pages).

	One example that particularly irked me was when HP-UX first
implemented the idea (7.x?).  At the time, I was using a 9000/755 with
196 Meg of RAM.  We also had a _lot_ of disk.  One thing that I had to
do fairly often was to create a tar archive of about 200 meg of
files.  One of the reasons that we had so much memory was that we had
a very large database application that had to react in real time to
incoming serial data.

	When I wrote that tar file, the write itself would finish
surprisingly fast.  Then you would try to move the mouse or use the
application, and you would have to wait while the system started to
flush buffers.  Since those files were about to be transmitted over IP
anyway (and ethernet is much slower than disk, so sending them out
immediately would need very little disk IO bandwidth), all that memory
wasted on the tar file in core caused real performance problems for
other parts of the system.

	This was even more apparent when we used our magneto-optical
drives, which take forever to write (about 5x or 10x slower than the
hard drives we had).

	This may sound like an extreme case, but, to this day, I find
that a lot of disk activity on HP-UX can knock VM pages out of memory
far too quickly.  In general, for applications like news servers, a
buffer cache substantially larger than the stock 10% may not be a win
simply because the same data isn't revisited often enough.  YMMV.

	When we go to implement this (and I am convinced that it can
be a good idea), I would like to see it made relatively easy to move
the 'line' while it is near the 'target' (the target being around
10%), with movement getting harder the further you stray from that
target.  I really, really, really want to prevent the situation where
creating a large tar swaps out everything on you.
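
	To make that concrete, here is a toy sketch in C.  This is
not NetBSD code; the function name, the target constant, and the
quadratic penalty curve are all my own invention, just one possible
shape for "harder as you move away":

#include <stdio.h>

#define BUFCACHE_TARGET_PCT	10	/* the 'target': ~10% of RAM */

/*
 * Resistance to moving the buffer-cache/VM 'line' so that the cache
 * occupies the given percentage of RAM.  Cheap near the target, and
 * quadratically more expensive with distance, so a huge sequential
 * write meets heavy resistance long before it can swap everything
 * else out.
 */
static unsigned int
line_move_resistance(unsigned int cache_pct)
{
	int dist = (int)cache_pct - BUFCACHE_TARGET_PCT;

	if (dist < 0)
		dist = -dist;
	return 1 + (unsigned int)(dist * dist) / 4;
}

int
main(void)
{
	unsigned int pct;

	for (pct = 5; pct <= 80; pct += 15)
		printf("cache at %2u%% of RAM -> resistance %u\n",
		    pct, line_move_resistance(pct));
	return 0;
}

	The exact curve matters much less than the property it has:
a burst of sequential writes (like the tar above) pays rapidly
increasing costs to grow the cache, instead of quietly paging out the
database application.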

Dave.

-- 
----------------------------------------------------------------------------
|David Gilbert, PCI, Richmond Hill, Ontario.  | Two things can only be     |
|Mail:      dgilbert@jaywon.pci.on.ca         |  equal if and only if they |
|http://www.pci.on.ca/~dgilbert               |   are precisely opposite.  |
---------------------------------------------------------GLO----------------