Subject: Re: paging like crazy
To: Jukka Marin <jmarin@pyy.jmp.fi>
From: Greg A. Woods <woods@weird.com>
List: current-users
Date: 09/02/2001 02:57:50
[ On Saturday, September 1, 2001 at 20:13:00 (+0300), Jukka Marin wrote: ]
> Subject: Re: paging like crazy
>
> Well, if I use ps after a backup, I see that every process (which is
> sleeping) has only 4K of resident space and the rest is on the disk.
I'm guessing then that you didn't continue trying to use the system for
normal activities during the time the backup was running....
In other words I'll bet you wouldn't have experienced the same problem if
you'd kept moving the mouse around in your important windows throughout
the entire period the backup ran.
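For what it's worth, you can watch this yourself with something along
these lines (the exact -o keywords may differ a little between ps
versions; RSS is the resident set size in KB):

    # after the backup, long-idle processes will show only a few
    # pages resident, with the rest paged out
    $ ps -axo pid,rss,vsz,command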
> Well, I have never asked the system to be as stupid as that. It's OK for
> the machine to use any unused RAM as disk buffers - but the memory occupied
> by my interactive programs is _not_ _unused_ memory.
Ah, but from the system's point of view that memory _is_ unused (at
least so long as the processes it belongs to are not accessing it
regularly). Interactive programs are probably the worst case since
they'll almost certainly leave memory pages idle for many seconds or
even minutes. If some other process comes along then they're going to
be paged out, like it or not, and whether the other process needs them
for its own VM or for the buffer cache is irrelevant.
> The backup data is
> read only once, it makes no sense to flush everything else from RAM just to
> fill it with one-time data which is no longer useful in RAM after that
> single occasion.
IIRC there's been some discussion on some list or another of making the
buffer cache a bit smarter when handling sequential file accesses. If
this is possible it should make this worst-case scenario a little less
harsh on "idle" process pages.
> It also makes no sense using all RAM for buffers to
> "speed up" things - at the expense of the processes which _also_ need RAM
> to do their work (and process the data that fills up all RAM).
There's obviously a balance to be struck here, but if you look at
anything except the worst-case scenario of something like a backup (or a
sequential system-wide "find | xargs fgrep", etc.), using RAM to cache
disk data is almost always a _major_ benefit. Large "interactive"
processes which might have most of their memory pages idle for what
appears from the system's perspective to be extremely long stretches of
time are very good candidates for getting paged out. Obviously some
other more demanding non-interactive process has come along with
resource demands that can be fulfilled by making use of these inactive
pages.
Now what would be interesting is if there were some algorithm or tuning
flag that could be used to restore the pages of such interactive
processes once the other demands have gone away again -- this way if
you've run a backup overnight the X server, browser, et al, will already
have been paged back in when you come along in the morning and wiggle
the mouse and the system will seem just as responsive as it was before
the backup ran. Doing this for the general case gets a bit tricky
though if not all processes will fit entirely in memory -- lots more page
use accounting might have to be done to try to determine what should be
paged in in the background in anticipation of future user demands.
Personally though I don't really mind waiting for the likes of Mozilla
to page back in after I've left it idle for a while -- at least I know
the memory's been used for something else I've asked the system to do.
The good thing is that the system now (with UVM+UBC) seems to be able to
keep *active* interactive processes sufficiently in memory to avoid
annoying lags when other processes make heavy demands on memory
resources.
> Just let me set the maximum amount of RAM that is _ever_ allocated to
> buffers and I'll be happy.
Setting NBUF and BUFPAGES (or BUFCACHE to make BUFPAGES dynamic w.r.t.
the hardware configuration) should do this. Beware though of the
potential for NBUF headers to suck up lots of space too.
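For example (the numbers here are just the ones from my own server
mentioned below, not recommendations, and the exact option spellings
are a sketch from memory):

    # kernel config fragment
    options  BUFCACHE=10    # size buffer cache at ~10% of physical RAM
    options  NBUF=1200      # cap the number of buffer headers
    # or, instead of BUFCACHE, a fixed page count:
    #options BUFPAGES=4900  # explicit number of buffer cache pages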
If my understanding of NBUF memory usage is correct (as per the comments
I posted before) I suspect its default should be more dependent on NPROC
(maybe (maxproc) or (maxproc*2) at most), and at the same time it should
be capped at some (small) percentage of total available RAM too.  It
most certainly should not be equal to BUFPAGES, except when the latter
is very small.  For example on my i386 server, with 192MB RAM, I've got
maxproc=4116, but only NBUF=1200 (and BUFCACHE=10%, which if I've done
the calculation right means about 4900 BUFPAGES).
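Spelling that last calculation out (assuming the i386's 4KB page size):

    10% of 192MB          = 19.2MB reserved for the buffer cache
    19.2MB / 4KB per page = ~4915 pages, i.e. roughly 4900 BUFPAGES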
Note also I've got a 128MB MFS allocated for /tmp. It's usually only
got a very little bit resident at any given time, though I'm guessing it
does change the memory use profile of my system as compared to yours....
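For comparison's sake, an MFS /tmp of that size is just an fstab entry
along these lines (the swap device name here is specific to my machine,
and -s takes 512-byte sectors, so 128MB is 262144 of them):

    # /etc/fstab: 128MB memory file system on /tmp
    /dev/wd0b  /tmp  mfs  rw,-s=262144  0  0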
There are still many things about NetBSD's kernel tuning which are
either black magic, or at least not well documented, and it would seem
that NetBSD with UVM+UBC is sufficiently unlike other types of Unix
systems that special care needs to be taken to tune it properly.
> If I set it to 128 MB, I want the rest of
> the RAM be devoted for running my processes. If there _is_ such a limit,
> it doesn't seem to work.
I think you just haven't calculated the full impact of the buffer cache
parameters.  Without calculating the potential usage of the NBUF
headers you're possibly vastly underestimating the amount of memory that
the buffer cache can consume in the worst case.
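To put a rough number on that worst case (again, only if my
understanding of the headers is right, and assuming the usual 64KB
MAXBSIZE):

    worst case = NBUF * MAXBSIZE
               = 1200 * 64KB
               = 75MB of space mappable by the buffer headers alone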
> Otherwise, I would still have most processes
> in RAM after a backup. In the 1.4 days, the system felt much more
> responsive (although I was running on much smaller systems that time)
> because there was nothing forcing all processes off the memory to slow
> swap space.
Prior to UVM+UBC I found NetBSD to be a real dog for anything that did a
lot of disk I/O. My old 3B2 running SysVr3.2 was a lot better at
dealing with heavy I/O demands while paging than older NetBSD was (after
taking into account the differences in hardware, of course). Now
finally (with some tuning) NetBSD seems to be closer to other types of
Unix in these conditions.
For example in the past a 'du' or 'ls -R' or 'find', etc. on a
reasonably deep hierarchy would almost always take the same amount of
time to execute a second time on NetBSD, no matter how large your buffer
cache and how much free RAM was available, even in single user mode.
Now however NetBSD finally behaves as other Unix systems have for
decades, and subsequent executions of such jobs will fly by with very
pleasing speed!
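It's easy to see for yourself with something like this (any reasonably
deep tree will do -- /usr/src is just an example):

    # the first run is limited by the disk; the second should come
    # almost entirely from the cache
    $ time find /usr/src -type f > /dev/null
    $ time find /usr/src -type f > /dev/null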
> As it is now, I guess having _less_ RAM might actually make things faster.
> At least it would keep the machine from doing 300+ MB's worth of stupid
> things as it's doing now.. :-I
>
> Sorry, but I _really_ don't like the current behavior - no matter how
> nice and fine the buffer cache may be.
>
> Let my processes have their RAM! ;-)
If your processes are using their RAM then I'm sure they'll be allowed
to keep it, but if they've gone idle, or stopped using a large segment
of their pages, why shouldn't those idle pages be used to potentially
give orders of magnitude faster access to disk data?
Indeed the system's not as smart as you, the user, and it won't always
make the best decisions on how to use RAM, but I think it is using some
fairly well tested algorithms for making fair and effective use of resources
given the tuning you've done (or not done, as the case may be! ;-), and
given the demands you've put upon it.
--
Greg A. Woods
+1 416 218-0098 VE3TCP <gwoods@acm.org> <woods@robohack.ca>
Planix, Inc. <woods@planix.com>; Secrets of the Weird <woods@weird.com>