Subject: failing to keep a process from swapping
To: None <tech-kern@netbsd.org>
From: Arto Selonen <arto@selonen.org>
List: tech-kern
Date: 10/22/2004 13:18:52
Hi!

On October 7th, I sent the following to current-users:

	http://mail-index.netbsd.org/current-users/2004/10/07/0014.html

Since I got no replies to that one, I'm making another effort here.

Basically, what I am trying to achieve is to keep 'squid' in memory.
I've set the following limits (after having sent the above email):

	vm.anonmin = 65
	vm.execmin = 2
	vm.filemin = 10
	vm.anonmax = 80
	vm.execmax = 5
	vm.filemax = 15
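
For reference, I set these at runtime with sysctl(8), roughly like this
(plus the corresponding vm.*=value lines in /etc/sysctl.conf so they
stick across reboots):

	sysctl -w vm.anonmin=65
	sysctl -w vm.execmin=2
	sysctl -w vm.filemin=10
	sysctl -w vm.anonmax=80
	sysctl -w vm.execmax=5
	sysctl -w vm.filemax=15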

And I've checked the proc.<pid-of-squid>.rlimit values for the running
process. I've also set reasonable values for those in /etc/login.conf
(I haven't yet modified the squid startup script to use ulimit).
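
For example, this is roughly how I check the limits of the running
process (node names per sysctl(7), quoted from memory, so treat it
as a sketch):

	# soft and hard limits of the running squid
	sysctl proc.<pid-of-squid>.rlimit.memorylocked
	sysctl proc.<pid-of-squid>.rlimit.datasize

And this is the kind of thing I mean to add to the startup script:

	# raise locked-memory and data-size limits before starting squid
	ulimit -l unlimited
	ulimit -d 524288	# in kilobytes, i.e. 512 MB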

However, no matter what I do, as soon as the RSS of squid grows to
~330-350 MB, it starts to throw pages to swap (i.e. swap usage starts
to grow and the RSS of squid shrinks). At the same time, the file cache
stays in the ~350-400 MB range. For a 1 GB system that really does not
use memory for anything else besides squid, this is not bad, but I would
like to at least feel like *I* am the one controlling the balance
between memory and disk caching (from squid's point of view).
Squid also seems to take a performance hit once the swap issue surfaces
(judging from the access statistics; it kind of makes sense if swap is
being used).
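
In case it helps, this is roughly how I have been watching it happen:

	# squid's resident and virtual sizes
	ps -axo pid,rss,vsz,command | grep squid
	# swap devices and how much of each is in use
	swapctl -l
	# paging activity over time
	vmstat 5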

Why can't I shoot myself in the foot, and control the VM usage?


<RANT>
In case this could be a bug of any sort, here are some thoughts
surrounding the issue. After having set the vm.anon{min,max}
and friends, I thought everything was set. After seeing squid bounce
back from its magical ~350 MB limit several times, I learned that even
though the root session has its ulimits set to max/unlimited, memorylocked
was set to ~350 MB (or ~35% of 1 GB). I thought I had finally found
the reason, and set new, higher limits, and started squid again.
I made sure that sysctl reported those limits for the running process.
Now that the process is again hitting an invisible limit of ~340 MB,
I am starting to think that either there is a bug, or I'm still missing
some pieces of this puzzle. Is there really no way for a process to grow
to ~500 MB in a 1 GB system where the OS is the only other major memory
consumer?
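
For completeness, this is the kind of /etc/login.conf entry I ended up
with for the higher limits (capability names as in login.conf(5); exact
values from memory, so treat this as a sketch):

	default:\
		:datasize-cur=768m:\
		:datasize-max=unlimited:\
		:memorylocked-cur=768m:\
		:memorylocked-max=unlimited: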

And no, I don't claim to understand what I've done so far, though
I think I've grasped some of the most trivial explanations from
that 'Bad response' thread on current-users. I thought I knew
how the system would behave depending on the vm.{anon,exec,file}{min,max}
values, but my system seems to be making fun of my expectations. :)
Who is to blame, man or machine?
</RANT>

Any comments or suggestions are welcome...


Artsi

PS. In case it makes any difference, I'm running -current from ~20041012
-- 
#######======------  http://www.selonen.org/arto/  --------========########
Everstinkuja 5 B 35                               Don't mind doing it.
FIN-02600 Espoo        arto@selonen.org         Don't mind not doing it.
Finland              tel +358 50 560 4826     Don't know anything about it.