Subject: Re: swap space for a memory based filesystem
To: None <email@example.com>
From: Perry E. Metzger <firstname.lastname@example.org>
Date: 03/04/2004 07:59:00
> I was confused by the free memory going down :-(
> but after the explanation on the list, I could appreciate it.
> The issue, it seems, for my problem was that while some file copy or say ftp
> of a huge file is in progress, the memory / pages are used up by programs
> already executing, and so if in the meanwhile one of our processes asks for
> memory, it is turned down.
It will not be turned down if there are any file pages that could be
evicted.
> Anyway, I think in my case limiting the max number of pages for data /
> buffer caching could be the only way out, using sysctl?
You have it exactly in reverse. If there are more than the minimum
number, file pages will be evicted down to the min watermark when the
pages are needed for something else. "Max" is only the point above
which file pages will always be evicted if new file pages are needed.
There are four sets of tunables -- minimum and maximum for
executables, anonymous pages (i.e. program data) and files, and the
size of the metadata buffer cache.
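On NetBSD these live under the vm sysctl node; the names below follow
the stock sysctl names for the UVM page-reclaim watermarks, but check
your release's sysctl(7) before changing anything. Values are
percentages of physical memory:

```shell
# Inspect the current watermarks:
sysctl vm.execmin vm.execmax     # executable pages
sysctl vm.anonmin vm.anonmax     # anonymous pages (program data)
sysctl vm.filemin vm.filemax     # file data pages
sysctl vm.bufcache               # metadata buffer cache size

# Example (a sketch only, run as root): let file pages be reclaimed
# more aggressively by lowering their guaranteed minimum. Pick values
# to suit your own workload.
sysctl -w vm.filemin=5
```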
> would just decreasing the filemax to a reasonable number do the thing ?
It will do nothing for you at all.
If you are hitting the limit on the number of pages for executables,
you may have execmax set too low, or you may have some of the other
mins set so high that execmax can never be reached.
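To illustrate that interaction with hypothetical numbers (the values
are percentages of physical memory; these are not recommendations):

```shell
# Hypothetical settings:
#   vm.anonmin=60  vm.filemin=30  vm.execmax=20
# Anonymous and file pages are each guaranteed their minimum, so
# together they can pin 60% + 30% = 90% of memory, leaving at most
# 10% for executable pages -- an execmax of 20 can never be reached.
sysctl -w vm.anonmin=60 vm.filemin=30   # run as root; sketch only
```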