Subject: Re: explaining TOP memory output and constant 1.0 load averages
To: NetBSD User's List <netbsd-users@netbsd.org>
From: Johnny Billquist <bqt@update.uu.se>
List: netbsd-users
Date: 07/19/2006 00:10:48
I just don't agree with your conclusions.
No, I'm not short on RAM. And no, more RAM doesn't solve every problem.
A typical case is when much of a large file system is read through
once. This fills up the caches with lots of data that will not be
referenced again, over and over pushing out of memory executable
programs that *will* be referenced again.
And my systems are not starved for I/O throughput; the problem is (was)
that programs got far too little memory, and caches far too much. There
is no I/O throughput in the world that will help that situation, and
memory-wise it will not start to be acceptable until you have close to
as much memory as you are using disk.
Neither a very likely nor a sensible scenario, I think you would agree...
I find it just silly to say that my problem is that I have too little
RAM, when the system was obviously very zappy before the new VM system
came into place, and was very zappy once again once I changed the VM
tuning parameters.
To suggest, in such a situation, that the problem is that I have too
little memory is on the verge of offensive.
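(For anyone wanting to try the same, the knobs involved are the UVM
page-daemon sysctls; the values below are only an illustrative sketch of
the sort of change meant, not necessarily the ones I ended up with:)

```shell
# Shrink the share of memory the page daemon will protect for
# file-cache pages...
sysctl -w vm.filemin=2
sysctl -w vm.filemax=10
# ...and protect executable and anonymous (program) pages instead.
sysctl -w vm.execmin=5
sysctl -w vm.anonmin=10
```

The values are percentages of physical memory; raising the *min knobs
tells the pager which page types to keep, lowering the file ones tells
it which to sacrifice first.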
I thought it was companies like Microsoft who used that excuse as an
answer to all performance problems. It saddens me that the same
philosophy has come to permeate this place as well.
Johnny
Greg A. Woods wrote:
> At Tue, 18 Jul 2006 08:12:26 +0200,
> Johnny Billquist wrote:
>
>>Greg A. Woods wrote:
>>
>>>At Mon, 17 Jul 2006 11:57:32 +0100,
>>>Mark Cullen wrote:
>>>
>>>
>>>>Perhaps changing from 50MB to 5MB helped as it was filling up the file
>>>>system buffer cache or something, as Johnny suggested. If the buffer
>>>>cache code pushes things out to swap I guess it could cause things like
>>>>this to happen?
>>>
>>>Keep in mind (and if I understand correctly), the buffer cache won't
>>>page anything out onto swap unless it's willing to go, and it also won't
>>>do it unless you've got I/O demands to fill the buffer cache.
>>
>>Almost anything is willing to go, if nudged. Not much is wired in place.
>
>
> Exactly -- but it has to be nudged. If there's no starvation of the
> available memory for every desired purpose then there won't be any
> nudging and swap will remain empty.
>
>
>
>>>I.e. this machine is too small/slow for what's being attempted with it.
>>>NetBSD has been asked to make compromises and it's doing the very
>>>best it can with the information it has been given. Give it more memory!
>>
>>I'm sorry, but I think this is nonsense. The same response was given to
>>me when I was fiddling around. At least in my case, the machine did *not*
>>have too little memory, and was not asked to do too much. It was simply
>>not doing the best it could with the resources. And earlier it had been,
>>which is why I was annoyed with the sudden drop in performance.
>
>
> Please do pay attention to the words I wrote: "the best it can with the
> information it has been given"! :-)
>
> The fact was your machine did have too little RAM to perform the
> combined loads you subjected it to with the VM tuning parameters it was
> given to use. You preferred to limit file I/O to physical I/Os and give
> up on caching in order to get better performance from other memory
> hungry programs, but the system wasn't initially tuned to do that, and
> had your preference actually been to reuse the file data frequently then
> the caching that it was doing would actually have increased your system
> throughput, albeit at the expense of the big memory hungry program(s).
>
> In any case, giving a machine more memory to work with, if possible,
> when there's stuff out on swap, is NEVER EVER a bad idea, regardless of
> what that memory will be used for and regardless of what nudged those
> pages out onto swap. Indeed many folks will likely agree that giving a
> machine more than enough RAM is far easier and more productive than
> trying to finely tune it to get by on the minimum amount of RAM.
>
>
>
>>After tuning the system a lot, it now once more runs like a champ, so
>>it's simply a case of the defaults nowadays being very poorly tuned. I
>>don't know if the defaults might be a good setting for some machine with
>>loads of memory that only runs as a server without any interactive
>>users, but at least I have not found a situation where the defaults are
>>particularly good.
>
>
> Indeed the defaults are poorly tuned for some uses, such as yours, on
> some hardware configurations perhaps. However it can be shown that with
> other kinds of workloads, and appropriate hardware resources, the
> default tuning may actually benefit overall system throughput greatly.
>
>
>
>>And I would expect that on such large server machines, modifying the
>>defaults will not make it run badly either, so my belief is that the
>>defaults are simply wrong for almost anyone, but they don't hurt
>>everyone enough to complain about.
>
>
> Actually, on machines with ample amounts of RAM, the defaults aren't the
> best, at least for workloads I've put such machines to.
>
> Perhaps the only default which we might collectively agree is
> inappropriate is vm.filemin. I think it should be at or below 5%.
>
> Perhaps vm.execmin should be higher too (and thus vm.execmax should
> probably be increased by default), however this might abnormally favour
> large programs on systems where the best throughput would actually come
> from raising vm.filemin. The same could be said of vm.anonmin.
>
>
>>Interesting thoughts on loads followed. I don't exactly disagree with
>>it, but the one big thing in my book is that if I have a load over 1, I
>>definitely don't want to see a 90% idle cpu... Unless the load sampling
>>is synchronizing with the process creating the load, you then have a very
>>badly tuned system.
>
>
> Indeed -- try giving it more memory so that maybe it won't be starved
> for I/O throughput! :-)
>
> (of course one can go broke trying to give a machine, or build a machine
> even capable of accepting, enough RAM to hold its whole dataset and all
> program memory, etc. one must know one's limits and adjust one's
> patience to match! sometimes paging is the only way to get through a
> big job! ;-))
>
--
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: bqt@update.uu.se || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol