Subject: Re: Bad response...
To: Thor Lancelot Simon <email@example.com>
From: Johnny Billquist <bqt@Update.UU.SE>
Date: 08/30/2004 16:50:31
On Mon, 30 Aug 2004, Thor Lancelot Simon wrote:
> On Mon, Aug 30, 2004 at 09:40:17AM +0200, Johnny Billquist wrote:
>> On Sun, 29 Aug 2004, Thor Lancelot Simon wrote:
>>> I find it a bit strange that you'd expect to be able to run binaries 100
>>> times as large as the average program was 10 years ago, while building the
>>> operating system, whose sources are 10 times as large as they were, with
>>> an optimizing compiler that works 10 times as hard to compile the same
>>> while serving up files -- data files and binaries *both* often 100 times
>>> as large as they were a decade ago -- to various other machines, with only
>>> twice or perhaps four times the RAM a decent desktop *or* engineering
>>> workstation *or* fileserver would have had then, and yet blame *the
>>> system* when you experience the obvious symptoms of having a working set
>>> far larger than the amount of physical RAM on the machine.
>> Whoa! Hold on to your horses here.
>> Are you claiming that I'm doing something abnormal or not?
>> And are you claiming that my hw is unusual or not?
> I'm claiming that your expectations are way out of line. You're trying to
> work with data and executables that are somewhere between one and two orders
> of magnitude as large as they were when the amount of memory on your system
> was appropriate for its job -- yet expecting performance to be good with
> default system tuning. I think that's absurd, and I think that changing the
> default system tuning to accommodate this use would probably break more than
> it fixes.
But actually, if we compare with what we did and had 20 years
ago, I'm asking for way less now. Back in the '80s, memory demands far
outpaced supply. Memory was much more scarce then, even relative to
the usage. So, if anything, today's situation places much less
demand on the machine and the OS.
And yet we seem to do worse.
Yes, I'm obnoxious and silly. I remember sitting at a PDP-11/70
(admittedly running RSTS/E) back in the early eighties. We had 512KB of
memory, usually had 40 people running on the machine, and peaked at
63, which was the OS limit on terminals. It didn't become any more sluggish
than my machine was before I tweaked the knobs a few days ago.
Anyhow, this very same machine was behaving much better two years ago.
And yet you claim that my expectations are way out of line.
They are very much based on the behaviour of the same machine, with the
same OS, with very similar workload of a few years ago.
But okay, basically we should expect serious degradation of performance
with newer versions of NetBSD then.
> As I pointed out, the working set of your system vastly exceeds the size of
> its physical memory. You're not going to see good performance in that case,
> no matter what you do; all you can really do is choose which applications on
> the system will get hit the worst with lousy performance.
No. I do get very good performance now, thank you. The file caching is
actually just a big waste in this case. When building the system, most
disk blocks only get hit once. Sooner or later, they will be kicked out.
It's actually better if they get kicked out right away, instead of first
kicking my application out and then getting kicked out themselves.
File caching isn't always a win. If we really have unused RAM, then by all
means, put it to some good use. But when we have any kind of contention
for memory, file caching is seldom a big win. Not that many things hit
the same disk blocks over and over again. Metadata for the file
systems is a good thing to cache, as are directories, but actual file
data blocks are not. Most likely they will just cause you to take
lots of additional page faults for very few extra cache hits.
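The eviction argument above can be sketched with a toy simulation (the page
names, cache size, and "bypass" policy are hypothetical illustrations, not
NetBSD's actual UVM replacement code): a strict LRU cache where a single-pass
stream of file blocks repeatedly evicts a small, hot application working set,
compared against the same cache that simply declines to retain the
single-use blocks.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache that counts hits and misses."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, key, cacheable=True):
        if key in self.cache:
            self.hits += 1
            self.cache.move_to_end(key)   # mark as most recently used
            return
        self.misses += 1
        if not cacheable:
            return                        # bypass: don't retain single-use blocks
        self.cache[key] = True
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used

def run(bypass_stream):
    c = LRUCache(8)
    # Warm up the "application" working set of 4 hot pages.
    for a in range(4):
        c.access(f"app{a}")
    c.hits = c.misses = 0
    # Interleave repeated application pages with a stream of
    # file blocks that are each touched exactly once.
    for i in range(100):
        for a in range(4):
            c.access(f"app{a}")
        for s in range(8):
            c.access(f"stream{i}-{s}", cacheable=not bypass_stream)
    return c.hits, c.misses

print(run(False))  # plain LRU: the stream keeps flushing the hot pages
print(run(True))   # bypass single-use blocks: hot pages always hit
```

Under plain LRU the hot application pages hit only on the first pass and
miss forever after, because each burst of single-use blocks flushes the
whole cache; refusing to retain the one-shot blocks costs nothing (they
would miss anyway) and keeps the working set resident.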
Do we keep any statistics on disk cache hits, preferably with at least
metadata and directory data separated from other kinds of disk data?
Just my $.02, at least.
Johnny Billquist || "I'm on a bus
|| on a psychedelic trip
email: firstname.lastname@example.org || Reading murder books
pdp is alive! || tryin' to stay hip" - B. Idol