Subject: Re: But why?
To: None <firstname.lastname@example.org>
From: David S. Miller <email@example.com>
Date: 10/23/1996 22:39:02
From: Jason Thorpe <firstname.lastname@example.org>
Date: Wed, 23 Oct 1996 18:15:51 -0700
> [ Sorry to jump into this so late; my e-mail address got popped off
>   tech-kern somehow...  I've read the archives to catch up :-) ]
No biggie, it's just starting to get fun and intellectually
stimulating, Jason ;-)
> I'd definitely agree with Perry, here.  Concentrate on
> optimizations which give you a big gain, at first.  As Perry noted,
> often these optimizations come in the form of different data
> structures and/or algorithms.
One of the points I was trying to make is that, in certain cases I can
document pretty well, these "micro optimizations" can do quite a bit.
> In any case, in my experience, I've never seen a Linux system stay
> up long enough to get a real benefit from micro-optimization :-)
See my previous posting about fluid analysis jobs etc...  But I know
of some other SparcLinux machines which serve the world, only to be
brought down to get the latest features I have released in a snapshot.
> [ David sez ]
> > I end with a simple question: "If Charles Schwab on system B
> > could get transactions more quickly than any other broker,
> > much more quickly than they do now with system A, do you think
> > they would switch to B?"
> I would say that Charles Schwab ought to have some real, concrete
> evidence that there's a real performance benefit from switching
> systems.  Moving your bread-and-butter around is a risky
> proposition, one which a conservative individual would likely baulk
> at if the gain was (at best) negligible.
I totally agree, it is a huge risk and it would be suit city.
> It occurs to me that David is arguing by assertion...  I want to see
> numbers (other than lmbench, which is a micro-benchmark, and thus
> useless for measuring real-world performance gains, as far as I'm
> concerned) that prove that shaving 3 usec off of system call
> overhead really makes a difference to applications (which is what
> computers are for, right?).
This is not the only thing lmbench measures; it is only one minuscule
run that it happens to perform.  See the list in one of my recent
postings, and how that list applies to the real world, at least to a
certain extent.
> But, to address Perry's point, optimizing your VM system isn't
> about micro-optimizations.  You're typically looking for more
> efficient ways to store/find information.
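The "better data structure" point can be made concrete with a toy
example: looking up the record for a virtual page.  A linear list
costs O(n) per lookup; hashing the page number into buckets makes it
near O(1).  All the names below are illustrative only, not any real
kernel's VM code:

```c
#include <stdlib.h>

#define NBUCKETS   64
#define PAGE_SHIFT 12

/* Toy record describing one mapped virtual page (illustrative only). */
struct vpage {
    unsigned long vaddr;    /* page-aligned virtual address */
    struct vpage *next;     /* hash-chain link */
};

static struct vpage *buckets[NBUCKETS];

static unsigned int vpage_hash(unsigned long vaddr)
{
    return (unsigned int)((vaddr >> PAGE_SHIFT) % NBUCKETS);
}

void vpage_insert(struct vpage *p)
{
    unsigned int h = vpage_hash(p->vaddr);
    p->next = buckets[h];
    buckets[h] = p;
}

struct vpage *vpage_lookup(unsigned long vaddr)
{
    struct vpage *p;

    /* Walk one short chain instead of every mapping in the space. */
    for (p = buckets[vpage_hash(vaddr)]; p != NULL; p = p->next)
        if (p->vaddr == vaddr)
            return p;
    return NULL;
}
```

Swapping the list for the hash changes the asymptotic cost of every
lookup; no amount of instruction-level tuning of the list walk gets
you that.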
You'd be surprised what "a better cache flush" (not eliminating them,
just performing the ones you already do better) or a "better tlb flush"
can do.  I spent about an hour on the INDY and was able to make fork()
and exec() overhead go down drastically because of this alone.
David S. Miller