Subject: Re: But why?
To: David S. Miller <davem@caip.rutgers.edu>
From: Perry E. Metzger <perry@piermont.com>
List: tech-kern
Date: 10/23/1996 19:37:37
"David S. Miller" writes:
> > If you can find 10 or 20 places where you can do something
> > similar, what does this work out to?
>
> On my machine, a savings of 0.4%. Still a bit of a yawn, I'd say.
>
> You're missing my point just to be rude and obnoxious.
No, I got your point. The point was that your point is
meaningless. Saving 20 instructions here and 10 instructions there is
useless except in the inner loop of DES or an MPEG decoder.
What matters far more is algorithms. I remember when Linux used a
linear list for the routing table, and NetBSD used a PATRICIA
tree. No number of instruction bums on the linked list was going to
come near a PATRICIA tree no matter what you tried.
I've got to admit to being an old fart who's seen this sort of thing
before. The rule is that you profile, find your bottleneck, and then
go for a better algorithm. You don't bother optimizing anything that
comes to a tiny percentage of your time, because it isn't worth the
effort. After you've fixed the bottleneck's algorithms, if it's still a
real bottleneck, then you bum instructions or recode in assembler and
use tricks like paying attention to the pipeline behavior of the
processor.
You don't bother, though, for stuff that isn't a bottleneck, and
assembler and such is a last resort.
> I've said repeatedly, the kernel isn't a bottleneck for most
> applications.
>
> (I'm quoting Jacobson here, when he heard someone at a conference say
> that TCP could never be made to go fast) "Bullshit!"
I didn't say things couldn't be optimized. I said the kernel isn't a
bottleneck.
On networking, well, my kernel already totally saturates my ethernet
without taking significant CPU time. This means there is no point in
optimizing it further. When I go for 100Mbps ethernet, then it will be
time.
Sure, if you find that something is a real problem (like you have a
100Mbps ethernet and you are getting 11Mbps out of it) you should
optimize. However, there is no point if you don't see a problem.
> In general, the machines are already more than fast enough for the
> job, and the problem people face is application software. Those
> machines that aren't fast enough need more CPU for the userland
> stuff, not for the kernel.
>
> "We have faster boxes, we can afford to let the kernel go a bit slow
> because the hardware is faster now."
Are you being deliberately thick?
Most of the machines I use spend only a few percent of their lives in
the kernel. Wall Street applications, which you explicitly named, are
very much like this. Compiles, which my machines do a lot of, are
bound on CPU in userspace and I/O during lexing. The kernel time?
Negligible. Maybe a few percent.
What is the point of spending serious time on optimization of
something that only consumes a few percent of your CPU? If you are
hitting the wall, 5% usually doesn't help you. You mentioned Wall
Street. Well, on Wall Street, there are some very nasty
applications. For instance, take tickerplants. A tickerplant has to
store vast numbers of securities transactions per second -- sometimes
tens of thousands or more. However, tickerplant machines are almost
NEVER bound on kernel CPU. On things like tickerplants, once your VM
is decent, what you need is lots of memory, because memory ends up
being a giant disk cache, and lots of fast disk, because memory can't
store everything. Usually the CPU doesn't tick over on these machines
-- they are almost always I/O bound.
> I am reminded repeatedly every single day by people like you why we
> are indeed amidst a software crisis.
The software crisis is that software can't be written fast enough to
satisfy the needs we have. The software is usually fast enough.
Some applications, like desktop video, are still cutting edge CPU
wise, but these rarely even touch the kernel.
The only place the kernel is still a bottleneck is in IPC, especially
in network stuff. Removing excess kernel copies from the IP stack is
important if we want to use things like ATOMIC as our LANs of the
future, running at 600Mbps or 1.2Gbps. However, for most stuff, the
kernel isn't noticed.
Take disk I/O for example. Most decent operating systems already get
virtually every last drop of I/O the disk is capable of. Sure, maybe
you can chop a few instructions here or a few instructions there --
but why bother? No point. If you hit the wall, you need faster disk,
or better cache or layout algorithms, or striping -- shaving an
instruction or two on the kernel won't help you.
Perry