Subject: Re: But why?
To: David S. Miller <email@example.com>
From: Perry E. Metzger <firstname.lastname@example.org>
Date: 10/23/1996 18:06:38
"David S. Miller" writes:
> ??? You call this a micro-optimization and a phenomenal waste of
> Consider an activity the kernel does say 2,500 times per second. If
> you scrape say 10 cycles out of that operation, what does that work
> out to?
On my machine, a savings of 0.02%.
> If you can find 10 or 20 places where you can do something
> similar, what does this work out to?
On my machine, a savings of 0.4%. Still a bit of a yawn, I'd say.
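To make the arithmetic above concrete, here is a back-of-the-envelope sketch. The clock speed of the machine isn't stated in this thread, so the 100 MHz figure below is an assumption chosen to roughly reproduce the percentages quoted:

```python
# Rough check of the savings figures above, assuming a hypothetical
# 100 MHz CPU (the actual clock speed is not stated in the mail).
CLOCK_HZ = 100_000_000  # assumed: 100 MHz

def savings_pct(calls_per_sec, cycles_saved, places=1):
    """Percentage of total CPU cycles saved by shaving `cycles_saved`
    cycles off an operation done `calls_per_sec` times per second,
    repeated in `places` similar spots in the kernel."""
    return 100.0 * calls_per_sec * cycles_saved * places / CLOCK_HZ

one = savings_pct(2_500, 10)          # one optimized hot path
twenty = savings_pct(2_500, 10, 20)   # twenty similar places

print(f"{one:.3f}%")     # prints 0.025% -- about the 0.02% quoted
print(f"{twenty:.3f}%")  # prints 0.500% -- same order as the 0.4% quoted
```

On a faster or slower clock the percentages scale accordingly, but they stay in the fraction-of-a-percent range either way, which is the point being made.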
I frequently implement optimizations, but they tend to be better
algorithms. Those usually get you BIG improvements -- multiples of the
original performance, not tiny increments.
> I end with a simple question: "If Charles Schwab on system B could get
> transactions more quickly than any other broker, much more quickly
> than they do now with system A, do you think they would switch to B?"
I actually work in the financial industry -- I've worked on trading
systems at firms like Morgan Stanley, Lehman Brothers and Moore
Capital Management. The answer is no; this is not the way such things
get built.
As I've said repeatedly, the kernel isn't the bottleneck for most
applications. If you are building a tickerplant, the quality of your
VM system and the amount of memory you have is the key. If you are
building an automated trading system, the algorithms you use for
picking trades are far more important than the speed of your
kernel. If you are building a brokerage system, your bottleneck is
your database, not your OS. A delay of a few hundred microseconds
won't make a difference on a trade going down the wire, and that's the
realm we are talking about here. 99% of the problem is in userland. In
general, the machines are already more than fast enough for the job,
and the problem people face is application software. Those machines
that aren't fast enough need more CPU for the userland stuff, not for
the kernel.