Subject: Re: But why?
To: Linus Torvalds <torvalds@cs.Helsinki.FI>
From: Tim Newsham <email@example.com>
Date: 10/28/1996 12:11:05
> On Wed, 23 Oct 1996, Tim Newsham wrote:
> > ok. say I'm reading 256k at a time and perhaps
> > getting 3.5Mbytes/sec out of my filesystem.
> Umm? WHO CARES?
People running programs that do a lot of large reads.
> As you correctly point out, saving 20 machine cycles for this case is
> negligible, because you'll be spending all your time copying or waiting for
> the disk.
My previous post above was the result of mail claiming that
tweaking system call overhead is important for programs
that do large reads.
> HOWEVER, that isn't the issue, and it never has been. IF you're doing 256kB
> reads, you may be correct (and as David pointed out, even then you may not be
> correct, because interrupt latency does matter, as does interrupt tlb and
> cache footprint).
Interrupt latency would not affect these numbers much.
The math is left as an exercise for the reader.
> However, there are lots of things that don't do 256kB reads. In fact, I'd be
> surprised if 256kB would be anywhere _close_ to a normal read. I'd say most
> IO requests are in the <16kB range, and many of them are much smaller.
A general objection I have to most of the information in
this thread is pulling numbers out of thin air and using
them as justification. It is not uncommon for observed
parameters to differ from what experience would lead you
to expect. If you're tuning a system you shouldn't tweak
something just because "I think most IO requests are <16kb".
That said, I think you're probably right that most IO
requests are <16kb. You can do the math to figure out
that any program that does reasonable-size transfers
(i.e. anything using buffered I/O) will not be improved
much by tweaking interrupt latency. This was one of
the objections brought up earlier in this thread.
Before you say "but not all programs behave that way"
I would like to say that I agree.
> Micro-optimizations do make sense. Some other idiot claimed that it doesn't
> make any difference how fast a loop of "gettimeofday()" goes, because that's
> only something a benchmark would ever be interested in measuring.
I think in general most people agree that making the
syscall path faster will help certain types of programs.
> I'll tell you why: Because I think people who dismiss latency issues are
> STUPID people. Lots of UNIX people optimize throughput, and if you do that
> and don't feel that latency is important enough, then yes, X will be slow for
> you. But I personally feel that latency is very important, and that there
> isn't really any reason why X should be all that slow (*).
It depends on what you're interested in. If you're using
X as a user interface for running vi and gcc on
code that will take 5 days to run, then you probably
don't care very much that X is a little slow. In fact
you probably won't ever notice that X is a little slow.
What you really care about is that your 5-day program
runs as quickly as possible after you're done developing.
If you're doing graphics-intensive work then you
most likely do care how fast X is.
> In contrast, latency is _much_ more difficult, yet it is as important as
> throughput. For latency:
This is exactly why the benefits of latency optimizations
should be examined first. Programmer time is a scarce
resource. Interesting projects abound. Are the benefits
of particular latency optimizations worth more than
the other projects that could be done?
Linux clearly has people who are interested in doing
the latency optimizations. If they want to convince others
that they should spend programmer time doing the
same, they should quantify the benefits.
> This is where a lot of UNIX bigots seem to trip up. It's _unbelieveable_ how
> many "sane" people will argue against the above four points because they
> think those kinds of optimizations are a waste of time. They think the three
> rules of bandwidth makes up for things. Damn idiots,
Are you saying it is not a waste of time? How did you
come to this conclusion? Could you elaborate on the reasoning?
I know computer science is more black magic than science,
but there is much to be said for knowing the system
and its weaknesses (where time is being spent, what demands
are usually placed on it), figuring out what to attack
based on this information, and understanding the results
(did you get the results you expected? If not, why?).
I would love to see a justification for the changes
made, along with an analysis detailing the benefit
of the new system over the old one. Unfortunately
the closest I've seen so far was:
"Linux with these changes ran faster than Solaris
without these changes."