Subject: Re: scsi adaptor speeds etc
To: Justin T. Gibbs <>
From: Michael L. VanLoon -- <>
List: port-i386
Date: 04/01/1996 22:18:45
>I would also suggest using this little program from Bruce Evans.
>It attempts to measure the combined controller and disk command

I'm not sure how useful this program is in finding any real-world
information.  The throughput figure really only shows how well the
cache on each of my hard drives works.  I was consistently getting
between 8 and 9MB/s, which is physically impossible with my drives
(actually, with almost any commercial drive), so it had to be coming
directly from the drive's cache.  I know the drive was at least
involved, because the drive's LED was on the entire time the test was
in progress.

This is on an AMD 486DX2/80, 512K write-back cache, EISA bus, BT747s,
one 1GB HP 3323, one 850MB Quantum Trailblazer, and one 540MB Quantum
LPS.  Not cutting-edge technology.

The overhead calculation was a bit more interesting.  It still
bothered me somewhat, though, that it could vary by as much as 50% on
the same drive, on a totally quiescent (single-user, as root) system,
even after increasing the number of iterations to 10000.  It generally
stayed around 1200-1300 msec, but got as high as ~1800 several times,
and over 2000 a couple of times.

One interesting note: a Western Digital EIDE drive on a dumb IDE
controller averaged around 800 msec for overhead, and a little over 2
MB/s in throughput.

I still feel that, of the disk benchmarks I've seen, iozone does a
good job of determining throughput, especially with files larger than
the buffer cache.  And Bonnie does a good job of measuring "overall"
drive performance.  Both measure through the filesystem, which is more
"Real-World".  Neither measures disk subsystem overhead,
unfortunately.  And both can show figures that vary noticeably between
an empty filesystem and one that is well-used, somewhat fragmented,
and fairly full, which, I guess, is also pretty much the way it works
in the "Real World".

However, that kind of hides one of the things we're trying to test
here, which, I assume, is the overhead induced by using various
different SCSI controllers.  On the other hand, I have a feeling that
is a very small part of the performance picture when calculating the
"Real-World" performance of a disk subsystem, especially when Berkeley
FFS is on top of it.

  Michael L. VanLoon                       
       --<  Free your mind and your machine -- NetBSD free un*x  >--
     NetBSD working ports: 386+PC, Mac 68k, Amiga, HP300, Sun3, Sun4,
                           DEC PMAX (MIPS), DEC Alpha, PC532
     NetBSD ports in progress: VAX, Atari 68k, others...