Subject: Re: scsi adaptor speeds etc
To: None <,>
From: Bruce Evans <>
List: port-i386
Date: 04/02/1996 20:29:50
>>I would also suggest using this little program from Bruce Evans.
>>It attempts to measure the combined controller and disk command

>I'm not sure how useful this program is in finding any real-world

It's not supposed to be.
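For readers who don't have the program handy: what it attempts can be sketched by modelling one raw read of n bytes as t(n) = overhead + n/rate, timing reads at two block sizes, and solving for both unknowns. This is only a sketch of the idea, not the actual source; the function name and the timings below are made up (chosen to roughly match the Toshiba/BT445C row further down).

```python
# Hypothetical sketch of the measurement, NOT the actual disklatency source.
# Model: time for one raw read of n bytes is t(n) = overhead + n / rate.
# Timing reads at two block sizes gives two linear equations in two unknowns.

def solve_overhead_and_rate(n1, t1, n2, t2):
    """n1, n2: block sizes in bytes; t1, t2: per-read times in seconds."""
    rate = (n2 - n1) / (t2 - t1)   # bytes/second (data moves at this rate)
    overhead = t1 - n1 / rate      # seconds of fixed cost per command
    return overhead, rate

# Made-up per-read times, roughly consistent with the BT445C/Toshiba figures:
# a 512-byte read taking ~4763 usec and an 8K read taking ~5555 usec.
overhead, rate = solve_overhead_and_rate(512, 4762.8e-6, 8192, 5555.0e-6)
print(overhead)  # about 4.71e-3 s, i.e. ~4710 usec of command overhead
print(rate)      # about 9.69e6 B/s transfer rate
```

Averaging many iterations instead of using two single reads is what reduces the noise, which is why the consistency check below matters.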

>The throughput figure really only shows how well the
>cache on each of my hard drives works.  I was consistently getting
>between 8 and 9MB/s, which is physically impossible with my drives
>(actually, with almost any commercial drive), so had to be coming

The throughput figure is only printed as a consistency check.  Run
the program several times and discard the results of all runs where
the reported transfer rate isn't very close to the controller transfer
rate (typically 10MB/s).  Typical results for my controllers:

controller	drive		command overhead	transfer speed
SC200		Quantum XP34301	1460 usec		1.0179e+07 B/s
BT445C		Toshiba MK537FB	4710 usec		9.69422e+06 B/s

9694220 isn't as close to 10000000 as I'd like but it seems to be typical.
Most of the overhead is in the drive at least for the Toshiba.  4710 usec
is huge.  It limits the drive to 212 transfers/second, so the transfer
rate is at most 111K/s for a block size of 512 bytes and 1676K/s for a
block size of 8K.  The block size needs to be 16K just to keep up with
the old Toshiba drive which has an average platter speed of about 2500K/s.
These speeds are consistent with the speeds for reading using dd.  Old
versions of FFS had abysmal performance for almost everything on the
Toshiba.  Clustering has fixed this (only) for large i/o's.  Metadata
updates are still very slow.
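The arithmetic above can be checked mechanically. A small sketch, assuming the quoted 4710 usec command overhead and 9.69422e6 B/s controller rate, and charging each command its overhead plus the data transfer time:

```python
# Back-of-envelope check of the Toshiba figures quoted above.
OVERHEAD_S = 4710e-6       # command overhead, seconds
BUS_RATE = 9.69422e6       # controller transfer rate, B/s

def throughput(block_size):
    """Sustained rate when every command pays the fixed overhead
    plus the time to move block_size bytes at the controller rate."""
    return block_size / (OVERHEAD_S + block_size / BUS_RATE)

print(1 / OVERHEAD_S)      # ~212 commands/second, as stated above
print(throughput(512))     # ~1.07e5 B/s, the same ballpark as the 111K/s above
print(throughput(8192))    # ~1.47e6 B/s for 8K blocks
print(throughput(16384))   # ~2.56e6 B/s -- about the 2500K/s platter speed,
                           # which is why 16K blocks are needed to keep up
```

The exact K/s figures in the text differ slightly from these, but the conclusion is the same: with ~5 ms of per-command overhead, only large blocks keep up with even a slow platter.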

>The overhead calculation was a bit more interesting.  It still
>bothered me, somewhat, though, that it could vary by as much as 50% on
>the same drive, on a totally quiescent (single-user, as root) system,
>even after increasing the number of iterations to 10000.  It generally
>stayed around 1200-1300 usec, but got as high as ~1800 several times,
>and over 2000 a couple.

I haven't seen such a high variance but many other people have.

>One interesting note: a Western Digital EIDE drive on a dumb IDE
>controller averaged around 800 usec for overhead, and a little over 2
>MB/s in throughput.

The test was written partly to prove that IDE has a much lower overhead
than SCSI :-).  I get an overhead of 682 usec for a transfer rate of
1.7591e+06 B/s on a 486/33 with a _slow_ IDE controller and disk.  This
doesn't mean much, because the IDE transfer speed is actually about
3MB/s for the "rep insw" part.  Also, the 4K transfer involves 8 separate
i/o commands to the IDE controller, so the overhead may be as low as
682/8 = 85 usec.
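The division in the last sentence is just the measured overhead spread across the commands that make up one transfer:

```python
# A 4K IDE transfer is issued as 8 separate 512-byte sector commands,
# so the measured per-transfer overhead is split across all 8.
TRANSFER_BYTES = 4096
SECTOR = 512

commands = TRANSFER_BYTES // SECTOR   # 8 commands per 4K transfer
per_command = 682 / commands          # usec; 682/8 = 85.25, quoted as ~85
print(commands, per_command)
```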

>I still feel that of the disk benchmarks I've seen, iozone does a good
>job determining throughput, especially larger than the buffer cache.
>And, Bonnie does a good job at measuring "overall" drive performance.
>Both measure through the filesystem, which is more "Real-World".
>Neither measure disk subsystem overhead, unfortunately.  And, both can
>have figures that vary noticeably between an empty filesystem, and one
>that is well-used, somewhat fragmented, and fairly full, which, I
>guess, is also pretty much the way it works in the "Real World".

They only measure something just as unreal as the disklatency benchmark:
the speed of reading and writing huge sequential files.  I think this
is not typical use of BSD systems.  They are worse than useless for
estimating disk subsystem overhead.  First, everything goes through the
buffer cache, so the times are sometimes distorted by the (lack of)
speed of bcopy.  Bonnie's %CPU numbers are distorted because interrupt
and memory access contention aren't counted.  Only the real times are
meaningful for most benchmarkers.