Subject: Re: scsi adaptor speeds etc
To: Michael L. VanLoon -- HeadCandy.com <michaelv@HeadCandy.com>
From: Justin T. Gibbs <gibbs@freefall.freebsd.org>
List: port-i386
Date: 04/01/1996 22:31:57
>
>>I would also suggest using this little program from Bruce Evans.
>>It attempts to measure the combined controller and disk command
>>overhead:
>
>I'm not sure how useful this program is in finding any real-world
>information.  The throughput figure really only shows how well the
>cache on each of my hard drives works.  I was consistently getting
>between 8 and 9MB/s, which is physically impossible with my drives
>(actually, with almost any commercial drive), so it had to be coming
>directly from the drive's cache.  I know the drive was at least
>involved, because the drive's LED was on the entire time the test was
>in progress.

This is exactly the point of the benchmark.  If the data weren't in
the cache, you wouldn't have a snowball's chance of isolating the
command overhead.  The throughput figures are just printed out so you
can get an idea of how well you can saturate the SCSI bus with a
particular card/drive pair.
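
For anyone who hasn't seen it, the idea is roughly this (a bare-bones
sketch of the technique, not Bruce's actual code; the raw device name
and iteration count are just examples):

/*
 * Read the same small block from the raw device over and over.
 * After the first read it comes out of the drive's cache, so the
 * per-iteration time approximates command overhead rather than
 * media transfer time.  Reads from a raw device generally have to
 * be a multiple of the sector size.
 */
#include <sys/types.h>
#include <sys/time.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int
main(void)
{
	char		buf[512];		/* one sector */
	struct timeval	start, end;
	double		elapsed;
	int		fd, i, iters = 10000;

	fd = open("/dev/rsd0c", O_RDONLY);	/* example raw device */
	if (fd < 0) {
		perror("open");
		return (1);
	}
	(void)read(fd, buf, sizeof(buf));	/* prime the drive cache */

	gettimeofday(&start, NULL);
	for (i = 0; i < iters; i++) {
		if (lseek(fd, (off_t)0, SEEK_SET) == -1 ||
		    read(fd, buf, sizeof(buf)) != sizeof(buf)) {
			perror("read");
			return (1);
		}
	}
	gettimeofday(&end, NULL);

	elapsed = (end.tv_sec - start.tv_sec) * 1e6 +
	    (end.tv_usec - start.tv_usec);
	printf("%.1f usec per command\n", elapsed / iters);
	return (0);
}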

>This is on an AMD 486DX2/80, 512K write-back cache, EISA bus, BT747s,
>one 1GB HP 3323, one 850MB Quantum Trailblazer, and one 540MB Quantum
>LPS.  Not cutting-edge technology.

Nope.

>The overhead calculation was a bit more interesting.  It still
>bothered me, somewhat, though, that it could vary by as much as 50% on
>the same drive, on a totally quiescent (single-user, as root) system,
>even after increasing the number of iterations to 10000.  It generally
>stayed around 1200-1300 msec, but got as high as ~1800 several times,
>and over 2000 a couple.

I've never seen it vary that much (10-15 msec at most).  Perhaps
microtime() has better resolution in FreeBSD, but I really don't
know.
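
It's easy enough to check what resolution microtime() actually
delivers to userland through gettimeofday(): spin until the returned
time changes and keep the smallest nonzero step.  Just a sketch:

#include <sys/time.h>
#include <stdio.h>

int
main(void)
{
	struct timeval	prev, cur;
	long		delta, min = 1000000;
	int		i;

	gettimeofday(&prev, NULL);
	for (i = 0; i < 100000; i++) {
		gettimeofday(&cur, NULL);
		delta = (cur.tv_sec - prev.tv_sec) * 1000000 +
		    (cur.tv_usec - prev.tv_usec);
		if (delta > 0 && delta < min)
			min = delta;
		prev = cur;
	}
	printf("smallest observed step: %ld usec\n", min);
	return (0);
}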

>One interesting note: a Western Digital EIDE drive on a dumb IDE
>controller averaged around 800 msec for overhead, and a little over 2
>MB/s in throughput.

This is to be expected, but SCSI can win.  A 2940 with a Quantum
Atlas gets ~360 msec of overhead IIRC (it wasn't on my machine).

>I still feel that of the disk benchmarks I've seen, iozone does a good
>job determining throughput, especially for files larger than the buffer cache.

It tells you sequential throughput, yes.
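
The core of what it measures is easy to sketch: sequentially write
and then read back a file through the filesystem and time it.  This
is not iozone itself, just the idea; the file name and size are
examples, and the size has to be well past your RAM or the read pass
only measures the buffer cache:

#include <sys/types.h>
#include <sys/time.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MEGS	64			/* example: use more than RAM */
#define BLKSZ	(64 * 1024)

static double
now(void)
{
	struct timeval tv;

	gettimeofday(&tv, NULL);
	return (tv.tv_sec + tv.tv_usec / 1e6);
}

int
main(void)
{
	char	*buf;
	double	 t0, t1;
	int	 fd, i, nblks = MEGS * 1024 * 1024 / BLKSZ;

	buf = malloc(BLKSZ);
	if (buf == NULL) {
		perror("malloc");
		return (1);
	}
	memset(buf, 0xa5, BLKSZ);
	fd = open("seqtest.tmp", O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0) {
		perror("open");
		return (1);
	}

	t0 = now();
	for (i = 0; i < nblks; i++)
		if (write(fd, buf, BLKSZ) != BLKSZ) {
			perror("write");
			return (1);
		}
	fsync(fd);
	t1 = now();
	printf("write: %.2f MB/s\n", MEGS / (t1 - t0));

	lseek(fd, (off_t)0, SEEK_SET);
	t0 = now();
	for (i = 0; i < nblks; i++)
		if (read(fd, buf, BLKSZ) != BLKSZ) {
			perror("read");
			return (1);
		}
	t1 = now();
	printf("read:  %.2f MB/s\n", MEGS / (t1 - t0));

	close(fd);
	unlink("seqtest.tmp");
	return (0);
}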

>And, Bonnie does a good job at measuring "overall" drive performance.
>Both measure through the filesystem, which is more "Real-World".

Right.  Bonnie benefits more from tagged I/O than IOZone does, and it
is exactly in real-world applications with multiple processes
accessing the disk concurrently (news, WWW, and ftp serving, etc.)
that tagged I/O is a big win.
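
If you want to generate that kind of load yourself, something like
this does it (a rough sketch; the file name, size, and process count
are examples, and the file has to exist at that size beforehand).
Run it under time(1) with tags enabled and disabled, and the
difference lands straight in the wall-clock number:

#include <sys/types.h>
#include <sys/wait.h>
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

#define NPROCS	8			/* concurrent readers */
#define NREADS	1000			/* random reads per process */
#define BLKSZ	8192
#define FILESZ	(64 * 1024 * 1024)	/* size of pre-made test file */

int
main(void)
{
	char	buf[BLKSZ];
	off_t	off;
	int	fd, i, j;

	for (i = 0; i < NPROCS; i++) {
		if (fork() == 0) {
			srandom(getpid());
			fd = open("bigfile", O_RDONLY);
			if (fd < 0) {
				perror("open");
				_exit(1);
			}
			for (j = 0; j < NREADS; j++) {
				off = (off_t)(random() %
				    (FILESZ / BLKSZ)) * BLKSZ;
				lseek(fd, off, SEEK_SET);
				read(fd, buf, BLKSZ);
			}
			_exit(0);
		}
	}
	while (wait(NULL) != -1)	/* reap all the children */
		;
	return (0);
}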

>Neither measure disk subsystem overhead, unfortunately.

Bruce's program does.

>And, both can
>have figures that vary noticably between an empty filesystem, and one
>that is well-used, somewhat fragmented, and fairly full, which, I
>guess, is also pretty much the way it works in the "Real World".

Yup.

>However, that kind of hides one of the things we're trying to test
>here, which, I assume, is the overhead induced by using various different
>SCSI controllers.  On the other hand, I have a feeling that is a very
>small part of the performance picture when calculating "Real-World"
>performance of a disk subsystem.  Especially when Berkeley FFS is on
>top of it.

Not for things like news serving, where your TPS (transactions per
second) matters a great deal.  Command overhead relates directly to
overall throughput in most "real-world" scenarios: if each
transaction carries, say, 1 msec of command overhead, a single drive
is capped at roughly 1000 transactions per second no matter how fast
the media is.

--
Justin T. Gibbs
===========================================
  FreeBSD: Turning PCs into workstations
===========================================