Subject: Re: RAID controller support, and/or RAIDFrame
To: Luke Mewburn <lukem@NetBSD.org>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: port-i386
Date: 10/08/2003 17:41:00
In message <20031008235907.GH24725@mewburn.net>, Luke Mewburn writes:
>On Tue, Oct 07, 2003 at 11:25:58AM -0700, Jonathan Stone wrote:
>  | Luke,
>  | 
>  | >        * 3Ware 850x serial ATA  (current card - fast too!)
>  | 
>  | Could you quantify "fast" in both MB/sec and IO ops/sec?
>
>Got a specific benchmark configuration and RAID configuration in mind?

Compared to, say, a benchmark system with 30 to 50 nonredundant
10,000 RPM SCSI-3 FC drives, hooked up via multiple QLogic 2340 HBAs.

With modest NFS retuning, 128 nfsds, and one specsfs stream per
drive, a 2.4GHz Xeon can exceed 10,000 specsfs ops/sec.  (Note
carefully: no redundancy _at all_; the system is purely for
benchmarking, not production data storage.)  Sustained I/O rate at
that point is, *very* roughly, 50 Mbyte/sec (NFS traffic through a
gigabit NIC).
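
For concreteness, on NetBSD the nfsd count would come from rc.conf;
something like the following (the exact flags are my guess at the
shape of it; check nfsd(8) for your release):

    nfs_server=YES
    nfsd_flags="-tun 128"    # TCP + UDP, 128 servers

Back-of-the-envelope: 50 Mbyte/sec divided by 10,000 ops/sec is only
about 5 Kbyte moved per op on average, i.e. an op mix dominated by
small operations rather than bulk data transfer.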

specsfs is designed never to hit in the filesystem cache, so I
could make an educated guess from per-controller filesystem
read/write throughput and latency, i.e. iozone-like "cold" numbers.
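
If anyone wants to reproduce that sort of "cold" measurement, an
iozone run along these lines should do it (the file size and target
path here are placeholders; the point is that the file has to be
well over RAM size so the cache can't help):

    iozone -i 0 -i 1 -r 8k -s 4g -f /raid0/iozone.tmp

where -i 0/-i 1 select the write and read tests and -r is the
record size.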

I was curious how many RAID controllers I'd need to deliver
equivalent throughput at comparable latency (below 10ms).  The
gotcha here is keeping as many drive heads busy as possible, right
up until latency vs. throughput goes nonlinear (hits the knee of
the curve).  IDE has historically been poorer at that than SCSI
(though fine for single-stream sequential I/O requests).
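
To make the "knee" concrete, here's a toy open-queue (M/M/1-style)
sketch; the 5ms per-op service time is an invented number, not a
measurement from either system:

#include <stdio.h>

/*
 * Mean response time in an M/M/1 queue: R = S / (1 - rho), where
 * S is the per-op service time and rho is utilization.  Latency
 * stays near S until the device gets busy, then blows up: that is
 * the knee of the curve.
 */
int
main(void)
{
	const double S = 0.005;		/* assumed 5 ms/op service time */
	double rho;

	for (rho = 0.1; rho <= 0.95; rho += 0.1)
		printf("util %3.0f%%  %5.0f ops/sec  latency %6.2f ms\n",
		    rho * 100.0, rho / S, S / (1.0 - rho) * 1000.0);
	return 0;
}

Keeping lots of independent requests queued is what pushes rho (and
ops/sec) up; a single sequential stream never drives a whole array
near the knee, which is why IDE's weakness mostly shows up under
concurrent load.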


>BTW: I've currently got the card in a 32bit 33MHz slot.  Given it's a
>64bit 66MHz card there may be some skew in the results because of this...

For the sake of discussion, let's assume multiple 64-bit 3Ware cards
on multiple PCI-X segments.
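
For scale: a 64-bit/66MHz PCI segment tops out around 533 Mbyte/sec
theoretical, and 64-bit/133MHz PCI-X around 1 Gbyte/sec, so the
buses themselves shouldn't be the limit; the interesting question
stays with the controllers and the drives behind them.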