Subject: Tuning RAIDframe configs
To: NetBSD SPARC list <port-sparc@netbsd.org>
From: Christian Smith <csmith@micromuse.com>
List: port-sparc
Date: 06/16/2002 23:45:33
Hi,
I'm having a play with RAIDframe, as I've recently acquired a spare 911
enclosure with a couple of Seagate Barracuda disks in it.

My plan was to try RAID 0 configuration to see what performance I can get.
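
For the record, the RAID 0 config I've been feeding raidctl looks
roughly like this (the partitions are from my setup, so treat the
device names as illustrative):

    START array
    # numRow numCol numSpare
    1 2 0

    START disks
    /dev/sd0e
    /dev/sd1e

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 0

    START queue
    fifo 100

configured with something like "raidctl -C raid0.conf raid0". The
sectPerSU figure in the layout section is the interleave.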

Unfortunately, the aggregate performance using RAID 0 was ~10x slower
than I can manage to get out of both disks without RAID.

My primitive performance testing was done using dd to read from the
device to /dev/zero with a block size of 64k, while monitoring the
approximate disk throughput and CPU utilisation using systat vm 1.
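
Concretely, something along these lines (the raw device name is from
my box, adjust to suit):

    # pull 64k blocks straight off the raw device, discarding them
    dd if=/dev/rsd0c of=/dev/zero bs=64k count=4096

    # meanwhile, in another terminal: throughput and CPU, 1s refresh
    systat vm 1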

Without RAIDframe, I could manage >5MB/s from any one disk, or >7MB/s
aggregate throughput using both disks together.

Finally, I have a Seagate Cheetah disk in there as well, on the same
channel, which could move >7MB/s on its own.

From this it's obvious that the SCSI bus saturates at ~7-8MB/s, which
is reasonable, as the controller is 10MB/s Fast SCSI-2 (a combined
SCSI/ethernet SBus card).

Now, with RAIDframe compiled in, the Cheetah tops out at ~2.2MB/s, and
the RAIDed Barracudas can manage <700KB/s between them! I've tried 1,
16, 32 and 64 sector interleaves, but performance is consistently bad.
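
For each interleave, that amounted to roughly:

    raidctl -u raid0             # unconfigure the set
    # edit sectPerSU in raid0.conf, then
    raidctl -C raid0.conf raid0  # reconfigure with the new interleave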

I'm running on an IPX with 52MB RAM. Could it be that the IPX is just
not up to the task (sys usage is very high during dd)? I'd have
thought that RAID 0 would not be very CPU-taxing, certainly no more
than straight disk reading. And it doesn't explain the performance
drop on a non-RAIDed disk.

Any ideas?

Christian

PS. As I've been writing this, I've just noticed that doing dd on a
disk pushes lev3 interrupts through the roof! Before RAIDframe, I'd
get ~200/s when doing dd; now I get more than 1500/s! That would
certainly explain the lack of performance. What gives?
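
(If anyone wants to check the same thing: vmstat -i lists cumulative
counts per interrupt source, so two readings a few seconds apart give
the rate, e.g.

    vmstat -i; sleep 10; vmstat -i
    # subtract the lev3 counts and divide by 10 for interrupts/sec
)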

-- 
    /"\
    \ /    ASCII RIBBON CAMPAIGN - AGAINST HTML MAIL 
     X                           - AGAINST MS ATTACHMENTS
    / \