Subject: Re: Raidframe experiments and scsi woes
To: None <current-users@netbsd.org>
From: Manuel Bouyer <bouyer@antioche.lip6.fr>
List: current-users
Date: 12/01/1998 19:11:34
So, given the explanations I got from some people, I've re-run some tests
on the same box, using a sectPerSU of 128 (= 64k, the size of a typical
I/O with clustered reads/writes). I used a start queue of 'fifo 100', as I did
for the previous tests.
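For reference, a raidctl(8) config file with these settings would look
roughly like the sketch below (the device names and the three-column
geometry are made up for illustration, not my actual setup):

START array
# numRow numCol numSpare
1 3 0

START disks
/dev/sd1e
/dev/sd2e
/dev/sd3e

START layout
# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
128 1 1 4

START queue
fifo 100

With this, I now have: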
File './Bonnie.150', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
raid4     100  1854  9.5  1747  3.0   941  1.3 12968 61.8 15303 10.9 116.1  2.1
raid5f    100  2049 10.1  1881  2.7   796  1.4  5980 30.9  6463  7.2 109.5  2.4
The raid4 entry is for a raid4 config. We now get the same performance
as with a ccd. The raid5f entry is a raid5 array with one failed component.
Since writes that involve the failed component skip one of the usual two
disk writes (data plus parity), writing is faster.
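To make the parity arithmetic concrete, here is a toy C sketch of the XOR
math RAID 4/5 rely on; it's only an illustration of the principle, not
anything taken from the RAIDframe sources, and the stripe geometry is
arbitrary:

#include <stdio.h>

#define NCOL 3	/* data columns in the stripe */
#define SU 4	/* bytes per stripe unit, tiny for the demo */

int
main(void)
{
	unsigned char d[NCOL][SU] = {
		{ 0x11, 0x22, 0x33, 0x44 },
		{ 0x55, 0x66, 0x77, 0x88 },
		{ 0x99, 0xaa, 0xbb, 0xcc },
	};
	unsigned char p[SU] = { 0 }, rebuilt[SU];
	int i, j;

	/* the parity unit is the XOR of all data units in the stripe */
	for (i = 0; i < NCOL; i++)
		for (j = 0; j < SU; j++)
			p[j] ^= d[i][j];

	/* column 1 "fails": rebuild it from the survivors plus parity */
	for (j = 0; j < SU; j++)
		rebuilt[j] = p[j] ^ d[0][j] ^ d[2][j];

	for (j = 0; j < SU; j++)
		printf("%02x ", rebuilt[j]);	/* prints 55 66 77 88 */
	printf("\n");
	return (0);
}

The same identity explains the write path: updating a live column rewrites
both the data unit and the parity unit, while a write aimed at the failed
column only folds the new data into parity, so it costs one disk write
instead of two.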
Because of a hardware problem I can't run a full raid5 test: one of the disks
always goes bad before the end of the write test (if I can play with such
drives, it's because they can't be used in production :).
I also noticed that this disk has a lower transfer rate than the others
(I got a few media errors on it; maybe it has to retry some
of the reads ...). This one was not involved in the ccd, and I think it's the
one that holds the parity in the raid4 array. It's also the one that was failed
in raid5f ... hmm, this could be the cause of the poor raid5 performance ...
I guess I'll have to re-run some tests with good drives.
--
Manuel Bouyer, LIP6, Universite Paris VI. Manuel.Bouyer@lip6.fr