Subject: Performance of RAIDframe and SOFTDEP
To: None <tech-kern@netbsd.org>
From: None <Lloyd.Parkes@vuw.ac.nz>
List: tech-kern
Date: 02/24/2000 14:43:50
I have run some tests on a NetBSD box to see how RAIDframe and SOFTDEP
interacted. I was mostly interested in not paying too high a penalty for
using RAID 5. The goal was to satisfy my own personal interest rather
than anything scientific, so the tests were not particularly rigorous.

The test environment was an Intel 486DX100 with 16MB of RAM. The system
disk was an old IDE disk. The disks being tested were one to three
Seagate ST41200N disks, with data being read from a new Seagate IDE
disk. The SCSI disks were all attached to a single Adaptec 1542C. The
kernel was a newish -current, but not new enough to contain Greg Oster's
changes of Feb 14.
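
Both RAIDframe and soft dependencies need kernel support, which means a
kernel config containing something like the following (the count of raid
pseudo-devices is arbitrary):

	options 	SOFTDEP 		# FFS soft dependencies
	pseudo-device	raid		4	# RAIDframe disk driver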

The test was a simple one of write performance only. After a 1GB file
system had been created, it was mounted and a large tar file was
extracted into it. The tar file was not compressed and contained 700 to
800 MB of data. The file system was recreated for each test, and the tar
file resided on a new IDE disk.
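
The procedure for each run was essentially the following (a sketch only;
the device names, mount point, and tar file path here are illustrative,
and the softdep option was left off for the runs without soft
dependencies):

	# create and mount the test file system (raid0 here; the
	# single-disk run used the SCSI disk directly)
	newfs /dev/rraid0a
	mount -o softdep /dev/raid0a /mnt

	# extract the tar file, which lived on the IDE disk
	cd /mnt && /usr/bin/time -l tar xf /path/to/test.tar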

Three disk configurations were tested, each with and without soft
dependencies enabled. The first configuration was a single disk without
RAIDframe, the second was RAID 0 using two disks, and the third was
RAID 5 using all three disks. In each case all of each physical disk
except for the first block was used. Obviously the file systems for the
last two configurations were twice the size of the file system in the
first configuration.
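
For anyone wanting to reproduce this, the RAID sets were built with
raidctl(8). A RAID 5 configuration of this sort looks roughly like the
example in the man page (a sketch, not my exact file; the component
partitions and stripe unit size are illustrative):

	# /etc/raid0.conf -- RAID 5 across the three SCSI disks
	START array
	# numRow numCol numSpare
	1 3 0

	START disks
	/dev/sd0e
	/dev/sd1e
	/dev/sd2e

	START layout
	# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
	32 1 1 5

	START queue
	fifo 100

The set is then configured with 'raidctl -c /etc/raid0.conf raid0' and
the parity initialised with 'raidctl -i raid0' before newfs. The RAID 0
set differs only in listing two components and a RAID level of 0 in the
layout line.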

The results are from 'time -l'. The figures reported are:

	real time / user time / system time (seconds)
	block input operations / block output operations
	voluntary context switches / involuntary context switches

	    Without soft dependencies	With soft dependencies
RAID 5	    3618.41 / 6.8 / 860.77	2657.96 / 8.1 / 801.14
	    13035 / 115057		12991 / 94558
	    22621 / 18902		4019 / 27289 

RAID 0	    1541.38 / 5.84 / 734.46	1073.19 / 6.94 / 801.14
	    13038 / 114721		12991 / 94558
	    22485 / 15422		2169 / 29473

no RAID	    1824.84 / 6.28 / 622.07	1488.70 / 5.27 / 678.51
	    12925 / 114686		13610 / 99112
	    22394 / 6239		3628 / 6553

The performance improvements gained by enabling soft dependencies were

	no RAID	1.23x
	RAID 0	1.44x
	RAID 5	1.36x
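
The figures are simply the ratios of the elapsed real times, e.g. for
RAID 5:

	3618.41 s / 2657.96 s = 1.36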

These are nice improvements from soft dependencies, but I was expecting
better performance from RAID 0 (and was worried about worse from RAID 5).
Still, a performance improvement of over 40% can't be ignored.

Things to do:
- More rigorous testing.
- Test with a kernel build rather than a tar extraction.
- Start testing a variety of RAIDframe parameters.
- Use more disks for RAID 5. (Well, one more; I don't think narrow SCSI
  will appreciate too many more disks.)
- Enrol in a postgraduate degree.
- Get a life.

Cheers
-- 
Lloyd Parkes, Network Manager, School of Earth Sciences
Victoria University of Wellington