Subject: RAID-5 benchmark results
To: None <netbsd-users@netbsd.org>
From: Johnny Lam <jlam@jgrind.org>
List: netbsd-users
Date: 12/11/2001 20:29:05
Hi,
In my ongoing search to improve my RAIDframe RAID-5 performance on
NetBSD-1.5.3_ALPHA, I've run benchmarks using bonnie++ with a few
different parameters. First off, it was _vital_ that the write-back
cache be enabled on the SCSI disks: turning it on made a _huge_
difference in write performance.
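For reference, the cache on each component disk can be inspected and
toggled with scsictl(8). A sketch, assuming sd0 is one of the component
disks and that your scsictl supports the getcache/setcache commands:

    # show the current cache settings on sd0
    scsictl sd0 getcache
    # enable both the read cache and the write-back cache
    scsictl sd0 setcache rw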
The following results are from using a 32K stripe width, with the RAID
device newfs'ed with 32K/4K or 8K/1K block/fragment sizes and 16
cylinders per group. The FFS partition starts at sector 0 on the RAID
device.
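For anyone wanting to reproduce this, the setup looks roughly like the
following. The component disks and the three-disk geometry (two 16K
stripe units forming the 32K data stripe) are illustrative guesses, not
a transcript of the actual config:

    # /etc/raid0.conf -- hypothetical 3-disk RAID-5
    START array
    # numRow numCol numSpare
    1 3 0

    START disks
    /dev/sd0e
    /dev/sd1e
    /dev/sd2e

    START layout
    # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
    32 1 1 5

    START queue
    fifo 100

    # configure, label, initialize parity, then newfs with 32K/4K
    # blocks/frags and 16 cylinders per group
    raidctl -C /etc/raid0.conf raid0
    raidctl -I 2001121101 raid0
    raidctl -iv raid0
    newfs -b 32768 -f 4096 -c 16 /dev/rraid0a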
Version 1.01       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatland-32/4  300M 30506  80 33599  16  5061   7 25119  76 37547  16 136.8   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    58   9 +++++ +++   165   5    57   9   930  99   187  14
Version 1.01       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatland-8/1   300M 26301  69 29264  16  5008   7 30380  95 38413  18 166.8   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    42   6 +++++ +++    81   1    42   6   874  99    78   5
As you can see, with the write-back cache enabled the sequential
write/read performance ratio is nearly 1:1 (e.g. 33599 vs. 37547 K/sec
block output/input in the 32K/4K run).
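For reference, the tables came from bonnie++ runs along these lines;
the mount point is illustrative, but the 300M data set and the 16
(x1024) file count match the output above:

    bonnie++ -d /mnt/raid -s 300 -n 16 -u root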
Also, acting on a hint that the 8K area set aside at the start of the
partition might cause 32K of data to be written across two stripes
instead of as one full stripe, I tried moving the FFS partition to
start at 8K, 16K, and 24K from the beginning of the RAID device. The
results were all about the same; the run for FFS at an 8K offset is
below.
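Moving the partition is just a disklabel(8) edit. A sketch, assuming
512-byte sectors (so an 8K offset is 16 sectors); the partition sizes
here are made up:

    # disklabel -e raid0 -- shift 'a' to start 16 sectors in
    #       size   offset     fstype [fsize bsize cpg]
     a:   409584       16     4.2BSD   4096 32768  16
     d:   409600        0     unused      0     0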
Version 1.01       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
flatland-32/4  300M  4047   8  4041   2  2915   2 27866  87 44722  21 118.7   2
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16    37   6 +++++ +++    81   2    37   6   916  99    74   5
As you can see, write performance suffered badly, probably for exactly
the reason we were trying to avoid: with the partition shifted by 8K,
each 32K block straddles a stripe boundary (the first block covers
bytes 8K-40K, touching two stripes), so a write that could have been a
single full-stripe write becomes two partial-stripe writes, each paying
the RAID-5 read-modify-write parity penalty.
That just about covers it. The only decision left is whether to use the
larger (32K) block size or the smaller (8K) one. I suppose it'll depend
on how many files smaller than 4K I can expect to store on the system,
since that determines how much space is wasted. Can anyone give me a
quick run-down of the pros and cons of the different block sizes?
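For what it's worth, my own back-of-the-envelope on the space side: a
file smaller than one fragment still occupies a whole fragment, so a 1K
file costs 4K on the 32K/4K filesystem but only 1K on the 8K/1K one.
Ten thousand such files would therefore waste roughly 10000 * 3K, or
about 29MB, more space under the larger block size.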
Thanks,
-- Johnny Lam <jlam@jgrind.org>