Subject: Re: RAIDframe performance revisited
To: Matthias Scheler <tron@zhadum.de>
From: David Brownlee <abs@NetBSD.org>
List: tech-perform
Date: 07/06/2005 14:09:27
On Wed, 6 Jul 2005, Matthias Scheler wrote:

> These are the benchmark results:
>
> Raw read performance ("dd if=/dev/r<x> of=/dev/null bs=1024k count=4096"):
> wd0			45992046 bytes/sec
> wd1			46018657 bytes/sec
> wd0+wd1 in parallel	46011262 bytes/sec + 46022108 bytes/sec
> raid0			45991061 bytes/sec
>
> Raw write performance ("dd if=/dev/zero of=/dev/r<x> bs=1024k count=4096"):
> wd0			45789540 bytes/sec
> wd1			45936953 bytes/sec
> wd0+wd1 in parallel	45823737 bytes/sec + 45905039 bytes/sec
> raid0			45724705 bytes/sec
>
> These numbers are what I expected:
> 1.) RAIDframe reads at almost the full speed of a single drive because it
>    cannot alternate reads between the components for a single reader.
> 2.) RAIDframe writes at the full speed of a single drive because it
>    writes to both components in parallel.

 	It might be interesting to compare two simultaneous dds to wd0
 	against two simultaneous dds to raid0, to see whether raid0
 	gains any benefit from alternating reads between the components
 	when there is more than one reader; a sketch is below.
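 	Untested sketch; the device names (rwd0d, rraid0d) are
 	assumptions for a wd-style setup, and "skip" is used so the
 	two readers work on different halves of the device:

 	dd if=/dev/rwd0d of=/dev/null bs=1024k count=2048 &
 	dd if=/dev/rwd0d of=/dev/null bs=1024k count=2048 skip=2048 &
 	wait
 	dd if=/dev/rraid0d of=/dev/null bs=1024k count=2048 &
 	dd if=/dev/rraid0d of=/dev/null bs=1024k count=2048 skip=2048 &
 	wait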

> The next thing I measured was "newfs" performance:
>
> wd0			1:18.23 [min:sec]
> wd1			1:18.28
> raid0			37.625 [sec]
>
> RAIDframe wins clearly in this case.

 	This is almost certainly because the geometry of the raid
 	partition is different. I would guess that if you set up the
 	geometry of the raid device to ~match that of the underlying
 	devices, the numbers would be much closer; see the sketch
 	after this paragraph.
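 	Something along these lines might do it (untested; the exact
 	figures have to come from your wd0 label, and newfs is assumed
 	to take its cylinder group layout from the disklabel geometry):

 	# note the sectors/track and tracks/cylinder reported for wd0
 	disklabel wd0
 	# edit the raid0 label to carry the same values
 	disklabel -e raid0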

> The final test was to extract the NetBSD-current src+xsrc source tarballs on
> a freshly created filesystem on the above devices:
>
> wd0			4:03.79 [min:sec]
> wd1			3:32.38
> raid0			7:39.86
>
> On this benchmark RAIDframe is suddenly a lot slower than the physical disks.
> What could cause this? Ideas which come to my mind are:
>
> - high overhead per I/O operation in RAIDframe => slow performance on
>  small I/O as issued by the filesystem
> - different FFS block layout on the physical disks vs. the RAIDframe volume
>  because they report different geometries, which might also explain the
>  difference in the "newfs" performance

 	It's great that you are posting these. Other interesting values
 	might come from runs with the frag/block size changed to 2k/16k
 	and 4k/32k. That might help keep up the size of the blocks sent
 	to the underlying components; see the sketch below.
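 	Untested, and raid0a is an assumption for wherever the ffs
 	partition lives:

 	# 2k fragments / 16k blocks
 	newfs -f 2048 -b 16384 /dev/rraid0a
 	# 4k fragments / 32k blocks
 	newfs -f 4096 -b 32768 /dev/rraid0a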


-- 
 		David/absolute       -- www.NetBSD.org: No hype required --