Subject: RE: RAIDFrame and RAID-5
To: None <thomas@hz.se>
From: Greg A. Woods <woods@weird.com>
List: current-users
Date: 09/09/2003 14:50:51
[ On Monday, September 8, 2003 at 21:50:53 (+0200), Thomas Hertz wrote: ]
> Subject: RE: RAIDFrame and RAID-5
>
> What really works like a charm to lock my system up is writing large
> files (~800MB) while simultaneously writing a series of smaller files.
> With vanilla settings it only takes a few minutes to bring the system
> to a freeze.

Ah, well that could be the difference.  I don't generally write very
large files -- rarely anything over 100MB.  Even in the full release
builds I do, the biggest file is the ISO image, and it's not all that big
even when I've built a full static-linked alpha system with "-g" (~300MB).

I've been doing dumps of about 5GB to the test array, and I dd'ed the
raw disk of $HOME to it once too, but it's not doing any other work
when I do such things....
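The trigger Thomas describes -- one big sequential write running
alongside a stream of small writes -- could be reproduced with something
like the sketch below.  It's scaled way down (megabytes, not ~800MB) so
it's harmless anywhere; the target directory and sizes are my own
placeholders, so scale bs/count up and point it at the RAIDFrame
filesystem to actually stress an array:

```shell
#!/bin/sh
# Scaled-down sketch of the reported lockup trigger: one large
# sequential write concurrent with many small writes.
# DIR and all sizes are illustrative, not from the thread.
DIR=${1:-/tmp/raidtest}
mkdir -p "$DIR"

# Large sequential write in the background (8MB here; ~800MB in the report).
dd if=/dev/zero of="$DIR/bigfile" bs=1048576 count=8 2>/dev/null &
BIGPID=$!

# A series of small files written at the same time.
i=0
while [ $i -lt 32 ]; do
    dd if=/dev/zero of="$DIR/small.$i" bs=4096 count=4 2>/dev/null
    i=$((i + 1))
done

wait $BIGPID
```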

> How large is your array? When benchmarking my system to determine
> appropriate values for the chunk size, I used only small partitions
> (~1GB each) and I didn't experience a single crash even with extensive
> iozone and bonnie running.

	/dev/raid0a  21879470 19510862  1931018  90% /home
	/dev/raid1a  53389172 45367388  5352324  89% /test

I'm a little wary of trying what you're sort of suggesting, though once
I get that test array moved over to the other system it's going to live
on, I'll run a big "postmark" benchmark while simultaneously creating
full-sized ISOs or similar to give it a real workout.
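That planned workout could be driven with something along these lines --
a large sequential write standing in for ISO mastering, plus
metadata-heavy create/delete churn standing in for postmark.  Everything
here (directory, sizes, iteration counts) is an illustrative assumption,
shrunk so the sketch runs safely anywhere; a real run would use postmark
itself and full-sized images:

```shell
#!/bin/sh
# Sketch of the combined workout: big sequential write plus
# small-file create/delete churn, concurrently.
# DIR and sizes are placeholders, scaled down for safety.
DIR=${1:-/tmp/workout}
mkdir -p "$DIR"

# Stand-in for ISO creation: one large sequential write (16MB here).
dd if=/dev/zero of="$DIR/fake.iso" bs=1048576 count=16 2>/dev/null &
ISOPID=$!

# Stand-in for postmark: create and immediately delete small files,
# cycling over a small working set to generate metadata traffic.
i=0
while [ $i -lt 50 ]; do
    f="$DIR/pm.$((i % 10))"
    dd if=/dev/zero of="$f" bs=4096 count=2 2>/dev/null
    rm -f "$f"
    i=$((i + 1))
done

wait $ISOPID
```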

-- 
						Greg A. Woods

+1 416 218-0098                  VE3TCP            RoboHack <woods@robohack.ca>
Planix, Inc. <woods@planix.com>          Secrets of the Weird <woods@weird.com>