Subject: FFS tuning question
To: netbsd-users@netbsd.org
From: Joel Votaw <jovotaw@cs.nmsu.edu>
List: netbsd-users
Date: 10/05/2000 10:46:25
Background:  I'm using RAIDframe's RAID 5 implementation across four
60GB IDE drives as bulk storage for a home media server.  The four drives are
on just two IDE controllers, so two of the drives are IDE masters and two
are IDE slaves.
	Obviously, the combination of calculating and writing parity,
5400rpm IDE drives, and hitting both master and slave on the same
controller does not make for great performance.  I've tried to improve
performance a little by setting the RAID "chunk" size to 64k (128
sectors/stripe-unit), on the theory that all reads and writes will then
be 64k in size, which should be optimal for IDE controllers.  However,
that belief is open to debate, since http://www.zomby.net/work/
indicates that sequential writes are about 10 times slower once you go
above 16 sectors/stripe-unit.
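
For reference, my RAIDframe config looks roughly like the following
(partition names are from memory, so treat them as placeholders):

	START array
	# numRow numCol numSpare
	1 4 0

	START disks
	/dev/wd0e
	/dev/wd1e
	/dev/wd2e
	/dev/wd3e

	START layout
	# sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
	128 1 1 5

	START queue
	fifo 100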

I'm formatting this as a single filesystem.  I'm using FFS since LFS
doesn't sound quite stable enough yet, especially on large drives, for my
comfort.  I'm creating the filesystem using, I believe,

	newfs -c 700 -m 5 -i 65536 -b 32768 -f 4096 -r 5400 /dev/raid0c

So it has 32k blocks and 4k frags, in addition to my other attempts to
make it a nice FS on a huge partition.
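
As a sanity check, I believe dumpfs will show the block and frag sizes
that newfs actually used, e.g. (raw device name from memory):

	dumpfs /dev/rraid0c | head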


My questions are: 

Would there be a benefit in making the FFS block size equal to the
stripe-unit size (64k)?
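
If so, I assume the command line would just become something like this,
keeping frags at 1/8th of the block size (untested on my part):

	newfs -c 700 -m 5 -i 65536 -b 65536 -f 8192 -r 5400 /dev/raid0c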

Can the frag size reasonably be something other than 1/8th of the block size?

Can several small files live in frags of the same block, or are there
some weird semantics with frags to improve performance?  (I dunno, like
"you can only have the tail of ONE file in a block; the remaining space
must go to whole, separate files"; I'm just making this up.)


Of course I can test out some different settings, but rebuilding RAID
parity takes about 6 hours, so I want to avoid that as much as possible.
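
As far as I understand it, re-running newfs does not itself force a
parity rebuild; only changing the RAIDframe layout (e.g. the stripe
unit) requires re-initializing parity.  So after each newfs I can at
least do a crude sequential timing test, something like this (mount
point is just an example):

	dd if=/dev/zero of=/raid/test bs=64k count=16384
	dd if=/raid/test of=/dev/null bs=64k

i.e. writing and then reading back a 1GB file.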

Suggestions?

	-Joel