Subject: Re: nfs tuning with raid
To: Brian Buhrow <buhrow@lothlorien.nfbcal.org>
From: Greg Oster <oster@cs.usask.ca>
List: current-users
Date: 07/10/2001 13:15:57
Brian Buhrow writes:
> 	Hello folks.  I've been trying to increase the performance of the 
> box I'm using as a large RAID NFS server and have a few questions.  
> I seem to be able to serve up about 160Kbytes/sec to about 8 clients
> simultaneously for reading, and about 50Kbytes/sec for writing.  I've tried
> increasing the number of nfsd's running, from 4 to 12, and the number of
> kern.nfs.iothreads from 1 to 12.  This made things much worse.  Knocking
> the number of iothreads down to 4, while leaving the number of nfsd's
> running made things better, but still not very fast, it seems.
> 	Running ps -lpid on the various nfsd processes shows that they're 
> spending a lot of time waiting on vnlock or uvn_fp2.  I tried increasing
> the number of kern.maxvnodes to 50,000 from 6,700, but this seems to have
> little to no effect.
> 	Any rules of thumb on how many iothreads 
> for NFS are optimal, versus the number of nfsd's running?  Are there rules
> of thumb on how to tune vnodes, and other parameters to help streamline the
> system?  This is running on an i386 box with a 1.5R kernel and 1.5 userland
> programs.  The machine has a RAID 5 array of 15 75GB IDE disks on it.

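For reference, the knobs being fiddled with above would be set something
like this (just a sketch; the nfsd flags are from memory, so check the man
pages on your 1.5 system before trusting them):

    # restart the userland NFS servers with a given number of threads
    nfsd -tun 6

    # kernel-side NFS I/O threads and vnode cache size
    sysctl -w kern.nfs.iothreads=4
    sysctl -w kern.maxvnodes=50000
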
*15* drives?  In a single RAID 5 set?  What's the stripe width? (not that it 
matters much with 15 drives).  Also: what is the size of the files/data
being transferred in the benchmark, and/or what are you running as the 
benchmark?
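
If it's handy, something like the following would show the RAIDframe layout
and give a rough local number to compare the NFS figures against (a sketch
only; I'm assuming the set is raid0, mounted on /raid, and that your raidctl
has -G):

    # dump the running configuration; the stripe unit shows up as sectPerSU
    raidctl -G raid0
    raidctl -s raid0         # component and parity status

    # crude local sequential write/read test, with NFS out of the picture
    dd if=/dev/zero of=/raid/ddtest bs=64k count=16384
    dd if=/raid/ddtest of=/dev/null bs=64k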

> It's using an Intel on-board 10/100Mbps Ethernet adapter with the fxp
> driver in 100Mbps/full-duplex operation.
> Any suggestions/guides/things to look at would be greatly appreciated.
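
One thing worth double-checking on the network side is that the switch
really agrees about full duplex; a mismatch there hurts both directions
badly. Roughly (assuming the interface is fxp0):

    ifconfig fxp0            # look at the negotiated media/duplex
    netstat -i               # watch for Ierrs/Oerrs/Colls climbing
    # pin it to 100/full if autonegotiation looks suspect
    ifconfig fxp0 media 100baseTX mediaopt full-duplex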

Later...

Greg Oster