Subject: Re: nfs tuning with raid
To: Greg Oster <firstname.lastname@example.org>
From: Brian Buhrow <email@example.com>
Date: 07/10/2001 13:09:15
Hello Greg.  The stripe width of the RAID set is 63 sectors, the maximum
burst size I could get out of the IDE controllers.  The benchmark I'm using
is the output of "show int" on the Cisco switch the box is connected to.
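For reference, the KB/s figures in this thread boil down to a byte-counter
delta over a sampling interval; a minimal sketch of that arithmetic (the
function name is illustrative, not from any tool):

```python
# Rough arithmetic behind the KB/s numbers quoted in this thread:
# take two readings of the switch port's byte counter ("show int"
# on the Cisco) and divide the delta by the sampling interval.
def throughput_kbytes_per_sec(bytes_before, bytes_after, interval_sec):
    """KB/s over the sampling interval between two counter readings."""
    return (bytes_after - bytes_before) / 1024.0 / interval_sec

# Example: 9,830,400 bytes moved in 60 seconds is 160.0 KB/s,
# the read rate reported below.
print(throughput_kbytes_per_sec(0, 9830400, 60))
```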
On Jul 10, 3:15pm, Greg Oster wrote:
} Subject: Re: nfs tuning with raid
} Brian Buhrow writes:
} > Hello folks. I've been trying to increase the performance of the
} > box I'm using as a large RAID NFS server and have a few questions.
} > I seem to be able to serve up about 160Kbytes/sec to about 8 clients
} > simultaneously for reading, and about 50Kbytes/sec for writing. I've tried
} > increasing the number of nfsd's running, from 4 to 12, and the number of
} > kern.nfs.iothreads from 1 to 12. This made things much worse. Knocking
} > the number of iothreads down to 4, while leaving the number of nfsd's
} > running at 12 made things better, but still not very fast, it seems.
} > Running ps -lpid on the various nfsd processes shows that they're
} > spending a lot of time waiting on vnlock or uvn_fp2. I tried increasing
} > kern.maxvnodes from 6,700 to 50,000, but this seems to have
} > little to no effect.
} > Any rules of thumb on how many iothreads
} > for NFS are optimal, versus the number of nfsd's running? Are there rules
} > of thumb on how to tune vnodes, and other parameters to help streamline the
} > system?  This is running on an i386 box with a 1.5R kernel and 1.5
} > userland programs.  The machine has a RAID 5 array of 15 75GB IDE disks.
} *15* drives? In a single RAID 5 set? What's the stripe width? (not that it
} matters much with 15 drives).  Also: what is the size of the files/data
} being transferred in the benchmark, and/or what are you running as the
} benchmark?
} > It's using an Intel on-board 10/100MBPS ethernet adapter with the fxp
} > driver in 100-MBPS/full duplex operation.
} > Any suggestions/guides/things to look at would be greatly appreciated.
} Greg Oster
>-- End of excerpt from Greg Oster
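For anyone retracing this tuning, the knobs discussed above can be collected
in a sysctl.conf-style fragment (these are the values tried in the thread on
a 1.5-era kernel, recorded here for convenience, not as recommendations):

```
# sysctl.conf-style fragment -- settings tried in this thread
kern.maxvnodes=50000        # raised from 6,700; little to no effect seen
kern.nfs.iothreads=4        # 12 made things worse; 4 helped
```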