Subject: Re: nfs tuning with raid
To: Brian Buhrow <firstname.lastname@example.org>
From: Greg Oster <email@example.com>
Date: 07/10/2001 21:23:35
Brian Buhrow writes:
> Hello Greg. The stripe width of the RAID set is
> 63 sectors, the maximum burst size I could get from the IDE controllers.
> The benchmark I'm using is the output from 'show int' on the Cisco switch
> it's connected to.
So for a given 8K read you're going to be reading from 1 or 2 disks.
For a given 8K write, you're going to be reading from 2 or 3 disks,
and writing to 2 or 3 disks. However, with a single 15-disk RAID 5 set
at a stripe width of 63 sectors, you're never going to get the 882 sectors
required to be able to do a 'full stripe write' (the most efficient write).
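To put numbers on that (a back-of-the-envelope sketch of my own, assuming
512-byte sectors and RAID 5's one parity unit per stripe):

```python
# Back-of-the-envelope arithmetic for the 15-disk RAID 5 set above.
# Assumes 512-byte sectors; one disk's worth of each stripe is parity.
SECTOR_BYTES = 512
disks = 15
stripe_unit_sectors = 63                  # per-disk stripe width

data_disks = disks - 1                    # RAID 5: one unit per stripe is parity
full_stripe_sectors = data_disks * stripe_unit_sectors
full_stripe_bytes = full_stripe_sectors * SECTOR_BYTES

print(full_stripe_sectors)                # 882 sectors
print(full_stripe_bytes)                  # 451584 bytes, ~441 KB per full stripe
```

No 8K NFS write comes anywhere near 441 KB, so every write pays the
read-modify-write parity penalty instead.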
With a width of 16, you could *guarantee* never to read from more than one
disk for a regular block read, and never to read and write more than 2 disks
for a regular block write. Depending on what local IO you have, a width of 32
might even be better. Of course, this completely ignores updating file access
times and other metadata, which is going to make performance for random
access of smallish files even worse :-/
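To illustrate why the narrower stripe unit helps (a sketch of my own, not
RAIDframe code), count how many stripe units a single contiguous 8K access
can span:

```python
# How many stripe units (and hence data disks) a contiguous access touches,
# given its starting sector offset within the stripe. Illustration only.
def units_spanned(io_sectors, unit_sectors, start_sector=0):
    first = start_sector // unit_sectors
    last = (start_sector + io_sectors - 1) // unit_sectors
    return last - first + 1

io = 16                              # 8 KB = 16 x 512-byte sectors
print(units_spanned(io, 63))         # 1: aligned 8K access, width 63
print(units_spanned(io, 63, 50))     # 2: unaligned, straddles a unit boundary
print(units_spanned(io, 16))         # 1: width 16, aligned 8K block
```

With 8K filesystem blocks aligned on 16-sector boundaries, every block read
lands on exactly one data disk, which is the guarantee mentioned above.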
BTW: is that 160KB/sec split over the 8 clients, or per client?
> On Jul 10, 3:15pm, Greg Oster wrote:
> } Subject: Re: nfs tuning with raid
> } Brian Buhrow writes:
> } > Hello folks. I've been trying to increase the performance of the
> } > box I'm using as a large RAID NFS server and have a few questions.
> } > I seem to be able to serve up about 160Kbytes/sec to about 8 clients
> } > simultaneously for reading, and about 50Kbytes/sec for writing. I've tried
> } > increasing the number of nfsd's running, from 4 to 12, and the number of
> } > kern.nfs.iothreads from 1 to 12. This made things much worse. Knocking
> } > the number of iothreads down to 4, while leaving the number of nfsd's
> } > running, made things better, but still not very fast, it seems.
> } > Running ps -lpid on the various nfsd processes shows that they're
> } > spending a lot of time waiting on vnlock or uvn_fp2. I tried increasing
> } > kern.maxvnodes from 6,700 to 50,000, but this seems to have
> } > little to no effect.
> } > Any rules of thumb on how many iothreads
> } > for NFS are optimal, versus the number of nfsd's running? Are there rules
> } > of thumb on how to tune vnodes, and other parameters to help streamline the
> } > system? This is running on an i386 box with a 1.5R kernel and 1.5 userland
> } > programs. The machine has a RAID 5 array of 15 75GB IDE disks on it.
> } *15* drives? In a single RAID 5 set? What's the stripe width? (not that it
> } matters much with 15 drives). Also: what is the size of the files/data
> } being transferred in the benchmark, and/or what are you running as the
> } benchmark?
> } > It's using an Intel on-board 10/100Mbps Ethernet adapter with the fxp
> } > driver in 100Mbps/full-duplex operation.
> } > Any suggestions/guides/things to look at would be greatly appreciated.
> } Later...
> } Greg Oster
> >-- End of excerpt from Greg Oster