Re: nfsd "serializing" patch
On Fri, Dec 07, 2012 at 06:46:41AM +0000, YAMAMOTO Takashi wrote:
> > Hello,
> > while working on nfs performance issues with overquota writes (which
> > turned out to be a ffs issue), I came up with the attached patch.
> > What this does is, for NFS over TCP, restrict a socket buffer's processing
> > to a single thread (right now, all pending requests are processed
> > by all threads in parallel). This has two advantages:
> > - if a single client sends lots of requests (like writes coming from a
> > linux client), a single thread is busy and the other threads remain
> > available to serve other clients' requests quickly
> > - by avoiding CPU cache sharing and lock contention at the vnode level
> > (if all requests are for the same vnode, which is the common case),
> > we get slightly better performance.
> > My testbed is a linux box with 2 Opteron 2431 (12 core total) and 32GB RAM
> > writing over gigabit ethernet to a NetBSD server (dual
> > Intel(R) Xeon(TM) CPU 3.00GHz, 4 hyperthread cores total) running nfsd
> > -tun4.
> > Without the patch, the server processes about 1230 writes per second,
> > with this patch it processes about 1250 writes/s.
> > Comments ?
> but doesn't it have ill effects if the client has multiple independent
> activities on the mount point?
They will be hitting the same physical disc, so they will probably queue behind
each other there anyway.
I've never seen any reason for the historical '4 nfsd server processes'.
A lot of configurations work better with only 1.
I've seen cases where the nfs client would be buffering writes, then
decide to write a whole load of pages out of the buffer cache.
This (or maybe something else) led to a considerable number of concurrent 8k
nfs writes. The server processes pick one each and the disk becomes busy.
The disk access algorithm (probably staircase) leaves one of the requests
unfulfilled as new requests for nearer sectors keep arriving.
The stalled nfs request times out and is retried.
The stalled request finally finishes, but by then the rpc request has been
timed out, so the result is discarded.
You now have multiple retry requests making matters worse, and almost no
progress is made (this is the ethernet trace I was given!).
This is fairly typical if the server is slow/overloaded.
With only one server process it is all fine.
David Laight: david%l8s.co.uk@localhost