Subject: Re: NFS wedging.
To: Todd Whitesel <email@example.com>
From: Jim Reid <firstname.lastname@example.org>
Date: 01/28/2000 11:19:21
>>>>> "Todd" == Todd Whitesel <email@example.com> writes:
>> Wouldn't more nfsiod processes keep at least a few free for
>> other NFS access (to other servers)?
Todd> Maybe, but if this can happen from 4 nfsiod's going full
Todd> blast, why have more? Recall that my rm's were running 16
Todd> at a time.
IIUC, nfsiod handles asynchronous I/O requests, like read-ahead and
page management for the VM system when swapping over NFS. Applications
generally make their NFS requests directly. For instance, a file read
is translated into an NFS read request by the kernel and the process
blocks until that request is answered. It doesn't go anywhere near
nfsiod. So your 16-way remove results in 16 simultaneous rm processes
sending NFS unlink operations 16 at a time to the server. These
processes block until the server replies. If that server can't
handle 16 requests at once, you lose. So if you want 16-way
parallelism on the NFS client, you probably need 16 nfsd processes
running on the server to support that, assuming 16's a "good number"
of NFS server processes for that box.
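
To make that concrete, here's a rough C sketch of what the 16-way
remove boils down to (my illustration, not Todd's actual rm jobs; the
/nfs/scratch paths are invented). Each child blocks inside the
unlink() system call until the server answers its RPC; nfsiod never
sees any of it:

    /* Sketch only: 16 rm-style children, each blocking in unlink()
     * until the NFS server replies to its own unlink request. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NPROC 16

    int main(void)
    {
        char path[64];
        int i;

        for (i = 0; i < NPROC; i++) {
            switch (fork()) {
            case -1:
                perror("fork");
                exit(1);
            case 0:
                /* Child sleeps in the kernel here until the server
                 * replies (or the request times out and is retried). */
                snprintf(path, sizeof(path), "/nfs/scratch/file%d", i);
                if (unlink(path) == -1)
                    perror(path);
                _exit(0);
            }
        }
        while (wait(NULL) > 0)
            ;   /* reap all 16 children */
        return 0;
    }

All 16 unlink RPCs land on the server more or less at once, which is
exactly the burst your server couldn't keep up with.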
>> Well, the loop shouldn't be infinite, should it? It should
>> just take a long time to complete. But I agree that tuning
>> the parameters might help.
Todd> Sounds like I was just losing patience then. Either way I
Todd> think the remedy is "don't do that".
I'd tend to agree with that. Bursty, intensive NFS traffic like your
stream of 16 simultaneous unlink requests will hurt most untuned NFS