tech-kern archive


Re: [PATCH] Fixing soft NFS umount -f, round 3



On Sun, Jul 05, 2015 at 05:19:41PM +0000, Emmanuel Dreyfus wrote:
> On Fri, Jul 03, 2015 at 09:59:40AM -0700, Chuck Silvers wrote:
> > what's the reason for hardcoding the new timeouts to 2 seconds?
> > there's a "-t" mount option to specify a timeout duration.
> 
> I used the same delay as for hard mounts. Note this is not the
> connect timeout; this is the time we spend in uninterruptible sleep.
> If you look at, for instance, nfs_receive(), nfs_reconnect() is
> called in a loop where the retrans count (as given with mount_nfs -x)
> is checked.

there are several new soft-mount timeouts added in your diff;
let's look at each of them:

 - in nfs_asyncio(), if an nfsiod has been assigned to the mount
   but the request queue is long, we currently wait indefinitely
   for the queue to drain a bit before enqueueing a new request.
   your diff would change this so that if the queue does not drain
   at all within 2 seconds, then the new request is failed.
   that is much too aggressive; the timeout duration should be
   more like the (timeout * retrans) formula that would apply
   if the request were actually being processed (see the sketch
   after this list).

 - in nfs_sndlock(), we currently wait indefinitely to acquire
   the lock which serializes callers of nfs_send() for connected
   sockets (ie. TCP).  your diff would change this so that if
   the lock is not acquired within 2 seconds then the attempt
   to acquire the lock fails.  nfs_sndlock() is called from
   three places and none of them retries, so the result is that
   this change will cause an RPC to fail in 2 seconds.
   this also seems too aggressive, and again the (timeout * retrans)
   formula seems more appropriate.

 - nfs_rcvlock() has the same issues as nfs_sndlock().

 - in nfs_reconnect(), we currently retry connecting to the server
   (for connected sockets) an unbounded number of times.
   your diff would change this to give up after one attempt to connect.
   nfs_reconnect() is called from three places in nfs_receive().
   the middle one is retried in a loop like you mentioned,
   but the other two do not have any retry logic, so one failure
   from those calls to nfs_reconnect() will result in nfs_receive()
   failing overall.  TCP might be doing its own retries under the covers,
   but as you mention below, it might fail immediately if the server host
   is on the network but nfsd is not initialized yet.
   I would think that NFS should make sure that something is retrying
   for the desired period.  if nfs_reconnect() takes a long time to fail
   then NFS can assume that TCP did its own timeout/retry work,
   but if nfs_reconnect() fails quickly then NFS ought to wait
   and retry itself.
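
to make the formula concrete, here is a rough userland sketch of
what I mean by (timeout * retrans).  it is not a patch against the
tree: nfsmount_sketch mimics the struct nfsmount fields I have in
mind (nm_timeo from "-t", which I'm assuming is in tenths of a
second, and nm_retry from "-x"), and the real code would work in
kernel ticks with tsleep() rather than printing seconds.

#include <stdio.h>

struct nfsmount_sketch {
    int nm_timeo;   /* initial timeout from "-t", assumed tenths of a second */
    int nm_retry;   /* retransmit count from "-x" */
};

/* total time a soft-mount request may block, in whole seconds */
static int
soft_wait_budget(const struct nfsmount_sketch *nmp)
{
    /*
     * (timeout * retrans): roughly how long a request that is
     * actually being processed would be allowed to take before
     * the soft mount gives up, instead of a hardcoded 2 seconds.
     */
    return (nmp->nm_timeo * nmp->nm_retry + 9) / 10;
}

int
main(void)
{
    struct nfsmount_sketch nmp = { .nm_timeo = 30, .nm_retry = 5 };

    printf("wait up to %d seconds, not 2\n", soft_wait_budget(&nmp));
    return 0;
}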


> What is not tunable is the delay between each reconnect attempt.
> We could imagine using the timeout (as given with mount_nfs -t)
> but the man page does not say we use this mount option for
> connection timeouts. Connect failure can be immediate, for instance
> when trying a TCP mount on a host which is up but without NFS service
> available: once we get a deserved TCP RST, do we want to wait for
> timeout before retrying?

it seems reasonable to use the "-t" and "-x" values to control handling of
soft-mount TCP connection failures, in the manner I outlined above.
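
to sketch the reconnect handling I have in mind, here is a userland
mock (again not a patch: mock_reconnect() just stands in for one
nfs_reconnect() attempt, and the budget is derived from the same
"-t"/"-x" values as above).  the point is that a quick connect
failure means TCP did no retry work for us, so NFS waits and
retries itself until the soft-mount budget runs out.

#include <errno.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

/* pretend the server's nfsd comes up after a few refused attempts */
static int
mock_reconnect(void)
{
    static int attempts;

    return (++attempts < 4) ? ECONNREFUSED : 0;
}

static int
reconnect_with_budget(int timeo_tenths, int retrans)
{
    time_t start = time(NULL);
    int budget = (timeo_tenths * retrans + 9) / 10; /* seconds */
    int error;

    for (;;) {
        error = mock_reconnect();
        if (error == 0)
            return 0;
        /*
         * A quick failure (e.g. an immediate RST because the host
         * is up but nfsd is not) means TCP did no timeout/retry
         * work on our behalf, so wait a bit and try again here,
         * until the soft-mount budget is used up.
         */
        if (time(NULL) - start >= budget)
            return ETIMEDOUT;
        sleep(1);
    }
}

int
main(void)
{
    int error = reconnect_with_budget(30, 5);

    printf("reconnect %s\n", error ? "timed out" : "succeeded");
    return error ? 1 : 0;
}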

-Chuck

