Subject: Re: kern/15463: NFS client bug - incorrect caching of file-handles
To: email@example.com <firstname.lastname@example.org>
From: David Laight <email@example.com>
Date: 02/04/2002 18:19:42
> nfs server is a Linux user-land server (Debian GNU/Linux testing)
Is this NFS over UDP or NFS over TCP?
If TCP, does the server do ANY validation of the file handle? (ie check
that it was given out over the same TCP connection - is this actually done?)
There seem to be 2 problems here:
1) The NetBSD NFS client is failing to update the information returned when
   it gets a 'duplicate' file handle. This shouldn't really ever happen.
2) The NFS server is reusing file handles. I'm not exactly sure what
the lifetime of a file handle is, but I think the NFS (over UDP)
protocol requires it to be 'for ever'. This is the more serious bug!
For instance, consider the following scenario:
1) system A looks up file 'F1' and is given file handle 'H-AF1'; the file
   is 'opened' by an application.
2) system B deletes file 'F1'
3) Any access to file 'F1' from system A would now fail ESTALE
4) File 'F2' is created, reusing the inode number that F1 had.
5) system A tries to write to file F1 - should fail ESTALE
Now, if the NFS server is assuming that the inode number is a 'good
enough' file handle, it will write to (corrupt) file F2. If it is only
willing to access files it knows the mapping for, then maybe system 'B'
doing a lookup on file F2 will generate the handle 'H-AF1' (now mapping
to file F2) so that a write to F1 will succeed (corrupting file F2).
Maybe it needs system A to access file F2. In any case this replication
of file handles is a serious bug in the server code.
I don't know what the NFS server is trying to use as a file handle. If
it can't access the inode generation number (which has probably been
'clobbered' from the stat response for 'security' reasons), then it must
allocate a random number and keep state somewhere.
I've cut the bumf, just leaving the trace of the NFS responses for the
two 'files' - the filehandle is being reused by the server!
Response for lookup of Makefile:
> 14:35:45.097718 ming.empire.pick.ucam.org.nfs > cleopatra.empire.pick.ucam.org.2482867553: reply ok 128 lookup fh Unknown/7A7F03700650E7626DA575000000000000000000000000000000000000000000 LNK 120777 ids 4015/500 sz 12 nlink 1 rdev 847 fsid 1 nodeid 70037f7a a/m/ctime 1012487679.000000 1012487679.000000 1012487679.000000 (ttl 64, id 43090, len 156)
'Makefile' deleted (using NFS, but that doesn't actually matter!)
Response for lookup of md2_dgst.o:
> 14:36:28.799930 ming.empire.pick.ucam.org.nfs > cleopatra.empire.pick.ucam.org.2482873187: reply ok 128 create fh Unknown/7A7F03700650E7626DA575000000000000000000000000000000000000000000 REG 100644 ids 0/0 sz 0 nlink 1 rdev 201 fsid 1 nodeid 70037f7a a/m/ctime 1012487788.000000 1012487788.000000 1012487788.000000 (ttl 64, id 49114, len 156)