Subject: NFS info...
To: None <current-users@NetBSD.ORG>
From: After 5 PM please slip brain through slot in door. <greywolf@defender.VAS.viewlogic.com>
Date: 11/03/1995 10:57:44
Disclaimer: I realise anyone could have asked this fellow for information
regarding NFS, so this is nothing super-special in content, but I was
thinking of the rule which states: "For any given circumstance X such
that the question 'Am I the only one who X?' can be asked, the answer is
in all probability 'No.'" So here's the info I got from David Robinson.
Disclaimer #1: I make no pretense of knowing which would be The Most
Appropriate[TM] list to send this to, so I sent it here.
(Black hat. Grey band. Not evil. You figure it out.)
#define AUTHOR "David.Robinson@Eng.Sun.COM (David Robinson)"
* [greywolf wrote]:
* > Is there any information you could share with me (and the
* > rest of the NetBSD community) regarding the implementation of said locking
* > protocol without going into too much detail (i.e. short of an NDA and a
* > source license)? Mostly I'm interested in the protocol as prescribed
* > by Solaris 2.5, and how it differs from Solaris 2.4 and previous and,
* > if applicable, how Solaris 2.x differs from SunOS 4.x.
* The Solaris 2.5 implementation of the lock manager is 100% compatible with
* all the previous implementations shipped, both 2.1-2.4 and 4.X. Any
* incompatibilities are considered bugs. We have tested extensively against
* older systems as well as other vendors.
* The only noticeable difference is that a 2.5 client will prefer to use
* a TCP transport instead of UDP. It also will never issue any of
* the *_MSG procedures; this is because we have a fully threaded client,
* so we can block and don't need to issue these pseudo-async procedures.
* > We are interested in what does what, i.e. how to communicate with a lockd
* > and what to expect when we send in said requests. Can you refer us to an
* > on-line document or send one our way?
* I don't know of anything online other than the rpcgen ".x" files and
* headers in /usr/include/rpcsvc of any Solaris or SunOS release. ".x"
* files are all considered Copyrighted but freely redistributable. (They
* essentially define the wire protocol which Sun doesn't license or control).
* The best source of a detailed description is the X/Open specification
* called X/NFS. Unfortunately it costs ~$150. It is what we use to
* answer any protocol questions.
* > Does the kernel locking mechanism make use of flock()/lockf() type
* > locks or are NFS locks something entirely different?
* From an applications perspective all file locking is done via the fcntl()
* system call. Both flock and lockf are simple subsets of the fcntl()
* calls. Neither applications nor libraries know the difference between
* an NFS filesystem or a local filesystem. Within the kernel we implement
* a virtual filesystem (VFS) function called VOP_FRLOCK which hides the
* details of a particular filesystem implementation of locking from
* the filesystem-independent code. The local UFS filesystem maintains
* locks in a set of in-core data structures, while the NFS implementation
* issues RPCs across the wire. On the server side, the locking server
* simply calls VOP_FRLOCK with the request to the local filesystem
* and responds accordingly.
* In all the previous releases we split the functionality between the
* kernel and a user level daemon. Applications still talked to fcntl()
* system calls but the kernel would call out to the daemon with private
* RPC calls (the infamous KLM protocol) so the daemon would issue over the
* wire RPC calls on the kernel's behalf. On the reverse side, the daemon
* would act as an RPC server and then use a private overloaded argument
* to the fcntl() system call to make lock requests on behalf of the client.
* This is a maintenance nightmare: keeping the state in the kernel
* and the daemons in sync.
* In the really old days, all locking was done through the local daemon,
* including local filesystem locks. One simple method is to implement the
* fcntl "system" call for lock requests as a local call to the lock
* daemon, bypassing the kernel, and to have the daemon also answer RPC
* requests.
* This has the disadvantage that applications statically linked will
* always want to talk directly to a daemon and will break if you change
* the implementation.
* Tips if you are going to write a lock manager: start with a multithreaded
* design! Locks block, and if you are single-threaded you have to store
* large amounts of state to switch to another request, potentially the
* one that unblocks the current request. It is easier to maintain thread
* state than the locking state. If at all possible keep all lock information
* in the same address space (either user or kernel). 90% of our problems
* with the previous implementation were with maintaining two pieces of
* state about the same lock in two different contexts. A nightmare!
* Good luck,
#undef AUTHOR /* "David.Robinson@Eng.Sun.COM (David Robinson)" */