kern/37914: nfs client-side locking implementation
>Number: 37914
>Category: kern
>Synopsis: nfs client-side locking implementation
>Confidential: no
>Severity: non-critical
>Priority: low
>Responsible: kern-bug-people
>State: open
>Class: change-request
>Submitter-Id: net
>Arrival-Date: Wed Jan 30 00:20:00 +0000 2008
>Originator: YAMAMOTO Takashi <yamt%mwd.biglobe.ne.jp@localhost>
>Release:
>Organization:
>Environment:
>Description:
this PR is a reminder of the client-side locking implementation
by Edgar.Fuss at bn2.maus.net.
http://mail-index.NetBSD.org/tech-kern/2006/07/26/0005.html
http://mail-index.NetBSD.org/tech-kern/2006/07/26/0006.html
http://mail-index.NetBSD.org/tech-kern/2006/07/26/0007.html
http://mail-index.NetBSD.org/tech-kern/2006/07/26/0008.html
http://mail-index.NetBSD.org/tech-kern/2006/09/16/0004.html
i've put a copy of:
http://www.math.uni-bonn.de/people/ef/nfslock.tar.gz
at:
ftp://ftp.netbsd.org/pub/NetBSD/misc/yamt/nfslock/nfslock.tar.gz
and the following is quoted from a private mail from him to me.
(with his permission)
> no way with our rpc library, afaik.
Hm. Anybody in a position to improve that? Any other project that would
benefit from such an improvement?
> i guess it's merely historical.
> fixed.
> documented.
> it shouldn't be a problem [...]
Thanks.
> you are right. it lacks unmonitoring.
As far as I remember, I also messed something up in my version.
EF> In rpc/lockd/lock_proc.c, getclient(), the comment talks, err,
EF> writes about -udp- where the code uses -tp-.
YAMT> i'm not sure what you mean here.
OK, too many typos on my side. I meant get_client() and _udp_/_tp_:
> client = clnt_tp_create(host, NLM_PROG, vers, nconf);
[...]
> syslog(LOG_ERR, "%s", clnt_spcreateerror("clntudp_create"));
[...]
> Note that the timeout is a different concept
> from the retry period set in clnt_udp_create() above.
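For reference, the fix under discussion would make the logged name match
the call actually made (clnt_tp_create() is the TI-RPC, transport-independent
creator; clntudp_create() is the older UDP-only one), roughly:

    client = clnt_tp_create(host, NLM_PROG, vers, nconf);
    if (client == NULL) {
            /* log the function that actually failed */
            syslog(LOG_ERR, "%s", clnt_spcreateerror("clnt_tp_create"));
            return NULL;
    }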
> LOCK_RES, you mean?
Yes, of course. Sorry.
> do you mean that the current behaviour like the following:
[...]
> should be:
Yes, that's what I was thinking of. The main point is whether the
actual (time-consuming) operation takes place before or after the LOCK_MSG
reply.
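To make the distinction concrete, here is a sketch of the two possible
orderings in the asynchronous NLM protocol, where NLM_LOCK_MSG has a void RPC
reply and the result travels back as an NLM_LOCK_RES callback. do_lock() and
callback_lock_res() are hypothetical stand-ins for the elided code:

    #include <rpc/rpc.h>
    #include <rpcsvc/nlm_prot.h>

    /* hypothetical stand-ins for the elided lockd code */
    extern nlm_res do_lock(nlm_lockargs *);         /* possibly slow */
    extern void callback_lock_res(nlm_res *);       /* NLM_LOCK_RES to client */

    /* ordering 1: perform the lock operation, then reply to LOCK_MSG */
    static void
    lock_msg_lock_first(nlm_lockargs *args, SVCXPRT *transp)
    {
            nlm_res res = do_lock(args);
            svc_sendreply(transp, (xdrproc_t)xdr_void, NULL);
            callback_lock_res(&res);
    }

    /* ordering 2: reply to LOCK_MSG immediately, then perform the lock */
    static void
    lock_msg_reply_first(nlm_lockargs *args, SVCXPRT *transp)
    {
            svc_sendreply(transp, (xdrproc_t)xdr_void, NULL);
            nlm_res res = do_lock(args);
            callback_lock_res(&res);
    }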
> but it's better to deal with such servers anyway.
Any idea how other servers behave? I would consider this behaviour
close to buggy.
> i think it requires some kind of threading.
Yes. That's what my threading version tries to do.
> well, instability of our pthread and thread-safeness of our libraries
> might be problems, tho
Hm, I ran my tests on sparc machines, and it seemed to work.
> a good point.
> yes, it's a big problem.
> it sounds reasonable.
Thanks.
> to implement nlm server properly, we need the ability to specify
> a remote lock owner.
That would probably be the most elegant way of implementing remote
locks.
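A rough sketch of what such a remote lock owner could look like (all names
here are hypothetical, not taken from the patch): byte-range locks would be
keyed by an opaque owner instead of the local pid only, so the NLM server can
record which client holds a lock and drop all of that client's locks when its
statd reports a crash.

    #include <sys/types.h>
    #include <stdint.h>

    struct lock_owner {
            enum { LO_LOCAL, LO_REMOTE } lo_type;
            pid_t           lo_pid;         /* LO_LOCAL: owning process */
            char            lo_host[256];   /* LO_REMOTE: client hostname */
            uint32_t        lo_svid;        /* LO_REMOTE: client-side owner id */
    };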
I'm a bit out of touch in the meantime, but I guess when I was close to
re-writing lockd as a whole some months ago, I thought I could get away with a
kqueue event signalling the unlocking of a file.
Essentially (as far as I remember), as long as you don't move the
whole server part of remote locking into the kernel (or tie it more closely to
it), you have two ways to deal with concurrent local and remote locks on the
same file:
Either you simply don't handle it. You make lockd keep track of its
own set of locks and don't push the locks down to the file system. This makes
the server part of lockd close to trivial. Also, to me, it looks like something
realistic on a file server: if you do have local processes that need to hold
locks on the same files as remote processes do, you NFS-mount the local file
system (i.e. you mount /dev/sd0a on /export/home and localhost:/export/home on
/home).
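For concreteness, a hypothetical /etc/fstab fragment for the setup just
described (device and mount points as in the example above):

    # local file system, also exported over NFS
    /dev/sd0a               /export/home    ffs     rw      1 2
    # loopback NFS mount, so local processes go through lockd as well
    localhost:/export/home  /home           nfs     rw      0 0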
If that's not feasible, the only tricky part, at least as far as I
remember, is if you get a remote request for a file, try to get the lock on the
local file system and fail to do so. What the current lockd does in that
situation is spawn a process that blocks on trying to acquire the lock. I
would feel much more comfortable with a kqueue event notifying me when the lock
is released locally.
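A sketch of that idea, assuming a hypothetical NOTE_UNLOCK fflag for
EVFILT_VNODE (no such event exists today; the helper is illustrative only):

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/time.h>
    #include <fcntl.h>

    #define NOTE_UNLOCK     0x00010000      /* hypothetical: a lock was released */

    static int
    wait_for_unlock(int kq, int fd)
    {
            struct kevent ev;

            /* register one-shot interest in the lock being released */
            EV_SET(&ev, fd, EVFILT_VNODE, EV_ADD | EV_ONESHOT,
                NOTE_UNLOCK, 0, 0);
            if (kevent(kq, &ev, 1, NULL, 0, NULL) == -1)
                    return -1;
            /* lockd's main event loop would later see the event fire and
               retry fcntl(fd, F_SETLK, ...) without blocking a child. */
            return 0;
    }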
> an alternative would be moving lockd into kernel.
Since last Wednesday, I think this is a BAD idea.
When I started my work on locking, I indeed wondered why one would
bother with an extra device for kernel/userland communication and wouldn't
simply do all the RPC stuff from within the kernel. I asked ws about it and he
told me you don't put things into the kernel if you can do otherwise. So I
followed what he suggested (similar to what FreeBSD does).
Last week, we had strange problems with our (still Linux) fileserver
after moving our resolvers to different machines and suspected lockd of being
the culprit. So you just re-start the damned lockd -- unless it's in-kernel as
in Linux. So you re-start the whole fileserver. You can imagine what
re-starting a fileserver means. Especially if, during shut-down, you see a
message flash up on the console of which you can only catch the words "cannot
unmount" before the box re-boots and decides to fsck a
not-cleanly-unmounted filesystem. During the following hour you wish you had
a) progress indication on fsck
b) some trail of what went wrong during shutdown
c) lockd in user space.
> fixed. (it was svc_reg/req, not rpc_.)
Typos in typos.
> do you mean clnt_freeres?
Yes.
>How-To-Repeat:
>Fix: