Subject: Re: CVS commit: src
To: YAMAMOTO Takashi <firstname.lastname@example.org>
From: Bill Studenmund <email@example.com>
Date: 06/19/2004 16:27:48
Content-Type: text/plain; charset=us-ascii
On Sat, Jun 19, 2004 at 01:54:00PM +0900, YAMAMOTO Takashi wrote:
> > Module Name: src
> > Committed By: hannken
> > Date: Tue May 25 14:55:47 UTC 2004
> > - Add function transferlockers to transfer any waiting processes from
> > one lock to another.
> i strongly object against adding a new fancy lockmgr feature.
> in this case, it should be handled in an ffs-internal manner.
Well, what else should he have done? He has to move the sleepers from one
lock to another, given what he's doing (I looked hard at this at the time
it was added). At the time he decides to do the transfer, the sleepers are
ALREADY asleep on a lock. So there's no ffs-internal way to handle this;
the sleepers are already outside of the ffs code.
Given the lock semantics (that all snapshot uses get locked out at the
same time), this lock migration is the only option.
What happens is that the second or subsequent snapshot on a file system
has its vnode lock folded to the mountpoint-specific snapshot lock. Hmm, I
think the first one may get that behavior too; I'll need to go back and
look. Something will need to be done so that releasing the first snapshot
doesn't hose all the others.
I was a bit surprised when this code was added, as I never considered
v_vnlock to be changeable once a vnode was in use. However, there are
reference checks before this call which ensure that it will never get
executed for a node under a layered file system (that, and the fact that I
made VFS_SNAPSHOT() an explicit error for layered file systems).
So why do you object? What is wrong with using the locking we have?
> (sorry for objecting later...)