Subject: Re: pr/35143 and layer_node_find()
To: Chuck Silvers <email@example.com>
From: Bill Studenmund <firstname.lastname@example.org>
Date: 12/03/2006 13:06:59
Content-Type: text/plain; charset=us-ascii
On Fri, Dec 01, 2006 at 09:52:49AM -0800, Chuck Silvers wrote:
> On Thu, Nov 30, 2006 at 10:10:22AM -0800, Bill Studenmund wrote:
> > > I think we need to figure that out before we decide on a fix.
> > Ok, here's an idea on how it can happen on our current kernel.
> > We have one process holding the lock on the lower vnode; the upper vnode
> > is unreferenced.
> > Then another process comes into the layered file system, does a lookup on
> > the vnode, and blocks in VOP_LOOKUP() on the lower layer waiting for the
> > lock.
> > Then a third process comes in and decides to recycle a vnode. It gets the
> > layer vnode, sets VXLOCK, then goes to sleep waiting to get the stack's
> > vnode lock.
> wouldn't getcleanvnode() skip over this locked VLAYER vnode?
Hmm... It's supposed to.
Please see Darrin's analysis in the PR. If you can see something that's
not right in it, please comment!
Even if you and I can't figure out how it happened, we have a report of an
observed situation where layer_node_find() got stuck because it had the
vnode stack's lock and VXLOCK was set on the vnode, thus the vget() call
never returned.
Note also that if we fix this issue, the test in getnewvnode() can go
away. While it would be better to try an unlocked vnode, we would still
be able to have things work with a locked one.
> > We have to have the vget() not wait if it sees VXLOCK.
> > I still don't see what's wrong with letting the being-destroyed nodes
> > stay in the hash table. For them to have VXLOCK set, there has to be a
> > thread reclaiming them, so they will be removed from the hash list in due time.
> I don't know that it would cause any particular problem right now,
> it just doesn't seem like a good idea to allow multiple vnodes with
> the same identity to exist at the same time, even if all but one of
> them are in the process of being reclaimed.
One way to look at it is that the moment VXLOCK gets set, the vnode is no
longer the upper one for the given lower vnode.
I guess part of why I'm comfortable with this is that I've done it before.
I've worked on an iSCSI target, and you can run into issues like this
(where something still exists because it hasn't been cleaned up, but its
visibility has been removed). Specifically, an iSCSI task can come in,
acknowledge a previous command, and simultaneously start a new command
with the same tag.
I'd appreciate a suggestion for another way to fix this, but I can't think
of anything that'd help w/o doing something that can make a mess elsewhere.
The one thing I think I might like would be to add a vnode-cleaning thread
that tries to keep X vnodes clean-and-free.