tech-kern archive


Re: fixing the vnode lifecycle



On Sep 22, 2013, at 5:28 AM, David Holland <dholland-tech%netbsd.org@localhost> 
wrote:

<snip>
> First, obviously the vfs-level vnode cache code should provide vnode
> lookup so file systems don't need to maintain their own vnode
> tables. Killing off the fs-level vnode tables not only simplifies the
> world but also avoids exposing a number of states and operations
> required only to keep the fs-level table in sync with everything else.
> This will help a great deal to get rid of the bodgy locking and all
> the races.
> 
> (If any file system does anything fancy with its vnode table, other
> than iterating it and doing lookups by inode number, I'd like to find
> out.)

Expect some file systems to use a key size != sizeof(ino_t) -- nfs,
for example, uses file handles up to 64 bytes.

Expect some file systems to use no vnode table at all -- tmpfs, for
example.

While a vfs-level vnode cache replacing all these copy-and-paste
implementations inside file systems is a good idea, it will be hard
to reach a state where all vnodes (besides some anonymous device
nodes like rootvp) are contained in this cache.
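To make the point about key sizes concrete: a vfs-level cache would have
to treat the fs-supplied key as an opaque byte string rather than an
ino_t. A minimal sketch of such a comparison, with hypothetical names
(vcache_key, vcache_key_cmp are illustrative, not existing kernel
interfaces):

```c
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical cache key: the file system supplies opaque key bytes,
 * e.g. an inode number for ffs, or an nfs file handle of up to 64 bytes.
 */
struct vcache_key {
	const void *vk_key;	/* fs-specific key bytes */
	size_t      vk_len;	/* key length in bytes   */
};

/*
 * Compare two keys without assuming sizeof(ino_t): shorter keys sort
 * first, equal-length keys compare by memcmp().
 */
static int
vcache_key_cmp(const struct vcache_key *a, const struct vcache_key *b)
{
	if (a->vk_len != b->vk_len)
		return a->vk_len < b->vk_len ? -1 : 1;
	return memcmp(a->vk_key, b->vk_key, a->vk_len);
}
```

A cache keyed this way can serve both inode-number and file-handle file
systems from one table; tmpfs-style file systems that keep no table
would still need separate handling.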

<snip>
> 6. When the last active reference is dropped, VOP_INACTIVE is called,
> much like it currently is, except that all the crap like VI_INACTNOW
> needs to go away. I see no reason that the vnode cache shouldn't just
> lock the vnode with the normal vnode lock while calling VOP_INACTIVE.

This is the protocol already: locked on entry and unlocked on return.
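The lock protocol can be modeled in a few lines. This is a toy sketch
with a boolean standing in for the real vnode lock, and illustrative
names (vrele_last, the assert-based lock functions) that are not the
actual kernel interfaces; it only demonstrates "locked on entry,
unlocked on return":

```c
#include <assert.h>
#include <stdbool.h>

struct vnode { bool v_locked; };	/* toy stand-in for the real lock */

static void vn_lock(struct vnode *vp)   { assert(!vp->v_locked); vp->v_locked = true; }
static void vn_unlock(struct vnode *vp) { assert(vp->v_locked);  vp->v_locked = false; }

/* Fs-specific inactive routine: entered locked, returns unlocked. */
static void
vop_inactive(struct vnode *vp)
{
	assert(vp->v_locked);	/* protocol: locked on entry */
	/* ... release fs state, decide whether the vnode should be reclaimed ... */
	vn_unlock(vp);		/* protocol: unlocked on return */
}

/* What the vnode cache would do when the last reference is dropped. */
static void
vrele_last(struct vnode *vp)
{
	vn_lock(vp);
	vop_inactive(vp);
	assert(!vp->v_locked);	/* caller must not still hold the lock */
}
```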

--
J. Hannken-Illjes - hannken%eis.cs.tu-bs.de@localhost - TU Braunschweig 
(Germany)


