Subject: Re: an in-kernel getcwd() implementation
To: Jonathan Stone <jonathan@DSG.Stanford.EDU>
From: Bill Sommerfeld <sommerfeld@orchard.arlington.ma.us>
List: tech-kern
Date: 03/07/1999 10:51:05
> 1) how does it interact with amd's userspace finagling
>    of mount-points?

Could you be more specific?  I haven't used amd for a long time...  What
behavior are you looking for?  Does it generate carefully selected
device/inode numbers or something screwy like that to spoof getcwd?

> 2) what's the marginal cost of maintaining vnode->name links
>    in addition to name->vnode links in the cache?

Space:
	An additional hash table, sized at 1/8 the size of the name
	cache table (I pulled the 1/8 number out of thin air..)
	An additional LIST_ENTRY (two pointers) per cache node.
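Roughly, the space cost looks like this sketch (the struct, table, and
hash-function names here are illustrative, not the actual NetBSD
identifiers; the table sizes are example values):

```c
#include <sys/queue.h>
#include <stdint.h>

/* Stand-in for the kernel's struct vnode; illustrative only. */
struct vnode { int v_dummy; };

struct namecache {
	LIST_ENTRY(namecache) nc_hash;	/* existing name->vnode chain */
	LIST_ENTRY(namecache) nc_vhash;	/* added: vnode->name chain (two pointers) */
	struct vnode *nc_dvp;		/* parent directory */
	struct vnode *nc_vp;		/* resolved vnode */
};

#define NCHASHSZ	512			/* forward table (example size) */
#define NCVHASHSZ	(NCHASHSZ / 8)		/* reverse table: 1/8 of forward */

LIST_HEAD(ncvhead, namecache);
struct ncvhead ncvhashtbl[NCVHASHSZ];

/* Hash the vnode pointer itself to pick a reverse-table bucket. */
static unsigned
ncvhash(const struct vnode *vp)
{
	return (unsigned)(((uintptr_t)vp >> 4) % NCVHASHSZ);
}
```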

Time:
cache_lookup:
	add a possible LIST_REMOVE on one low-probability path through
 	lookup, which occurs when:
 	/*
	 * Last component and we are renaming or deleting,
	 * the cache entry is invalid, or otherwise don't
	 * want cache entry to exist.
	 */

cache_enter:
	we add a test that the vnode is non-null, is a directory,
	and the name is not `.' or `..';
	when the test succeeds, we do a LIST_INSERT_HEAD().
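The two changed paths above might look roughly like this sketch (all
names illustrative, reduced to a single reverse-hash bucket): cache_enter
links directory entries into the reverse hash unless the name is `.' or
`..', and the low-probability cache_lookup invalidation path unhooks them
again with one extra LIST_REMOVE:

```c
#include <sys/queue.h>
#include <string.h>
#include <stddef.h>

/* Minimal stand-ins for the kernel types; illustrative only. */
enum vtype { VREG, VDIR };
struct vnode { enum vtype v_type; };

struct namecache {
	LIST_ENTRY(namecache) nc_vhash;	/* reverse vnode->name chain */
	struct vnode *nc_vp;
};

LIST_HEAD(ncvhead, namecache);
struct ncvhead ncvbucket;		/* one bucket, for illustration */

/* The test added to cache_enter: non-null directory, not "." or "..". */
static int
wants_reverse_entry(const struct vnode *vp, const char *name)
{
	return vp != NULL && vp->v_type == VDIR &&
	    strcmp(name, ".") != 0 && strcmp(name, "..") != 0;
}

static void
sketch_cache_enter(struct namecache *ncp, struct vnode *vp, const char *name)
{
	ncp->nc_vp = vp;
	if (wants_reverse_entry(vp, name))
		LIST_INSERT_HEAD(&ncvbucket, ncp, nc_vhash);
}

/* The extra work on cache_lookup's "don't want this entry" path. */
static void
sketch_invalidate(struct namecache *ncp, const char *name)
{
	if (wants_reverse_entry(ncp->nc_vp, name))
		LIST_REMOVE(ncp, nc_vhash);
}
```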

cache_purgevfs: 
	The routine is preceded by a comment ending in "This makes
	the algorithm O(n^2), but do you think I care?".
	I don't care either.

cache_purge: 
	no change (the cache uses generation numbers).
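Generation numbers are why no reverse-hash change is needed here; a
hedged sketch of the idea, assuming the cache works the way the NetBSD
one does (each vnode carries a generation number, each entry snapshots
it at enter time, and a purge just bumps the vnode's copy so stale
entries fail the comparison lazily; names illustrative):

```c
#include <stddef.h>

struct vnode { unsigned long v_id; };	/* generation number */

struct namecache {
	struct vnode *nc_vp;
	unsigned long nc_vpid;	/* v_id captured at cache_enter time */
};

static void
sketch_cache_enter(struct namecache *ncp, struct vnode *vp)
{
	ncp->nc_vp = vp;
	ncp->nc_vpid = vp->v_id;
}

static int
entry_valid(const struct namecache *ncp)
{
	return ncp->nc_vp != NULL && ncp->nc_vpid == ncp->nc_vp->v_id;
}

static void
sketch_cache_purge(struct vnode *vp)
{
	vp->v_id++;	/* every entry pointing at vp is now stale */
}
```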

>    Does that make sense for a per-process cache as well as
>    the system-wide (I havent looked and dont know)

There isn't a per-process name->vnode cache in NetBSD (or at least, not
one I could find..), so, no, I don't think the additional complexity
makes sense.

> 3)  Does it work with nullfs? :-/ 

If nullfs works (last I checked, it blew up with a recursive lock
deadlock on mount, but I haven't tried it since Frank's locking
changes went in), it should work.  nullfs wraps the looped-back vnode
in a null pass-through vnode, and the getcwd code has no way of
looking "inside" except by using the vnode ops.

It does work with opaque unionfs mounts (which are essentially another
way of doing nullfs..).  So far, I've also tried it with ffs, mfs,
procfs, kernfs, and AFS.

> 4) Has anyone  actually tried a linux oracle install?

I haven't.

					- Bill