Subject: vnode changes & 1.4E
To: None <current-users@netbsd.org>
From: Bill Studenmund <wrstuden@nas.nasa.gov>
List: current-users
Date: 07/07/1999 20:08:59
I just committed a large number of changes to how we do vnode locking, as
discussed over the past few months on tech-kern.

The short answer is that nullfs now works.

I've done:

mount -t ffs    /dev/foo  /usr/src
mount -t nullfs /usr/src  /w1
mount -t nullfs /usr/src  /x1

and then done a make -j 10 in /x1/sys/arch/<foo>/compile/KERNEL while
running ls and ls -l in /w1/sys/arch/<foo>/compile/KERNEL and in
/usr/src/sys/arch/<foo>/compile/KERNEL.

Part of this involved changing how we do vnode locking. Since most fs's were
already using the lock manager, I put a struct lock in struct vnode. Layered
fs's (nullfs, umapfs, etc. - NOT unionfs) set their vnodes to use the struct
lock in the underlying vnode. That way the underlying vnode and all the
vnodes stacked over it lock and unlock at the same time, which should
prevent race conditions between the layers.
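To give a feel for the idea, here is a rough sketch in plain, standalone C.
This is not the kernel code itself; the lock type is a stand-in and the
field names (v_lock for the embedded lock, v_vnlock for the pointer the
lock ops go through) are just illustrative:

#include <stdio.h>

/* stand-in for the kernel's lock manager lock */
struct lock {
	int lk_count;			/* simplified: just a hold count */
};

/* very simplified vnode: the lock lives in the vnode itself, but the
 * lock ops always go through the v_vnlock pointer */
struct vnode {
	struct lock	 v_lock;	/* embedded lock */
	struct lock	*v_vnlock;	/* lock actually used for this vnode */
};

/* leaf fs (ffs, etc.): lock with our own embedded lock */
void
leaf_vnode_init(struct vnode *vp)
{
	vp->v_lock.lk_count = 0;
	vp->v_vnlock = &vp->v_lock;
}

/* layered fs (nullfs, umapfs): share the underlying vnode's lock, so
 * locking the upper vnode locks the whole stack at once */
void
layer_vnode_init(struct vnode *upper, struct vnode *lower)
{
	upper->v_vnlock = lower->v_vnlock;
}

/* what a lock/unlock op boils down to in this model */
void vnode_lock(struct vnode *vp)	{ vp->v_vnlock->lk_count++; }
void vnode_unlock(struct vnode *vp)	{ vp->v_vnlock->lk_count--; }

int
main(void)
{
	struct vnode ffs_vp, null_vp;

	leaf_vnode_init(&ffs_vp);
	layer_vnode_init(&null_vp, &ffs_vp);

	vnode_lock(&null_vp);
	/* the ffs vnode underneath is now locked too: one shared lock */
	printf("hold count seen through the ffs vnode: %d\n",
	    ffs_vp.v_vnlock->lk_count);
	vnode_unlock(&null_vp);
	return 0;
}

Since the upper and lower vnodes share one lock, locking either one locks
the whole stack, which is the property that should close the races.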

I also fixed all of the leaf fs's to use lock manager locking, via the
routines in genfs. It was fairly easy.
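To be concrete about what that means, here is a from-memory sketch of what a
lock-manager-backed genfs lock op looks like, keeping the v_vnlock naming
from the sketch above - the actual code in the tree is the authority, and
the argument-struct and field names here may not match it exactly:

/* roughly what a genfs lock/unlock op does (kernel build context) */
int
genfs_lock(void *v)
{
	struct vop_lock_args *ap = v;
	struct vnode *vp = ap->a_vp;

	return (lockmgr(vp->v_vnlock, ap->a_flags, &vp->v_interlock));
}

int
genfs_unlock(void *v)
{
	struct vop_unlock_args *ap = v;
	struct vnode *vp = ap->a_vp;

	return (lockmgr(vp->v_vnlock, ap->a_flags | LK_RELEASE,
	    &vp->v_interlock));
}

A leaf fs then just points the lock entries in its vnodeop table at these
routines (and similarly for vop_islocked) instead of rolling its own.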

Right now the only filesystems left unchanged are unionfs and nfs. nfs does
no locking at the moment, and adding it would be a big mess; we need to add
it, but I don't have the time. I didn't get a chance to do unionfs, as I
need to understand it better first.

You will need to re-compile mount_null and mount_umap too.

I think I got everything in. An i386 GENERIC kernel compiled fine before I
committed, so I _think_ all is well.

ntfs users should check that things still work, as I had to change its
locking too.

Let me know how it works!

Take care,

Bill