Subject: Redoing file system suspension API
To: None <tech-kern@netbsd.org>
From: Juergen Hannken-Illjes <hannken@eis.cs.tu-bs.de>
List: tech-kern
Date: 06/13/2006 16:18:09
We have an API for file system suspension which consists of the functions
vfs_write_suspend, vfs_write_resume, vn_start_write and vn_finished_write.
** This implementation of file system suspension has some serious problems:
1) Its definition ("prepare to start a file system write operation") is vague
and file system dependent. It is impossible to determine which VOPs
will change file system data or metadata without knowing the details of
the underlying file system. It also does not allow recursion.
2) Its implementation adds a layer above file systems AND a layer inside
file systems.
3) It may take forever to suspend a file system if there is a high load
on other file systems. Even a high "read load" on the file system we
are suspending makes the suspension take a long time. If softdep file
systems are involved, the suspension may not succeed at all.
** The approach described here resolves these issues. It replaces the
"write gates" with "file system access gates". The goal is to make every
system call atomic with regard to file system suspension: all operations a
system call performs on a file system happen either before or after a
suspension, never partly before and partly after. Allowing recursion makes
it easier to place the gates. The advantages are:
- It is semantically well defined.
- It is possible to add a DEBUG option to check it: every VOP called on
  a suspending or suspended file system is an error (with minor
  exceptions: the syncer and part of the pagedaemon).
- No modifications of file system internals are needed to implement it.
- A suspended file system is really quiet. No vnodes are locked during
suspension.
- It solves 1), 2) and the "read part" of 3).
To solve the rest of 3), the first gate a system call takes on a file system
other than the one being suspended is delayed (throttled) while a suspension
is in progress.
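As an illustration, here is a minimal sketch of how a code path brackets its
file system access with the gate calls described in the next section (the
helper name and the particular VOP are only examples, not part of the patch):

static int
example_fsync(struct vnode *vp)
{
	struct mount *mp = vp->v_mount;
	int error;

	/* Blocks while "mp" is suspending or suspended. */
	if ((error = vngate_enter(mp, V_WAIT)) != 0)
		return error;

	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
	error = VOP_FSYNC(vp, curproc->p_cred, FSYNC_WAIT, 0, 0, curlwp);
	VOP_UNLOCK(vp, 0);

	/* Normal (non-permanent) gates need an explicit leave. */
	vngate_leave(mp);
	return error;
}

The whole VOP is guaranteed to run either completely before or completely
after a suspension of "mp".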
** The new API is:
There are two types of gates. Normal gates need an explicit "leave"
operation. Permanent gates remain valid until the thread returns to user mode.
int vngate_enter(struct mount *mp, int flags)
Enter a vnode gate for the file system "mp". "flags" is a combination of:
V_WAIT Sleep until a suspension is over.
V_NOWAIT Return an error if a suspension is active.
V_NOERROR Panic on error; the caller does not need to check the result.
V_PERMANENT Enter a permanent gate.
void vngate_leave(struct mount *mp)
Leave a (non-permanent) gate for the file system "mp".
void vngate_leave_all(struct mount *mp, int destroy)
Leave all permanent gates of this thread. Assumes all normal gates have
already been left. If "mp" is set, only leave the gates for this file system.
"destroy" is set if all state has to be freed because this thread is about
to terminate.
void vngate_sleep(struct mount *mp)
Sleep until a suspension on this file system is over.
void vngate_suspend(void)
Suspend all gates of this thread. Must be called before a thread goes to a
long (interruptible) sleep. Further vngate_(enter|leave) calls are forbidden.
void vngate_resume(void)
Resume all gates of this thread.
int vfs_suspend(struct mount *mp, int wait)
Suspend the file system "mp". If "wait" is set, wait for a suspension already
in progress to finish; otherwise return an error.
int vfs_resume(struct mount *mp)
Resume the file system "mp".
option VNODE_GATEDEBUG
Adds debug code that checks:
- No VOPs are called without a gate taken.
- No long sleep occurs without "vngate_suspend".
- Internal integrity of the gate state.
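For the suspension side, a sketch along the lines of the fss/ffs_snapshot
changes in the attached diff ("work_on_quiet_fs" is just a placeholder):

static int
example_snapshot(struct mount *mp)
{
	int error;

	/* Wait for a concurrent suspension instead of failing. */
	if ((error = vfs_suspend(mp, 1)) != 0)
		return error;

	/*
	 * The file system is now quiet: no thread holds a gate on it
	 * and none of its vnodes is locked.
	 */
	error = work_on_quiet_fs(mp);

	vfs_resume(mp);
	return error;
}

Long interruptible sleeps inside gated code follow the pattern used in the
fifofs changes below:

	VOP_UNLOCK(vp, 0);
	vngate_suspend();
	error = tsleep(&fip->fi_readers, PCATCH | PSOCK, "fifor", 0);
	vngate_resume();
	vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);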
** Attached is an implementation against a recent -current. It covers the
kernel without the various compat parts; a scan over all "FILE_USE()" calls
should be sufficient there. The implementation renames "vfs_write_XXX" to
"vfs_XXX" and adds the state pointer to "struct lwp". All other changes
depend on "option NEWVNGATE" to make it easy to test the new API.
vngate.diff) The new suspension code. "lookup" is made vngate-aware so all
   system calls going through "namei" are covered. The new "FILE_USE_GATED()"
   macro enters a gate if needed.
mntref.diff) Adds a reference counter to "struct mount" to defer the "free"
   until all references are gone.
debug.diff) Debug hooks for "ltsleep" and all VOPs.
vprint.diff) Adds the missing "vprintf_nolog".
--
Juergen Hannken-Illjes - hannken@eis.cs.tu-bs.de - TU Braunschweig (Germany)
Content-Disposition: attachment; filename="vngate.diff"
Index: sys/conf/files
===================================================================
RCS file: /cvsroot/src/sys/conf/files,v
retrieving revision 1.781
diff -p -u -4 -r1.781 files
--- sys/conf/files 7 Jun 2006 22:33:34 -0000 1.781
+++ sys/conf/files 13 Jun 2006 12:45:24 -0000
@@ -169,8 +169,9 @@ defparam SB_MAX
#
defflag SOFTDEP # XXX files.ufs?
defflag QUOTA # XXX files.ufs?
defflag VNODE_LOCKDEBUG
+defflag VNODE_GATEDEBUG
defflag MAGICLINKS
# buffer cache size options
#
Index: sys/sys/vnode.h
===================================================================
RCS file: /cvsroot/src/sys/sys/vnode.h,v
retrieving revision 1.154
diff -p -u -4 -r1.154 vnode.h
--- sys/sys/vnode.h 14 May 2006 21:38:18 -0000 1.154
+++ sys/sys/vnode.h 13 Jun 2006 12:45:35 -0000
@@ -274,8 +274,11 @@ extern const int vttoif_tab[];
#define V_NOWAIT 0x0002 /* don't sleep for suspend */
#define V_SLEEPONLY 0x0004 /* just return after sleep */
#define V_PCATCH 0x0008 /* sleep witch PCATCH set */
#define V_LOWER 0x0010 /* lower level operation */
+#define V_PERMANENT 0x0020 /* vngate_enter: no corresponding
+ vngate_leave */
+#define V_NOERROR 0x0040 /* vngate_enter: panic on error */
/*
* Flags to various vnode operations.
*/
@@ -497,12 +500,52 @@ struct vop_generic_args {
* Functions to gate filesystem write operations. Declared static inline
* here because they usually go into time critical code paths.
*/
#include <sys/mount.h>
+#if defined(_KERNEL_OPT)
+#include "opt_vnode_gatedebug.h"
+#endif
+
+#ifdef NEWVNGATE
+int vngate_enter(struct mount *, int);
+void vngate_leave(struct mount *);
+void vngate_leave_all(struct mount *, int);
+void vngate_sleep(struct mount *);
+void vngate_suspend(void);
+void vngate_resume(void);
+#ifdef VNODE_GATEDEBUG
+void vngate_debug_vop(const char *, struct vnode *, int);
+void vngate_debug_longsleep(const char *);
+#else
+static inline void vngate_debug_vop(const char *f, struct vnode *vp, int i) { }
+static inline void vngate_debug_longsleep(const char *msg) { }
+#endif
+
+static inline int
+vn_start_write(struct vnode *vp, struct mount **mpp, int flags)
+{
+ if (vp)
+ *mpp = vp->v_mount;
+ return 0;
+}
+static inline void vn_finished_write(struct mount *mp, int flags) { }
+
+#else /* NEWVNGATE */
+
+static inline int vngate_enter(struct mount *mp, int flags) { return 0; }
+static inline void vngate_leave(struct mount *mp) { }
+static inline void vngate_leave_all(struct mount *mp, int deallocate) { }
+static inline void vngate_sleep(struct mount *mp) { }
+static inline void vngate_suspend(void) { }
+static inline void vngate_resume(void) { }
+static inline void vngate_debug_vop(const char *f, struct vnode *vp, int i) { }
+static inline void vngate_debug_longsleep(const char *msg) { }
int vn_start_write(struct vnode *, struct mount **, int);
void vn_finished_write(struct mount *, int);
+#endif /* NEWVNGATE */
+
/*
* Finally, include the default set of vnode operations.
*/
#include <sys/vnode_if.h>
@@ -586,10 +629,11 @@ int getvnode(struct filedesc *, int, str
/* see vfssubr(9) */
void vfs_getnewfsid(struct mount *);
int vfs_drainvnodes(long target, struct lwp *);
-void vfs_write_resume(struct mount *);
-int vfs_write_suspend(struct mount *, int, int);
+void vfs_resume(struct mount *);
+int vfs_suspend(struct mount *, int);
+int vfs_suspend_start_ticks;
#ifdef DDB
void vfs_vnode_print(struct vnode *, int, void (*)(const char *, ...));
void vfs_mount_print(struct mount *, int, void (*)(const char *, ...));
#endif /* DDB */
Index: sys/sys/lwp.h
===================================================================
RCS file: /cvsroot/src/sys/sys/lwp.h,v
retrieving revision 1.37
diff -p -u -4 -r1.37 lwp.h
--- sys/sys/lwp.h 22 May 2006 13:43:54 -0000 1.37
+++ sys/sys/lwp.h 13 Jun 2006 12:45:34 -0000
@@ -73,8 +73,10 @@ struct lwp {
void *l_ctxlink; /* uc_link {get,set}context */
int l_dupfd; /* Sideways return value from cloning devices XXX */
struct sadata_vp *l_savp; /* SA "virtual processor" */
+ void *l_vngate; /* vngate state */
+
int l_locks; /* DEBUG: lockmgr count of held locks */
void *l_private; /* svr4-style lwp-private data */
#define l_endzero l_priority
Index: sys/sys/mount.h
===================================================================
RCS file: /cvsroot/src/sys/sys/mount.h,v
retrieving revision 1.141
diff -p -u -4 -r1.141 mount.h
--- sys/sys/mount.h 14 May 2006 21:38:18 -0000 1.141
+++ sys/sys/mount.h 13 Jun 2006 12:45:34 -0000
@@ -105,10 +105,15 @@ struct mount {
struct statvfs mnt_stat; /* cache of filesystem stats */
void *mnt_data; /* private data */
int mnt_wcnt; /* count of vfs_busy waiters */
struct lwp *mnt_unmounter; /* who is unmounting */
+#ifdef NEWVNGATE
+ int mnt_vngate_count; /* active vngate counter */
+ int mnt_refcount; /* #references to this struct */
+#else
int mnt_writeopcountupper; /* upper writeops in progress */
int mnt_writeopcountlower; /* lower writeops in progress */
+#endif
struct simplelock mnt_slock; /* mutex for wcnt and
writeops counters */
struct mount *mnt_leaf; /* leaf fs we mounted on */
};
Index: sys/sys/file.h
===================================================================
RCS file: /cvsroot/src/sys/sys/file.h,v
retrieving revision 1.56
diff -p -u -4 -r1.56 file.h
--- sys/sys/file.h 14 May 2006 21:38:18 -0000 1.56
+++ sys/sys/file.h 13 Jun 2006 12:45:34 -0000
@@ -118,16 +118,27 @@ do { \
/*
* FILE_USE() must be called with the file lock held.
* (Typical usage is: `fp = fd_getfile(..); FILE_USE(fp);'
* and fd_getfile() returns the file locked)
+ * FILE_USE_GATED(fp) takes a permanent vngate on vnode.
*/
#define FILE_USE(fp) \
do { \
(fp)->f_usecount++; \
FILE_USE_CHECK((fp), "f_usecount overflow"); \
simple_unlock(&(fp)->f_slock); \
} while (/* CONSTCOND */ 0)
+#define FILE_USE_GATED(fp) \
+do { \
+ (fp)->f_usecount++; \
+ FILE_USE_CHECK((fp), "f_usecount overflow"); \
+ simple_unlock(&(fp)->f_slock); \
+ if ((fp)->f_type == DTYPE_VNODE) \
+ vngate_enter(((struct vnode *)((fp)->f_data))->v_mount, \
+ V_WAIT|V_PERMANENT|V_NOERROR); \
+} while (/* CONSTCOND */ 0)
+
#define FILE_UNUSE_WLOCK(fp, l, havelock) \
do { \
if (!(havelock)) \
simple_lock(&(fp)->f_slock); \
Index: sys/sys/userret.h
===================================================================
RCS file: /cvsroot/src/sys/sys/userret.h,v
retrieving revision 1.9
diff -p -u -4 -r1.9 userret.h
--- sys/sys/userret.h 11 Mar 2006 13:53:41 -0000 1.9
+++ sys/sys/userret.h 13 Jun 2006 12:45:34 -0000
@@ -71,8 +71,10 @@
#ifndef _SYS_USERRET_H_
#define _SYS_USERRET_H_
+#include <sys/vnode.h>
+
/*
* Define the MI code needed before returning to user mode, for
* trap and syscall.
* XXX The following port doesn't use this yet:
@@ -83,8 +85,11 @@ mi_userret(struct lwp *l)
{
struct proc *p = l->l_proc;
int sig;
+ /* Leave all VNGATES collected so far. */
+ vngate_leave_all(NULL, 0);
+
/* Generate UNBLOCKED upcall. */
if (l->l_flag & L_SA_BLOCKING)
sa_unblock_userret(l);
Index: sys/arch/vax/include/userret.h
===================================================================
RCS file: /cvsroot/src/sys/arch/vax/include/userret.h,v
retrieving revision 1.1
diff -p -u -4 -r1.1 userret.h
--- sys/arch/vax/include/userret.h 12 Mar 2006 02:04:26 -0000 1.1
+++ sys/arch/vax/include/userret.h 13 Jun 2006 12:45:22 -0000
@@ -29,8 +29,10 @@
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
* THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/
+#include <sys/vnode.h>
+
static __inline void
userret(struct lwp *, struct trapframe *, u_quad_t);
/*
@@ -42,8 +44,11 @@ userret(struct lwp *l, struct trapframe
{
int sig;
struct proc *p = l->l_proc;
+ /* Leave all VNGATES collected so far. */
+ vngate_leave_all(NULL, 0);
+
/* Generate UNBLOCKED upcall. */
if (l->l_flag & L_SA_BLOCKING)
sa_unblock_userret(l);
Index: sys/kern/vfs_vnops.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_vnops.c,v
retrieving revision 1.112
diff -p -u -4 -r1.112 vfs_vnops.c
--- sys/kern/vfs_vnops.c 27 May 2006 23:46:49 -0000 1.112
+++ sys/kern/vfs_vnops.c 13 Jun 2006 12:45:28 -0000
@@ -39,8 +39,9 @@
#include <sys/cdefs.h>
__KERNEL_RCSID(0, "$NetBSD: vfs_vnops.c,v 1.112 2006/05/27 23:46:49 simonb Exp $");
#include "opt_verified_exec.h"
+#include "opt_ddb.h"
#include "fs_union.h"
#include <sys/param.h>
@@ -63,8 +64,17 @@ __KERNEL_RCSID(0, "$NetBSD: vfs_vnops.c,
#include <uvm/uvm_extern.h>
#include <uvm/uvm_readahead.h>
+#include <machine/stdarg.h>
+
+#ifdef DDB
+#include <ddb/ddbvar.h>
+#include <machine/db_machdep.h>
+#include <ddb/db_command.h>
+#include <ddb/db_interface.h>
+#endif /* DDB */
+
#ifdef UNION
#include <fs/union/union.h>
#endif
@@ -687,10 +697,14 @@ vn_ioctl(struct file *fp, u_long com, vo
*/
static int
vn_poll(struct file *fp, int events, struct lwp *l)
{
+ struct vnode *vp = (struct vnode *)fp->f_data;
- return (VOP_POLL(((struct vnode *)fp->f_data), events, l));
+ if (vngate_enter(vp->v_mount, V_NOWAIT|V_PERMANENT) != 0)
+ return 0;
+
+ return (VOP_POLL(vp, events, l));
}
/*
* File table vnode kqfilter routine.
@@ -937,8 +951,471 @@ vn_extattr_rm(struct vnode *vp, int iofl
return (error);
}
+#ifdef NEWVNGATE
+
+struct vngate_tag {
+ SLIST_ENTRY(vngate_tag) vgt_list;
+ struct mount *vgt_mount;
+ uint8_t vgt_count; /* # gates taken on this mount */
+ uint8_t vgt_perm_count; /* # of permanent gates */
+};
+struct vngate_state {
+ uint8_t vgs_vop_level; /* VOP_XXX: recursion counter */
+ uint8_t vgs_save_level; /* vngate_suspend: recursion counter */
+ uint8_t vgs_enter_count; /* # of calls to vngate_enter */
+ SLIST_HEAD(, vngate_tag) vgs_head;
+};
+
+POOL_INIT(vng_tag_pl, sizeof(struct vngate_tag), 0, 0, 0, "vngtag", NULL);
+POOL_INIT(vng_state_pl, sizeof(struct vngate_state), 0, 0, 0, "vngstate", NULL);
+
+#ifdef VNODE_GATEDEBUG
+static void
+vngate_debug_print(struct vnode *vp, const char *fmt, ...)
+{
+ va_list ap;
+
+ if (curlwp)
+ printf_nolog("vngate: proc %d.%d(%s): ", curlwp->l_proc->p_pid,
+ curlwp->l_lid, curlwp->l_proc->p_comm);
+ else
+ printf_nolog("vngate: proc NULL: ");
+
+ va_start(ap, fmt);
+ vprintf_nolog(fmt, ap);
+ va_end(ap);
+
+ printf_nolog("\n");
+#ifdef DDB
+ if (vp != NULL)
+ vfs_vnode_print(vp, 0, printf_nolog);
+ db_stack_trace_print((db_expr_t)__builtin_frame_address(0),
+ TRUE, 65535, "", printf_nolog);
+#endif /* DDB */
+}
+
+void
+vngate_debug_vop(const char *func, struct vnode *vp, int level)
+{
+ struct lwp *l = curlwp;
+ struct proc *p = l->l_proc;
+ struct mount *mp, *mp2;
+ struct vngate_state *vnstate;
+ struct vngate_tag *t;
+ const char *state;
+
+ if (doing_shutdown)
+ return;
+
+ if ((vnstate = l->l_vngate) == NULL) {
+ vnstate = l->l_vngate = pool_get(&vng_state_pl, PR_NOWAIT);
+ if (vnstate == NULL) {
+ printf("vngate_debug_vop: cannot alloc state");
+ return;
+ }
+ memset(vnstate, 0, sizeof(*vnstate));
+ SLIST_INIT(&vnstate->vgs_head);
+ }
+
+ vnstate->vgs_vop_level += level;
+ if (level < 0 || vnstate->vgs_vop_level > level)
+ return;
+
+ if ((mp = vp->v_mount) == NULL)
+ return;
+ mp = mp->mnt_leaf;
+
+ if (vp->v_type == VBLK && vp->v_specinfo != NULL &&
+ vp->v_specmountpoint != NULL)
+ mp2 = vp->v_specmountpoint->mnt_leaf;
+ else
+ mp2 = NULL;
+
+ if (p == &proc0)
+ return;
+
+ if (strcmp(p->p_comm, "ioflush") == 0 &&
+ (mp->mnt_iflag & IMNT_SUSPENDED) == 0)
+ return;
+
+ if (strcmp(p->p_comm, "pagedaemon") == 0 &&
+ strcmp(func, "VOP_BWRITE") == 0 &&
+ (mp->mnt_iflag & IMNT_SUSPENDED) == 0)
+ return;
+
+ SLIST_FOREACH(t, &vnstate->vgs_head, vgt_list)
+ if ((t->vgt_mount == mp || t->vgt_mount == mp2) &&
+ t->vgt_count > 0)
+ return;
+
+ switch (mp->mnt_iflag & (IMNT_SUSPEND|IMNT_SUSPENDED)) {
+ case 0:
+ state = "";
+ break;
+ case IMNT_SUSPEND:
+ state = "suspending ";
+ break;
+ case IMNT_SUSPEND|IMNT_SUSPENDED:
+ state = "suspended ";
+ break;
+ default:
+ state = "illegal ";
+ break;
+ }
+
+ vngate_debug_print(vp, "called %s on %s(%s)",
+ func, state, mp->mnt_stat.f_mntonname);
+}
+
+void
+vngate_debug_longsleep(const char *wmesg)
+{
+ struct vngate_state *vnstate;
+ struct vngate_tag *t;
+ char buf[128], *cp;
+
+ cp = buf;
+ if ((vnstate = curlwp->l_vngate) != NULL) {
+ if (vnstate->vgs_save_level != 0)
+ return;
+ SLIST_FOREACH(t, &vnstate->vgs_head, vgt_list)
+ if (t->vgt_count != 0)
+ cp += snprintf(cp, (int)(buf+sizeof(buf)-cp),
+ " (%s):%d",
+ t->vgt_mount->mnt_stat.f_mntonname,
+ t->vgt_count);
+ }
+
+ if (cp != buf)
+ vngate_debug_print(NULL, "sleeps on (%s) with%s", wmesg, buf);
+}
+#endif /* VNODE_GATEDEBUG */
+
+/*
+ * Update a vngate by an amount.
+ * If do_update, the tag is also updated.
+ */
+static inline void
+vngate_tag_update(struct vngate_tag *t, int amount, int do_update)
+{
+ struct mount *mp = t->vgt_mount;
+
+ simple_lock(&mp->mnt_slock);
+#ifdef VNODE_GATEDEBUG
+ if (do_update &&
+ (t->vgt_count+amount < 0 || t->vgt_count+amount >= UINT8_MAX))
+ vngate_debug_print(NULL, "vgt_count %d out of range",
+ t->vgt_count+amount);
+ if (mp->mnt_vngate_count+amount < 0)
+ vngate_debug_print(NULL, "mnt_vngate_count %d negative",
+ mp->mnt_vngate_count+amount);
+#endif /* VNODE_GATEDEBUG */
+ if (do_update)
+ t->vgt_count += amount;
+ mp->mnt_vngate_count += amount;
+ if ((mp->mnt_iflag & IMNT_SUSPEND) != 0 &&
+ mp->mnt_vngate_count == 0)
+ wakeup(&mp->mnt_vngate_count);
+ simple_unlock(&mp->mnt_slock);
+}
+
+/*
+ * Free a vngate.
+ */
+static inline void
+vngate_tag_destroy(struct vngate_tag *t)
+{
+ struct vngate_state *vnstate = curlwp->l_vngate;
+ struct mount *mp = t->vgt_mount;
+
+#ifdef VNODE_GATEDEBUG
+ if (t->vgt_count != 0 || t->vgt_perm_count != 0)
+ vngate_debug_print(NULL, "destroy with count %d/%d",
+ t->vgt_perm_count, t->vgt_count);
+#endif /* VNODE_GATEDEBUG */
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
+ SLIST_REMOVE(&vnstate->vgs_head, t, vngate_tag, vgt_list);
+ pool_put(&vng_tag_pl, t);
+}
+
+/*
+ * Enter a vngate.
+ *
+ * The current thread will sleep if this is the first access to a
+ * file system suspending or suspended.
+ *
+ * The current thread will be delayed if this is the first access and
+ * a suspension is in progress.
+ */
+int
+vngate_enter(struct mount *mp, int flags)
+{
+ int error, pflag, timo;
+ struct lwp *l = curlwp;
+ struct vngate_state *vnstate;
+ struct vngate_tag *tp;
+
+ if (mp == NULL)
+ return 0;
+
+ mp = mp->mnt_leaf;
+
+ error = 0;
+ pflag = ((flags & V_WAIT) ? PR_WAITOK : PR_NOWAIT);
+
+ vnstate = l->l_vngate;
+ if (vnstate == NULL) {
+ if ((l->l_vngate = pool_get(&vng_state_pl, pflag)) == NULL) {
+ error = EWOULDBLOCK;
+ goto done;
+ }
+ vnstate = l->l_vngate;
+ memset(vnstate, 0, sizeof(*vnstate));
+ SLIST_INIT(&vnstate->vgs_head);
+ }
+
+ if (vnstate->vgs_save_level != 0)
+ goto done;
+ if (vfs_suspend_start_ticks != 0 && vnstate->vgs_enter_count == 0 &&
+ (flags & V_WAIT) != 0 && (l->l_proc->p_flag & P_SYSTEM) == 0 &&
+ (mp->mnt_iflag & IMNT_SUSPEND) == 0) {
+ timo = (hardclock_ticks-vfs_suspend_start_ticks)/50;
+ if (timo <= 0)
+ timo = 1;
+ tsleep(&vfs_suspend_start_ticks, PUSER-1, "suspofs", timo);
+ }
+
+ SLIST_FOREACH(tp, &vnstate->vgs_head, vgt_list)
+ if (tp->vgt_mount == mp)
+ break;
+
+ if ((flags & V_PERMANENT) && tp != NULL && tp->vgt_perm_count != 0)
+ goto done;
+
+ while ((mp->mnt_iflag & IMNT_SUSPEND) != 0 &&
+ (tp == NULL || tp->vgt_count == 0)) {
+ if ((flags & V_WAIT) == 0)
+ error = EWOULDBLOCK;
+ else
+ error = tsleep(&mp->mnt_flag, PUSER-1, "suspfs", 0);
+ if (error)
+ goto done;
+ }
+
+ if (vnstate->vgs_enter_count < UINT8_MAX)
+ vnstate->vgs_enter_count++;
+
+ if (tp != NULL) {
+ vngate_tag_update(tp, 1, 1);
+ if (flags & V_PERMANENT)
+ tp->vgt_perm_count++;
+ goto done;
+ }
+
+ if ((tp = pool_get(&vng_tag_pl, pflag)) == NULL) {
+ error = EWOULDBLOCK;
+ goto done;
+ }
+
+ tp->vgt_mount = mp;
+ tp->vgt_count = 0;
+ tp->vgt_perm_count = 0;
+ simple_lock(&mp->mnt_slock);
+ MNT_REF(mp);
+ simple_unlock(&mp->mnt_slock);
+ vngate_tag_update(tp, 1, 1);
+ SLIST_INSERT_HEAD(&vnstate->vgs_head, tp, vgt_list);
+ if (flags & V_PERMANENT)
+ tp->vgt_perm_count++;
+
+done:
+ if (error && (flags & V_NOERROR) == V_NOERROR)
+ printf("vngate_enter: error %d", error);
+ return error;
+}
+
+/*
+ * Leave a vngate taken from vngate_enter().
+ */
+void
+vngate_leave(struct mount *mp)
+{
+ struct lwp *l = curlwp;
+ struct vngate_state *vnstate;
+ struct vngate_tag *t;
+
+ if (mp == NULL)
+ return;
+
+ mp = mp->mnt_leaf;
+
+ vnstate = l->l_vngate;
+ if (vnstate == NULL) {
+#ifdef VNODE_GATEDEBUG
+ vngate_debug_print(NULL, "tag not found on leave");
+#endif /* VNODE_GATEDEBUG */
+ return;
+ }
+
+ if (vnstate->vgs_save_level != 0)
+ return;
+
+ SLIST_FOREACH(t, &vnstate->vgs_head, vgt_list)
+ if (t->vgt_mount == mp) {
+ vngate_tag_update(t, -1, 1);
+#ifdef VNODE_GATEDEBUG
+ if (t->vgt_count < t->vgt_perm_count)
+ vngate_debug_print(NULL, "perm %d > %d",
+ t->vgt_perm_count, t->vgt_count);
+#endif /* VNODE_GATEDEBUG */
+ return;
+ }
+
+#ifdef VNODE_GATEDEBUG
+ vngate_debug_print(NULL, "tag not found on leave");
+#endif /* VNODE_GATEDEBUG */
+}
+
+/*
+ * Leave all vngates taken so far.
+ *
+ * If mp != NULL, leave only vngates for this mount.
+ *
+ * If deallocate, must deallocate the gates.
+ */
+void
+vngate_leave_all(struct mount *mp, int deallocate)
+{
+ int n;
+ struct lwp *l = curlwp;
+ struct vngate_state *vnstate;
+ struct vngate_tag *t;
+
+ if ((vnstate = l->l_vngate) == NULL)
+ return;
+
+#ifdef VNODE_GATEDEBUG
+ if (vnstate->vgs_save_level != 0)
+ vngate_debug_print(NULL, "leave_all with save level %d",
+ vnstate->vgs_save_level);
+#endif /* VNODE_GATEDEBUG */
+
+ if (mp != NULL) {
+ mp = mp->mnt_leaf;
+ SLIST_FOREACH(t, &vnstate->vgs_head, vgt_list) {
+ if (t->vgt_mount != mp)
+ continue;
+ vngate_tag_update(t, -t->vgt_count, 1);
+ t->vgt_perm_count = 0;
+ if ((t->vgt_mount->mnt_iflag & IMNT_GONE) == IMNT_GONE)
+ deallocate = 1;
+ if (deallocate)
+ vngate_tag_destroy(t);
+ break;
+ }
+ return;
+ }
+
+ vnstate->vgs_enter_count = 0;
+
+ n = 0;
+ SLIST_FOREACH(t, &vnstate->vgs_head, vgt_list) {
+#ifdef VNODE_GATEDEBUG
+ if (t->vgt_count != t->vgt_perm_count)
+ vngate_debug_print(NULL, "perm %d != %d",
+ t->vgt_perm_count, t->vgt_count);
+#endif /* VNODE_GATEDEBUG */
+ vngate_tag_update(t, -t->vgt_count, 1);
+ t->vgt_perm_count = 0;
+ if ((t->vgt_mount->mnt_iflag & IMNT_GONE) == IMNT_GONE)
+ deallocate = 1;
+ n++;
+ }
+
+ /*
+ * If we have collected too many gates,
+ * free them to speed up list searching.
+ */
+ if (!deallocate && n < 8)
+ return;
+
+ while ((t = SLIST_FIRST(&vnstate->vgs_head)) != NULL)
+ vngate_tag_destroy(t);
+ l->l_vngate = NULL;
+ pool_put(&vng_state_pl, vnstate);
+}
+
+/*
+ * Sleep while this file system is suspending or suspended.
+ */
+void
+vngate_sleep(struct mount *mp)
+{
+ while ((mp->mnt_iflag & IMNT_SUSPEND) != 0)
+ tsleep(&mp->mnt_flag, PUSER-1, "suspfs", 0);
+}
+
+/*
+ * Temporarily suspend all vngates.
+ */
+void
+vngate_suspend(void)
+{
+ struct vngate_state *vnstate;
+ struct vngate_tag *t;
+
+ vnstate = curlwp->l_vngate;
+ if (vnstate == NULL) {
+ vnstate = curlwp->l_vngate = pool_get(&vng_state_pl, PR_WAITOK);
+ memset(vnstate, 0, sizeof(*vnstate));
+ SLIST_INIT(&vnstate->vgs_head);
+ }
+
+ if (vnstate->vgs_save_level++ != 0)
+ return;
+
+ SLIST_FOREACH(t, &vnstate->vgs_head, vgt_list)
+ vngate_tag_update(t, -t->vgt_count, 0);
+}
+
+/*
+ * Resume all vngates.
+ */
+void
+vngate_resume(void)
+{
+ struct mount *mp;
+ struct vngate_state *vnstate;
+ struct vngate_tag *t;
+
+ vnstate = curlwp->l_vngate;
+ if (vnstate == NULL) {
+#ifdef VNODE_GATEDEBUG
+ vngate_debug_print(NULL, "resume without state");
+#endif /* VNODE_GATEDEBUG */
+ return;
+ }
+
+#ifdef VNODE_GATEDEBUG
+ if (vnstate->vgs_save_level == 0)
+ vngate_debug_print(NULL, "resume on level 0");
+#endif /* VNODE_GATEDEBUG */
+
+ if (--vnstate->vgs_save_level != 0)
+ return;
+
+ SLIST_FOREACH(t, &vnstate->vgs_head, vgt_list) {
+ mp = t->vgt_mount;
+ while ((mp->mnt_iflag & (IMNT_SUSPEND|IMNT_SUSPENDED)) != 0)
+ tsleep(&mp->mnt_flag, PUSER - 1, "suspfs", 0);
+ vngate_tag_update(t, t->vgt_count, 0);
+ }
+}
+
+#else /* NEWVNGATE */
/*
* Preparing to start a filesystem write operation. If the operation is
* permitted, then we bump the count of operations in progress and
* proceed. If a suspend request is in progress, we wait until the
@@ -1027,8 +1504,10 @@ vn_finished_write(struct mount *mp, int
}
simple_unlock(&mp->mnt_slock);
}
+#endif /* NEWVNGATE */
+
void
vn_ra_allocctx(struct vnode *vp)
{
struct uvm_ractx *ra = NULL;
Index: sys/kern/vfs_subr.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_subr.c,v
retrieving revision 1.266
diff -p -u -4 -r1.266 vfs_subr.c
--- sys/kern/vfs_subr.c 14 May 2006 21:15:12 -0000 1.266
+++ sys/kern/vfs_subr.c 13 Jun 2006 12:45:28 -0000
@@ -170,8 +170,19 @@ POOL_INIT(vnode_pool, sizeof(struct vnod
&pool_allocator_nointr);
MALLOC_DEFINE(M_VNODE, "vnodes", "Dynamically allocated vnodes");
+#ifdef NEWVNGATE
+/*
+ * File system suspension state
+ */
+int vfs_suspend_start_ticks = 0; /* hardclock_ticks when the suspension
+ started. Zero if no suspension in
+ progress. */
+static struct lock vfs_suspend_lock = /* Serialize suspensions. */
+ LOCK_INITIALIZER(PUSER, "suspwt", 0, 0);
+#endif
+
/*
* Local declarations.
*/
static void insmntque(struct vnode *, struct mount *);
@@ -2435,20 +2447,72 @@ vfs_reinit(void)
}
}
/*
- * Request a filesystem to suspend write operations.
+ * Request a filesystem to suspend.
+ *
+ * If wait == 0 and there is a suspension in progress, return error.
*/
int
-vfs_write_suspend(struct mount *mp, int slpflag, int slptimeo)
+vfs_suspend(struct mount *mp, int wait)
{
+#ifdef NEWVNGATE
+ int error, flags;
+ struct lwp *l = curlwp;
+
+ flags = LK_EXCLUSIVE;
+ if (!wait)
+ flags |= LK_NOWAIT;
+ if (lockmgr(&vfs_suspend_lock, flags, NULL) != 0)
+ return EWOULDBLOCK;
+
+ vngate_suspend();
+ mp->mnt_iflag |= IMNT_SUSPEND;
+ vfs_suspend_start_ticks = hardclock_ticks;
+ if (vfs_suspend_start_ticks == 0)
+ vfs_suspend_start_ticks -= 1;
+
+ simple_lock(&mp->mnt_slock);
+ while (mp->mnt_vngate_count > 0)
+ ltsleep(&mp->mnt_vngate_count, PUSER-1, "suspwtcnt",
+ 0, &mp->mnt_slock);
+ simple_unlock(&mp->mnt_slock);
+
+ error = VFS_SYNC(mp, MNT_WAIT, l->l_proc->p_cred, l);
+ if (error) {
+ vfs_resume(mp);
+ return error;
+ }
+
+ vfs_suspend_start_ticks = 0;
+ mp->mnt_iflag |= IMNT_SUSPENDED;
+ wakeup(&vfs_suspend_start_ticks);
+
+#ifdef VNODE_GATEDEBUG
+ simple_lock(&mountlist_slock);
+ if (vfs_busy(mp, LK_NOWAIT, &mountlist_slock))
+ printf("vfs_suspend: %s: busy\n", mp->mnt_stat.f_mntonname);
+ else {
+ struct vnode *vp;
+ LIST_FOREACH(vp, &mp->mnt_vnodelist, v_mntvnodes) {
+ if (VOP_ISLOCKED(vp))
+ vprint("vfs_suspend: locked vnode", vp);
+ }
+ simple_lock(&mountlist_slock);
+ vfs_unbusy(mp);
+ }
+ simple_unlock(&mountlist_slock);
+#endif /* VNODE_GATEDEBUG */
+
+ return 0;
+#else
struct lwp *l = curlwp; /* XXX */
int error;
while ((mp->mnt_iflag & IMNT_SUSPEND)) {
- if (slptimeo < 0)
+ if (wait == 0)
return EWOULDBLOCK;
- error = tsleep(&mp->mnt_flag, slpflag, "suspwt1", slptimeo);
+ error = tsleep(&mp->mnt_flag, PUSER, "suspwt1", 0);
if (error)
return error;
}
mp->mnt_iflag |= IMNT_SUSPEND;
@@ -2460,9 +2524,9 @@ vfs_write_suspend(struct mount *mp, int
simple_unlock(&mp->mnt_slock);
error = VFS_SYNC(mp, MNT_WAIT, l->l_proc->p_cred, l);
if (error) {
- vfs_write_resume(mp);
+ vfs_resume(mp);
return error;
}
mp->mnt_iflag |= IMNT_SUSPENDLOW;
@@ -2473,21 +2537,32 @@ vfs_write_suspend(struct mount *mp, int
mp->mnt_iflag |= IMNT_SUSPENDED;
simple_unlock(&mp->mnt_slock);
return 0;
+#endif
}
/*
- * Request a filesystem to resume write operations.
+ * Request a filesystem to resume from suspension.
*/
void
-vfs_write_resume(struct mount *mp)
+vfs_resume(struct mount *mp)
{
+#ifdef NEWVNGATE
+ KASSERT((mp->mnt_iflag & IMNT_SUSPEND) != 0);
+ mp->mnt_iflag &= ~(IMNT_SUSPEND|IMNT_SUSPENDED);
+ wakeup(&mp->mnt_flag);
+
+ lockmgr(&vfs_suspend_lock, LK_RELEASE, NULL);
+
+ vngate_resume();
+#else
if ((mp->mnt_iflag & IMNT_SUSPEND) == 0)
return;
mp->mnt_iflag &= ~(IMNT_SUSPEND | IMNT_SUSPENDLOW | IMNT_SUSPENDED);
wakeup(&mp->mnt_flag);
+#endif
}
void
copy_statvfs_info(struct statvfs *sbp, const struct mount *mp)
@@ -2682,10 +2757,16 @@ vfs_mount_print(struct mount *mp, int fu
if (mp->mnt_unmounter) {
(*pr)("unmounter pid = %d ",mp->mnt_unmounter->l_proc);
}
+#ifdef NEWVNGATE
+ (*pr)("wcnt = %d, vngate_count = %d, refcount = %d\n",
+ mp->mnt_wcnt, mp->mnt_vngate_count, mp->mnt_refcount);
+#else
(*pr)("wcnt = %d, writeopcountupper = %d, writeopcountupper = %d\n",
- mp->mnt_wcnt,mp->mnt_writeopcountupper,mp->mnt_writeopcountlower);
+ mp->mnt_wcnt, mp->mnt_writeopcountupper,
+ mp->mnt_writeopcountlower);
+#endif
(*pr)("statvfs cache:\n");
(*pr)("\tbsize = %lu\n",mp->mnt_stat.f_bsize);
(*pr)("\tfrsize = %lu\n",mp->mnt_stat.f_frsize);
Index: sys/dev/fss.c
===================================================================
RCS file: /cvsroot/src/sys/dev/fss.c,v
retrieving revision 1.26
diff -p -u -4 -r1.26 fss.c
--- sys/dev/fss.c 14 May 2006 21:42:26 -0000 1.26
+++ sys/dev/fss.c 13 Jun 2006 12:45:24 -0000
@@ -739,9 +739,9 @@ fss_create_snapshot(struct fss_softc *sc
/*
* Activate the snapshot.
*/
- if ((error = vfs_write_suspend(sc->sc_mount, PUSER|PCATCH, 0)) != 0)
+ if ((error = vfs_suspend(sc->sc_mount, 1)) != 0)
goto bad;
microtime(&sc->sc_time);
@@ -750,9 +750,9 @@ fss_create_snapshot(struct fss_softc *sc
fss_copy_on_write, sc);
if (error == 0)
sc->sc_flags |= FSS_ACTIVE;
- vfs_write_resume(sc->sc_mount);
+ vfs_resume(sc->sc_mount);
if (error != 0)
goto bad;
@@ -946,8 +946,9 @@ fss_bs_io(struct fss_softc *sc, fss_io_t
int error;
off += FSS_CLTOB(sc, cl);
+ vngate_enter(sc->sc_bs_vp->v_mount, V_WAIT|V_NOERROR);
vn_lock(sc->sc_bs_vp, LK_EXCLUSIVE|LK_RETRY);
error = vn_rdwr((rw == FSS_READ ? UIO_READ : UIO_WRITE), sc->sc_bs_vp,
data, len, off, UIO_SYSSPACE, IO_UNIT|IO_NODELOCKED,
@@ -958,8 +959,9 @@ fss_bs_io(struct fss_softc *sc, fss_io_t
round_page(off+len), PGO_CLEANIT|PGO_SYNCIO|PGO_FREE);
}
VOP_UNLOCK(sc->sc_bs_vp, 0);
+ vngate_leave(sc->sc_bs_vp->v_mount);
return error;
}
@@ -1083,9 +1085,11 @@ fss_bs_thread(void *arg)
if (error) {
bp->b_error = error;
bp->b_flags |= B_ERROR;
bp->b_resid = bp->b_bcount;
- }
+ } else
+ bp->b_resid = 0;
+
biodone(bp);
continue;
}
Index: sys/dev/vnd.c
===================================================================
RCS file: /cvsroot/src/sys/dev/vnd.c,v
retrieving revision 1.147
diff -p -u -4 -r1.147 vnd.c
--- sys/dev/vnd.c 14 May 2006 21:42:26 -0000 1.147
+++ sys/dev/vnd.c 13 Jun 2006 12:45:24 -0000
@@ -563,8 +563,9 @@ vndthread(void *arg)
obp->b_error = ENXIO;
obp->b_flags |= B_ERROR;
goto done;
}
+ vngate_enter(vnd->sc_vp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
#ifdef VND_COMPRESSION
/* handle a compressed read */
if ((flags & B_READ) != 0 && (vnd->sc_flags & VNF_COMP)) {
compstrategy(obp, bn);
@@ -681,11 +682,13 @@ vndthread(void *arg)
if ((flags & B_READ) == 0)
vn_finished_write(mp, 0);
+ vngate_leave_all(NULL, 0);
s = splbio();
continue;
done:
+ vngate_leave_all(NULL, 0);
biodone(obp);
s = splbio();
}
Index: sys/kern/kern_descrip.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_descrip.c,v
retrieving revision 1.143
diff -p -u -4 -r1.143 kern_descrip.c
--- sys/kern/kern_descrip.c 14 May 2006 21:15:11 -0000 1.143
+++ sys/kern/kern_descrip.c 13 Jun 2006 12:45:26 -0000
@@ -480,9 +480,9 @@ sys_fcntl(struct lwp *l, void *v, regist
restart:
if ((fp = fd_getfile(fdp, fd)) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
if ((cmd & F_FSCTL)) {
error = fcntl_forfs(fd, l, cmd, SCARG(uap, arg));
goto out;
@@ -738,9 +738,9 @@ sys___fstat30(struct lwp *l, void *v, re
if ((fp = fd_getfile(fdp, fd)) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
error = (*fp->f_ops->fo_stat)(fp, &ub, l);
FILE_UNUSE(fp, l);
if (error == 0)
@@ -774,9 +774,9 @@ sys_fpathconf(struct lwp *l, void *v, re
if ((fp = fd_getfile(fdp, fd)) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
switch (fp->f_type) {
case DTYPE_SOCKET:
@@ -1098,11 +1098,15 @@ cwdfree(struct cwdinfo *cwdi)
simple_unlock(&cwdi->cwdi_slock);
if (n > 0)
return;
+ vngate_enter(cwdi->cwdi_cdir->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
vrele(cwdi->cwdi_cdir);
- if (cwdi->cwdi_rdir)
+ if (cwdi->cwdi_rdir) {
+ vngate_enter(cwdi->cwdi_rdir->v_mount,
+ V_WAIT|V_PERMANENT|V_NOERROR);
vrele(cwdi->cwdi_rdir);
+ }
pool_put(&cwdi_pool, cwdi);
}
/*
@@ -1362,8 +1366,13 @@ closef(struct file *fp, struct lwp *l)
if (fp == NULL)
return (0);
+ if (fp->f_type == DTYPE_VNODE) {
+ vp = (struct vnode *)fp->f_data;
+ vngate_enter(vp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
+ }
+
/*
* POSIX record locking dictates that any close releases ALL
* locks owned by this process. This is handled by setting
* a flag in the unlock to free ONLY locks obeying POSIX
@@ -1501,9 +1510,9 @@ sys_flock(struct lwp *l, void *v, regist
if ((fp = fd_getfile(fdp, fd)) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
if (fp->f_type != DTYPE_VNODE) {
error = EOPNOTSUPP;
goto out;
@@ -1558,9 +1567,9 @@ sys_posix_fadvise(struct lwp *l, void *v
if (fp == NULL) {
error = EBADF;
goto out;
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
if (fp->f_type != DTYPE_VNODE) {
if (fp->f_type == DTYPE_PIPE || fp->f_type == DTYPE_SOCKET) {
error = ESPIPE;
Index: sys/kern/kern_event.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_event.c,v
retrieving revision 1.28
diff -p -u -4 -r1.28 kern_event.c
--- sys/kern/kern_event.c 7 Jun 2006 22:33:39 -0000 1.28
+++ sys/kern/kern_event.c 13 Jun 2006 12:45:26 -0000
@@ -49,8 +49,9 @@ __KERNEL_RCSID(0, "$NetBSD: kern_event.c
#include <sys/socket.h>
#include <sys/socketvar.h>
#include <sys/stat.h>
#include <sys/uio.h>
+#include <sys/vnode.h>
#include <sys/mount.h>
#include <sys/filedesc.h>
#include <sys/sa.h>
#include <sys/syscallargs.h>
@@ -779,9 +780,9 @@ kqueue_register(struct kqueue *kq, struc
if (kfilter->filtops->f_isfd) {
/* monitoring a file descriptor */
if ((fp = fd_getfile(fdp, kev->ident)) == NULL)
return (EBADF); /* validate descriptor */
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
if (kev->ident < fdp->fd_knlistsize) {
SLIST_FOREACH(kn, &fdp->fd_knlist[kev->ident], kn_link)
if (kq == kn->kn_kq &&
Index: sys/kern/kern_exit.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_exit.c,v
retrieving revision 1.156
diff -p -u -4 -r1.156 kern_exit.c
--- sys/kern/kern_exit.c 14 May 2006 21:15:11 -0000 1.156
+++ sys/kern/kern_exit.c 13 Jun 2006 12:45:27 -0000
@@ -291,9 +291,11 @@ exit1(struct lwp *l, int rv)
tp->t_session = NULL;
TTY_UNLOCK(tp);
splx(s);
SESSRELE(sp);
+ vngate_suspend();
(void) ttywait(tp);
+ vngate_resume();
/*
* The tty could have been revoked
* if we blocked.
*/
@@ -351,8 +353,13 @@ exit1(struct lwp *l, int rv)
#ifndef __NO_CPU_LWP_FREE
cpu_lwp_free(l, 1);
#endif
+ /*
+ * Leave and destroy vngates collected so far.
+ */
+ vngate_leave_all(NULL, 1);
+
pmap_deactivate(l);
/*
* NOTE: WE ARE NO LONGER ALLOWED TO SLEEP!
@@ -812,8 +819,12 @@ proc_free(struct proc *p)
wakeup(parent);
return;
}
+ if (p->p_textvp)
+ vngate_enter(p->p_textvp->v_mount,
+ V_WAIT|V_PERMANENT|V_NOERROR);
+
scheduler_wait_hook(parent, p);
p->p_xstat = 0;
ruadd(&parent->p_stats->p_cru, p->p_ru);
Index: sys/kern/kern_ktrace.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_ktrace.c,v
retrieving revision 1.104
diff -p -u -4 -r1.104 kern_ktrace.c
--- sys/kern/kern_ktrace.c 7 Jun 2006 22:33:39 -0000 1.104
+++ sys/kern/kern_ktrace.c 13 Jun 2006 12:45:27 -0000
@@ -934,9 +934,9 @@ sys_fktrace(struct lwp *l, void *v, regi
fdp = curp->p_fd;
if ((fp = fd_getfile(fdp, SCARG(uap, fd))) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
if ((fp->f_flag & FWRITE) == 0)
error = EBADF;
else
@@ -1145,9 +1145,9 @@ next:
auio.uio_iovcnt < sizeof(aiov) / sizeof(aiov[0]) - 1);
again:
simple_lock(&fp->f_slock);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
error = (*fp->f_ops->fo_write)(fp, &fp->f_offset, &auio,
fp->f_cred, FOF_UPDATE_OFFSET);
FILE_UNUSE(fp, NULL);
switch (error) {
Index: sys/kern/sys_generic.c
===================================================================
RCS file: /cvsroot/src/sys/kern/sys_generic.c,v
retrieving revision 1.86
diff -p -u -4 -r1.86 sys_generic.c
--- sys/kern/sys_generic.c 7 Jun 2006 22:33:40 -0000 1.86
+++ sys/kern/sys_generic.c 13 Jun 2006 12:45:27 -0000
@@ -53,8 +53,9 @@ __KERNEL_RCSID(0, "$NetBSD: sys_generic.
#include <sys/kernel.h>
#include <sys/stat.h>
#include <sys/malloc.h>
#include <sys/poll.h>
+#include <sys/vnode.h>
#ifdef KTRACE
#include <sys/ktrace.h>
#endif
@@ -96,9 +97,9 @@ sys_read(struct lwp *l, void *v, registe
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
/* dofileread() will unuse the descriptor for us */
return (dofileread(l, fd, fp, SCARG(uap, buf), SCARG(uap, nbyte),
&fp->f_offset, FOF_UPDATE_OFFSET, retval));
@@ -194,9 +195,9 @@ sys_readv(struct lwp *l, void *v, regist
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
/* dofilereadv() will unuse the descriptor for us */
return (dofilereadv(l, fd, fp, SCARG(uap, iovp), SCARG(uap, iovcnt),
&fp->f_offset, FOF_UPDATE_OFFSET, retval));
@@ -324,9 +325,9 @@ sys_write(struct lwp *l, void *v, regist
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
/* dofilewrite() will unuse the descriptor for us */
return (dofilewrite(l, fd, fp, SCARG(uap, buf), SCARG(uap, nbyte),
&fp->f_offset, FOF_UPDATE_OFFSET, retval));
@@ -424,9 +425,9 @@ sys_writev(struct lwp *l, void *v, regis
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
/* dofilewritev() will unuse the descriptor for us */
return (dofilewritev(l, fd, fp, SCARG(uap, iovp), SCARG(uap, iovcnt),
&fp->f_offset, FOF_UPDATE_OFFSET, retval));
@@ -557,9 +558,9 @@ sys_ioctl(struct lwp *l, void *v, regist
if ((fp = fd_getfile(fdp, SCARG(uap, fd))) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
if ((fp->f_flag & (FREAD | FWRITE)) == 0) {
error = EBADF;
com = 0;
@@ -801,16 +802,19 @@ selcommon(struct lwp *l, register_t *ret
timo = tvtohz(tv);
if (timo <= 0)
goto done;
}
+ vngate_suspend();
s = splsched();
if ((l->l_flag & L_SELECT) == 0 || nselcoll != ncoll) {
splx(s);
+ vngate_resume();
goto retry;
}
l->l_flag &= ~L_SELECT;
error = tsleep((caddr_t)&selwait, PSOCK | PCATCH, "select", timo);
splx(s);
+ vngate_resume();
if (error == 0)
goto retry;
done:
if (mask)
@@ -983,16 +987,19 @@ pollcommon(struct lwp *l, register_t *re
timo = tvtohz(tv);
if (timo <= 0)
goto done;
}
+ vngate_suspend();
s = splsched();
if ((l->l_flag & L_SELECT) == 0 || nselcoll != ncoll) {
splx(s);
+ vngate_resume();
goto retry;
}
l->l_flag &= ~L_SELECT;
error = tsleep((caddr_t)&selwait, PSOCK | PCATCH, "poll", timo);
splx(s);
+ vngate_resume();
if (error == 0)
goto retry;
done:
if (mask != NULL)
Index: sys/kern/vfs_getcwd.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_getcwd.c,v
retrieving revision 1.31
diff -p -u -4 -r1.31 vfs_getcwd.c
--- sys/kern/vfs_getcwd.c 14 May 2006 21:15:12 -0000 1.31
+++ sys/kern/vfs_getcwd.c 13 Jun 2006 12:45:27 -0000
@@ -379,8 +379,9 @@ getcwd_common(struct vnode *lvp, struct
* lvp is either NULL, or locked and held.
* uvp is either NULL, or locked and held.
*/
+ vngate_enter(lvp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
error = vn_lock(lvp, LK_EXCLUSIVE | LK_RETRY);
if (error) {
vrele(lvp);
lvp = NULL;
@@ -435,8 +436,10 @@ getcwd_common(struct vnode *lvp, struct
error = ENOENT;
goto out;
}
VREF(lvp);
+ vngate_enter(lvp->v_mount,
+ V_WAIT|V_PERMANENT|V_NOERROR);
error = vn_lock(lvp, LK_EXCLUSIVE | LK_RETRY);
if (error != 0) {
vrele(lvp);
lvp = NULL;
Index: sys/kern/vfs_lookup.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_lookup.c,v
retrieving revision 1.70
diff -p -u -4 -r1.70 vfs_lookup.c
--- sys/kern/vfs_lookup.c 14 May 2006 21:15:12 -0000 1.70
+++ sys/kern/vfs_lookup.c 13 Jun 2006 12:45:28 -0000
@@ -464,8 +464,9 @@ lookup(struct nameidata *ndp)
ndp->ni_dvp = NULL;
cnp->cn_flags &= ~ISSYMLINK;
dp = ndp->ni_startdir;
ndp->ni_startdir = NULLVP;
+ vngate_enter(dp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
vn_lock(dp, LK_EXCLUSIVE | LK_RETRY);
/*
* If we have a leading string of slashes, remove them, and just make
@@ -612,8 +613,10 @@ dirloop:
vput(dp);
dp = ndp->ni_rootdir;
ndp->ni_dvp = dp;
ndp->ni_vp = dp;
+ vngate_enter(dp->v_mount,
+ V_WAIT|V_PERMANENT|V_NOERROR);
VREF(dp);
VREF(dp);
vn_lock(dp, LK_EXCLUSIVE | LK_RETRY);
goto nextname;
@@ -624,8 +627,9 @@ dirloop:
break;
tdp = dp;
dp = dp->v_mount->mnt_vnodecovered;
vput(tdp);
+ vngate_enter(dp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
VREF(dp);
vn_lock(dp, LK_EXCLUSIVE | LK_RETRY);
}
}
@@ -653,8 +657,9 @@ unionlookup:
if (cnp->cn_flags & PDIRUNLOCK)
vrele(tdp);
else
vput(tdp);
+ vngate_enter(dp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
VREF(dp);
vn_lock(dp, LK_EXCLUSIVE | LK_RETRY);
goto unionlookup;
}
@@ -718,8 +723,9 @@ unionlookup:
(cnp->cn_flags & NOCROSSMOUNT) == 0) {
if (vfs_busy(mp, 0, 0))
continue;
VOP_UNLOCK(dp, 0);
+ vngate_enter(mp, V_WAIT|V_PERMANENT|V_NOERROR);
error = VFS_ROOT(mp, &tdp);
vfs_unbusy(mp);
if (error) {
dpunlocked = 1;
Index: sys/kern/vfs_syscalls.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_syscalls.c,v
retrieving revision 1.242
diff -p -u -4 -r1.242 vfs_syscalls.c
--- sys/kern/vfs_syscalls.c 14 May 2006 21:15:12 -0000 1.242
+++ sys/kern/vfs_syscalls.c 13 Jun 2006 12:45:28 -0000
@@ -173,8 +173,9 @@ sys_mount(struct lwp *l, void *v, regist
/*
* A lookup in VFS_MOUNT might result in an attempt to
* lock this vnode again, so make the lock recursive.
*/
+ vngate_enter(vp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY | LK_SETRECURSE);
if (SCARG(uap, flags) & (MNT_UPDATE | MNT_GETARGS)) {
if ((vp->v_flag & VROOT) == 0) {
vput(vp);
@@ -611,8 +616,14 @@ dounmount(struct mount *mp, int flags, s
if ((coveredvp = mp->mnt_vnodecovered) != NULLVP) {
coveredvp->v_mountedhere = NULL;
vrele(coveredvp);
}
+ vngate_leave_all(mp, 1);
+#ifdef VNODE_GATEDEBUG
+ if (mp->mnt_vngate_count != 0)
+ printf("unmounting %s with vngate count %d\n",
+ mp->mnt_stat.f_mntonname, mp->mnt_vngate_count);
+#endif
mp->mnt_op->vfs_refcount--;
if (LIST_FIRST(&mp->mnt_vnodelist) != NULL)
panic("unmount: dangling vnode");
mp->mnt_iflag |= IMNT_GONE;
@@ -652,8 +664,9 @@ sys_sync(struct lwp *l, void *v, registe
nmp = mp->mnt_list.cqe_prev;
continue;
}
if ((mp->mnt_flag & MNT_RDONLY) == 0 &&
+ vngate_enter(mp, V_NOWAIT|V_PERMANENT) == 0 &&
vn_start_write(NULL, &mp, V_NOWAIT) == 0) {
asyncflag = mp->mnt_flag & MNT_ASYNC;
mp->mnt_flag &= ~MNT_ASYNC;
VFS_SYNC(mp, MNT_NOWAIT, p->p_cred, l);
@@ -937,8 +950,10 @@ sys_fchdir(struct lwp *l, void *v, regis
error = ENOTDIR;
else
error = VOP_ACCESS(vp, VEXEC, p->p_cred, l);
while (!error && (mp = vp->v_mountedhere) != NULL) {
+ vngate_enter(mp, V_WAIT|V_PERMANENT|V_NOERROR);
+
if (vfs_busy(mp, 0, 0))
continue;
error = VFS_ROOT(mp, &tdp);
vfs_unbusy(mp);
@@ -962,8 +977,9 @@ sys_fchdir(struct lwp *l, void *v, regis
error = EPERM; /* operation not permitted */
goto out;
}
+ vngate_enter(cwdi->cwdi_cdir->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
vrele(cwdi->cwdi_cdir);
cwdi->cwdi_cdir = vp;
out:
FILE_UNUSE(fp, l);
@@ -1043,8 +1059,9 @@ sys_chdir(struct lwp *l, void *v, regist
NDINIT(&nd, LOOKUP, FOLLOW | LOCKLEAF, UIO_USERSPACE,
SCARG(uap, path), l);
if ((error = change_dir(&nd, l)) != 0)
return (error);
+ vngate_enter(cwdi->cwdi_cdir->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
vrele(cwdi->cwdi_cdir);
cwdi->cwdi_cdir = nd.ni_vp;
return (0);
}
@@ -1071,10 +1088,13 @@ sys_chroot(struct lwp *l, void *v, regis
NDINIT(&nd, LOOKUP, FOLLOW | LOCKLEAF, UIO_USERSPACE,
SCARG(uap, path), l);
if ((error = change_dir(&nd, l)) != 0)
return (error);
- if (cwdi->cwdi_rdir != NULL)
+ if (cwdi->cwdi_rdir != NULL) {
+ vngate_enter(cwdi->cwdi_rdir->v_mount,
+ V_WAIT|V_PERMANENT|V_NOERROR);
vrele(cwdi->cwdi_rdir);
+ }
vp = nd.ni_vp;
cwdi->cwdi_rdir = vp;
/*
@@ -1086,8 +1106,10 @@ sys_chroot(struct lwp *l, void *v, regis
/*
* XXX would be more failsafe to change directory to a
* deadfs node here instead
*/
+ vngate_enter(cwdi->cwdi_cdir->v_mount,
+ V_WAIT|V_PERMANENT|V_NOERROR);
vrele(cwdi->cwdi_cdir);
VREF(vp);
cwdi->cwdi_cdir = vp;
}
@@ -1874,9 +1896,9 @@ sys_lseek(struct lwp *l, void *v, regist
if ((fp = fd_getfile(fdp, SCARG(uap, fd))) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
vp = (struct vnode *)fp->f_data;
if (fp->f_type != DTYPE_VNODE || vp->v_type == VFIFO) {
error = ESPIPE;
@@ -1935,9 +1957,9 @@ sys_pread(struct lwp *l, void *v, regist
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
vp = (struct vnode *)fp->f_data;
if (fp->f_type != DTYPE_VNODE || vp->v_type == VFIFO) {
error = ESPIPE;
@@ -1988,9 +2010,9 @@ sys_preadv(struct lwp *l, void *v, regis
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
vp = (struct vnode *)fp->f_data;
if (fp->f_type != DTYPE_VNODE || vp->v_type == VFIFO) {
error = ESPIPE;
@@ -2041,9 +2063,9 @@ sys_pwrite(struct lwp *l, void *v, regis
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
vp = (struct vnode *)fp->f_data;
if (fp->f_type != DTYPE_VNODE || vp->v_type == VFIFO) {
error = ESPIPE;
@@ -2094,9 +2116,9 @@ sys_pwritev(struct lwp *l, void *v, regi
simple_unlock(&fp->f_slock);
return (EBADF);
}
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
vp = (struct vnode *)fp->f_data;
if (fp->f_type != DTYPE_VNODE || vp->v_type == VFIFO) {
error = ESPIPE;
@@ -3414,9 +3436,9 @@ getvnode(struct filedesc *fdp, int fd, s
if ((fp = fd_getfile(fdp, fd)) == NULL)
return (EBADF);
- FILE_USE(fp);
+ FILE_USE_GATED(fp);
if (fp->f_type != DTYPE_VNODE) {
FILE_UNUSE(fp, NULL);
return (EINVAL);
Index: sys/miscfs/fifofs/fifo_vnops.c
===================================================================
RCS file: /cvsroot/src/sys/miscfs/fifofs/fifo_vnops.c,v
retrieving revision 1.55
diff -p -u -4 -r1.55 fifo_vnops.c
--- sys/miscfs/fifofs/fifo_vnops.c 14 May 2006 21:31:52 -0000 1.55
+++ sys/miscfs/fifofs/fifo_vnops.c 13 Jun 2006 12:45:33 -0000
@@ -201,10 +201,12 @@ fifo_open(void *v)
if (ap->a_mode & O_NONBLOCK) {
} else {
while (!soreadable(fip->fi_readsock) && fip->fi_writers == 0) {
VOP_UNLOCK(vp, 0);
+ vngate_suspend();
error = tsleep(&fip->fi_readers,
PCATCH | PSOCK, "fifor", 0);
+ vngate_resume();
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
if (error)
goto bad;
}
@@ -218,10 +220,12 @@ fifo_open(void *v)
}
} else {
while (fip->fi_readers == 0) {
VOP_UNLOCK(vp, 0);
+ vngate_suspend();
error = tsleep(&fip->fi_writers,
PCATCH | PSOCK, "fifow", 0);
+ vngate_resume();
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
if (error)
goto bad;
}
@@ -262,10 +266,12 @@ fifo_read(void *v)
if (ap->a_ioflag & IO_NDELAY)
rso->so_state |= SS_NBIO;
startresid = uio->uio_resid;
VOP_UNLOCK(ap->a_vp, 0);
+ vngate_suspend();
error = (*rso->so_receive)(rso, (struct mbuf **)0, uio,
(struct mbuf **)0, (struct mbuf **)0, (int *)0);
+ vngate_resume();
vn_lock(ap->a_vp, LK_EXCLUSIVE | LK_RETRY);
/*
* Clear EOF indication after first such return.
*/
@@ -303,10 +309,12 @@ fifo_write(void *v)
#endif
if (ap->a_ioflag & IO_NDELAY)
wso->so_state |= SS_NBIO;
VOP_UNLOCK(ap->a_vp, 0);
+ vngate_suspend();
error = (*wso->so_send)(wso, (struct mbuf *)0, ap->a_uio, 0,
(struct mbuf *)0, 0, curlwp /*XXX*/);
+ vngate_resume();
vn_lock(ap->a_vp, LK_EXCLUSIVE | LK_RETRY);
if (ap->a_ioflag & IO_NDELAY)
wso->so_state &= ~SS_NBIO;
return (error);
Index: sys/miscfs/specfs/spec_vnops.c
===================================================================
RCS file: /cvsroot/src/sys/miscfs/specfs/spec_vnops.c,v
retrieving revision 1.87
diff -p -u -4 -r1.87 spec_vnops.c
--- sys/miscfs/specfs/spec_vnops.c 14 May 2006 21:32:21 -0000 1.87
+++ sys/miscfs/specfs/spec_vnops.c 13 Jun 2006 12:45:33 -0000
@@ -216,9 +216,13 @@ spec_open(v)
}
if (cdev->d_type == D_TTY)
vp->v_flag |= VISTTY;
VOP_UNLOCK(vp, 0);
+ if (cdev->d_type != D_DISK)
+ vngate_suspend();
error = (*cdev->d_open)(dev, ap->a_mode, S_IFCHR, l);
+ if (cdev->d_type != D_DISK)
+ vngate_resume();
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
if (cdev->d_type != D_DISK)
return error;
d_ioctl = cdev->d_ioctl;
@@ -303,11 +307,15 @@ spec_read(v)
case VCHR:
VOP_UNLOCK(vp, 0);
cdev = cdevsw_lookup(vp->v_rdev);
- if (cdev != NULL)
+ if (cdev != NULL) {
+ if (cdev->d_type != D_DISK)
+ vngate_suspend();
error = (*cdev->d_read)(vp->v_rdev, uio, ap->a_ioflag);
- else
+ if (cdev->d_type != D_DISK)
+ vngate_resume();
+ } else
error = ENXIO;
vn_lock(vp, LK_SHARED | LK_RETRY);
return (error);
@@ -384,11 +392,15 @@ spec_write(v)
case VCHR:
VOP_UNLOCK(vp, 0);
cdev = cdevsw_lookup(vp->v_rdev);
- if (cdev != NULL)
+ if (cdev != NULL) {
+ if (cdev->d_type != D_DISK)
+ vngate_suspend();
error = (*cdev->d_write)(vp->v_rdev, uio, ap->a_ioflag);
- else
+ if (cdev->d_type != D_DISK)
+ vngate_resume();
+ } else
error = ENXIO;
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
return (error);
@@ -460,8 +472,9 @@ spec_ioctl(v)
const struct bdevsw *bdev;
const struct cdevsw *cdev;
struct vnode *vp;
dev_t dev;
+ int error;
/*
* Extract all the info we need from the vnode, taking care to
* avoid a race with VOP_REVOKE().
@@ -483,10 +496,13 @@ spec_ioctl(v)
case VCHR:
cdev = cdevsw_lookup(dev);
if (cdev == NULL)
return (ENXIO);
- return ((*cdev->d_ioctl)(dev, ap->a_command, ap->a_data,
- ap->a_fflag, ap->a_l));
+ vngate_suspend();
+ error = (*cdev->d_ioctl)(dev, ap->a_command, ap->a_data,
+ ap->a_fflag, ap->a_l);
+ vngate_resume();
+ return error;
case VBLK:
bdev = bdevsw_lookup(dev);
if (bdev == NULL)
@@ -695,12 +711,13 @@ spec_close(v)
const struct cdevsw *cdev;
struct session *sess;
dev_t dev = vp->v_rdev;
int (*devclose)(dev_t, int, int, struct lwp *);
- int mode, error, count, flags, flags1;
+ int mode, error, count, flags, flags1, needsuspend;
count = vcount(vp);
flags = vp->v_flag;
+ needsuspend = 0;
switch (vp->v_type) {
case VCHR:
@@ -736,11 +753,12 @@ spec_close(v)
*/
if (count > 1 && (flags & VXLOCK) == 0)
return (0);
cdev = cdevsw_lookup(dev);
- if (cdev != NULL)
+ if (cdev != NULL) {
devclose = cdev->d_close;
- else
+ needsuspend = (cdev->d_type != D_DISK);
+ } else
devclose = NULL;
mode = S_IFCHR;
break;
@@ -793,11 +811,15 @@ spec_close(v)
*/
if (!(flags1 & FNONBLOCK))
VOP_UNLOCK(vp, 0);
- if (devclose != NULL)
+ if (devclose != NULL) {
+ if (needsuspend)
+ vngate_suspend();
error = (*devclose)(dev, flags1, mode, ap->a_l);
- else
+ if (needsuspend)
+ vngate_resume();
+ } else
error = ENXIO;
if (!(flags1 & FNONBLOCK))
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
Index: sys/miscfs/syncfs/sync_subr.c
===================================================================
RCS file: /cvsroot/src/sys/miscfs/syncfs/sync_subr.c,v
retrieving revision 1.22
diff -p -u -4 -r1.22 sync_subr.c
--- sys/miscfs/syncfs/sync_subr.c 7 Jun 2006 22:33:42 -0000 1.22
+++ sys/miscfs/syncfs/sync_subr.c 13 Jun 2006 12:45:33 -0000
@@ -185,15 +185,17 @@ sched_sync(v)
lockmgr(&syncer_lock, LK_EXCLUSIVE, NULL);
while ((vp = LIST_FIRST(slp)) != NULL) {
- if (vn_start_write(vp, &mp, V_NOWAIT) == 0) {
+ if (vngate_enter(vp->v_mount, V_NOWAIT) == 0 &&
+ vn_start_write(vp, &mp, V_NOWAIT) == 0) {
if (vn_lock(vp, LK_EXCLUSIVE | LK_NOWAIT)
== 0) {
(void) VOP_FSYNC(vp, curproc->p_cred,
FSYNC_LAZY, 0, 0, curlwp);
VOP_UNLOCK(vp, 0);
}
+ vngate_leave(vp->v_mount);
vn_finished_write(mp, 0);
}
s = splbio();
if (LIST_FIRST(slp) == vp) {
@@ -240,10 +242,12 @@ sched_sync(v)
* takes more than two seconds, but it does not really
* matter as we are just trying to generally pace the
* filesystem activity.
*/
- if (time_second == starttime)
+ if (time_second == starttime) {
tsleep(&rushjob, PPAUSE, "syncer", hz);
+ vngate_leave_all(NULL, 0);
+ }
}
}
/*
Index: sys/nfs/nfs_subs.c
===================================================================
RCS file: /cvsroot/src/sys/nfs/nfs_subs.c,v
retrieving revision 1.165
diff -p -u -4 -r1.165 nfs_subs.c
--- sys/nfs/nfs_subs.c 7 Jun 2006 22:34:17 -0000 1.165
+++ sys/nfs/nfs_subs.c 13 Jun 2006 12:45:34 -0000
@@ -2550,8 +2550,10 @@ nfsrv_fhtovp(fhp, lockflag, vpp, cred, s
if (error) {
return error;
}
+ vngate_enter(mp, V_WAIT|V_PERMANENT|V_NOERROR);
+
error = VFS_FHTOVP(mp, &fhp->fh_fid, vpp);
if (error)
return (error);
Index: sys/nfs/nfs_syscalls.c
===================================================================
RCS file: /cvsroot/src/sys/nfs/nfs_syscalls.c,v
retrieving revision 1.95
diff -p -u -4 -r1.95 nfs_syscalls.c
--- sys/nfs/nfs_syscalls.c 7 Jun 2006 22:34:17 -0000 1.95
+++ sys/nfs/nfs_syscalls.c 13 Jun 2006 12:45:34 -0000
@@ -833,8 +833,11 @@ nfssvc_nfsd(nsd, argp, l)
} else
writes_todo = 0;
splx(s);
} while (writes_todo);
+
+ vngate_leave_all(NULL, 0);
+
s = splsoftnet();
if (nfsrv_dorec(slp, nfsd, &nd)) {
nfsd->nfsd_slp = NULL;
nfsrv_slpderef(slp);
Index: sys/ufs/ffs/ffs_snapshot.c
===================================================================
RCS file: /cvsroot/src/sys/ufs/ffs/ffs_snapshot.c,v
retrieving revision 1.30
diff -p -u -4 -r1.30 ffs_snapshot.c
--- sys/ufs/ffs/ffs_snapshot.c 7 Jun 2006 22:34:19 -0000 1.30
+++ sys/ufs/ffs/ffs_snapshot.c 13 Jun 2006 12:45:35 -0000
@@ -284,9 +284,9 @@ ffs_snapshot(struct mount *mp, struct vn
* All allocations are done, so we can now snapshot the system.
*
* Suspend operation on filesystem.
*/
- if ((error = vfs_write_suspend(vp->v_mount, PUSER|PCATCH, 0)) != 0) {
+ if ((error = vfs_suspend(vp->v_mount, 1)) != 0) {
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
goto out;
}
vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
@@ -507,9 +507,9 @@ loop:
out1:
/*
* Resume operation on filesystem.
*/
- vfs_write_resume(vp->v_mount);
+ vfs_resume(vp->v_mount);
/*
* Set the mtime to the time the snapshot has been taken.
*/
TIMEVAL_TO_TIMESPEC(&starttime, &ts);
Index: sys/uvm/uvm_fault.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_fault.c,v
retrieving revision 1.111
diff -p -u -4 -r1.111 uvm_fault.c
--- sys/uvm/uvm_fault.c 11 Apr 2006 09:28:14 -0000 1.111
+++ sys/uvm/uvm_fault.c 13 Jun 2006 12:45:36 -0000
@@ -708,8 +708,9 @@ uvm_fault_internal(struct vm_map *orig_m
struct uvm_object *uobj;
struct vm_anon *anons_store[UVM_MAXRANGE], **anons, *anon, *oanon;
struct vm_anon *anon_spare;
struct vm_page *pages[UVM_MAXRANGE], *pg, *uobjpage;
+ struct mount *mp = NULL;
UVMHIST_FUNC("uvm_fault"); UVMHIST_CALLED(maphist);
UVMHIST_LOG(maphist, "(map=0x%x, vaddr=0x%x, at=%d, ff=%d)",
orig_map, vaddr, access_type, fault_flag);
@@ -758,8 +759,20 @@ ReFault:
panic("uvm_fault: (ufi.map->flags & VM_MAP_PAGEABLE) == 0");
}
#endif
+ uobj = ufi.entry->object.uvm_obj;
+ if (mp == NULL && uobj != NULL && UVM_OBJ_IS_VNODE(uobj)) {
+ struct vnode *vp = (struct vnode *)uobj;
+
+ if (vngate_enter(vp->v_mount, V_NOWAIT) != 0) {
+ uvmfault_unlockmaps(&ufi, FALSE);
+ vngate_sleep(vp->v_mount);
+ goto ReFault;
+ }
+ mp = vp->v_mount;
+ }
+
/*
* check protection
*/
@@ -1836,8 +1849,9 @@ Case2:
pmap_update(ufi.orig_map->pmap);
UVMHIST_LOG(maphist, "<- done (SUCCESS!)",0,0,0,0);
error = 0;
done:
+ vngate_leave(mp);
if (anon_spare != NULL) {
anon_spare->an_ref--;
uvm_anfree(anon_spare);
}
Index: sys/uvm/uvm_map.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_map.c,v
retrieving revision 1.226
diff -p -u -4 -r1.226 uvm_map.c
--- sys/uvm/uvm_map.c 25 May 2006 14:27:28 -0000 1.226
+++ sys/uvm/uvm_map.c 13 Jun 2006 12:45:37 -0000
@@ -2165,10 +2165,14 @@ uvm_unmap_remove(struct vm_map *map, vad
void
uvm_unmap_detach(struct vm_map_entry *first_entry, int flags)
{
struct vm_map_entry *next_entry;
+ struct uvm_object *uobj;
+ struct mount *mp, *mp2;
UVMHIST_FUNC("uvm_unmap_detach"); UVMHIST_CALLED(maphist);
+ mp = NULL;
+
while (first_entry) {
KASSERT(!VM_MAPENT_ISWIRED(first_entry));
UVMHIST_LOG(maphist,
" detach 0x%x: amap=0x%x, obj=0x%x, submap?=%d",
@@ -2189,8 +2193,16 @@ uvm_unmap_detach(struct vm_map_entry *fi
KASSERT(!UVM_ET_ISSUBMAP(first_entry));
if (UVM_ET_ISOBJ(first_entry) &&
first_entry->object.uvm_obj->pgops->pgo_detach) {
+ if (UVM_OBJ_IS_VNODE(first_entry->object.uvm_obj)) {
+ uobj = first_entry->object.uvm_obj;
+ mp2 = ((struct vnode *)(uobj))->v_mount;
+ if (mp != mp2)
+ vngate_enter(mp2,
+ V_WAIT|V_PERMANENT|V_NOERROR);
+ mp = mp2;
+ }
(*first_entry->object.uvm_obj->pgops->pgo_detach)
(first_entry->object.uvm_obj);
}
next_entry = first_entry->next;
Index: sys/uvm/uvm_mmap.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_mmap.c,v
retrieving revision 1.97
diff -p -u -4 -r1.97 uvm_mmap.c
--- sys/uvm/uvm_mmap.c 20 May 2006 15:45:38 -0000 1.97
+++ sys/uvm/uvm_mmap.c 13 Jun 2006 12:45:37 -0000
@@ -1125,8 +1125,10 @@ uvm_mmap(map, addr, size, prot, maxprot,
(vp->v_mount->mnt_flag & MNT_NOEXEC) != 0)
return (EACCES);
if (vp->v_type != VCHR) {
+ vngate_enter(vp->v_mount, V_WAIT|V_PERMANENT|V_NOERROR);
+
error = VOP_MMAP(vp, 0, curproc->p_cred, curlwp);
if (error) {
return error;
}
Index: sys/uvm/uvm_pdaemon.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_pdaemon.c,v
retrieving revision 1.76
diff -p -u -4 -r1.76 uvm_pdaemon.c
--- sys/uvm/uvm_pdaemon.c 14 Feb 2006 15:06:27 -0000 1.76
+++ sys/uvm/uvm_pdaemon.c 13 Jun 2006 12:45:39 -0000
@@ -704,12 +704,22 @@ uvmpd_scan_inactive(struct pglist *pglst
}
#endif /* defined(READAHEAD_STATS) */
if ((p->pqflags & PQ_SWAPBACKED) == 0) {
+ struct vnode *vp = (struct vnode *)uobj;
+
+ if (UVM_OBJ_IS_VNODE(uobj) &&
+ vngate_enter(vp->v_mount, V_NOWAIT) != 0) {
+ uvmexp.pdobscan--;
+ simple_unlock(slock);
+ continue;
+ }
uvm_unlock_pageq();
(void) (uobj->pgops->pgo_put)(uobj, p->offset,
p->offset + PAGE_SIZE, PGO_CLEANIT|PGO_FREE);
uvm_lock_pageq();
+ if (UVM_OBJ_IS_VNODE(uobj))
+ vngate_leave(vp->v_mount);
if (nextpg &&
(nextpg->pqflags & PQ_INACTIVE) == 0) {
nextpg = TAILQ_FIRST(pglst);
}
@@ -857,8 +867,10 @@ uvmpd_scan_inactive(struct pglist *pglst
nextpg = TAILQ_FIRST(pglst);
}
}
+ vngate_leave_all(NULL, 0);
+
#if defined(VMSWAP)
uvm_unlock_pageq();
swapcluster_flush(&swc, TRUE);
uvm_lock_pageq();
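
Both the nfsd loop earlier in this attachment and the pagedaemon above collect
gates while they work and drop them in one sweep with vngate_leave_all(NULL, 0)
before blocking again.  The shape of such a long-running kernel thread is
roughly this (sketch only; have_work(), service_one_request() and work_queue
are placeholders, not real symbols):

for (;;) {
        while (have_work()) {
                /* may enter V_PERMANENT gates on several file systems */
                service_one_request();
        }
        /* drop every permanent gate; keep the per-thread state */
        vngate_leave_all(NULL, 0);
        /* no gates are held, so blocking here cannot stall a suspension */
        (void)tsleep(&work_queue, PVM, "workwait", 0);
}
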
--xHFwDpU9dbj6ez1V
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="mntref.diff"
Index: sys/fs/cd9660/cd9660_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/fs/cd9660/cd9660_vfsops.c,v
retrieving revision 1.32
diff -p -u -4 -r1.32 cd9660_vfsops.c
--- sys/fs/cd9660/cd9660_vfsops.c 14 May 2006 21:31:52 -0000 1.32
+++ sys/fs/cd9660/cd9660_vfsops.c 13 Jun 2006 12:45:26 -0000
@@ -142,9 +142,10 @@ cd9660_mountroot()
args.flags = ISOFSMNT_ROOT;
if ((error = iso_mountfs(rootvp, mp, l, &args)) != 0) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (error);
}
simple_lock(&mountlist_slock);
CIRCLEQ_INSERT_TAIL(&mountlist, mp, mnt_list);
Index: sys/fs/filecorefs/filecore_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/fs/filecorefs/filecore_vfsops.c,v
retrieving revision 1.25
diff -p -u -4 -r1.25 filecore_vfsops.c
--- sys/fs/filecorefs/filecore_vfsops.c 15 May 2006 01:29:02 -0000 1.25
+++ sys/fs/filecorefs/filecore_vfsops.c 13 Jun 2006 12:45:26 -0000
@@ -163,9 +163,10 @@ filecore_mountroot()
args.flags = FILECOREMNT_ROOT;
if ((error = filecore_mountfs(rootvp, mp, p, &args)) != 0) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (error);
}
simple_lock(&mountlist_slock);
CIRCLEQ_INSERT_TAIL(&mountlist, mp, mnt_list);
Index: sys/fs/msdosfs/msdosfs_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/fs/msdosfs/msdosfs_vfsops.c,v
retrieving revision 1.31
diff -p -u -4 -r1.31 msdosfs_vfsops.c
--- sys/fs/msdosfs/msdosfs_vfsops.c 14 May 2006 21:31:52 -0000 1.31
+++ sys/fs/msdosfs/msdosfs_vfsops.c 13 Jun 2006 12:45:26 -0000
@@ -209,16 +209,18 @@ msdosfs_mountroot()
if ((error = msdosfs_mountfs(rootvp, mp, l, &args)) != 0) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (error);
}
if ((error = update_mp(mp, &args)) != 0) {
(void)msdosfs_unmount(mp, 0, l);
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
vrele(rootvp);
return (error);
}
Index: sys/fs/ntfs/ntfs_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/fs/ntfs/ntfs_vfsops.c,v
retrieving revision 1.41
diff -p -u -4 -r1.41 ntfs_vfsops.c
--- sys/fs/ntfs/ntfs_vfsops.c 14 May 2006 21:31:52 -0000 1.41
+++ sys/fs/ntfs/ntfs_vfsops.c 13 Jun 2006 12:45:26 -0000
@@ -157,9 +157,10 @@ ntfs_mountroot()
if ((error = ntfs_mountfs(rootvp, mp, &args, l)) != 0) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (error);
}
simple_lock(&mountlist_slock);
Index: sys/kern/vfs_subr.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_subr.c,v
retrieving revision 1.266
diff -p -u -4 -r1.266 vfs_subr.c
--- sys/kern/vfs_subr.c 14 May 2006 21:15:12 -0000 1.266
+++ sys/kern/vfs_subr.c 13 Jun 2006 12:45:28 -0000
@@ -369,8 +380,9 @@ vfs_rootmountalloc(const char *fstypenam
vfsp->vfs_refcount++;
strncpy(mp->mnt_stat.f_fstypename, vfsp->vfs_name, MFSNAMELEN);
mp->mnt_stat.f_mntonname[0] = '/';
(void) copystr(devname, mp->mnt_stat.f_mntfromname, MNAMELEN - 1, 0);
+ MNT_REF(mp);
*mpp = mp;
return (0);
}
Index: sys/kern/vfs_syscalls.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vfs_syscalls.c,v
retrieving revision 1.242
diff -p -u -4 -r1.242 vfs_syscalls.c
--- sys/kern/vfs_syscalls.c 14 May 2006 21:15:12 -0000 1.242
+++ sys/kern/vfs_syscalls.c 13 Jun 2006 12:45:28 -0000
@@ -327,8 +328,10 @@ sys_mount(struct lwp *l, void *v, regist
mp->mnt_vnodecovered = vp;
mp->mnt_stat.f_owner = kauth_cred_geteuid(p->p_cred);
mp->mnt_unmounter = NULL;
mp->mnt_leaf = mp;
+ MNT_REF(mp);
+ vngate_enter(mp, V_WAIT|V_PERMANENT|V_NOERROR);
/*
* The underlying file system may refuse the mount for
* various reasons. Allow the user to force it to happen.
@@ -416,9 +419,11 @@ sys_mount(struct lwp *l, void *v, regist
} else {
vp->v_mountedhere = (struct mount *)0;
vfs->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ mp->mnt_iflag |= IMNT_GONE;
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
vput(vp);
}
return (error);
}
@@ -625,9 +636,10 @@ dounmount(struct mount *mp, int flags, s
ltsleep(&mp->mnt_wcnt, PVFS, "mntwcnt2", 0, &mp->mnt_slock);
}
simple_unlock(&mp->mnt_slock);
vfs_hooks_unmount(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (0);
}
/*
Index: sys/nfs/nfs_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/nfs/nfs_vfsops.c,v
retrieving revision 1.157
diff -p -u -4 -r1.157 nfs_vfsops.c
--- sys/nfs/nfs_vfsops.c 7 Jun 2006 22:34:17 -0000 1.157
+++ sys/nfs/nfs_vfsops.c 13 Jun 2006 12:45:34 -0000
@@ -426,9 +426,10 @@ nfs_mount_diskless(ndmntp, mntname, mpp,
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
printf("nfs_mountroot: mount %s failed: %d\n",
mntname, error);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
} else
*mpp = mp;
return (error);
Index: sys/sys/mount.h
===================================================================
RCS file: /cvsroot/src/sys/sys/mount.h,v
retrieving revision 1.141
diff -p -u -4 -r1.141 mount.h
--- sys/sys/mount.h 14 May 2006 21:38:18 -0000 1.141
+++ sys/sys/mount.h 13 Jun 2006 12:45:34 -0000
@@ -237,8 +242,34 @@ struct vfs_hooks {
#define VFS_HOOKS_ATTACH(hooks) __link_set_add_data(vfs_hooks, hooks)
void vfs_hooks_unmount(struct mount *);
+#ifdef NEWVNGATE
+#define MNT_REF(mp) \
+ do { \
+ (mp)->mnt_refcount++; \
+ } while (/*CONSTCOND*/0)
+#define MNT_DEREF(mp) \
+ do { \
+ if (--(mp)->mnt_refcount == 0 && \
+ ((mp)->mnt_iflag & IMNT_GONE) == IMNT_GONE) { \
+ simple_unlock(&(mp)->mnt_slock); \
+ free((mp), M_MOUNT); \
+ } else \
+ simple_unlock(&(mp)->mnt_slock); \
+ } while (/*CONSTCOND*/0)
+#else
+#define MNT_REF(mp) /* */
+#define MNT_DEREF(mp) \
+ do { \
+ if (((mp)->mnt_iflag & IMNT_GONE) == IMNT_GONE) { \
+ simple_unlock(&(mp)->mnt_slock); \
+ free((mp), M_MOUNT); \
+ } else \
+ simple_unlock(&(mp)->mnt_slock); \
+ } while (/*CONSTCOND*/0)
+#endif /* NEWVNGATE */
+
#endif /* _KERNEL */
/*
* Export arguments for local filesystem mount calls.
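
Every place that used to free the mount structure directly now follows the
same two lines: take mnt_slock, then MNT_DEREF(), which drops the lock itself
and frees the structure only once the last reference is gone and IMNT_GONE is
set.  Condensed from the *_mountroot() hunks in this attachment (xxx_mountfs()
stands in for the per-filesystem mount function):

if ((error = xxx_mountfs(rootvp, mp, l)) != 0) {
        mp->mnt_op->vfs_refcount--;
        vfs_unbusy(mp);
        simple_lock(&mp->mnt_slock);
        MNT_DEREF(mp);  /* unlocks mnt_slock; frees mp only when
                           unreferenced and marked IMNT_GONE */
        return (error);
}
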
Index: sys/ufs/ext2fs/ext2fs_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/ufs/ext2fs/ext2fs_vfsops.c,v
retrieving revision 1.98
diff -p -u -4 -r1.98 ext2fs_vfsops.c
--- sys/ufs/ext2fs/ext2fs_vfsops.c 7 Jun 2006 22:34:18 -0000 1.98
+++ sys/ufs/ext2fs/ext2fs_vfsops.c 13 Jun 2006 12:45:35 -0000
@@ -220,9 +220,10 @@ ext2fs_mountroot(void)
if ((error = ext2fs_mountfs(rootvp, mp, l)) != 0) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (error);
}
simple_lock(&mountlist_slock);
CIRCLEQ_INSERT_TAIL(&mountlist, mp, mnt_list);
Index: sys/ufs/ffs/ffs_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/ufs/ffs/ffs_vfsops.c,v
retrieving revision 1.182
diff -p -u -4 -r1.182 ffs_vfsops.c
--- sys/ufs/ffs/ffs_vfsops.c 7 Jun 2006 22:34:19 -0000 1.182
+++ sys/ufs/ffs/ffs_vfsops.c 13 Jun 2006 12:45:35 -0000
@@ -160,9 +160,10 @@ ffs_mountroot(void)
}
if ((error = ffs_mountfs(rootvp, mp, l)) != 0) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (error);
}
simple_lock(&mountlist_slock);
CIRCLEQ_INSERT_TAIL(&mountlist, mp, mnt_list);
Index: sys/ufs/lfs/lfs_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/ufs/lfs/lfs_vfsops.c,v
retrieving revision 1.213
diff -p -u -4 -r1.213 lfs_vfsops.c
--- sys/ufs/lfs/lfs_vfsops.c 24 May 2006 21:08:00 -0000 1.213
+++ sys/ufs/lfs/lfs_vfsops.c 13 Jun 2006 12:45:36 -0000
@@ -337,9 +337,10 @@ lfs_mountroot()
}
if ((error = lfs_mountfs(rootvp, mp, l))) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
return (error);
}
simple_lock(&mountlist_slock);
CIRCLEQ_INSERT_TAIL(&mountlist, mp, mnt_list);
Index: sys/ufs/mfs/mfs_vfsops.c
===================================================================
RCS file: /cvsroot/src/sys/ufs/mfs/mfs_vfsops.c,v
retrieving revision 1.72
diff -p -u -4 -r1.72 mfs_vfsops.c
--- sys/ufs/mfs/mfs_vfsops.c 15 Apr 2006 01:16:40 -0000 1.72
+++ sys/ufs/mfs/mfs_vfsops.c 13 Jun 2006 12:45:36 -0000
@@ -195,9 +195,10 @@ mfs_mountroot(void)
if ((error = ffs_mountfs(rootvp, mp, l)) != 0) {
mp->mnt_op->vfs_refcount--;
vfs_unbusy(mp);
bufq_free(mfsp->mfs_buflist);
- free(mp, M_MOUNT);
+ simple_lock(&mp->mnt_slock);
+ MNT_DEREF(mp);
free(mfsp, M_MFSNODE);
return (error);
}
simple_lock(&mountlist_slock);
--xHFwDpU9dbj6ez1V
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="debug.diff"
Index: sys/kern/kern_synch.c
===================================================================
RCS file: /cvsroot/src/sys/kern/kern_synch.c,v
retrieving revision 1.161
diff -p -u -4 -r1.161 kern_synch.c
--- sys/kern/kern_synch.c 14 May 2006 21:15:11 -0000 1.161
+++ sys/kern/kern_synch.c 13 Jun 2006 12:45:27 -0000
@@ -93,8 +93,9 @@ __KERNEL_RCSID(0, "$NetBSD: kern_synch.c
#include <sys/buf.h>
#if defined(PERFCTRS)
#include <sys/pmc.h>
#endif
+#include <sys/vnode.h> /* for vngate_debug_longsleep() */
#include <sys/signalvar.h>
#include <sys/resourcevar.h>
#include <sys/sched.h>
#include <sys/sa.h>
@@ -534,8 +535,9 @@ ltsleep(volatile const void *ident, int
* when CURSIG is called. If the wakeup happens while we're
* stopped, p->p_wchan will be 0 upon return from CURSIG.
*/
if (catch) {
+ vngate_debug_longsleep(wmesg);
l->l_flag |= L_SINTR;
if (((sig = CURSIG(l)) != 0) ||
((p->p_flag & P_WEXIT) && p->p_nlwps > 1)) {
if (l->l_wchan != NULL)
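
The hook only fires for interruptible sleeps ("catch" set); it is the entry
point for the VNODE_GATEDEBUG check on long sleeps.  Its implementation lives
in the main vngate diff, not in this attachment; one possible shape, with
made-up per-LWP fields, would be:

/* Illustration only -- l_vngate_held and l_vngate_suspended are invented. */
void
vngate_debug_longsleep(const char *wmesg)
{
#ifdef VNODE_GATEDEBUG
        struct lwp *l = curlwp;

        if (l->l_vngate_held > 0 && !l->l_vngate_suspended)
                printf("vngate: interruptible sleep on \"%s\" with %d "
                    "gate(s) held\n", wmesg, l->l_vngate_held);
#endif
}
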
Index: sys/kern/vnode_if.sh
===================================================================
RCS file: /cvsroot/src/sys/kern/vnode_if.sh,v
retrieving revision 1.42
diff -p -u -4 -r1.42 vnode_if.sh
--- sys/kern/vnode_if.sh 14 May 2006 21:15:12 -0000 1.42
+++ sys/kern/vnode_if.sh 13 Jun 2006 12:45:32 -0000
@@ -344,14 +344,17 @@ function doit() {
if (i < (argc-1)) printf(",\n ");
}
printf(")\n");
printf("{\n\tstruct %s_args a;\n", name);
+ printf("\tint error;\n");
printf("#ifdef VNODE_LOCKDEBUG\n");
for (i=0; i<argc; i++) {
if (lockstate[i] != -1)
printf("\tint islocked_%s;\n", argname[i]);
}
printf("#endif\n");
+ printf("\tvngate_debug_vop(\"%s\", %s%s, 1);\n",
+ toupper(name), argname[0], arg0special);
printf("\ta.a_desc = VDESC(%s);\n", name);
for (i=0; i<argc; i++) {
printf("\ta.a_%s = %s;\n", argname[i], argname[i]);
if (lockstate[i] != -1) {
@@ -363,10 +366,13 @@ function doit() {
printf("\t\tpanic(\"%s: %s: locked %%d, expected %%d\", islocked_%s, %d);\n", name, argname[i], argname[i], lockstate[i]);
printf("#endif\n");
}
}
- printf("\treturn (VCALL(%s%s, VOFFSET(%s), &a));\n}\n",
+ printf("\terror = VCALL(%s%s, VOFFSET(%s), &a);\n",
argname[0], arg0special, name);
+ printf("\tvngate_debug_vop(\"%s\", %s%s, -1);\n",
+ toupper(name), argname[0], arg0special);
+ printf("\treturn error;\n}\n");
}
BEGIN {
printf("\n/* Special cases: */\n");
# start from 1 (vop_default is at 0)
Index: sys/sys/vnode_if.h
===================================================================
RCS file: /cvsroot/src/sys/sys/vnode_if.h,v
retrieving revision 1.61
diff -p -u -4 -r1.61 vnode_if.h
--- sys/sys/vnode_if.h 14 May 2006 21:38:18 -0000 1.61
+++ sys/sys/vnode_if.h 13 Jun 2006 12:45:35 -0000
@@ -1,14 +1,14 @@
-/* $NetBSD: vnode_if.h,v 1.61 2006/05/14 21:38:18 elad Exp $ */
+/* $NetBSD$ */
/*
* Warning: DO NOT EDIT! This file is automatically generated!
* (Modifications made here may easily be lost!)
*
* Created from the file:
- * NetBSD: vnode_if.src,v 1.48.10.1 2006/03/08 00:53:41 elad Exp
+ * NetBSD: vnode_if.src,v 1.50 2006/05/14 21:15:12 elad Exp
* by the script:
- * NetBSD: vnode_if.sh,v 1.41.10.1 2006/03/08 00:53:41 elad Exp
+ * NetBSD: vnode_if.sh,v 1.42 2006/05/14 21:15:12 elad Exp
*/
/*
* Copyright (c) 1992, 1993, 1994, 1995
Index: sys/kern/vnode_if.c
===================================================================
RCS file: /cvsroot/src/sys/kern/vnode_if.c,v
retrieving revision 1.65
diff -p -u -4 -r1.65 vnode_if.c
--- sys/kern/vnode_if.c 14 May 2006 21:15:12 -0000 1.65
+++ sys/kern/vnode_if.c 13 Jun 2006 12:45:29 -0000
@@ -1,14 +1,14 @@
-/* $NetBSD: vnode_if.c,v 1.65 2006/05/14 21:15:12 elad Exp $ */
+/* $NetBSD$ */
/*
* Warning: DO NOT EDIT! This file is automatically generated!
* (Modifications made here may easily be lost!)
*
* Created from the file:
- * NetBSD: vnode_if.src,v 1.48.10.1 2006/03/08 00:53:41 elad Exp
+ * NetBSD: vnode_if.src,v 1.50 2006/05/14 21:15:12 elad Exp
* by the script:
- * NetBSD: vnode_if.sh,v 1.41.10.1 2006/03/08 00:53:41 elad Exp
+ * NetBSD: vnode_if.sh,v 1.42 2006/05/14 21:15:12 elad Exp
*/
/*
* Copyright (c) 1992, 1993, 1994, 1995
@@ -39,9 +39,9 @@
* SUCH DAMAGE.
*/
#include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: vnode_if.c,v 1.65 2006/05/14 21:15:12 elad Exp $");
+__KERNEL_RCSID(0, "$NetBSD$");
/*
* If we have LKM support, always include the non-inline versions for
@@ -86,13 +86,17 @@ const struct vnodeop_desc vop_bwrite_des
int
VOP_BWRITE(struct buf *bp)
{
struct vop_bwrite_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_BWRITE", bp->b_vp, 1);
a.a_desc = VDESC(vop_bwrite);
a.a_bp = bp;
- return (VCALL(bp->b_vp, VOFFSET(vop_bwrite), &a));
+ error = VCALL(bp->b_vp, VOFFSET(vop_bwrite), &a);
+ vngate_debug_vop("VOP_BWRITE", bp->b_vp, -1);
+ return error;
}
/* End of special cases */
@@ -116,15 +120,19 @@ VOP_LOOKUP(struct vnode *dvp,
struct vnode **vpp,
struct componentname *cnp)
{
struct vop_lookup_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_LOOKUP", dvp, 1);
a.a_desc = VDESC(vop_lookup);
a.a_dvp = dvp;
a.a_vpp = vpp;
a.a_cnp = cnp;
- return (VCALL(dvp, VOFFSET(vop_lookup), &a));
+ error = VCALL(dvp, VOFFSET(vop_lookup), &a);
+ vngate_debug_vop("VOP_LOOKUP", dvp, -1);
+ return error;
}
const int vop_create_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_create_args,a_dvp),
@@ -147,11 +155,13 @@ VOP_CREATE(struct vnode *dvp,
struct componentname *cnp,
struct vattr *vap)
{
struct vop_create_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
#endif
+ vngate_debug_vop("VOP_CREATE", dvp, 1);
a.a_desc = VDESC(vop_create);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -160,9 +170,11 @@ VOP_CREATE(struct vnode *dvp,
#endif
a.a_vpp = vpp;
a.a_cnp = cnp;
a.a_vap = vap;
- return (VCALL(dvp, VOFFSET(vop_create), &a));
+ error = VCALL(dvp, VOFFSET(vop_create), &a);
+ vngate_debug_vop("VOP_CREATE", dvp, -1);
+ return error;
}
const int vop_mknod_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_mknod_args,a_dvp),
@@ -185,11 +197,13 @@ VOP_MKNOD(struct vnode *dvp,
struct componentname *cnp,
struct vattr *vap)
{
struct vop_mknod_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
#endif
+ vngate_debug_vop("VOP_MKNOD", dvp, 1);
a.a_desc = VDESC(vop_mknod);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -198,9 +212,11 @@ VOP_MKNOD(struct vnode *dvp,
#endif
a.a_vpp = vpp;
a.a_cnp = cnp;
a.a_vap = vap;
- return (VCALL(dvp, VOFFSET(vop_mknod), &a));
+ error = VCALL(dvp, VOFFSET(vop_mknod), &a);
+ vngate_debug_vop("VOP_MKNOD", dvp, -1);
+ return error;
}
const int vop_open_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_open_args,a_vp),
@@ -223,11 +239,13 @@ VOP_OPEN(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_open_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_OPEN", vp, 1);
a.a_desc = VDESC(vop_open);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -236,9 +254,11 @@ VOP_OPEN(struct vnode *vp,
#endif
a.a_mode = mode;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_open), &a));
+ error = VCALL(vp, VOFFSET(vop_open), &a);
+ vngate_debug_vop("VOP_OPEN", vp, -1);
+ return error;
}
const int vop_close_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_close_args,a_vp),
@@ -261,11 +281,13 @@ VOP_CLOSE(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_close_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_CLOSE", vp, 1);
a.a_desc = VDESC(vop_close);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -274,9 +296,11 @@ VOP_CLOSE(struct vnode *vp,
#endif
a.a_fflag = fflag;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_close), &a));
+ error = VCALL(vp, VOFFSET(vop_close), &a);
+ vngate_debug_vop("VOP_CLOSE", vp, -1);
+ return error;
}
const int vop_access_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_access_args,a_vp),
@@ -299,11 +323,13 @@ VOP_ACCESS(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_access_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_ACCESS", vp, 1);
a.a_desc = VDESC(vop_access);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -312,9 +338,11 @@ VOP_ACCESS(struct vnode *vp,
#endif
a.a_mode = mode;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_access), &a));
+ error = VCALL(vp, VOFFSET(vop_access), &a);
+ vngate_debug_vop("VOP_ACCESS", vp, -1);
+ return error;
}
const int vop_getattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_getattr_args,a_vp),
@@ -337,16 +365,20 @@ VOP_GETATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_getattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_GETATTR", vp, 1);
a.a_desc = VDESC(vop_getattr);
a.a_vp = vp;
a.a_vap = vap;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_getattr), &a));
+ error = VCALL(vp, VOFFSET(vop_getattr), &a);
+ vngate_debug_vop("VOP_GETATTR", vp, -1);
+ return error;
}
const int vop_setattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_setattr_args,a_vp),
@@ -369,11 +401,13 @@ VOP_SETATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_setattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_SETATTR", vp, 1);
a.a_desc = VDESC(vop_setattr);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -382,9 +416,11 @@ VOP_SETATTR(struct vnode *vp,
#endif
a.a_vap = vap;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_setattr), &a));
+ error = VCALL(vp, VOFFSET(vop_setattr), &a);
+ vngate_debug_vop("VOP_SETATTR", vp, -1);
+ return error;
}
const int vop_read_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_read_args,a_vp),
@@ -407,11 +443,13 @@ VOP_READ(struct vnode *vp,
int ioflag,
kauth_cred_t cred)
{
struct vop_read_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_READ", vp, 1);
a.a_desc = VDESC(vop_read);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -420,9 +458,11 @@ VOP_READ(struct vnode *vp,
#endif
a.a_uio = uio;
a.a_ioflag = ioflag;
a.a_cred = cred;
- return (VCALL(vp, VOFFSET(vop_read), &a));
+ error = VCALL(vp, VOFFSET(vop_read), &a);
+ vngate_debug_vop("VOP_READ", vp, -1);
+ return error;
}
const int vop_write_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_write_args,a_vp),
@@ -445,11 +485,13 @@ VOP_WRITE(struct vnode *vp,
int ioflag,
kauth_cred_t cred)
{
struct vop_write_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_WRITE", vp, 1);
a.a_desc = VDESC(vop_write);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -458,9 +500,11 @@ VOP_WRITE(struct vnode *vp,
#endif
a.a_uio = uio;
a.a_ioflag = ioflag;
a.a_cred = cred;
- return (VCALL(vp, VOFFSET(vop_write), &a));
+ error = VCALL(vp, VOFFSET(vop_write), &a);
+ vngate_debug_vop("VOP_WRITE", vp, -1);
+ return error;
}
const int vop_ioctl_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_ioctl_args,a_vp),
@@ -485,11 +529,13 @@ VOP_IOCTL(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_ioctl_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_IOCTL", vp, 1);
a.a_desc = VDESC(vop_ioctl);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
@@ -500,9 +546,11 @@ VOP_IOCTL(struct vnode *vp,
a.a_data = data;
a.a_fflag = fflag;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_ioctl), &a));
+ error = VCALL(vp, VOFFSET(vop_ioctl), &a);
+ vngate_debug_vop("VOP_IOCTL", vp, -1);
+ return error;
}
const int vop_fcntl_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_fcntl_args,a_vp),
@@ -527,11 +575,13 @@ VOP_FCNTL(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_fcntl_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_FCNTL", vp, 1);
a.a_desc = VDESC(vop_fcntl);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
@@ -542,9 +592,11 @@ VOP_FCNTL(struct vnode *vp,
a.a_data = data;
a.a_fflag = fflag;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_fcntl), &a));
+ error = VCALL(vp, VOFFSET(vop_fcntl), &a);
+ vngate_debug_vop("VOP_FCNTL", vp, -1);
+ return error;
}
const int vop_poll_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_poll_args,a_vp),
@@ -566,11 +618,13 @@ VOP_POLL(struct vnode *vp,
int events,
struct lwp *l)
{
struct vop_poll_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_POLL", vp, 1);
a.a_desc = VDESC(vop_poll);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
@@ -578,9 +632,11 @@ VOP_POLL(struct vnode *vp,
panic("vop_poll: vp: locked %d, expected %d", islocked_vp, 0);
#endif
a.a_events = events;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_poll), &a));
+ error = VCALL(vp, VOFFSET(vop_poll), &a);
+ vngate_debug_vop("VOP_POLL", vp, -1);
+ return error;
}
const int vop_kqfilter_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_kqfilter_args,a_vp),
@@ -601,20 +657,24 @@ int
VOP_KQFILTER(struct vnode *vp,
struct knote *kn)
{
struct vop_kqfilter_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_KQFILTER", vp, 1);
a.a_desc = VDESC(vop_kqfilter);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
if (islocked_vp != 0)
panic("vop_kqfilter: vp: locked %d, expected %d", islocked_vp, 0);
#endif
a.a_kn = kn;
- return (VCALL(vp, VOFFSET(vop_kqfilter), &a));
+ error = VCALL(vp, VOFFSET(vop_kqfilter), &a);
+ vngate_debug_vop("VOP_KQFILTER", vp, -1);
+ return error;
}
const int vop_revoke_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_revoke_args,a_vp),
@@ -635,20 +695,24 @@ int
VOP_REVOKE(struct vnode *vp,
int flags)
{
struct vop_revoke_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_REVOKE", vp, 1);
a.a_desc = VDESC(vop_revoke);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
if (islocked_vp != 0)
panic("vop_revoke: vp: locked %d, expected %d", islocked_vp, 0);
#endif
a.a_flags = flags;
- return (VCALL(vp, VOFFSET(vop_revoke), &a));
+ error = VCALL(vp, VOFFSET(vop_revoke), &a);
+ vngate_debug_vop("VOP_REVOKE", vp, -1);
+ return error;
}
const int vop_mmap_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_mmap_args,a_vp),
@@ -671,16 +735,20 @@ VOP_MMAP(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_mmap_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_MMAP", vp, 1);
a.a_desc = VDESC(vop_mmap);
a.a_vp = vp;
a.a_fflags = fflags;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_mmap), &a));
+ error = VCALL(vp, VOFFSET(vop_mmap), &a);
+ vngate_debug_vop("VOP_MMAP", vp, -1);
+ return error;
}
const int vop_fsync_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_fsync_args,a_vp),
@@ -705,11 +773,13 @@ VOP_FSYNC(struct vnode *vp,
off_t offhi,
struct lwp *l)
{
struct vop_fsync_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_FSYNC", vp, 1);
a.a_desc = VDESC(vop_fsync);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -720,9 +790,11 @@ VOP_FSYNC(struct vnode *vp,
a.a_flags = flags;
a.a_offlo = offlo;
a.a_offhi = offhi;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_fsync), &a));
+ error = VCALL(vp, VOFFSET(vop_fsync), &a);
+ vngate_debug_vop("VOP_FSYNC", vp, -1);
+ return error;
}
const int vop_seek_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_seek_args,a_vp),
@@ -745,16 +817,20 @@ VOP_SEEK(struct vnode *vp,
off_t newoff,
kauth_cred_t cred)
{
struct vop_seek_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_SEEK", vp, 1);
a.a_desc = VDESC(vop_seek);
a.a_vp = vp;
a.a_oldoff = oldoff;
a.a_newoff = newoff;
a.a_cred = cred;
- return (VCALL(vp, VOFFSET(vop_seek), &a));
+ error = VCALL(vp, VOFFSET(vop_seek), &a);
+ vngate_debug_vop("VOP_SEEK", vp, -1);
+ return error;
}
const int vop_remove_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_remove_args,a_dvp),
@@ -777,12 +853,14 @@ VOP_REMOVE(struct vnode *dvp,
struct vnode *vp,
struct componentname *cnp)
{
struct vop_remove_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_REMOVE", dvp, 1);
a.a_desc = VDESC(vop_remove);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -795,9 +873,11 @@ VOP_REMOVE(struct vnode *dvp,
if (islocked_vp != 1)
panic("vop_remove: vp: locked %d, expected %d", islocked_vp, 1);
#endif
a.a_cnp = cnp;
- return (VCALL(dvp, VOFFSET(vop_remove), &a));
+ error = VCALL(dvp, VOFFSET(vop_remove), &a);
+ vngate_debug_vop("VOP_REMOVE", dvp, -1);
+ return error;
}
const int vop_link_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_link_args,a_dvp),
@@ -820,12 +900,14 @@ VOP_LINK(struct vnode *dvp,
struct vnode *vp,
struct componentname *cnp)
{
struct vop_link_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_LINK", dvp, 1);
a.a_desc = VDESC(vop_link);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -838,9 +920,11 @@ VOP_LINK(struct vnode *dvp,
if (islocked_vp != 0)
panic("vop_link: vp: locked %d, expected %d", islocked_vp, 0);
#endif
a.a_cnp = cnp;
- return (VCALL(dvp, VOFFSET(vop_link), &a));
+ error = VCALL(dvp, VOFFSET(vop_link), &a);
+ vngate_debug_vop("VOP_LINK", dvp, -1);
+ return error;
}
const int vop_rename_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_rename_args,a_fdvp),
@@ -868,13 +952,15 @@ VOP_RENAME(struct vnode *fdvp,
struct vnode *tvp,
struct componentname *tcnp)
{
struct vop_rename_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_fdvp;
int islocked_fvp;
int islocked_tdvp;
#endif
+ vngate_debug_vop("VOP_RENAME", fdvp, 1);
a.a_desc = VDESC(vop_rename);
a.a_fdvp = fdvp;
#ifdef VNODE_LOCKDEBUG
islocked_fdvp = (fdvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(fdvp) == LK_EXCLUSIVE) : 0;
@@ -895,9 +981,11 @@ VOP_RENAME(struct vnode *fdvp,
panic("vop_rename: tdvp: locked %d, expected %d", islocked_tdvp, 1);
#endif
a.a_tvp = tvp;
a.a_tcnp = tcnp;
- return (VCALL(fdvp, VOFFSET(vop_rename), &a));
+ error = VCALL(fdvp, VOFFSET(vop_rename), &a);
+ vngate_debug_vop("VOP_RENAME", fdvp, -1);
+ return error;
}
const int vop_mkdir_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_mkdir_args,a_dvp),
@@ -920,11 +1008,13 @@ VOP_MKDIR(struct vnode *dvp,
struct componentname *cnp,
struct vattr *vap)
{
struct vop_mkdir_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
#endif
+ vngate_debug_vop("VOP_MKDIR", dvp, 1);
a.a_desc = VDESC(vop_mkdir);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -933,9 +1023,11 @@ VOP_MKDIR(struct vnode *dvp,
#endif
a.a_vpp = vpp;
a.a_cnp = cnp;
a.a_vap = vap;
- return (VCALL(dvp, VOFFSET(vop_mkdir), &a));
+ error = VCALL(dvp, VOFFSET(vop_mkdir), &a);
+ vngate_debug_vop("VOP_MKDIR", dvp, -1);
+ return error;
}
const int vop_rmdir_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_rmdir_args,a_dvp),
@@ -958,12 +1050,14 @@ VOP_RMDIR(struct vnode *dvp,
struct vnode *vp,
struct componentname *cnp)
{
struct vop_rmdir_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_RMDIR", dvp, 1);
a.a_desc = VDESC(vop_rmdir);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -976,9 +1070,11 @@ VOP_RMDIR(struct vnode *dvp,
if (islocked_vp != 1)
panic("vop_rmdir: vp: locked %d, expected %d", islocked_vp, 1);
#endif
a.a_cnp = cnp;
- return (VCALL(dvp, VOFFSET(vop_rmdir), &a));
+ error = VCALL(dvp, VOFFSET(vop_rmdir), &a);
+ vngate_debug_vop("VOP_RMDIR", dvp, -1);
+ return error;
}
const int vop_symlink_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_symlink_args,a_dvp),
@@ -1002,11 +1098,13 @@ VOP_SYMLINK(struct vnode *dvp,
struct vattr *vap,
char *target)
{
struct vop_symlink_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
#endif
+ vngate_debug_vop("VOP_SYMLINK", dvp, 1);
a.a_desc = VDESC(vop_symlink);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -1016,9 +1114,11 @@ VOP_SYMLINK(struct vnode *dvp,
a.a_vpp = vpp;
a.a_cnp = cnp;
a.a_vap = vap;
a.a_target = target;
- return (VCALL(dvp, VOFFSET(vop_symlink), &a));
+ error = VCALL(dvp, VOFFSET(vop_symlink), &a);
+ vngate_debug_vop("VOP_SYMLINK", dvp, -1);
+ return error;
}
const int vop_readdir_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_readdir_args,a_vp),
@@ -1043,11 +1143,13 @@ VOP_READDIR(struct vnode *vp,
off_t **cookies,
int *ncookies)
{
struct vop_readdir_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_READDIR", vp, 1);
a.a_desc = VDESC(vop_readdir);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1058,9 +1160,11 @@ VOP_READDIR(struct vnode *vp,
a.a_cred = cred;
a.a_eofflag = eofflag;
a.a_cookies = cookies;
a.a_ncookies = ncookies;
- return (VCALL(vp, VOFFSET(vop_readdir), &a));
+ error = VCALL(vp, VOFFSET(vop_readdir), &a);
+ vngate_debug_vop("VOP_READDIR", vp, -1);
+ return error;
}
const int vop_readlink_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_readlink_args,a_vp),
@@ -1082,11 +1186,13 @@ VOP_READLINK(struct vnode *vp,
struct uio *uio,
kauth_cred_t cred)
{
struct vop_readlink_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_READLINK", vp, 1);
a.a_desc = VDESC(vop_readlink);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1094,9 +1200,11 @@ VOP_READLINK(struct vnode *vp,
panic("vop_readlink: vp: locked %d, expected %d", islocked_vp, 1);
#endif
a.a_uio = uio;
a.a_cred = cred;
- return (VCALL(vp, VOFFSET(vop_readlink), &a));
+ error = VCALL(vp, VOFFSET(vop_readlink), &a);
+ vngate_debug_vop("VOP_READLINK", vp, -1);
+ return error;
}
const int vop_abortop_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_abortop_args,a_dvp),
@@ -1117,14 +1225,18 @@ int
VOP_ABORTOP(struct vnode *dvp,
struct componentname *cnp)
{
struct vop_abortop_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_ABORTOP", dvp, 1);
a.a_desc = VDESC(vop_abortop);
a.a_dvp = dvp;
a.a_cnp = cnp;
- return (VCALL(dvp, VOFFSET(vop_abortop), &a));
+ error = VCALL(dvp, VOFFSET(vop_abortop), &a);
+ vngate_debug_vop("VOP_ABORTOP", dvp, -1);
+ return error;
}
const int vop_inactive_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_inactive_args,a_vp),
@@ -1145,20 +1257,24 @@ int
VOP_INACTIVE(struct vnode *vp,
struct lwp *l)
{
struct vop_inactive_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_INACTIVE", vp, 1);
a.a_desc = VDESC(vop_inactive);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
if (islocked_vp != 1)
panic("vop_inactive: vp: locked %d, expected %d", islocked_vp, 1);
#endif
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_inactive), &a));
+ error = VCALL(vp, VOFFSET(vop_inactive), &a);
+ vngate_debug_vop("VOP_INACTIVE", vp, -1);
+ return error;
}
const int vop_reclaim_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_reclaim_args,a_vp),
@@ -1179,20 +1295,24 @@ int
VOP_RECLAIM(struct vnode *vp,
struct lwp *l)
{
struct vop_reclaim_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_RECLAIM", vp, 1);
a.a_desc = VDESC(vop_reclaim);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
if (islocked_vp != 0)
panic("vop_reclaim: vp: locked %d, expected %d", islocked_vp, 0);
#endif
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_reclaim), &a));
+ error = VCALL(vp, VOFFSET(vop_reclaim), &a);
+ vngate_debug_vop("VOP_RECLAIM", vp, -1);
+ return error;
}
const int vop_lock_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_lock_args,a_vp),
@@ -1213,20 +1333,24 @@ int
VOP_LOCK(struct vnode *vp,
int flags)
{
struct vop_lock_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_LOCK", vp, 1);
a.a_desc = VDESC(vop_lock);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
if (islocked_vp != 0)
panic("vop_lock: vp: locked %d, expected %d", islocked_vp, 0);
#endif
a.a_flags = flags;
- return (VCALL(vp, VOFFSET(vop_lock), &a));
+ error = VCALL(vp, VOFFSET(vop_lock), &a);
+ vngate_debug_vop("VOP_LOCK", vp, -1);
+ return error;
}
const int vop_unlock_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_unlock_args,a_vp),
@@ -1247,20 +1371,24 @@ int
VOP_UNLOCK(struct vnode *vp,
int flags)
{
struct vop_unlock_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_UNLOCK", vp, 1);
a.a_desc = VDESC(vop_unlock);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
if (islocked_vp != 1)
panic("vop_unlock: vp: locked %d, expected %d", islocked_vp, 1);
#endif
a.a_flags = flags;
- return (VCALL(vp, VOFFSET(vop_unlock), &a));
+ error = VCALL(vp, VOFFSET(vop_unlock), &a);
+ vngate_debug_vop("VOP_UNLOCK", vp, -1);
+ return error;
}
const int vop_bmap_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_bmap_args,a_vp),
@@ -1284,17 +1412,21 @@ VOP_BMAP(struct vnode *vp,
daddr_t *bnp,
int *runp)
{
struct vop_bmap_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_BMAP", vp, 1);
a.a_desc = VDESC(vop_bmap);
a.a_vp = vp;
a.a_bn = bn;
a.a_vpp = vpp;
a.a_bnp = bnp;
a.a_runp = runp;
- return (VCALL(vp, VOFFSET(vop_bmap), &a));
+ error = VCALL(vp, VOFFSET(vop_bmap), &a);
+ vngate_debug_vop("VOP_BMAP", vp, -1);
+ return error;
}
const int vop_strategy_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_strategy_args,a_vp),
@@ -1315,14 +1447,18 @@ int
VOP_STRATEGY(struct vnode *vp,
struct buf *bp)
{
struct vop_strategy_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_STRATEGY", vp, 1);
a.a_desc = VDESC(vop_strategy);
a.a_vp = vp;
a.a_bp = bp;
- return (VCALL(vp, VOFFSET(vop_strategy), &a));
+ error = VCALL(vp, VOFFSET(vop_strategy), &a);
+ vngate_debug_vop("VOP_STRATEGY", vp, -1);
+ return error;
}
const int vop_print_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_print_args,a_vp),
@@ -1342,13 +1478,17 @@ const struct vnodeop_desc vop_print_desc
int
VOP_PRINT(struct vnode *vp)
{
struct vop_print_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_PRINT", vp, 1);
a.a_desc = VDESC(vop_print);
a.a_vp = vp;
- return (VCALL(vp, VOFFSET(vop_print), &a));
+ error = VCALL(vp, VOFFSET(vop_print), &a);
+ vngate_debug_vop("VOP_PRINT", vp, -1);
+ return error;
}
const int vop_islocked_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_islocked_args,a_vp),
@@ -1368,13 +1508,17 @@ const struct vnodeop_desc vop_islocked_d
int
VOP_ISLOCKED(struct vnode *vp)
{
struct vop_islocked_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_ISLOCKED", vp, 1);
a.a_desc = VDESC(vop_islocked);
a.a_vp = vp;
- return (VCALL(vp, VOFFSET(vop_islocked), &a));
+ error = VCALL(vp, VOFFSET(vop_islocked), &a);
+ vngate_debug_vop("VOP_ISLOCKED", vp, -1);
+ return error;
}
const int vop_pathconf_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_pathconf_args,a_vp),
@@ -1396,11 +1540,13 @@ VOP_PATHCONF(struct vnode *vp,
int name,
register_t *retval)
{
struct vop_pathconf_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_PATHCONF", vp, 1);
a.a_desc = VDESC(vop_pathconf);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1408,9 +1554,11 @@ VOP_PATHCONF(struct vnode *vp,
panic("vop_pathconf: vp: locked %d, expected %d", islocked_vp, 1);
#endif
a.a_name = name;
a.a_retval = retval;
- return (VCALL(vp, VOFFSET(vop_pathconf), &a));
+ error = VCALL(vp, VOFFSET(vop_pathconf), &a);
+ vngate_debug_vop("VOP_PATHCONF", vp, -1);
+ return error;
}
const int vop_advlock_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_advlock_args,a_vp),
@@ -1434,11 +1582,13 @@ VOP_ADVLOCK(struct vnode *vp,
struct flock *fl,
int flags)
{
struct vop_advlock_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_ADVLOCK", vp, 1);
a.a_desc = VDESC(vop_advlock);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 0;
@@ -1448,9 +1598,11 @@ VOP_ADVLOCK(struct vnode *vp,
a.a_id = id;
a.a_op = op;
a.a_fl = fl;
a.a_flags = flags;
- return (VCALL(vp, VOFFSET(vop_advlock), &a));
+ error = VCALL(vp, VOFFSET(vop_advlock), &a);
+ vngate_debug_vop("VOP_ADVLOCK", vp, -1);
+ return error;
}
const int vop_lease_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_lease_args,a_vp),
@@ -1473,16 +1625,20 @@ VOP_LEASE(struct vnode *vp,
kauth_cred_t cred,
int flag)
{
struct vop_lease_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_LEASE", vp, 1);
a.a_desc = VDESC(vop_lease);
a.a_vp = vp;
a.a_l = l;
a.a_cred = cred;
a.a_flag = flag;
- return (VCALL(vp, VOFFSET(vop_lease), &a));
+ error = VCALL(vp, VOFFSET(vop_lease), &a);
+ vngate_debug_vop("VOP_LEASE", vp, -1);
+ return error;
}
const int vop_whiteout_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_whiteout_args,a_dvp),
@@ -1504,11 +1660,13 @@ VOP_WHITEOUT(struct vnode *dvp,
struct componentname *cnp,
int flags)
{
struct vop_whiteout_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_dvp;
#endif
+ vngate_debug_vop("VOP_WHITEOUT", dvp, 1);
a.a_desc = VDESC(vop_whiteout);
a.a_dvp = dvp;
#ifdef VNODE_LOCKDEBUG
islocked_dvp = (dvp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(dvp) == LK_EXCLUSIVE) : 1;
@@ -1516,9 +1674,11 @@ VOP_WHITEOUT(struct vnode *dvp,
panic("vop_whiteout: dvp: locked %d, expected %d", islocked_dvp, 1);
#endif
a.a_cnp = cnp;
a.a_flags = flags;
- return (VCALL(dvp, VOFFSET(vop_whiteout), &a));
+ error = VCALL(dvp, VOFFSET(vop_whiteout), &a);
+ vngate_debug_vop("VOP_WHITEOUT", dvp, -1);
+ return error;
}
const int vop_getpages_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_getpages_args,a_vp),
@@ -1545,10 +1705,12 @@ VOP_GETPAGES(struct vnode *vp,
int advice,
int flags)
{
struct vop_getpages_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_GETPAGES", vp, 1);
a.a_desc = VDESC(vop_getpages);
a.a_vp = vp;
a.a_offset = offset;
a.a_m = m;
@@ -1556,9 +1718,11 @@ VOP_GETPAGES(struct vnode *vp,
a.a_centeridx = centeridx;
a.a_access_type = access_type;
a.a_advice = advice;
a.a_flags = flags;
- return (VCALL(vp, VOFFSET(vop_getpages), &a));
+ error = VCALL(vp, VOFFSET(vop_getpages), &a);
+ vngate_debug_vop("VOP_GETPAGES", vp, -1);
+ return error;
}
const int vop_putpages_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_putpages_args,a_vp),
@@ -1581,16 +1745,20 @@ VOP_PUTPAGES(struct vnode *vp,
voff_t offhi,
int flags)
{
struct vop_putpages_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
#endif
+ vngate_debug_vop("VOP_PUTPAGES", vp, 1);
a.a_desc = VDESC(vop_putpages);
a.a_vp = vp;
a.a_offlo = offlo;
a.a_offhi = offhi;
a.a_flags = flags;
- return (VCALL(vp, VOFFSET(vop_putpages), &a));
+ error = VCALL(vp, VOFFSET(vop_putpages), &a);
+ vngate_debug_vop("VOP_PUTPAGES", vp, -1);
+ return error;
}
const int vop_closeextattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_closeextattr_args,a_vp),
@@ -1613,11 +1781,13 @@ VOP_CLOSEEXTATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_closeextattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_CLOSEEXTATTR", vp, 1);
a.a_desc = VDESC(vop_closeextattr);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1626,9 +1796,11 @@ VOP_CLOSEEXTATTR(struct vnode *vp,
#endif
a.a_commit = commit;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_closeextattr), &a));
+ error = VCALL(vp, VOFFSET(vop_closeextattr), &a);
+ vngate_debug_vop("VOP_CLOSEEXTATTR", vp, -1);
+ return error;
}
const int vop_getextattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_getextattr_args,a_vp),
@@ -1654,11 +1826,13 @@ VOP_GETEXTATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_getextattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_GETEXTATTR", vp, 1);
a.a_desc = VDESC(vop_getextattr);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1670,9 +1844,11 @@ VOP_GETEXTATTR(struct vnode *vp,
a.a_uio = uio;
a.a_size = size;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_getextattr), &a));
+ error = VCALL(vp, VOFFSET(vop_getextattr), &a);
+ vngate_debug_vop("VOP_GETEXTATTR", vp, -1);
+ return error;
}
const int vop_listextattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_listextattr_args,a_vp),
@@ -1697,11 +1873,13 @@ VOP_LISTEXTATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_listextattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_LISTEXTATTR", vp, 1);
a.a_desc = VDESC(vop_listextattr);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1712,9 +1890,11 @@ VOP_LISTEXTATTR(struct vnode *vp,
a.a_uio = uio;
a.a_size = size;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_listextattr), &a));
+ error = VCALL(vp, VOFFSET(vop_listextattr), &a);
+ vngate_debug_vop("VOP_LISTEXTATTR", vp, -1);
+ return error;
}
const int vop_openextattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_openextattr_args,a_vp),
@@ -1736,11 +1916,13 @@ VOP_OPENEXTATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_openextattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_OPENEXTATTR", vp, 1);
a.a_desc = VDESC(vop_openextattr);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1748,9 +1930,11 @@ VOP_OPENEXTATTR(struct vnode *vp,
panic("vop_openextattr: vp: locked %d, expected %d", islocked_vp, 1);
#endif
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_openextattr), &a));
+ error = VCALL(vp, VOFFSET(vop_openextattr), &a);
+ vngate_debug_vop("VOP_OPENEXTATTR", vp, -1);
+ return error;
}
const int vop_deleteextattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_deleteextattr_args,a_vp),
@@ -1774,11 +1958,13 @@ VOP_DELETEEXTATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_deleteextattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_DELETEEXTATTR", vp, 1);
a.a_desc = VDESC(vop_deleteextattr);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1788,9 +1974,11 @@ VOP_DELETEEXTATTR(struct vnode *vp,
a.a_attrnamespace = attrnamespace;
a.a_name = name;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_deleteextattr), &a));
+ error = VCALL(vp, VOFFSET(vop_deleteextattr), &a);
+ vngate_debug_vop("VOP_DELETEEXTATTR", vp, -1);
+ return error;
}
const int vop_setextattr_vp_offsets[] = {
VOPARG_OFFSETOF(struct vop_setextattr_args,a_vp),
@@ -1815,11 +2003,13 @@ VOP_SETEXTATTR(struct vnode *vp,
kauth_cred_t cred,
struct lwp *l)
{
struct vop_setextattr_args a;
+ int error;
#ifdef VNODE_LOCKDEBUG
int islocked_vp;
#endif
+ vngate_debug_vop("VOP_SETEXTATTR", vp, 1);
a.a_desc = VDESC(vop_setextattr);
a.a_vp = vp;
#ifdef VNODE_LOCKDEBUG
islocked_vp = (vp->v_flag & VLOCKSWORK) ? (VOP_ISLOCKED(vp) == LK_EXCLUSIVE) : 1;
@@ -1830,9 +2020,11 @@ VOP_SETEXTATTR(struct vnode *vp,
a.a_name = name;
a.a_uio = uio;
a.a_cred = cred;
a.a_l = l;
- return (VCALL(vp, VOFFSET(vop_setextattr), &a));
+ error = VCALL(vp, VOFFSET(vop_setextattr), &a);
+ vngate_debug_vop("VOP_SETEXTATTR", vp, -1);
+ return error;
}
/* End of special cases. */
--xHFwDpU9dbj6ez1V
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename="vprint.diff"
Index: sys/sys/systm.h
===================================================================
RCS file: /cvsroot/src/sys/sys/systm.h,v
retrieving revision 1.187
diff -p -u -4 -r1.187 systm.h
--- sys/sys/systm.h 7 Jun 2006 22:34:18 -0000 1.187
+++ sys/sys/systm.h 13 Jun 2006 12:45:34 -0000
@@ -202,8 +202,9 @@ void aprint_debug(const char *, ...)
int aprint_get_error_count(void);
void printf_nolog(const char *, ...)
__attribute__((__format__(__printf__,1,2)));
+void vprintf_nolog(const char *, _BSD_VA_LIST_);
void printf(const char *, ...)
__attribute__((__format__(__printf__,1,2)));
int sprintf(char *, const char *, ...)
Index: sys/kern/subr_prf.c
===================================================================
RCS file: /cvsroot/src/sys/kern/subr_prf.c,v
retrieving revision 1.102
diff -p -u -4 -r1.102 subr_prf.c
--- sys/kern/subr_prf.c 28 Jan 2006 14:37:31 -0000 1.102
+++ sys/kern/subr_prf.c 13 Jun 2006 12:45:27 -0000
@@ -724,8 +724,24 @@ printf_nolog(const char *fmt, ...)
KPRINTF_MUTEX_EXIT(s);
}
/*
+ * vprintf_nolog: Like vprintf(), but does not send message to the log.
+ */
+
+void
+vprintf_nolog(const char *fmt, va_list ap)
+{
+ int s;
+
+ KPRINTF_MUTEX_ENTER(s);
+
+ kprintf(fmt, TOCONS, NULL, NULL, ap);
+
+ KPRINTF_MUTEX_EXIT(s);
+}
+
+/*
* normal kernel printf functions: printf, vprintf, snprintf, vsnprintf
*/
/*
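
vprintf_nolog() is the va_list counterpart of printf_nolog(): output goes to
the console but is kept out of the kernel message log.  A varargs consumer
would look like this (sketch; the function name is invented, only
vprintf_nolog() itself is part of the patch):

#include <sys/systm.h>
#include <machine/stdarg.h>

void
debug_printf_nolog(const char *fmt, ...)
{
        va_list ap;

        va_start(ap, fmt);
        vprintf_nolog(fmt, ap); /* console only, bypasses the log */
        va_end(ap);
}
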
--xHFwDpU9dbj6ez1V--