Subject: Important changes to UVM, for MP support
To: tech-kern@netbsd.org
From: Jason Thorpe <thorpej@nas.nasa.gov>
List: tech-kern
Date: 05/25/1999 10:40:22
Hi folks...

Some time ago, Chuck Cranor discovered that sometimes pmap_kremove()
was being called for mappings that were entered with pmap_enter(), which
left stale pv_entrys in the table and caused untold lossage.  The
solution at the time was to make pmap_kremove() deal with these
mappings.  This, BTW, is what caused the skewed pmap statistics for
e.g. the reaper, pagedaemon, etc.

While doing some Alpha pmap work recently, mostly geared towards solving
some problems associated with MP support, I found that I needed to track
down and fix the cause of that problem, because otherwise the locking
protocol used in the pmap module could not work.

Basically, the problem stems from the fact that the kernel pmap can be
operated on in an interrupt context, e.g. by the kernel malloc() or by
allocating mbuf clusters.  This means that you need to block interrupts
from which memory allocation can occur before asserting the spin lock
on the kernel pmap.  If you don't do this, then when an interrupt occurs
that causes the kernel pmap to be locked, you have deadlock due to recursion.
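
Roughly, the ordering has to look like this (just a sketch; splimp()
and the lock member name are illustrative, not the actual Alpha code):

	int s;

	s = splimp();		/* block interrupts that can allocate memory */
	simple_lock(&pmap_kernel()->pm_lock);	/* now safe to spin */

	/* ... operate on the kernel pmap ... */

	simple_unlock(&pmap_kernel()->pm_lock);
	splx(s);

If the spl is raised after taking the simple lock instead of before, an
interrupt can sneak in, call malloc(), try to lock the kernel pmap
again, and spin forever.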

This part I dealt with in the Alpha pmap module with my PMAP_LOCK() and
PMAP_UNLOCK() macros.  However, I ran into another nasty problem: things
which traverse the PV list for a page need to lock in a different order
(PV->pmap), so a mutex on the pmap module is used to prevent deadlock in
this case.  BUT... due to the fact that pmap_kremove() needs to be able
to handle PV list mappings, it needs to acquire the pmap->PV direction
of the mutex.  If pmap_kremove() is called in an interrupt context,
you have deadlock!  Even worse, the pmap mutex is a sleep lock, and
just can't be used in an interrupt context.
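
To make the two directions concrete, here is a sketch of the PV-side
traversal (all of the names here are illustrative, not the actual
Alpha code):

	/*
	 * PV->pmap direction: take the pmap module mutex, then the
	 * pmap of each mapping found on the page's PV list.
	 */
	PMAP_HEAD_TO_MAP_LOCK();	/* module mutex; a sleep lock */
	for (pv = pvh->pvh_list; pv != NULL; pv = pv->pv_next) {
		simple_lock(&pv->pv_pmap->pm_lock);
		/* ... remove or downgrade this mapping ... */
		simple_unlock(&pv->pv_pmap->pm_lock);
	}
	PMAP_HEAD_TO_MAP_UNLOCK();

pmap_enter() and friends lock in the other order (pmap, then PV), which
is why the module mutex exists -- and why nothing that can run in an
interrupt context may ever need to take it.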

Now, unfortunately, just nuking the pmap_k* functions doesn't completely
solve the problem, because now you need to block interrupts in a LOT more
places, which could have a nasty impact on system performance.

In my mind, this was a perfect application for the pmap_k*() functions,
if their definition was refined a bit.  This also addresses some of the
more annoying aspects of these routines, e.g. their interaction with
modified/referenced emulation on platforms which need to do that.

So, here is my redefinition of pmap_kenter() and pmap_kremove().

First of all, I would like to remove pmap_kenter_pgs().  The only real
place it was used to map multiple pages was in uvm_pagermapin().  However,
that was a major source of the pmap_kenter/pmap_remove inconsistency, so
I changed it to use the regular old pmap_enter().
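
The uvm_pagermapin() change boils down to a loop like this (a condensed
sketch, not the exact diff; "pps" and "npages" stand in for the pager's
page array):

	int i;

	for (i = 0; i < npages; i++) {
		pmap_enter(vm_map_pmap(pager_map),
		    kva + (i << PAGE_SHIFT), VM_PAGE_TO_PHYS(pps[i]),
		    UVM_PROT_ALL, TRUE, 0);	/* wired mapping */
	}

Since these mappings are entered with pmap_enter(), they get torn down
with pmap_remove(), and the P->V tracking stays consistent.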

	void pmap_kenter(va, pa, prot)

	Enter a "va -> pa" mapping with protection "prot" into the
	kernel pmap.  The mapping will be wired; access may not
	cause a page fault (including a fault for modified/referenced
	emulation).  The physical page mapped by this mapping
	will not be managed; modified and referenced information will
	not be tracked, and no physical->virtual tracking will be
	performed, such that pmap_page_protect(), etc. will not affect
	this mapping.

	pmap_kenter() MUST be used to enter mappings for virtual addresses
	which may be mapped and unmapped from an interrupt context,
	i.e. pages which are owned by kmem_object or mb_object (and
	are thus mapped by kmem_map or mb_map).  In addition, pages
	which are not managed by the VM system (RAM not in the managed
	set of physical memory and/or device memory) may be mapped with
	pmap_kenter().  Any other use of pmap_kenter() is an error,
	and behavior is undefined.


	void pmap_kremove(va, len)

	Remove mappings from the kernel pmap starting at "va" for "len"
	bytes.  The mappings must have been previously entered with
	pmap_kenter().  Any other use of pmap_kremove() is an error,
	and behavior is undefined.
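
As a usage sketch (not code from the tree), something like the mbuf
cluster allocator would pair them this way; "kva" and "pa" are assumed
to come from the caller:

	/* enter a wired, unmanaged mapping; safe at interrupt time */
	pmap_kenter(kva, pa, VM_PROT_READ | VM_PROT_WRITE);

	/* ... use the memory; no faults, no mod/ref tracking ... */

	/* tear it down; also safe at interrupt time */
	pmap_kremove(kva, PAGE_SIZE);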

With that definition in mind, it is possible to clean up the usage of
pmap_enter() vs. pmap_kenter()/pmap_remove() vs. pmap_kremove().

What I have done is define a new kernel object type: "intrsafe" objects.
kmem_object and mb_object are intrsafe.  In uvm_km.c, pmap_kenter() is
used only if the object which owns the pages being mapped is an intrsafe
object.  Similarly, in uvm_unmap_remove(), pmap_kremove() is used only
for intrsafe objects.
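
Condensed from the appended uvm_km.c change (PMAP_NEW case shown), the
decision in the allocation loop is simply:

	if (UVM_OBJ_IS_INTRSAFE_OBJECT(obj))
		pmap_kenter_pa(loopva, VM_PAGE_TO_PHYS(pg), VM_PROT_ALL);
	else
		pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
		    UVM_PROT_ALL, TRUE, 0);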

Since intrsafe objects are never involved in paging (their mappings are
always wired), a new routine uvm_km_pgremove_intrsafe() has been added.
It performs sanity checks specific to intrsafe objects, and skips all of
the steps that would e.g. free the swap space associated with the page
(since there will never be any).

I've done some testing of these changes on an AlphaStation 500; everything
is working fine, and I no longer see *any* of the enter/kremove inconsistency.
These changes also allow me to fix the locking protocol problems in the
Alpha pmap, which is important to me, since I am working on MP support on
NetBSD/alpha.  (It will also be important for MP support on NetBSD/i386.)

The changes that implement the "intrsafe" part of this are appended below;
they are fairly simple and straightforward, and will be committed shortly.
I will update all of the PMAP_NEW ports (Alpha, i386, pc532, VAX), fix
up the straggling calls to the deprecated pmap_kenter_pgs(), and then
make that pmap interface change.

There are more changes that need to be made, but I will be identifying
them and making them incrementally.  FWIW, one of them is somewhat similar
to this object locking issue, namely the locking of kmem_map, mb_map, and
any other map that is usable in an interrupt context.  Expect a message
on this one within a few days :-)

        -- Jason R. Thorpe <thorpej@nas.nasa.gov>

Index: uvm_km.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_km.c,v
retrieving revision 1.23
diff -c -r1.23 uvm_km.c
*** uvm_km.c	1999/04/11 04:04:11	1.23
--- uvm_km.c	1999/05/25 01:32:21
***************
*** 164,170 ****
   */
  
  static int uvm_km_get __P((struct uvm_object *, vaddr_t, 
! 													 vm_page_t *, int *, int, vm_prot_t, int, int));
  /*
   * local data structues
   */
--- 164,171 ----
   */
  
  static int uvm_km_get __P((struct uvm_object *, vaddr_t, 
! 	vm_page_t *, int *, int, vm_prot_t, int, int));
! 
  /*
   * local data structues
   */
***************
*** 424,445 ****
  	uvm.kernel_object = uao_create(VM_MAX_KERNEL_ADDRESS -
  				 VM_MIN_KERNEL_ADDRESS, UAO_FLAG_KERNOBJ);
  
! 	/* kmem_object: for malloc'd memory (wired, protected by splimp) */
  	simple_lock_init(&kmem_object_store.vmobjlock);
  	kmem_object_store.pgops = &km_pager;
  	TAILQ_INIT(&kmem_object_store.memq);
  	kmem_object_store.uo_npages = 0;
  	/* we are special.  we never die */
! 	kmem_object_store.uo_refs = UVM_OBJ_KERN; 
  	uvmexp.kmem_object = &kmem_object_store;
  
! 	/* mb_object: for mbuf memory (always wired, protected by splimp) */
  	simple_lock_init(&mb_object_store.vmobjlock);
  	mb_object_store.pgops = &km_pager;
  	TAILQ_INIT(&mb_object_store.memq);
  	mb_object_store.uo_npages = 0;
  	/* we are special.  we never die */
! 	mb_object_store.uo_refs = UVM_OBJ_KERN; 
  	uvmexp.mb_object = &mb_object_store;
  
  	/*
--- 425,454 ----
  	uvm.kernel_object = uao_create(VM_MAX_KERNEL_ADDRESS -
  				 VM_MIN_KERNEL_ADDRESS, UAO_FLAG_KERNOBJ);
  
! 	/*
! 	 * kmem_object: for use by the kernel malloc().  Memory is always
! 	 * wired, and this object (and the kmem_map) can be accessed at
! 	 * interrupt time.
! 	 */
  	simple_lock_init(&kmem_object_store.vmobjlock);
  	kmem_object_store.pgops = &km_pager;
  	TAILQ_INIT(&kmem_object_store.memq);
  	kmem_object_store.uo_npages = 0;
  	/* we are special.  we never die */
! 	kmem_object_store.uo_refs = UVM_OBJ_KERN_INTRSAFE; 
  	uvmexp.kmem_object = &kmem_object_store;
  
! 	/*
! 	 * mb_object: for mbuf cluster pages on platforms which use the
! 	 * mb_map.  Memory is always wired, and this object (and the mb_map)
! 	 * can be accessed at interrupt time.
! 	 */
  	simple_lock_init(&mb_object_store.vmobjlock);
  	mb_object_store.pgops = &km_pager;
  	TAILQ_INIT(&mb_object_store.memq);
  	mb_object_store.uo_npages = 0;
  	/* we are special.  we never die */
! 	mb_object_store.uo_refs = UVM_OBJ_KERN_INTRSAFE; 
  	uvmexp.mb_object = &mb_object_store;
  
  	/*
***************
*** 538,552 ****
  	struct uvm_object *uobj;
  	vaddr_t start, end;
  {
! 	boolean_t by_list, is_aobj;
  	struct vm_page *pp, *ppnext;
  	vaddr_t curoff;
  	UVMHIST_FUNC("uvm_km_pgremove"); UVMHIST_CALLED(maphist);
  
  	simple_lock(&uobj->vmobjlock);		/* lock object */
  
! 	/* is uobj an aobj? */
! 	is_aobj = uobj->pgops == &aobj_pager;
  
  	/* choose cheapest traversal */
  	by_list = (uobj->uo_npages <=
--- 547,563 ----
  	struct uvm_object *uobj;
  	vaddr_t start, end;
  {
! 	boolean_t by_list;
  	struct vm_page *pp, *ppnext;
  	vaddr_t curoff;
  	UVMHIST_FUNC("uvm_km_pgremove"); UVMHIST_CALLED(maphist);
  
  	simple_lock(&uobj->vmobjlock);		/* lock object */
  
! #ifdef DIAGNOSTIC
! 	if (uobj->pgops != &aobj_pager)
! 		panic("uvm_km_pgremove: object %p not an aobj", uobj);
! #endif
  
  	/* choose cheapest traversal */
  	by_list = (uobj->uo_npages <=
***************
*** 564,589 ****
  
  		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
  		    pp->flags & PG_BUSY, 0, 0);
  		/* now do the actual work */
! 		if (pp->flags & PG_BUSY)
  			/* owner must check for this when done */
  			pp->flags |= PG_RELEASED;
! 		else {
! 			pmap_page_protect(PMAP_PGARG(pp), VM_PROT_NONE);
  
  			/*
! 			 * if this kernel object is an aobj, free the swap slot.
  			 */
- 			if (is_aobj) {
- 				uao_dropswap(uobj, curoff >> PAGE_SHIFT);
- 			}
- 
  			uvm_lock_pageq();
  			uvm_pagefree(pp);
  			uvm_unlock_pageq();
  		}
  		/* done */
- 
  	}
  	simple_unlock(&uobj->vmobjlock);
  	return;
--- 575,598 ----
  
  		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
  		    pp->flags & PG_BUSY, 0, 0);
+ 
  		/* now do the actual work */
! 		if (pp->flags & PG_BUSY) {
  			/* owner must check for this when done */
  			pp->flags |= PG_RELEASED;
! 		} else {
! 			/* free the swap slot... */
! 			uao_dropswap(uobj, curoff >> PAGE_SHIFT);
  
  			/*
! 			 * ...and free the page; note it may be on the
! 			 * active or inactive queues.
  			 */
  			uvm_lock_pageq();
  			uvm_pagefree(pp);
  			uvm_unlock_pageq();
  		}
  		/* done */
  	}
  	simple_unlock(&uobj->vmobjlock);
  	return;
***************
*** 591,597 ****
  loop_by_list:
  
  	for (pp = uobj->memq.tqh_first ; pp != NULL ; pp = ppnext) {
- 
  		ppnext = pp->listq.tqe_next;
  		if (pp->offset < start || pp->offset >= end) {
  			continue;
--- 600,605 ----
***************
*** 599,624 ****
  
  		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
  		    pp->flags & PG_BUSY, 0, 0);
  		/* now do the actual work */
! 		if (pp->flags & PG_BUSY)
  			/* owner must check for this when done */
  			pp->flags |= PG_RELEASED;
! 		else {
! 			pmap_page_protect(PMAP_PGARG(pp), VM_PROT_NONE);
  
  			/*
! 			 * if this kernel object is an aobj, free the swap slot.
  			 */
- 			if (is_aobj) {
- 				uao_dropswap(uobj, pp->offset >> PAGE_SHIFT);
- 			}
- 
  			uvm_lock_pageq();
  			uvm_pagefree(pp);
  			uvm_unlock_pageq();
  		}
  		/* done */
  
  	}
  	simple_unlock(&uobj->vmobjlock);
  	return;
--- 607,717 ----
  
  		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
  		    pp->flags & PG_BUSY, 0, 0);
+ 
  		/* now do the actual work */
! 		if (pp->flags & PG_BUSY) {
  			/* owner must check for this when done */
  			pp->flags |= PG_RELEASED;
! 		} else {
! 			/* free the swap slot... */
! 			uao_dropswap(uobj, pp->offset >> PAGE_SHIFT);
  
  			/*
! 			 * ...and free the page; note it may be on the
! 			 * active or inactive queues.
  			 */
  			uvm_lock_pageq();
  			uvm_pagefree(pp);
  			uvm_unlock_pageq();
  		}
  		/* done */
+ 	}
+ 	simple_unlock(&uobj->vmobjlock);
+ 	return;
+ }
+ 
+ 
+ /*
+  * uvm_km_pgremove_intrsafe: like uvm_km_pgremove(), but for "intrsafe"
+  *    objects
+  *
+  * => when you unmap a part of anonymous kernel memory you want to toss
+  *    the pages right away.    (this gets called from uvm_unmap_...).
+  * => none of the pages will ever be busy, and none of them will ever
+  *    be on the active or inactive queues (because these objects are
+  *    never allowed to "page").
+  */
+ 
+ void
+ uvm_km_pgremove_intrsafe(uobj, start, end)
+ 	struct uvm_object *uobj;
+ 	vaddr_t start, end;
+ {
+ 	boolean_t by_list;
+ 	struct vm_page *pp, *ppnext;
+ 	vaddr_t curoff;
+ 	UVMHIST_FUNC("uvm_km_pgremove_intrsafe"); UVMHIST_CALLED(maphist);
+ 
+ 	simple_lock(&uobj->vmobjlock);		/* lock object */
+ 
+ #ifdef DIAGNOSTIC
+ 	if (UVM_OBJ_IS_INTRSAFE_OBJECT(uobj) == 0)
+ 		panic("uvm_km_pgremove_intrsafe: object %p not intrsafe", uobj);
+ #endif
+ 
+ 	/* choose cheapest traversal */
+ 	by_list = (uobj->uo_npages <=
+ 	     ((end - start) >> PAGE_SHIFT) * UKM_HASH_PENALTY);
+  
+ 	if (by_list)
+ 		goto loop_by_list;
+ 
+ 	/* by hash */
+ 
+ 	for (curoff = start ; curoff < end ; curoff += PAGE_SIZE) {
+ 		pp = uvm_pagelookup(uobj, curoff);
+ 		if (pp == NULL)
+ 			continue;
+ 
+ 		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
+ 		    pp->flags & PG_BUSY, 0, 0);
+ #ifdef DIAGNOSTIC
+ 		if (pp->flags & PG_BUSY)
+ 			panic("uvm_km_pgremove_intrsafe: busy page");
+ 		if (pp->pqflags & PQ_ACTIVE)
+ 			panic("uvm_km_pgremove_intrsafe: active page");
+ 		if (pp->pqflags & PQ_INACTIVE)
+ 			panic("uvm_km_pgremove_intrsafe: inactive page");
+ #endif
+ 
+ 		/* free the page */
+ 		uvm_pagefree(pp);
+ 	}
+ 	simple_unlock(&uobj->vmobjlock);
+ 	return;
+ 
+ loop_by_list:
  
+ 	for (pp = uobj->memq.tqh_first ; pp != NULL ; pp = ppnext) {
+ 		ppnext = pp->listq.tqe_next;
+ 		if (pp->offset < start || pp->offset >= end) {
+ 			continue;
+ 		}
+ 
+ 		UVMHIST_LOG(maphist,"  page 0x%x, busy=%d", pp,
+ 		    pp->flags & PG_BUSY, 0, 0);
+ 
+ #ifdef DIAGNOSTIC
+ 		if (pp->flags & PG_BUSY)
+ 			panic("uvm_km_pgremove_intrsafe: busy page");
+ 		if (pp->pqflags & PQ_ACTIVE)
+ 			panic("uvm_km_pgremove_intrsafe: active page");
+ 		if (pp->pqflags & PQ_INACTIVE)
+ 			panic("uvm_km_pgremove_intrsafe: inactive page");
+ #endif
+ 
+ 		/* free the page */
+ 		uvm_pagefree(pp);
  	}
  	simple_unlock(&uobj->vmobjlock);
  	return;
***************
*** 728,739 ****
  		 * (because if pmap_enter wants to allocate out of kmem_object
  		 * it will need to lock it itself!)
  		 */
  #if defined(PMAP_NEW)
! 		pmap_kenter_pa(loopva, VM_PAGE_TO_PHYS(pg), VM_PROT_ALL);
  #else
! 		pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
! 		    UVM_PROT_ALL, TRUE, 0);
  #endif
  		loopva += PAGE_SIZE;
  		offset += PAGE_SIZE;
  		size -= PAGE_SIZE;
--- 821,838 ----
  		 * (because if pmap_enter wants to allocate out of kmem_object
  		 * it will need to lock it itself!)
  		 */
+ 		if (UVM_OBJ_IS_INTRSAFE_OBJECT(obj)) {
  #if defined(PMAP_NEW)
! 			pmap_kenter_pa(loopva, VM_PAGE_TO_PHYS(pg),
! 			    VM_PROT_ALL);
  #else
! 			pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
! 			    UVM_PROT_ALL, TRUE, 0);
  #endif
+ 		} else {
+ 			pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
+ 			    UVM_PROT_ALL, TRUE, 0);
+ 		}
  		loopva += PAGE_SIZE;
  		offset += PAGE_SIZE;
  		size -= PAGE_SIZE;
***************
*** 860,872 ****
  			continue;
  		}
  		
! 		/* map it in */
! #if defined(PMAP_NEW)
! 		pmap_kenter_pa(loopva, VM_PAGE_TO_PHYS(pg), UVM_PROT_ALL);
! #else
  		pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
  		    UVM_PROT_ALL, TRUE, 0);
! #endif
  		loopva += PAGE_SIZE;
  		offset += PAGE_SIZE;
  		size -= PAGE_SIZE;
--- 959,971 ----
  			continue;
  		}
  		
! 		/*
! 		 * map it in; note we're never called with an intrsafe
! 		 * object, so we always use regular old pmap_enter().
! 		 */
  		pmap_enter(map->pmap, loopva, VM_PAGE_TO_PHYS(pg),
  		    UVM_PROT_ALL, TRUE, 0);
! 
  		loopva += PAGE_SIZE;
  		offset += PAGE_SIZE;
  		size -= PAGE_SIZE;
Index: uvm_km.h
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_km.h,v
retrieving revision 1.7
diff -c -r1.7 uvm_km.h
*** uvm_km.h	1999/03/25 18:48:52	1.7
--- uvm_km.h	1999/05/25 01:32:21
***************
*** 47,51 ****
--- 47,52 ----
  
  void uvm_km_init __P((vaddr_t, vaddr_t));
  void uvm_km_pgremove __P((struct uvm_object *, vaddr_t, vaddr_t));
+ void uvm_km_pgremove_intrsafe __P((struct uvm_object *, vaddr_t, vaddr_t));
  
  #endif /* _UVM_UVM_KM_H_ */
Index: uvm_map.c
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_map.c,v
retrieving revision 1.42
diff -c -r1.42 uvm_map.c
*** uvm_map.c	1999/05/25 00:09:00	1.42
--- uvm_map.c	1999/05/25 01:32:21
***************
*** 1012,1051 ****
  			 *
  			 * uvm_km_pgremove currently does the following: 
  			 *   for pages in the kernel object in range: 
! 			 *     - pmap_page_protect them out of all pmaps
  			 *     - uvm_pagefree the page
  			 *
! 			 * note that in case [1] the pmap_page_protect call
! 			 * in uvm_km_pgremove may very well be redundant
! 			 * because we have already removed the mappings
! 			 * beforehand with pmap_remove (or pmap_kremove).
! 			 * in the PMAP_NEW case, the pmap_page_protect call
! 			 * may not do anything, since PMAP_NEW allows the
! 			 * kernel to enter/remove kernel mappings without
! 			 * bothing to keep track of the mappings (e.g. via
! 			 * pv_entry lists).    XXX: because of this, in the
! 			 * future we should consider removing the
! 			 * pmap_page_protect from uvm_km_pgremove some time
! 			 * in the future.
  			 */
  
  			/*
! 			 * remove mappings from pmap
  			 */
  #if defined(PMAP_NEW)
! 			pmap_kremove(entry->start, len);
  #else
! 			pmap_remove(pmap_kernel(), entry->start,
! 			    entry->start+len);
  #endif
! 
! 			/*
! 			 * remove pages from a kernel object (offsets are
! 			 * always relative to vm_map_min(kernel_map)).
! 			 */
! 			uvm_km_pgremove(entry->object.uvm_obj, 
! 			entry->start - vm_map_min(kernel_map),
! 			entry->end - vm_map_min(kernel_map));
  
  			/*
  			 * null out kernel_object reference, we've just
--- 1012,1046 ----
  			 *
  			 * uvm_km_pgremove currently does the following: 
  			 *   for pages in the kernel object in range: 
! 			 *     - drops the swap slot
  			 *     - uvm_pagefree the page
  			 *
! 			 * note there is version of uvm_km_pgremove() that
! 			 * is used for "intrsafe" objects.
  			 */
  
  			/*
! 			 * remove mappings from pmap and drop the pages
! 			 * from the object.  offsets are always relative
! 			 * to vm_map_min(kernel_map).
  			 */
+ 			if (UVM_OBJ_IS_INTRSAFE_OBJECT(entry->object.uvm_obj)) {
  #if defined(PMAP_NEW)
! 				pmap_kremove(entry->start, len);
  #else
! 				pmap_remove(pmap_kernel(), entry->start,
! 				    entry->start + len);
  #endif
! 				uvm_km_pgremove_intrsafe(entry->object.uvm_obj,
! 				    entry->start - vm_map_min(kernel_map),
! 				    entry->end - vm_map_min(kernel_map));
! 			} else {
! 				pmap_remove(pmap_kernel(), entry->start,
! 				    entry->start + len);
! 				uvm_km_pgremove(entry->object.uvm_obj,
! 				    entry->start - vm_map_min(kernel_map),
! 				    entry->end - vm_map_min(kernel_map));
! 			}
  
  			/*
  			 * null out kernel_object reference, we've just
Index: uvm_object.h
===================================================================
RCS file: /cvsroot/src/sys/uvm/uvm_object.h,v
retrieving revision 1.7
diff -c -r1.7 uvm_object.h
*** uvm_object.h	1999/05/25 00:09:01	1.7
--- uvm_object.h	1999/05/25 01:32:21
***************
*** 64,72 ****
   * for kernel objects... when a kernel object is unmapped we always want
   * to free the resources associated with the mapping.   UVM_OBJ_KERN
   * allows us to decide which type of unmapping we want to do.
   */
! #define UVM_OBJ_KERN	(-2)
  
! #define	UVM_OBJ_IS_KERN_OBJECT(uobj)	((uobj)->uo_refs == UVM_OBJ_KERN)
  
  #endif /* _UVM_UVM_OBJECT_H_ */
--- 64,85 ----
   * for kernel objects... when a kernel object is unmapped we always want
   * to free the resources associated with the mapping.   UVM_OBJ_KERN
   * allows us to decide which type of unmapping we want to do.
+  *
+  * in addition, we have kernel objects which may be used in an
+  * interrupt context.  these objects get their mappings entered
+  * with pmap_kenter*() and removed with pmap_kremove(), which
+  * are safe to call in interrupt context, and must be used ONLY
+  * for wired kernel mappings in these objects and their associated
+  * maps.
   */
! #define UVM_OBJ_KERN		(-2)
! #define	UVM_OBJ_KERN_INTRSAFE	(-3)
  
! #define	UVM_OBJ_IS_KERN_OBJECT(uobj)					\
! 	((uobj)->uo_refs == UVM_OBJ_KERN ||				\
! 	 (uobj)->uo_refs == UVM_OBJ_KERN_INTRSAFE)
! 
! #define	UVM_OBJ_IS_INTRSAFE_OBJECT(uobj)				\
! 	((uobj)->uo_refs == UVM_OBJ_KERN_INTRSAFE)
  
  #endif /* _UVM_UVM_OBJECT_H_ */