Subject: Annoying bug in pmap fixed.
To: None <port-arm32@netbsd.org>
From: Jason Thorpe <thorpej@nas.nasa.gov>
List: port-arm32
Date: 01/26/1999 01:15:33
For any of you out there who have encountered the following panic
message on more than one occasion:

    panic: pmap_enter: No more physical pages

I have just committed a fix for it.  Instead of rolling over and
dying unconditionally, the code now waits for the pagedaemon to
indicate that more pages are available, provided the pmap is a user pmap.
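
In outline, the allocation path in pmap_enter() now looks like this
(a condensed sketch of the UVM case from the patch below; the non-UVM
case calls vm_page_alloc1() and vm_wait() instead):

	for (;;) {
		page = uvm_pagealloc(NULL, 0, NULL);
		if (page != NULL)
			break;
		/*
		 * No page available.  The kernel pmap still panics,
		 * since we might not have a valid thread context;
		 * a user pmap sleeps until the pagedaemon frees up
		 * some pages.
		 */
		if (pmap == pmap_kernel())
			panic("pmap_enter: kernel pmap and no more free pages");
		uvm_wait("pmap_enter");
	}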

This fixes the annoying failure that hit my diskless Shark every
time I did a "make build".  The panic would occur even though a
quick examination showed that, while there were only 1 or 2 free
pages, there were over 4100 inactive pages.  Many of those pages
could have been moved to the free list once the pagedaemon cleaned
them, but the pagedaemon wasn't able to keep up with demand (an
I/O bottleneck; the machine pages over NFS).

To test the fix, I continued the "make build" while also running an
8-job parallel make of the kernel.  I quickly depleted all available
RAM, and the kernel spewed diagnostics about this fact.  However, the
kernel did not die, and once I ^C'd the kernel build, my "make build"
continued on like nothing had happened.  I was satisfied.  (That,
and I saw the diagnostic message I had put in the pmap change fly by
several times :-)

The fix will appear in tomorrow's SUP scan, but I'll include it
below for the impatient.

        -- Jason R. Thorpe <thorpej@nas.nasa.gov>


Index: pmap.c
===================================================================
RCS file: /cvsroot/src/sys/arch/arm32/arm32/pmap.c,v
retrieving revision 1.39
diff -c -r1.39 pmap.c
*** pmap.c	1999/01/17 06:58:16	1.39
--- pmap.c	1999/01/26 08:59:20
***************
*** 2057,2070 ****
  		vm_offset_t l2pa;
  
  		/* Allocate a page table */
  #if defined(UVM)
! 		page = uvm_pagealloc(NULL, 0, NULL);
  #else
! 		page = vm_page_alloc1();
  #endif
! 		/* XXX should try and free up memory if alloc fails */
! 		if (page == NULL)
! 			panic("pmap_enter: No more physical pages\n");
  
  		/* Wire this page table into the L1 */
  		l2pa = VM_PAGE_TO_PHYS(page);
--- 2057,2095 ----
  		vm_offset_t l2pa;
  
  		/* Allocate a page table */
+ 		for (;;) {
  #if defined(UVM)
! 			page = uvm_pagealloc(NULL, 0, NULL);
  #else
! 			page = vm_page_alloc1();
  #endif
! 			if (page != NULL)
! 				break;
! 			
! 			/*
! 			 * No page available.  If we're the kernel
! 			 * pmap, we die, since we might not have
! 			 * a valid thread context.  For user pmaps,
! 			 * we assume that we _do_ have a valid thread
! 			 * context, so we wait here for the pagedaemon
! 			 * to free up some pages.
! 			 *
! 			 * XXX THE VM CODE IS PROBABLY HOLDING LOCKS
! 			 * XXX RIGHT NOW, BUT ONLY ON OUR PARENT VM_MAP
! 			 * XXX SO THIS IS PROBABLY SAFE.  In any case,
! 			 * XXX other pmap modules claim it is safe to
! 			 * XXX sleep here if it's a user pmap.
! 			 */
! 			if (pmap == pmap_kernel())
! 				panic("pmap_enter: kernel pmap and no more free pages");
! 			else {
! #if defined(UVM)
! 				uvm_wait("pmap_enter");
! #else
! 				vm_wait("pmap_enter");
! #endif
! 			}
! 		}
  
  		/* Wire this page table into the L1 */
  		l2pa = VM_PAGE_TO_PHYS(page);