Source-Changes-HG archive


[src/trunk]: src/sys/uvm - Move the comment, which describes that calling the...



details:   https://anonhg.NetBSD.org/src/rev/57ca3c1ccf3c
branches:  trunk
changeset: 486477:57ca3c1ccf3c
user:      enami <enami%NetBSD.org@localhost>
date:      Tue May 23 02:19:20 2000 +0000

description:
- Move the comment noting that uvm_map_pageable(map, ...) returns with the
  passed map unlocked so that it sits immediately before the call.
- If we bail out before calling uvm_map_pageable(), unlock the map
  ourselves to prevent a ``locking against myself'' panic.  Such a panic
  can be triggered, for example, by invoking cdrecord with too large a
  fifo size.

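The pattern at issue: a helper that is entered with the map locked and
always returns it unlocked must not be mixed with early error paths that
still hold the lock.  Below is a minimal C sketch of that pattern, not the
actual UVM code; the names toy_map, toy_lock/toy_unlock, wire_consuming_lock
and limit_exceeded are hypothetical stand-ins for vm_map_lock()/
vm_map_unlock(), uvm_map_pageable(..., UVM_LK_ENTER) and the wired-page
limit check.

#include <stdbool.h>
#include <stddef.h>

struct toy_map { int locked; };

static void toy_lock(struct toy_map *m)   { m->locked = 1; }
static void toy_unlock(struct toy_map *m) { m->locked = 0; }

/* Like uvm_map_pageable() with UVM_LK_ENTER: entered locked, returns unlocked. */
static int
wire_consuming_lock(struct toy_map *m)
{
	/* ... do the wiring work under the lock ... */
	toy_unlock(m);		/* always returns the map unlocked */
	return 0;
}

static bool limit_exceeded(size_t size) { return size > 1024; }

static int
caller(struct toy_map *m, size_t size)
{
	toy_lock(m);

	if (limit_exceeded(size)) {
		/*
		 * Bailing out before wire_consuming_lock(): we still own
		 * the lock, so we must drop it ourselves.  Forgetting this
		 * and locking the map again later is what produces a
		 * "locking against myself" panic.
		 */
		toy_unlock(m);
		return -1;	/* KERN_RESOURCE_SHORTAGE in the real code */
	}

	/* From here on the helper owns the lock and will release it. */
	return wire_consuming_lock(m);
}

int
main(void)
{
	struct toy_map m = { 0 };

	/* Exercise the error path: the map must end up unlocked. */
	return (caller(&m, 2048) == -1 && m.locked == 0) ? 0 : 1;
}
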
diffstat:

 sys/uvm/uvm_mmap.c |  11 ++++++-----
 1 files changed, 6 insertions(+), 5 deletions(-)

diffs (36 lines):

diff -r 0358dbb577c7 -r 57ca3c1ccf3c sys/uvm/uvm_mmap.c
--- a/sys/uvm/uvm_mmap.c        Tue May 23 02:04:28 2000 +0000
+++ b/sys/uvm/uvm_mmap.c        Tue May 23 02:19:20 2000 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: uvm_mmap.c,v 1.40 2000/03/30 12:31:50 augustss Exp $   */
+/*     $NetBSD: uvm_mmap.c,v 1.41 2000/05/23 02:19:20 enami Exp $      */
 
 /*
  * Copyright (c) 1997 Charles D. Cranor and Washington University.
@@ -1234,10 +1234,6 @@
                vm_map_lock(map);
 
                if (map->flags & VM_MAP_WIREFUTURE) {
-                       /*
-                        * uvm_map_pageable() always returns the map
-                        * unlocked.
-                        */
                        if ((atop(size) + uvmexp.wired) > uvmexp.wiredmax
 #ifdef pmap_wired_count
                            || (locklimit != 0 && (size +
@@ -1246,10 +1242,15 @@
 #endif
                        ) {
                                retval = KERN_RESOURCE_SHORTAGE;
+                               vm_map_unlock(map);
                                /* unmap the region! */
                                (void) uvm_unmap(map, *addr, *addr + size);
                                goto bad;
                        }
+                       /*
+                        * uvm_map_pageable() always returns the map
+                        * unlocked.
+                        */
                        retval = uvm_map_pageable(map, *addr, *addr + size,
                            FALSE, UVM_LK_ENTER);
                        if (retval != KERN_SUCCESS) {


