Source-Changes-HG archive


[src/sommerfeld_i386mp_1]: src/sys/arch/i386/i386 splimp() -> splvm()



details:   https://anonhg.NetBSD.org/src/rev/e1df10c404e1
branches:  sommerfeld_i386mp_1
changeset: 482367:e1df10c404e1
user:      thorpej <thorpej%NetBSD.org@localhost>
date:      Sun Jan 14 23:25:49 2001 +0000

description:
splimp() -> splvm()

diffstat:

 sys/arch/i386/i386/pmap.c |  12 ++++++------
 1 files changed, 6 insertions(+), 6 deletions(-)

diffs (54 lines):

diff -r 6c5c85ffe660 -r e1df10c404e1 sys/arch/i386/i386/pmap.c
--- a/sys/arch/i386/i386/pmap.c Sun Jan 14 23:18:51 2001 +0000
+++ b/sys/arch/i386/i386/pmap.c Sun Jan 14 23:25:49 2001 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: pmap.c,v 1.83.2.28 2001/01/09 03:19:49 sommerfeld Exp $        */
+/*     $NetBSD: pmap.c,v 1.83.2.29 2001/01/14 23:25:49 thorpej Exp $   */
 
 /*
  *
@@ -1293,7 +1293,7 @@
         * if not, try to allocate one.
         */
 
-       s = splimp();   /* must protect kmem_map/kmem_object with splimp! */
+       s = splvm();   /* must protect kmem_map/kmem_object with splvm! */
        if (pv_cachedva == 0) {
                pv_cachedva = uvm_km_kmemalloc(kmem_map, uvmexp.kmem_object,
                    PAGE_SIZE, UVM_KMF_TRYLOCK|UVM_KMF_VALLOC);
@@ -1305,7 +1305,7 @@
 
        /*
         * we have a VA, now let's try and allocate a page in the object
-        * note: we are still holding splimp to protect kmem_object
+        * note: we are still holding splvm to protect kmem_object
         */
 
        if (!simple_lock_try(&uvmexp.kmem_object->vmobjlock)) {
@@ -1321,7 +1321,7 @@
 
        simple_unlock(&uvmexp.kmem_object->vmobjlock);
        splx(s);
-       /* splimp now dropped */
+       /* splvm now dropped */
 
        if (pg == NULL)
                return (NULL);
@@ -1484,7 +1484,7 @@
        vm_map_entry_t dead_entries;
        struct pv_page *pvp;
 
-       s = splimp(); /* protect kmem_map */
+       s = splvm(); /* protect kmem_map */
 
        pvp = pv_unusedpgs.tqh_first;
 
@@ -1731,7 +1731,7 @@
 
        /*
         * we need to lock pmaps_lock to prevent nkpde from changing on
-        * us.  note that there is no need to splimp to protect us from
+        * us.  note that there is no need to splvm to protect us from
         * malloc since malloc allocates out of a submap and we should
         * have already allocated kernel PTPs to cover the range...
         *
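For readers less familiar with the spl interface: each hunk above follows the same pattern of raising the interrupt priority level before touching kmem_map/kmem_object and restoring it afterwards with splx(); the change simply switches the level used from splimp() to splvm(). Below is a minimal sketch of that pattern, assuming the NetBSD kernel environment of this branch (splvm(), splx(), and the uvm_km_kmemalloc() call copied from the first hunk above); the function name example_alloc_va is hypothetical and not part of the commit.

	/*
	 * Hypothetical sketch of the splvm()/splx() pattern used in the
	 * diff above; assumes kernel context on this branch, not a
	 * standalone program.
	 */
	static vaddr_t
	example_alloc_va(void)
	{
		vaddr_t va;
		int s;

		s = splvm();	/* block interrupts that may use kmem_map */
		va = uvm_km_kmemalloc(kmem_map, uvmexp.kmem_object,
		    PAGE_SIZE, UVM_KMF_TRYLOCK|UVM_KMF_VALLOC);
		splx(s);	/* restore the previous priority level */

		return (va);
	}

Every splvm() call must be paired with an splx() on all return paths, which is why the commit also updates the comments that track where the raised level is held and where it is dropped.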


