Source-Changes-HG archive


[src/trunk]: src/sys/uvm Only need a read lock for uvm_pagelookup().



details:   https://anonhg.NetBSD.org/src/rev/80fa392915cd
branches:  trunk
changeset: 1008037:80fa392915cd
user:      ad <ad%NetBSD.org@localhost>
date:      Sun Mar 08 18:40:29 2020 +0000

description:
Only need a read lock for uvm_pagelookup().

diffstat:

 sys/uvm/uvm_readahead.c |  6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diffs (27 lines):

diff -r 7c573a2f627e -r 80fa392915cd sys/uvm/uvm_readahead.c
--- a/sys/uvm/uvm_readahead.c   Sun Mar 08 18:26:59 2020 +0000
+++ b/sys/uvm/uvm_readahead.c   Sun Mar 08 18:40:29 2020 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: uvm_readahead.c,v 1.11 2020/02/23 15:46:43 ad Exp $    */
+/*     $NetBSD: uvm_readahead.c,v 1.12 2020/03/08 18:40:29 ad Exp $    */
 
 /*-
  * Copyright (c)2003, 2005, 2009 YAMAMOTO Takashi,
@@ -40,7 +40,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: uvm_readahead.c,v 1.11 2020/02/23 15:46:43 ad Exp $");
+__KERNEL_RCSID(0, "$NetBSD: uvm_readahead.c,v 1.12 2020/03/08 18:40:29 ad Exp $");
 
 #include <sys/param.h>
 #include <sys/pool.h>
@@ -133,7 +133,7 @@
         * too. This speeds up I/O using cache, since it avoids lookups and temporary
         * allocations done by full pgo_get.
         */
-       rw_enter(uobj->vmobjlock, RW_WRITER);
+       rw_enter(uobj->vmobjlock, RW_READER);
        struct vm_page *pg = uvm_pagelookup(uobj, trunc_page(endoff - 1));
        rw_exit(uobj->vmobjlock);
        if (pg != NULL) {


