Source-Changes-HG archive

[src/trunk]: src/sys/uvm PR kern/55658



details:   https://anonhg.NetBSD.org/src/rev/bfafc749e9c9
branches:  trunk
changeset: 944621:bfafc749e9c9
user:      rin <rin%NetBSD.org@localhost>
date:      Mon Oct 05 04:48:23 2020 +0000

description:
PR kern/55658

ubc_fault_page(): Ignore the PG_RDONLY flag and always pmap_enter() the
page with the permissions of the original access_type.
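
In outline, the change amounts to the following (condensed from the full
diff below; the surrounding declarations and error handling are elided):

	/* Before: write permission was masked off for read-only pages. */
	rdonly = uvm_pagereadonly_p(pg);
	mask = rdonly ? ~VM_PROT_WRITE : VM_PROT_ALL;
	error = pmap_enter(ufi->orig_map->pmap, va, VM_PAGE_TO_PHYS(pg),
	    prot & mask, PMAP_CANFAIL | (access_type & mask));

	/* After: the page is always entered with the original access_type. */
	error = pmap_enter(ufi->orig_map->pmap, va, VM_PAGE_TO_PHYS(pg),
	    prot, PMAP_CANFAIL | access_type);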

It is the file system's responsibility to allocate blocks that are being
modified by write(), before calling into UBC to fill the pages for that
range. A KASSERT() is added there to confirm that no clean page is mapped
writable.
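
The added assertion, as it appears in the diff below:

	/* A write access implies the page is not clean, i.e. the file
	 * system has already allocated the backing blocks. */
	KASSERT((access_type & VM_PROT_WRITE) == 0 ||
	    uvm_pagegetdirty(pg) != UVM_PAGE_STATUS_CLEAN);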

Fix an infinite loop in uvm_fault_internal(), observed on systems with
16KB pages, where it kept trying to make a partially-backed page writable:
the old code masked off write permission for such pages, so the faulting
write never succeeded and the fault was taken again indefinitely.

No regressions in ATF, and the KASSERT() does not fire on several
architectures, as far as I can see.

Fix suggested by chs. Thanks!

diffstat:

 sys/uvm/uvm_bio.c |  12 +++++-------
 1 files changed, 5 insertions(+), 7 deletions(-)

diffs (43 lines):

diff -r fa4e0bc33ea8 -r bfafc749e9c9 sys/uvm/uvm_bio.c
--- a/sys/uvm/uvm_bio.c Sun Oct 04 23:50:59 2020 +0000
+++ b/sys/uvm/uvm_bio.c Mon Oct 05 04:48:23 2020 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: uvm_bio.c,v 1.121 2020/07/09 09:24:32 rin Exp $        */
+/*     $NetBSD: uvm_bio.c,v 1.122 2020/10/05 04:48:23 rin Exp $        */
 
 /*
  * Copyright (c) 1998 Chuck Silvers.
@@ -34,7 +34,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: uvm_bio.c,v 1.121 2020/07/09 09:24:32 rin Exp $");
+__KERNEL_RCSID(0, "$NetBSD: uvm_bio.c,v 1.122 2020/10/05 04:48:23 rin Exp $");
 
 #include "opt_uvmhist.h"
 #include "opt_ubc.h"
@@ -235,9 +235,7 @@
 ubc_fault_page(const struct uvm_faultinfo *ufi, const struct ubc_map *umap,
     struct vm_page *pg, vm_prot_t prot, vm_prot_t access_type, vaddr_t va)
 {
-       vm_prot_t mask;
        int error;
-       bool rdonly;
 
        KASSERT(rw_write_held(pg->uobject->vmobjlock));
 
@@ -280,11 +278,11 @@
            pg->offset < umap->writeoff ||
            pg->offset + PAGE_SIZE > umap->writeoff + umap->writelen);
 
-       rdonly = uvm_pagereadonly_p(pg);
-       mask = rdonly ? ~VM_PROT_WRITE : VM_PROT_ALL;
+       KASSERT((access_type & VM_PROT_WRITE) == 0 ||
+           uvm_pagegetdirty(pg) != UVM_PAGE_STATUS_CLEAN);
 
        error = pmap_enter(ufi->orig_map->pmap, va, VM_PAGE_TO_PHYS(pg),
-           prot & mask, PMAP_CANFAIL | (access_type & mask));
+           prot, PMAP_CANFAIL | access_type);
 
        uvm_pagelock(pg);
        uvm_pageactivate(pg);


