Source-Changes-HG archive


[src/trunk]: src/sys/arch/x86/x86 Fix two bugs in pmap_write_protect():



details:   https://anonhg.NetBSD.org/src/rev/5f6eb56472cb
branches:  trunk
changeset: 456926:5f6eb56472cb
user:      maxv <maxv%NetBSD.org@localhost>
date:      Sat Jun 01 08:12:26 2019 +0000

description:
Fix two bugs in pmap_write_protect():

 * The mask should be ~PAGE_MASK, not PTE_FRAME. PTE_FRAME also clears the
   upper (sign-extended) bits of the VA, and that's not wanted (see the
   sketch below).
 * The computation of tva is incorrect: if the VA is in kernel space we
   must take the canonical hole into account, which we were not doing here.
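
A minimal standalone sketch of the masking difference (not part of the
commit; EX_PTE_FRAME and the example VA are assumed values mirroring the
usual amd64 definitions). Masking a sign-extended kernel VA with the PTE
frame mask drops the upper bits and yields a non-canonical address, while
~PAGE_MASK only rounds down to the page boundary:

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

#define EX_PAGE_MASK	0xfffUL			/* low 12 bits of a 4 KB page */
#define EX_PTE_FRAME	0x000ffffffffff000UL	/* frame bits of a 64-bit PTE */

int
main(void)
{
	uint64_t kva = 0xffffffff80201234UL;	/* a canonical amd64 kernel VA */

	/* PTE_FRAME also clears bits 63..52, making the VA non-canonical. */
	printf("PTE_FRAME : 0x%016" PRIx64 "\n", kva & EX_PTE_FRAME);

	/* ~PAGE_MASK only clears the page offset; the VA stays canonical. */
	printf("~PAGE_MASK: 0x%016" PRIx64 "\n", kva & ~EX_PAGE_MASK);
	return 0;
}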

We've had these bugs basically forever. They meant that uvm_km_protect()
would never flush the correct VA, and a stale TLB entry would persist.

Fixes PR/54257. Since I added PCID support, invlpg() executes invpcid, and
invpcid triggers a #GP if the address is non-canonical, unlike invlpg. The
wrong computation of the VA during a modload happened to fall into the
canonical hole, hence the fault.
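
For reference, under 48-bit virtual addressing an address is canonical when
bits 63..47 are all equal. A minimal sketch of such a check (an illustration
only, not the check the hardware or NetBSD performs):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* True iff bits 63..47 of the VA are all ones or all zeroes. */
static bool
va_is_canonical(uint64_t va)
{
	uint64_t upper = va >> 47;

	return (upper == 0 || upper == 0x1ffff);
}

int
main(void)
{
	printf("%d\n", va_is_canonical(0xffffffff80000000UL));	/* 1 */
	printf("%d\n", va_is_canonical(0x0000ffff80000000UL));	/* 0 */
	return 0;
}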

diffstat:

 sys/arch/x86/x86/pmap.c |  14 +++++++-------
 1 files changed, 7 insertions(+), 7 deletions(-)

diffs (56 lines):

diff -r 395c62c26a60 -r 5f6eb56472cb sys/arch/x86/x86/pmap.c
--- a/sys/arch/x86/x86/pmap.c   Sat Jun 01 07:55:31 2019 +0000
+++ b/sys/arch/x86/x86/pmap.c   Sat Jun 01 08:12:26 2019 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: pmap.c,v 1.333 2019/05/27 18:36:37 maxv Exp $  */
+/*     $NetBSD: pmap.c,v 1.334 2019/06/01 08:12:26 maxv Exp $  */
 
 /*
  * Copyright (c) 2008, 2010, 2016, 2017 The NetBSD Foundation, Inc.
@@ -130,7 +130,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.333 2019/05/27 18:36:37 maxv Exp $");
+__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.334 2019/06/01 08:12:26 maxv Exp $");
 
 #include "opt_user_ldt.h"
 #include "opt_lockdebug.h"
@@ -4017,7 +4017,7 @@
        pt_entry_t * const *pdes;
        struct pmap *pmap2;
        vaddr_t blockend, va;
-       int lvl;
+       int lvl, i;
 
        KASSERT(curlwp->l_md.md_gc_pmap != pmap);
 
@@ -4034,8 +4034,8 @@
        if (!(prot & VM_PROT_EXECUTE))
                bit_put = pmap_pg_nx;
 
-       sva &= PTE_FRAME;
-       eva &= PTE_FRAME;
+       sva &= ~PAGE_MASK;
+       eva &= ~PAGE_MASK;
 
        /* Acquire pmap. */
        kpreempt_disable();
@@ -4058,7 +4058,7 @@
                spte = &ptes[pl1_i(va)];
                epte = &ptes[pl1_i(blockend)];
 
-               for (/* */; spte < epte; spte++) {
+               for (i = 0; spte < epte; spte++, i++) {
                        pt_entry_t opte, npte;
 
                        do {
@@ -4070,7 +4070,7 @@
                        } while (pmap_pte_cas(spte, opte, npte) != opte);
 
                        if ((opte & PTE_D) != 0) {
-                               vaddr_t tva = x86_ptob(spte - ptes);
+                               vaddr_t tva = va + x86_ptob(i);
                                pmap_tlb_shootdown(pmap, tva, opte,
                                    TLBSHOOT_WRITE_PROTECT);
                        }
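
A minimal standalone sketch (not NetBSD code; the index folding and the
constants below are assumptions) of why the old tva = x86_ptob(spte - ptes)
could fall into the canonical hole for kernel VAs, while the new
tva = va + x86_ptob(i) stays canonical. The recursive-mapping PTE index has
the sign extension folded away, so converting the index back to an address
loses the upper bits; offsetting the original va does not:

#include <inttypes.h>
#include <stdio.h>
#include <stdint.h>

#define EX_PGSHIFT	12
#define ex_ptob(n)	((uint64_t)(n) << EX_PGSHIFT)

int
main(void)
{
	uint64_t va = 0xffffffff80200000UL;	/* canonical kernel VA */
	/* pl1_i()-style index with the sign bits folded away (assumption). */
	uint64_t idx = (va & 0x0000ffffffffffffUL) >> EX_PGSHIFT;

	for (int i = 0; i < 2; i++) {
		uint64_t old_tva = ex_ptob(idx + i);	/* lands in the hole */
		uint64_t new_tva = va + ex_ptob(i);	/* stays canonical   */

		printf("old 0x%016" PRIx64 "  new 0x%016" PRIx64 "\n",
		    old_tva, new_tva);
	}
	return 0;
}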


