Source-Changes-HG archive


[src/trunk]: src/sys/arch/alpha/alpha Fix a couple of latent MP issues in the...



details:   https://anonhg.NetBSD.org/src/rev/e165c2c27eb7
branches:  trunk
changeset: 840232:e165c2c27eb7
user:      thorpej <thorpej%NetBSD.org@localhost>
date:      Fri Mar 29 03:11:32 2019 +0000

description:
Fix a couple of latent MP issues in the Alpha pmap:
- In pmap_activate(), even though we manipulate the active mask
  with atomic ops, the lev1map pointer needs to stay consistent,
  so we do, in fact, have to take the pmap lock there.
- In pmap_emulate_reference(), some of the DEBUG checks done here
  are race-prone, so don't do them.  (Leave them #if 0'd out for
  documentary purposes.)

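For illustration only, below is a minimal user-space sketch (not the NetBSD code itself) of the locking pattern the first item describes: the per-pmap "in use" mask can be updated with atomic operations, but activation also consults a pointer that must stay stable for the whole call, so a lock is held around the entire sequence, while deactivation is a single atomic update and needs no lock. All names here (toy_pmap, toy_activate, toy_deactivate) are hypothetical.

    #include <pthread.h>
    #include <stdatomic.h>

    struct toy_pmap {
            pthread_mutex_t lock;           /* stands in for the pmap lock */
            atomic_ulong    in_use_cpus;    /* stands in for pm_cpus */
            void           *lev1map;        /* must not change mid-activation */
    };

    static void
    toy_activate(struct toy_pmap *pm, unsigned long cpu_id)
    {
            /* Hold the lock so pm->lev1map stays consistent across the call. */
            pthread_mutex_lock(&pm->lock);

            /* The in-use bit itself is still set with an atomic op. */
            atomic_fetch_or(&pm->in_use_cpus, 1UL << cpu_id);

            /* ... read pm->lev1map and load it into per-CPU MMU state ... */

            pthread_mutex_unlock(&pm->lock);
    }

    static void
    toy_deactivate(struct toy_pmap *pm, unsigned long cpu_id)
    {
            /* A single atomic update; no lock is needed here. */
            atomic_fetch_and(&pm->in_use_cpus, ~(1UL << cpu_id));
    }
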
diffstat:

 sys/arch/alpha/alpha/pmap.c |  23 +++++++++++++++--------
 1 files changed, 15 insertions(+), 8 deletions(-)

diffs (72 lines):

diff -r 003e9a337d6e -r e165c2c27eb7 sys/arch/alpha/alpha/pmap.c
--- a/sys/arch/alpha/alpha/pmap.c       Fri Mar 29 02:09:14 2019 +0000
+++ b/sys/arch/alpha/alpha/pmap.c       Fri Mar 29 03:11:32 2019 +0000
@@ -1,4 +1,4 @@
-/* $NetBSD: pmap.c,v 1.263 2019/03/01 04:29:20 mrg Exp $ */
+/* $NetBSD: pmap.c,v 1.264 2019/03/29 03:11:32 thorpej Exp $ */
 
 /*-
  * Copyright (c) 1998, 1999, 2000, 2001, 2007, 2008 The NetBSD Foundation, Inc.
@@ -140,7 +140,7 @@
 
 #include <sys/cdefs.h>                 /* RCS ID & Copyright macro defns */
 
-__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.263 2019/03/01 04:29:20 mrg Exp $");
+__KERNEL_RCSID(0, "$NetBSD: pmap.c,v 1.264 2019/03/29 03:11:32 thorpej Exp $");
 
 #include <sys/param.h>
 #include <sys/systm.h>
@@ -2126,6 +2126,14 @@
                printf("pmap_activate(%p)\n", l);
 #endif
 
+       /*
+        * Lock the pmap across the work we do here; although the
+        * in-use mask is manipulated with an atomic op and the
+        * ASN info is per-cpu, the lev1map pointer needs to remain
+        * consistent across the entire call.
+        */
+       PMAP_LOCK(pmap);
+
        /* Mark the pmap in use by this processor. */
        atomic_or_ulong(&pmap->pm_cpus, (1UL << cpu_id));
 
@@ -2133,6 +2141,8 @@
        pmap_asn_alloc(pmap, cpu_id);
 
        PMAP_ACTIVATE(pmap, l, cpu_id);
+
+       PMAP_UNLOCK(pmap);
 }
 
 /*
@@ -2140,10 +2150,6 @@
  *
  *     Mark that the pmap used by the specified process is no longer
  *     in use by the processor.
- *
- *     The comment above pmap_activate() wrt. locking applies here,
- *     as well.  Note that we use only a single `atomic' operation,
- *     so no locking is necessary.
  */
 void
 pmap_deactivate(struct lwp *l)
@@ -2156,7 +2162,8 @@
 #endif
 
        /*
-        * Mark the pmap no longer in use by this processor.
+        * Mark the pmap no longer in use by this processor.  Because
+        * this is all we're doing, no need to take the pmap lock.
         */
        atomic_and_ulong(&pmap->pm_cpus, ~(1UL << cpu_number()));
 }
@@ -2635,7 +2642,7 @@
                printf("*pte = 0x%lx\n", *pte);
        }
 #endif
-#ifdef DEBUG                           /* These checks are more expensive */
+#if 0/*DEBUG*/ /* These checks are expensive, racy, and unreliable. */
        if (!pmap_pte_v(pte))
                panic("pmap_emulate_reference: invalid pte");
        if (type == ALPHA_MMCSR_FOW) {

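As a rough user-space analogue (hypothetical names, not the NetBSD code) of why the disabled DEBUG checks are unreliable: asserting on shared state that another CPU may modify concurrently, without holding the lock that serializes those updates, can trip even when every individual update was legitimate. The commit therefore disables the checks rather than taking the lock just to keep a DEBUG assertion honest.

    #include <assert.h>
    #include <stdatomic.h>

    /* Hypothetical shared "PTE"; another thread may rewrite it at any time. */
    static atomic_ulong toy_pte;
    #define TOY_PTE_VALID   0x1UL

    static void
    toy_fault_check(void)
    {
            /*
             * Racy check: between the event that got us here and this
             * load, another thread may have changed toy_pte, so the
             * assertion can fire spuriously.
             */
            assert(atomic_load(&toy_pte) & TOY_PTE_VALID);
    }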

