Source-Changes-HG archive


[src/trunk]: src/sys/kern entropy(9): Bind to CPU temporarily to avoid race with lwp migration.



details:   https://anonhg.NetBSD.org/src/rev/cdd8704afbe9
branches:  trunk
changeset: 364415:cdd8704afbe9
user:      riastradh <riastradh%NetBSD.org@localhost>
date:      Wed Mar 23 23:18:17 2022 +0000

description:
entropy(9): Bind to CPU temporarily to avoid race with lwp migration.

More fallout from the IPL_VM->IPL_SOFTSERIAL change.

In entropy_enter, there is a window when the lwp can be migrated to
another CPU:

        ec = entropy_cpu_get();
        ...
        pending = ec->ec_pending + ...;
        ...
        entropy_cpu_put();

        /* lwp migration possible here */

        if (pending)
                entropy_account_cpu(ec);

If this happens, we may trip over any of several problems in
entropy_account_cpu because it assumes ec is the current CPU's state
in order to decide whether we have anything to contribute from the
local pool to the global pool.

No need to do this in entropy_softintr because softints are bound to
the CPU anyway.
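
For illustration, a condensed sketch of the pattern the patch applies; this is abbreviated pseudocode in the style of the fragment above, not compilable code, and all names (curlwp_bind, curlwp_bindx, entropy_cpu_get, entropy_cpu_put, entropy_account_cpu, ec_pending) are taken from the diff below:

        bound = curlwp_bind();          /* pin this lwp to its current CPU */

        ec = entropy_cpu_get();         /* lock the per-CPU state */
        ...
        pending = ec->ec_pending + ...;
        ...
        entropy_cpu_put();

        /* still on the same CPU here, so ec matches the local CPU's state */
        if (pending)
                entropy_account_cpu(ec);

        curlwp_bindx(bound);            /* allow migration again */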

diffstat:

 sys/kern/kern_entropy.c |  16 ++++++++++++++--
 1 files changed, 14 insertions(+), 2 deletions(-)

diffs (58 lines):

diff -r 8601a601144a -r cdd8704afbe9 sys/kern/kern_entropy.c
--- a/sys/kern/kern_entropy.c   Wed Mar 23 17:35:41 2022 +0000
+++ b/sys/kern/kern_entropy.c   Wed Mar 23 23:18:17 2022 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: kern_entropy.c,v 1.51 2022/03/21 00:25:04 riastradh Exp $      */
+/*     $NetBSD: kern_entropy.c,v 1.52 2022/03/23 23:18:17 riastradh Exp $      */
 
 /*-
  * Copyright (c) 2019 The NetBSD Foundation, Inc.
@@ -75,7 +75,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_entropy.c,v 1.51 2022/03/21 00:25:04 riastradh Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_entropy.c,v 1.52 2022/03/23 23:18:17 riastradh Exp $");
 
 #include <sys/param.h>
 #include <sys/types.h>
@@ -735,6 +735,7 @@
        unsigned diff;
 
        KASSERT(E->stage >= ENTROPY_WARM);
+       KASSERT(curlwp->l_pflag & LP_BOUND);
 
        /*
         * If there's no entropy needed, and entropy has been
@@ -869,6 +870,7 @@
        struct entropy_cpu_lock lock;
        struct entropy_cpu *ec;
        unsigned pending;
+       int bound;
 
        KASSERTMSG(!cpu_intr_p(),
            "use entropy_enter_intr from interrupt context");
@@ -882,6 +884,14 @@
        }
 
        /*
+        * Bind ourselves to the current CPU so we don't switch CPUs
+        * between entering data into the current CPU's pool (and
+        * updating the pending count) and transferring it to the
+        * global pool in entropy_account_cpu.
+        */
+       bound = curlwp_bind();
+
+       /*
         * With the per-CPU state locked, enter into the per-CPU pool
         * and count up what we can add.
         */
@@ -895,6 +905,8 @@
        /* Consolidate globally if appropriate based on what we added.  */
        if (pending)
                entropy_account_cpu(ec);
+
+       curlwp_bindx(bound);
 }
 
 /*


