Source-Changes-HG archive


[src/trunk]: src/sys/kern entropy(9): Lock the per-CPU state in entropy_accou...



details:   https://anonhg.NetBSD.org/src/rev/40b4c94a2b4a
branches:  trunk
changeset: 364385:40b4c94a2b4a
user:      riastradh <riastradh%NetBSD.org@localhost>
date:      Sun Mar 20 13:17:32 2022 +0000

description:
entropy(9): Lock the per-CPU state in entropy_account_cpu.

This was previously called with the per-CPU state locked, which
worked fine as long as the global entropy lock was a spin lock, so
acquiring it would never sleep.  Now it is an adaptive lock, so it is
not safe to acquire while holding the per-CPU state lock -- but we
still need to prevent reentrant access to the per-CPU entropy pool by
interrupt handlers while we're extracting from it.  So the logic for
entering a sample is now:

- lock per-CPU state
- entpool_enter
- unlock per-CPU state
- if anything pending on this CPU and it's time to consolidate:
  - lock global entropy state
  - lock per-CPU state
  - transfer
  - unlock per-CPU state
  - unlock global entropy state

diffstat:

 sys/kern/kern_entropy.c |  17 +++++++++++------
 1 files changed, 11 insertions(+), 6 deletions(-)

diffs (54 lines):

diff -r df82f3b7894d -r 40b4c94a2b4a sys/kern/kern_entropy.c
--- a/sys/kern/kern_entropy.c   Sun Mar 20 13:17:09 2022 +0000
+++ b/sys/kern/kern_entropy.c   Sun Mar 20 13:17:32 2022 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: kern_entropy.c,v 1.43 2022/03/20 13:17:09 riastradh Exp $      */
+/*     $NetBSD: kern_entropy.c,v 1.44 2022/03/20 13:17:32 riastradh Exp $      */
 
 /*-
  * Copyright (c) 2019 The NetBSD Foundation, Inc.
@@ -75,7 +75,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_entropy.c,v 1.43 2022/03/20 13:17:09 riastradh Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_entropy.c,v 1.44 2022/03/20 13:17:32 riastradh Exp $");
 
 #include <sys/param.h>
 #include <sys/types.h>
@@ -728,8 +728,9 @@
 static void
 entropy_account_cpu(struct entropy_cpu *ec)
 {
+       struct entropy_cpu_lock lock;
+       struct entropy_cpu *ec0;
        unsigned diff;
-       int s;
 
        KASSERT(E->stage >= ENTROPY_WARM);
 
@@ -742,9 +743,13 @@
            __predict_true((time_uptime - E->timestamp) <= 60))
                return;
 
-       /* Consider consolidation, under the lock.  */
+       /*
+        * Consider consolidation, under the global lock and with the
+        * per-CPU state locked.
+        */
        mutex_enter(&E->lock);
-       s = splsoftserial();
+       ec0 = entropy_cpu_get(&lock);
+       KASSERT(ec0 == ec);
        if (E->needed != 0 && E->needed <= ec->ec_pending) {
                /*
                 * If we have not yet attained full entropy but we can
@@ -799,7 +804,7 @@
                        entropy_partial_evcnt.ev_count++;
                }
        }
-       splx(s);
+       entropy_cpu_put(&lock, ec);
        mutex_exit(&E->lock);
 }
 


