Source-Changes-HG archive


[src/trunk]: src/sys/kern callout_halt():



details:   https://anonhg.NetBSD.org/src/rev/885a1fbd9d84
branches:  trunk
changeset: 1006692:885a1fbd9d84
user:      ad <ad%NetBSD.org@localhost>
date:      Thu Jan 23 20:44:15 2020 +0000

description:
callout_halt():

- It's a common design pattern for callouts to re-schedule themselves, so
  check after waiting and put a stop to it again if needed.
- Add comments.

diffstat:

 sys/kern/kern_timeout.c |  26 +++++++++++++++++++++++---
 1 files changed, 23 insertions(+), 3 deletions(-)

diffs (62 lines):

diff -r 1a4df596c3b1 -r 885a1fbd9d84 sys/kern/kern_timeout.c
--- a/sys/kern/kern_timeout.c   Thu Jan 23 17:37:03 2020 +0000
+++ b/sys/kern/kern_timeout.c   Thu Jan 23 20:44:15 2020 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: kern_timeout.c,v 1.57 2019/11/21 17:57:40 ad Exp $     */
+/*     $NetBSD: kern_timeout.c,v 1.58 2020/01/23 20:44:15 ad Exp $     */
 
 /*-
  * Copyright (c) 2003, 2006, 2007, 2008, 2009, 2019 The NetBSD Foundation, Inc.
@@ -59,7 +59,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_timeout.c,v 1.57 2019/11/21 17:57:40 ad Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_timeout.c,v 1.58 2020/01/23 20:44:15 ad Exp $");
 
 /*
  * Timeouts are kept in a hierarchical timing wheel.  The c_time is the
@@ -505,14 +505,25 @@
        l = curlwp;
        relock = NULL;
        for (;;) {
+               /*
+                * At this point we know the callout is not pending, but it
+                * could be running on a CPU somewhere.  That can be curcpu
+                * in a few cases:
+                *
+                * - curlwp is a higher priority soft interrupt
+                * - the callout blocked on a lock and is currently asleep
+                * - the callout itself has called callout_halt() (nice!)
+                */
                cc = c->c_cpu;
                if (__predict_true(cc->cc_active != c || cc->cc_lwp == l))
                        break;
+
+               /* It's running - need to wait for it to complete. */
                if (interlock != NULL) {
                        /*
                         * Avoid potential scheduler lock order problems by
                         * dropping the interlock without the callout lock
-                        * held.
+                        * held; then retry.
                         */
                        mutex_spin_exit(lock);
                        mutex_exit(interlock);
@@ -529,7 +540,16 @@
                            &sleep_syncobj);
                        sleepq_block(0, false);
                }
+
+               /*
+                * Re-lock the callout and check the state of play again. 
+                * It's a common design pattern for callouts to re-schedule
+                * themselves so put a stop to it again if needed.
+                */
                lock = callout_lock(c);
+               if ((c->c_flags & CALLOUT_PENDING) != 0)
+                       CIRCQ_REMOVE(&c->c_list);
+               c->c_flags &= ~(CALLOUT_PENDING|CALLOUT_FIRED);
        }
 
        mutex_spin_exit(lock);
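
For context, the interlock path above matters to callers that tear down a driver while its callout may be running. A typical teardown using the callout(9) interface might look like the sketch below (hedged: `xx_softc`, `sc_lock`, `sc_dying`, and `sc_callout` are hypothetical driver members, not taken from this commit):

```c
/* Sketch of driver detach with callout_halt(9).  sc_lock is the interlock
 * that the callout handler itself takes; callout_halt() may drop and
 * re-take it while waiting for a running handler to finish, which is why
 * the handler must re-check sc_dying after acquiring the lock. */
static int
xx_detach(device_t self, int flags)
{
	struct xx_softc *sc = device_private(self);

	mutex_enter(&sc->sc_lock);
	sc->sc_dying = true;	/* handler sees this and stops re-arming */
	callout_halt(&sc->sc_callout, &sc->sc_lock);
	mutex_exit(&sc->sc_lock);

	callout_destroy(&sc->sc_callout);
	return 0;
}
```

With the change above, `callout_halt()` also covers the case where the handler managed to re-schedule itself during the wait, so the caller does not need its own retry loop.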


