Source-Changes-HG archive


[src/trunk]: src/sys/kern threadpool(9): Fix synchronization between cancel and dispatch.



details:   https://anonhg.NetBSD.org/src/rev/12e6b30e90cd
branches:  trunk
changeset: 950262:12e6b30e90cd
user:      riastradh <riastradh%NetBSD.org@localhost>
date:      Sat Jan 23 16:33:49 2021 +0000

description:
threadpool(9): Fix synchronization between cancel and dispatch.

- threadpool_cancel_job_async tried to prevent
  threadpool_dispatcher_thread from taking the job by setting
  job->job_thread = NULL and then removing the job from the queue.

- But threadpool_dispatcher_thread didn't notice that job->job_thread
  was null until after it had also removed the job from the queue =>
  double-remove, *boom*.

The fix is to teach threadpool_dispatcher_thread to acquire the job
lock and check that job->job_thread still points at it before it
removes the job from the queue.
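
[Editorial sketch, not part of the commit: a minimal user-space model
of the ordering the fix establishes, written with POSIX mutexes and the
<sys/queue.h> TAILQ macros.  The names cancel(), dispatch(), pool_lock,
job_lock, and dispatcher_id are simplified stand-ins for the kernel's
threadpool code, not its actual API; in particular, the real dispatcher
also takes an extra reference on the job while both locks are dropped.]

#include <pthread.h>
#include <stdio.h>
#include <sys/queue.h>

struct job {
	TAILQ_ENTRY(job)	 entry;
	pthread_mutex_t		 lock;		/* models the job lock */
	void			*job_thread;	/* non-NULL while a dispatcher owns it */
};

TAILQ_HEAD(jobqueue, job);

static struct jobqueue	queue = TAILQ_HEAD_INITIALIZER(queue);
static pthread_mutex_t	pool_lock = PTHREAD_MUTEX_INITIALIZER;	/* models tp_lock */

/* Cancel: disown the job and unlink it, as the kernel code already did. */
static void
cancel(struct job *job)
{
	pthread_mutex_lock(&job->lock);
	if (job->job_thread != NULL) {
		job->job_thread = NULL;
		pthread_mutex_lock(&pool_lock);
		TAILQ_REMOVE(&queue, job, entry);
		pthread_mutex_unlock(&pool_lock);
	}
	pthread_mutex_unlock(&job->lock);
}

/*
 * Dispatch: peek at the head of the queue, then confirm ownership under
 * the job lock, and only then unlink.  Unlinking before the ownership
 * check is the bug the commit fixes: a concurrent cancel() could unlink
 * the same job a second time.
 */
static void
dispatch(void *self)
{
	struct job *job;

	pthread_mutex_lock(&pool_lock);
	job = TAILQ_FIRST(&queue);
	pthread_mutex_unlock(&pool_lock);
	if (job == NULL)
		return;

	pthread_mutex_lock(&job->lock);
	if (job->job_thread == self) {		/* still ours? */
		pthread_mutex_lock(&pool_lock);
		TAILQ_REMOVE(&queue, job, entry);
		pthread_mutex_unlock(&pool_lock);
		/* ... hand the job over to an idle thread here ... */
	}
	pthread_mutex_unlock(&job->lock);
}

int
main(void)
{
	int dispatcher_id;		/* stand-in identity for the dispatcher */
	struct job j;

	pthread_mutex_init(&j.lock, NULL);
	j.job_thread = &dispatcher_id;
	TAILQ_INSERT_TAIL(&queue, &j, entry);

	cancel(&j);			/* unlinks the job and disowns it */
	dispatch(&dispatcher_id);	/* finds the queue empty; no double-remove */

	printf("queue empty after cancel+dispatch: %s\n",
	    TAILQ_EMPTY(&queue) ? "yes" : "no");
	return 0;
}

The point mirrored from the diff below is that TAILQ_REMOVE in
dispatch() happens only after the ownership check under the job lock,
so cancel() and dispatch() can never both unlink the same job.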

Fixes PR kern/55948.

XXX pullup-9

diffstat:

 sys/kern/kern_threadpool.c |  7 ++++---
 1 files changed, 4 insertions(+), 3 deletions(-)

diffs (35 lines):

diff -r 348c5f9d5a8f -r 12e6b30e90cd sys/kern/kern_threadpool.c
--- a/sys/kern/kern_threadpool.c        Sat Jan 23 15:00:33 2021 +0000
+++ b/sys/kern/kern_threadpool.c        Sat Jan 23 16:33:49 2021 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: kern_threadpool.c,v 1.22 2021/01/13 07:34:37 skrll Exp $       */
+/*     $NetBSD: kern_threadpool.c,v 1.23 2021/01/23 16:33:49 riastradh Exp $   */
 
 /*-
  * Copyright (c) 2014, 2018 The NetBSD Foundation, Inc.
@@ -81,7 +81,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: kern_threadpool.c,v 1.22 2021/01/13 07:34:37 skrll Exp $");
+__KERNEL_RCSID(0, "$NetBSD: kern_threadpool.c,v 1.23 2021/01/23 16:33:49 riastradh Exp $");
 
 #include <sys/types.h>
 #include <sys/param.h>
@@ -1041,7 +1041,7 @@
 
                /* There are idle threads, so try giving one a job.  */
                struct threadpool_job *const job = TAILQ_FIRST(&pool->tp_jobs);
-               TAILQ_REMOVE(&pool->tp_jobs, job, job_entry);
+
                /*
                 * Take an extra reference on the job temporarily so that
                 * it won't disappear on us while we have both locks dropped.
@@ -1053,6 +1053,7 @@
                /* If the job was cancelled, we'll no longer be its thread.  */
                if (__predict_true(job->job_thread == dispatcher)) {
                        mutex_spin_enter(&pool->tp_lock);
+                       TAILQ_REMOVE(&pool->tp_jobs, job, job_entry);
                        if (__predict_false(
                                    TAILQ_EMPTY(&pool->tp_idle_threads))) {
                                /*


