
[src/trunk]: src/sys/net In pktq_flush():



details:   https://anonhg.NetBSD.org/src/rev/5ac2759b460d
branches:  trunk
changeset: 369862:5ac2759b460d
user:      thorpej <thorpej@NetBSD.org>
date:      Sun Sep 04 17:34:43 2022 +0000

description:
In pktq_flush():
- Run a dummy softint at IPL_SOFTNET on all CPUs to ensure that the
  ISR for this pktqueue is not running (addresses a pre-existing XXX).
- Hold the barrier lock around the critical section to ensure that
  implicit pktq_barrier() calls via pktq_ifdetach() are held off during
  the critical section.
- Ensure the critical section completes in minimal time by not freeing
  memory during the critical section; instead, just build a list of the
  packets pulled out of the per-CPU queues and free them after the critical
  section is over.
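
The last item above is a common deferred-free pattern: while the lock is
held, do only pointer manipulation, and do the potentially slow freeing
after the lock has been dropped.  A rough user-space illustration of the
same idea follows (all names here, such as node, pending_head and
drain_pending, are invented for this sketch and are not NetBSD code):

#include <pthread.h>
#include <stdlib.h>

struct node {
        struct node *next;
        /* payload omitted */
};

static pthread_mutex_t pending_lock = PTHREAD_MUTEX_INITIALIZER;
static struct node *pending_head;   /* shared list, protected by pending_lock */

static void
drain_pending(void)
{
        struct node *n, *list = NULL;

        /* Critical section: only unlink, never free, while the lock is held. */
        pthread_mutex_lock(&pending_lock);
        while ((n = pending_head) != NULL) {
                pending_head = n->next;
                n->next = list;         /* chain for freeing later */
                list = n;
        }
        pthread_mutex_unlock(&pending_lock);

        /* Free outside the lock, where it cannot delay other threads. */
        while ((n = list) != NULL) {
                list = n->next;
                free(n);
        }
}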

diffstat:

 sys/net/pktqueue.c |  37 +++++++++++++++++++++++++++++++------
 1 files changed, 31 insertions(+), 6 deletions(-)

diffs (70 lines):

diff -r 77864b5b5eb0 -r 5ac2759b460d sys/net/pktqueue.c
--- a/sys/net/pktqueue.c        Sun Sep 04 16:01:25 2022 +0000
+++ b/sys/net/pktqueue.c        Sun Sep 04 17:34:43 2022 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: pktqueue.c,v 1.20 2022/09/02 05:50:36 thorpej Exp $    */
+/*     $NetBSD: pktqueue.c,v 1.21 2022/09/04 17:34:43 thorpej Exp $    */
 
 /*-
  * Copyright (c) 2014 The NetBSD Foundation, Inc.
@@ -36,7 +36,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: pktqueue.c,v 1.20 2022/09/02 05:50:36 thorpej Exp $");
+__KERNEL_RCSID(0, "$NetBSD: pktqueue.c,v 1.21 2022/09/04 17:34:43 thorpej Exp $");
 
 #ifdef _KERNEL_OPT
 #include "opt_net_mpsafe.h"
@@ -540,7 +540,23 @@
 {
        CPU_INFO_ITERATOR cii;
        struct cpu_info *ci;
-       struct mbuf *m;
+       struct mbuf *m, *m0 = NULL;
+
+       ASSERT_SLEEPABLE();
+
+       /*
+        * Run a dummy softint at IPL_SOFTNET on all CPUs to ensure that any
+        * already running handler for this pktqueue is no longer running.
+        */
+       xc_barrier(XC_HIGHPRI_IPL(IPL_SOFTNET));
+
+       /*
+        * Acquire the barrier lock.  While the caller ensures that
+        * no explicit pktq_barrier() calls will be issued, this holds
+        * off any implicit pktq_barrier() calls that would happen
+        * as the result of pktq_ifdetach().
+        */
+       mutex_enter(&pq->pq_lock);
 
        for (CPU_INFO_FOREACH(cii, ci)) {
                struct pcq *q;
@@ -550,14 +566,23 @@
                kpreempt_enable();
 
                /*
-                * XXX This can't be right -- if the softint is running
-                * then pcq_get isn't safe here.
+                * Pull the packets off the pcq and chain them into
+                * a list to be freed later.
                 */
                while ((m = pcq_get(q)) != NULL) {
                        pktq_inc_count(pq, PQCNT_DEQUEUE);
-                       m_freem(m);
+                       m->m_nextpkt = m0;
+                       m0 = m;
                }
        }
+
+       mutex_exit(&pq->pq_lock);
+
+       /* Free the packets now that the critical section is over. */
+       while ((m = m0) != NULL) {
+               m0 = m->m_nextpkt;
+               m_freem(m);
+       }
 }
 
 static void
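
For readers unfamiliar with xc_barrier(): as the comment added above says, it
runs a dummy softint at IPL_SOFTNET on every CPU and returns only once those
have run, so any handler for this pktqueue that was already running when
pktq_flush() started must have finished.  A loose user-space analogue of that
"wait for in-flight handlers" step is sketched below (NWORKERS, busy_lock,
busy_locks_init and handler_barrier are invented for this sketch; this is not
how the kernel implements cross-calls):

#include <pthread.h>

#define NWORKERS 4

/* One lock per worker; a worker holds its lock while running a handler. */
static pthread_mutex_t busy_lock[NWORKERS];

static void
busy_locks_init(void)
{
        for (int i = 0; i < NWORKERS; i++)
                pthread_mutex_init(&busy_lock[i], NULL);
}

static void
handler_barrier(void)
{
        /*
         * Taking and releasing every worker's lock cannot finish until
         * each worker has completed whatever handler it was running when
         * we started, which is the property the dummy softint provides.
         */
        for (int i = 0; i < NWORKERS; i++) {
                pthread_mutex_lock(&busy_lock[i]);
                pthread_mutex_unlock(&busy_lock[i]);
        }
}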


