NetBSD-Bugs archive
Re: kern/52111: wmX i82574L inoperative in monoprocessor mode (i386)
The following reply was made to PR kern/52111; it has been noted by GNATS.
From: Kengo NAKAHARA <k-nakahara%iij.ad.jp@localhost>
To: martin%duskware.de@localhost
Cc: gnats-bugs%NetBSD.org@localhost
Subject: Re: kern/52111: wmX i82574L inoperative in monoprocessor mode (i386)
Date: Mon, 27 Mar 2017 18:20:20 +0900
Hi,
On 2017/03/27 17:42, Martin Husemann wrote:
> On Mon, Mar 27, 2017 at 11:27:17AM +0900, Kengo NAKAHARA wrote:
>> Hmm, it is strange to me that wm(4) uses two TX and RX interrupts on a
>> uniprocessor system.
>
> He means using "boot -1" on a multiprocessor system.
Yes. I also use "boot -1" on a multiprocessor system as my reproduction
environment; however, my wm(4) uses only one TX and one RX interrupt.
That is why it seemed strange to me. Sorry for the lack of explanation.
> I don't know whether on x86 that makes ncpu be 1, or whether wm should
> use ncpuonline instead. The pool code seems to use
>
> if (ncpu < 2 || !mp_online)
>
> instead; maybe we should provide a simple inline function and make it
> all the same?
Thank you for your comments. Hmm, I think wm(4) should use ncpuonline
rather than the "(ncpu < 2 || !mp_online)" check, as that keeps the
change small: wm(4) already has comparisons such as "ncpu < nqueues". :)
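(For reference, the "simple inline function" you suggest might look like
the sketch below. The function name and the header it would live in are
just my guesses, not an existing interface; it only centralizes the
check that the pool code open-codes today.)
====================
/*
 * Hypothetical sketch only: the name cpu_is_effectively_sp() and its
 * placement are assumptions. ncpu and mp_online are the existing
 * kernel globals declared in <sys/cpu.h>.
 */
#include <sys/types.h>
#include <sys/cpu.h>

static inline bool
cpu_is_effectively_sp(void)
{

	/* True when only one CPU is usable, as in the pool code's test. */
	return ncpu < 2 || !mp_online;
}
====================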
Anyway, I created the ncpuonline patch below.
====================
diff --git a/sys/dev/pci/if_wm.c b/sys/dev/pci/if_wm.c
index f8f10dc3..89bf451 100644
--- a/sys/dev/pci/if_wm.c
+++ b/sys/dev/pci/if_wm.c
@@ -332,7 +332,7 @@ struct wm_txqueue {
 	int txq_fifo_stall;	/* Tx FIFO is stalled */
 
 	/*
-	 * When ncpu > number of Tx queues, a Tx queue is shared by multiple
+	 * When ncpuonline > number of Tx queues, a Tx queue is shared by multiple
 	 * CPUs. This queue intermediate them without block.
 	 */
 	pcq_t *txq_interq;
@@ -4520,7 +4520,7 @@ wm_init_rss(struct wm_softc *sc)
  * The numbers are affected by below parameters.
  * - The nubmer of hardware queues
  * - The number of MSI-X vectors (= "nvectors" argument)
- * - ncpu
+ * - ncpuonline
  */
 static void
 wm_adjust_qnum(struct wm_softc *sc, int nvectors)
@@ -4596,8 +4596,8 @@ wm_adjust_qnum(struct wm_softc *sc, int nvectors)
 	 * As queues more then cpus cannot improve scaling, we limit
 	 * the number of queues used actually.
 	 */
-	if (ncpu < sc->sc_nqueues)
-		sc->sc_nqueues = ncpu;
+	if (ncpuonline < sc->sc_nqueues)
+		sc->sc_nqueues = ncpuonline;
 }
 
 static inline bool
@@ -4684,7 +4684,7 @@ wm_setup_msix(struct wm_softc *sc)
 	char intrbuf[PCI_INTRSTR_LEN];
 	char intr_xname[INTRDEVNAMEBUF];
 
-	if (sc->sc_nqueues < ncpu) {
+	if (sc->sc_nqueues < ncpuonline) {
 		/*
 		 * To avoid other devices' interrupts, the affinity of Tx/Rx
 		 * interrupts start from CPU#1.
@@ -4714,7 +4714,7 @@ wm_setup_msix(struct wm_softc *sc)
 	txrx_established = 0;
 	for (qidx = 0; qidx < sc->sc_nqueues; qidx++) {
 		struct wm_queue *wmq = &sc->sc_queue[qidx];
-		int affinity_to = (sc->sc_affinity_offset + intr_idx) % ncpu;
+		int affinity_to = (sc->sc_affinity_offset + intr_idx) % ncpuonline;
 
 		intrstr = pci_intr_string(pc, sc->sc_intrs[intr_idx], intrbuf,
 		    sizeof(intrbuf));
@@ -6639,7 +6639,7 @@ wm_select_txqueue(struct ifnet *ifp, struct mbuf *m)
 	 * TODO:
 	 * distribute by flowid(RSS has value).
 	 */
-	return (cpuid + ncpuonline - sc->sc_affinity_offset) % sc->sc_nqueues;
+	return (cpuid + ncpuonline - sc->sc_affinity_offset) % sc->sc_nqueues;
 }
 
 /*
====================
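For clarity, the last hunk is just a modulo mapping from the current CPU
to a Tx queue; adding ncpuonline before subtracting the affinity offset
keeps the dividend non-negative when cpuid is smaller than the offset.
Here is a small standalone illustration (my own example with made-up
numbers, not driver code) of how the patched formula spreads online CPUs
over the queues:
====================
#include <stdio.h>

int
main(void)
{
	int ncpuonline = 4;		/* CPUs currently online */
	int nqueues = 2;		/* Tx/Rx queue pairs actually used */
	int affinity_offset = 1;	/* Tx/Rx interrupts start at CPU#1 */
	int cpuid;

	for (cpuid = 0; cpuid < ncpuonline; cpuid++) {
		/* Same arithmetic as the patched wm_select_txqueue(). */
		int q = (cpuid + ncpuonline - affinity_offset) % nqueues;
		printf("CPU#%d -> txq %d\n", cpuid, q);
	}
	return 0;
}
====================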
However, I am not sure whether this patch will fix kardel@n.o's problem...
Thanks,
--
//////////////////////////////////////////////////////////////////////
Internet Initiative Japan Inc.
Device Engineering Section,
IoT Platform Development Department,
Network Division,
Technology Unit
Kengo NAKAHARA <k-nakahara%iij.ad.jp@localhost>