Source-Changes-HG archive


[src/netbsd-8]: src Pull up following revision(s) (requested by knakahara in ...



details:   https://anonhg.NetBSD.org/src/rev/3a464ea6ab16
branches:  netbsd-8
changeset: 434608:3a464ea6ab16
user:      martin <martin%NetBSD.org@localhost>
date:      Mon Feb 05 15:07:30 2018 +0000

description:
Pull up following revision(s) (requested by knakahara in ticket #529):
        sys/dev/pci/if_wm.c: revision 1.560
        sys/dev/pci/if_wm.c: revision 1.561
        sys/dev/pci/if_wm.c: revision 1.562
        share/man/man4/wm.4: revision 1.37
        share/man/man4/wm.4: revision 1.38
        sys/dev/pci/if_wm.c: revision 1.551
        sys/dev/pci/if_wm.c: revision 1.553
        sys/dev/pci/if_wm.c: revision 1.554
        sys/dev/pci/if_wm.c: revision 1.555
        sys/dev/pci/if_wm.c: revision 1.556
        sys/dev/pci/if_wm.c: revision 1.557
        sys/dev/pci/if_wm.c: revision 1.558
        sys/dev/pci/if_wm.c: revision 1.559
PR/52885 - Shinichi Doyashiki -- typo in comment
Fix legacy Tx descriptor printing when WM_DEBUG is enabled.
Improve comments.
Fix wm_watchdog_txq() lock region.
Not only wm_txeof() but also wm_watchdog_txq() itself requires txq_lock,
as it reads Tx descriptor management variables such as "txq_free".
This has almost no influence on performance.
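The locking rule behind this fix can be illustrated with a small
self-contained sketch (pthread mutexes stand in for the kernel's
mutex_enter()/mutex_exit(); the struct layout and the watchdog_check()
helper are hypothetical, only the txq_lock/txq_free names follow the
driver):

#include <pthread.h>
#include <stdio.h>

/*
 * Hedged sketch: the watchdog itself must hold the queue lock while it
 * reads ring-state variables such as txq_free, not only while it calls
 * the reclaim routine.
 */
struct txq {
	pthread_mutex_t	txq_lock;
	int		txq_free;	/* free Tx descriptors */
};

static void
watchdog_check(struct txq *txq, int ntxdesc)
{
	pthread_mutex_lock(&txq->txq_lock);
	/* Reading txq_free is only safe while txq_lock is held. */
	if (txq->txq_free != ntxdesc)
		printf("device timeout, %d descriptors in flight\n",
		    ntxdesc - txq->txq_free);
	pthread_mutex_unlock(&txq->txq_lock);
}

int
main(void)
{
	struct txq q = { PTHREAD_MUTEX_INITIALIZER, 4000 };

	watchdog_check(&q, 4096);
	return 0;
}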
Fix duplicated "rxintr" evcnt counting. Pointed out by ozaki-r@n.o, thanks.
wm_txeof() can now limit its loop count in the same way as wm_rxeof().
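The pattern is shown below as a self-contained sketch (reclaim() and the
counters are illustrative; only the per-call limit semantics mirror the
driver's WM_TX_PROCESS_LIMIT_DEFAULT / WM_TX_INTR_PROCESS_LIMIT_DEFAULT
knobs):

#include <stdio.h>

/*
 * Hedged illustration of the bounded-reclaim pattern: process at most
 * `limit` entries per call so a single interrupt cannot monopolize the
 * CPU; leftover work is deferred to the softint.
 */
static unsigned
reclaim(unsigned *pending, unsigned limit)
{
	unsigned done = 0;

	while (*pending > 0 && done < limit) {
		(*pending)--;	/* stand-in for freeing one Tx descriptor */
		done++;
	}
	return done;
}

int
main(void)
{
	unsigned pending = 250;

	/* H/W interrupt context: strict per-call limit. */
	printf("intr reclaimed %u\n", reclaim(&pending, 100));
	/* Softint context: drain the rest with its own limit. */
	while (pending > 0)
		printf("softint reclaimed %u\n", reclaim(&pending, 100));
	return 0;
}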
Document WM_TX_PROCESS_LIMIT_DEFAULT and WM_TX_INTR_PROCESS_LIMIT_DEFAULT in the man page.
More markup.
CID-1427779: Fix uninitialized variables
Fix: in MSI-X mode, the 82574 cannot receive packets after it receives
high-rate traffic. In short, the 82574 in MSI-X mode does not raise the
RXQ MSI-X vector when its phys FIFO overflows. For unknown reasons, it
raises the OTHER MSI-X vector instead of the RXQ MSI-X vector in that
situation.
see:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?h=v4.15-rc9&id=4aea7a5c5e940c1723add439f4088844cd26196d
advised by msaitoh@n.o, thanks.
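The shape of the workaround can be sketched as follows (the ICR bit
values and function names here are illustrative, not the driver's or the
hardware's; the point is that the OTHER vector handler must also check
the Rx-queue cause bits and kick the queue handler):

#include <stdint.h>
#include <stdio.h>

#define ICR_LSC		(1u << 2)	/* link status change (illustrative) */
#define ICR_RXQ0	(1u << 20)	/* Rx queue 0 cause (illustrative) */
#define ICR_RXQ1	(1u << 21)	/* Rx queue 1 cause (illustrative) */

static void
schedule_rx_handler(int q)
{
	printf("rescheduling rx queue %d\n", q);
}

/* OTHER MSI-X vector handler: do not ignore Rx causes delivered here. */
static void
other_intr(uint32_t icr)
{
	if (icr & ICR_LSC)
		printf("link state changed\n");
	if (icr & ICR_RXQ0)
		schedule_rx_handler(0);
	if (icr & ICR_RXQ1)
		schedule_rx_handler(1);
}

int
main(void)
{
	/* Simulate an Rx overflow that raised OTHER with an RXQ0 cause. */
	other_intr(ICR_LSC | ICR_RXQ0);
	return 0;
}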
Fix if_wm.c:r1.557 merge miss, sorry.
Fix unmatched return type. The return value of wm_txeof() is not used yet.
Make the wm(4) watchdog MP-safe. This has almost no influence on performance.
wm(4) no longer uses ifp->if_watchdog, that is, it does not touch
ifp->if_timer.
It now uses its own callout (wm_tick) as the watchdog. The watchdog uses a
per-queue counter to check for timeouts, so no global lock is required.
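The per-queue timeout check can be illustrated with a self-contained
sketch (the txq_watchdog/txq_lastsent field names mirror the diff below;
the rest is hypothetical, and the queue lock is elided):

#include <stdio.h>
#include <time.h>

#define WATCHDOG_TIMEOUT 5	/* seconds, cf. WM_WATCHDOG_TIMEOUT */

struct txq {
	int	txq_watchdog;	/* armed while packets are in flight */
	time_t	txq_lastsent;	/* updated on every transmit */
};

/*
 * Hedged sketch: each Tx queue records when it last sent a packet, and
 * the periodic tick compares that stamp against the timeout under the
 * queue's own lock, so no global lock is needed.
 */
static int
txq_hung(const struct txq *txq, time_t now)
{
	return txq->txq_watchdog &&
	    now - txq->txq_lastsent > WATCHDOG_TIMEOUT;
}

int
main(void)
{
	struct txq q = { .txq_watchdog = 1, .txq_lastsent = time(NULL) - 10 };

	if (txq_hung(&q, time(NULL)))
		printf("queue hung: reset the interface\n");
	return 0;
}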

diffstat:

 share/man/man4/wm.4 |   10 +-
 sys/dev/pci/if_wm.c |  242 ++++++++++++++++++++++++++++++++++++++-------------
 2 files changed, 185 insertions(+), 67 deletions(-)

diffs (truncated from 586 to 300 lines):

diff -r 3b9d5a163ee9 -r 3a464ea6ab16 share/man/man4/wm.4
--- a/share/man/man4/wm.4       Mon Feb 05 14:55:15 2018 +0000
+++ b/share/man/man4/wm.4       Mon Feb 05 15:07:30 2018 +0000
@@ -1,4 +1,4 @@
-.\"    $NetBSD: wm.4,v 1.36 2017/04/13 10:37:36 knakahara Exp $
+.\"    $NetBSD: wm.4,v 1.36.4.1 2018/02/05 15:07:30 martin Exp $
 .\"
 .\" Copyright 2002, 2003 Wasabi Systems, Inc.
 .\" All rights reserved.
@@ -33,7 +33,7 @@
 .\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
 .\" POSSIBILITY OF SUCH DAMAGE.
 .\"
-.Dd March 22, 2017
+.Dd January 18, 2018
 .Dt WM 4
 .Os
 .Sh NAME
@@ -182,6 +182,9 @@
 The default value is 100.
 When you increase this value, both the receive latency and
 the receive throughput will increase.
+.It Dv WM_TX_PROCESS_LIMIT_DEFAULT
+Transmit side of
+.Dv WM_RX_PROCESS_LIMIT_DEFAULT .
 .It Dv WM_RX_INTR_PROCESS_LIMIT_DEFAULT
 The maximum number of received packets processed in each
 hardware interrupt context.
@@ -191,6 +194,9 @@
 The default value is 0.
 When you increase this value, both the receive latency and
 the receive throughput will decrease.
+.It Dv WM_TX_INTR_PROCESS_LIMIT_DEFAULT
+Transmit side of
+.Dv WM_RX_INTR_PROCESS_LIMIT_DEFAULT .
 .It Dv WM_EVENT_COUNTERS
 Enable many event counters such as each Tx drop counter and Rx interrupt
 counter.
diff -r 3b9d5a163ee9 -r 3a464ea6ab16 sys/dev/pci/if_wm.c
--- a/sys/dev/pci/if_wm.c       Mon Feb 05 14:55:15 2018 +0000
+++ b/sys/dev/pci/if_wm.c       Mon Feb 05 15:07:30 2018 +0000
@@ -1,4 +1,4 @@
-/*     $NetBSD: if_wm.c,v 1.508.4.12 2018/01/13 21:42:45 snj Exp $     */
+/*     $NetBSD: if_wm.c,v 1.508.4.13 2018/02/05 15:07:30 martin Exp $  */
 
 /*
  * Copyright (c) 2001, 2002, 2003, 2004 Wasabi Systems, Inc.
@@ -83,7 +83,7 @@
  */
 
 #include <sys/cdefs.h>
-__KERNEL_RCSID(0, "$NetBSD: if_wm.c,v 1.508.4.12 2018/01/13 21:42:45 snj Exp $");
+__KERNEL_RCSID(0, "$NetBSD: if_wm.c,v 1.508.4.13 2018/02/05 15:07:30 martin Exp $");
 
 #ifdef _KERNEL_OPT
 #include "opt_net_mpsafe.h"
@@ -183,6 +183,11 @@
 int wm_disable_msi = WM_DISABLE_MSI;
 int wm_disable_msix = WM_DISABLE_MSIX;
 
+#ifndef WM_WATCHDOG_TIMEOUT
+#define WM_WATCHDOG_TIMEOUT 5
+#endif
+static int wm_watchdog_timeout = WM_WATCHDOG_TIMEOUT;
+
 /*
  * Transmit descriptor list size.  Due to errata, we can only have
  * 256 hardware descriptors in the ring on < 82544, but we use 4096
@@ -213,6 +218,13 @@
 
 #define        WM_TXINTERQSIZE         256
 
+#ifndef WM_TX_PROCESS_LIMIT_DEFAULT
+#define        WM_TX_PROCESS_LIMIT_DEFAULT             100U
+#endif
+#ifndef WM_TX_INTR_PROCESS_LIMIT_DEFAULT
+#define        WM_TX_INTR_PROCESS_LIMIT_DEFAULT        0U
+#endif
+
 /*
  * Receive descriptor list size.  We have one Rx buffer for normal
  * sized packets.  Jumbo packets consume 5 Rx buffers for a full-sized
@@ -356,6 +368,9 @@
 
        bool txq_stopping;
 
+       bool txq_watchdog;
+       time_t txq_lastsent;
+
        uint32_t txq_packets;           /* for AIM */
        uint32_t txq_bytes;             /* for AIM */
 #ifdef WM_EVENT_COUNTERS
@@ -417,6 +432,7 @@
        uint32_t rxq_bytes;             /* for AIM */
 #ifdef WM_EVENT_COUNTERS
        WM_Q_EVCNT_DEFINE(rxq, rxintr);         /* Rx interrupts */
+       WM_Q_EVCNT_DEFINE(rxq, rxdefer);        /* Rx deferred processing */
 
        WM_Q_EVCNT_DEFINE(rxq, rxipsum);        /* IP checksums checked in-bound */
        WM_Q_EVCNT_DEFINE(rxq, rxtusum);        /* TCP/UDP cksums checked in-bound */
@@ -518,6 +534,8 @@
 
        int sc_nqueues;
        struct wm_queue *sc_queue;
+       u_int sc_tx_process_limit;      /* Tx processing repeat limit in softint */
+       u_int sc_tx_intr_process_limit; /* Tx processing repeat limit in H/W intr */
        u_int sc_rx_process_limit;      /* Rx processing repeat limit in softint */
        u_int sc_rx_intr_process_limit; /* Rx processing repeat limit in H/W intr */
 
@@ -670,7 +688,8 @@
 static bool    wm_suspend(device_t, const pmf_qual_t *);
 static bool    wm_resume(device_t, const pmf_qual_t *);
 static void    wm_watchdog(struct ifnet *);
-static void    wm_watchdog_txq(struct ifnet *, struct wm_txqueue *);
+static void    wm_watchdog_txq(struct ifnet *, struct wm_txqueue *, uint16_t *);
+static void    wm_watchdog_txq_locked(struct ifnet *, struct wm_txqueue *, uint16_t *);
 static void    wm_tick(void *);
 static int     wm_ifflags_cb(struct ethercom *);
 static int     wm_ioctl(struct ifnet *, u_long, void *);
@@ -756,7 +775,7 @@
 static void    wm_deferred_start_locked(struct wm_txqueue *);
 static void    wm_handle_queue(void *);
 /* Interrupt */
-static int     wm_txeof(struct wm_softc *, struct wm_txqueue *);
+static int     wm_txeof(struct wm_txqueue *, u_int);
 static void    wm_rxeof(struct wm_rxqueue *, u_int);
 static void    wm_linkintr_gmii(struct wm_softc *, uint32_t);
 static void    wm_linkintr_tbi(struct wm_softc *, uint32_t);
@@ -2672,7 +2691,7 @@
                if (wm_is_using_multiqueue(sc))
                        ifp->if_transmit = wm_transmit;
        }
-       ifp->if_watchdog = wm_watchdog;
+       /* wm(4) does not use ifp->if_watchdog, use wm_tick as watchdog. */
        ifp->if_init = wm_init;
        ifp->if_stop = wm_stop;
        IFQ_SET_MAXLEN(&ifp->if_snd, max(WM_IFQUEUELEN, IFQ_MAXLEN));
@@ -2762,6 +2781,8 @@
                ifp->if_capabilities |= IFCAP_TSOv6;
        }
 
+       sc->sc_tx_process_limit = WM_TX_PROCESS_LIMIT_DEFAULT;
+       sc->sc_tx_intr_process_limit = WM_TX_INTR_PROCESS_LIMIT_DEFAULT;
        sc->sc_rx_process_limit = WM_RX_PROCESS_LIMIT_DEFAULT;
        sc->sc_rx_intr_process_limit = WM_RX_INTR_PROCESS_LIMIT_DEFAULT;
 
@@ -2932,36 +2953,57 @@
 {
        int qid;
        struct wm_softc *sc = ifp->if_softc;
+       uint16_t hang_queue = 0; /* Max queue number of wm(4) is 82576's 16. */
 
        for (qid = 0; qid < sc->sc_nqueues; qid++) {
                struct wm_txqueue *txq = &sc->sc_queue[qid].wmq_txq;
 
-               wm_watchdog_txq(ifp, txq);
-       }
-
-       /* Reset the interface. */
-       (void) wm_init(ifp);
-
-       /*
-        * There are still some upper layer processing which call
-        * ifp->if_start(). e.g. ALTQ or one CPU system
-        */
-       /* Try to get more packets going. */
-       ifp->if_start(ifp);
-}
-
-static void
-wm_watchdog_txq(struct ifnet *ifp, struct wm_txqueue *txq)
+               wm_watchdog_txq(ifp, txq, &hang_queue);
+       }
+
+       /*
+        * If any of the queues hung up, reset the interface.
+        */
+       if (hang_queue != 0) {
+               (void) wm_init(ifp);
+
+               /*
+                * There are still some upper layer processing which call
+                * ifp->if_start(). e.g. ALTQ or one CPU system
+                */
+               /* Try to get more packets going. */
+               ifp->if_start(ifp);
+       }
+}
+
+
+static void
+wm_watchdog_txq(struct ifnet *ifp, struct wm_txqueue *txq, uint16_t *hang)
+{
+
+       mutex_enter(txq->txq_lock);
+       if (txq->txq_watchdog &&
+           time_uptime - txq->txq_lastsent > wm_watchdog_timeout) {
+               wm_watchdog_txq_locked(ifp, txq, hang);
+       }
+       mutex_exit(txq->txq_lock);
+}
+
+static void
+wm_watchdog_txq_locked(struct ifnet *ifp, struct wm_txqueue *txq, uint16_t *hang)
 {
        struct wm_softc *sc = ifp->if_softc;
+       struct wm_queue *wmq = container_of(txq, struct wm_queue, wmq_txq);
+
+       KASSERT(mutex_owned(txq->txq_lock));
 
        /*
         * Since we're using delayed interrupts, sweep up
         * before we report an error.
         */
-       mutex_enter(txq->txq_lock);
-       wm_txeof(sc, txq);
-       mutex_exit(txq->txq_lock);
+       wm_txeof(txq, UINT_MAX);
+       if (txq->txq_watchdog)
+               *hang |= __BIT(wmq->wmq_id);
 
        if (txq->txq_free != WM_NTXDESC(txq)) {
 #ifdef WM_DEBUG
@@ -2981,11 +3023,22 @@
                        i, txs->txs_firstdesc, txs->txs_lastdesc);
                    for (j = txs->txs_firstdesc; ;
                        j = WM_NEXTTX(txq, j)) {
-                       printf("\tdesc %d: 0x%" PRIx64 "\n", j,
-                           txq->txq_nq_descs[j].nqtx_data.nqtxd_addr);
-                       printf("\t %#08x%08x\n",
-                           txq->txq_nq_descs[j].nqtx_data.nqtxd_fields,
-                           txq->txq_nq_descs[j].nqtx_data.nqtxd_cmdlen);
+                           if ((sc->sc_flags & WM_F_NEWQUEUE) != 0) {
+                                   printf("\tdesc %d: 0x%" PRIx64 "\n", j,
+                                       txq->txq_nq_descs[j].nqtx_data.nqtxd_addr);
+                                   printf("\t %#08x%08x\n",
+                                       txq->txq_nq_descs[j].nqtx_data.nqtxd_fields,
+                                       txq->txq_nq_descs[j].nqtx_data.nqtxd_cmdlen);
+                           } else {
+                                   printf("\tdesc %d: 0x%" PRIx64 "\n", j,
+                                       (uint64_t)txq->txq_descs[j].wtx_addr.wa_high << 32 |
+                                       txq->txq_descs[j].wtx_addr.wa_low);
+                                   printf("\t %#04x%02x%02x%08x\n",
+                                       txq->txq_descs[j].wtx_fields.wtxu_vlan,
+                                       txq->txq_descs[j].wtx_fields.wtxu_options,
+                                       txq->txq_descs[j].wtx_fields.wtxu_status,
+                                       txq->txq_descs[j].wtx_cmdlen);
+                           }
                        if (j == txs->txs_lastdesc)
                                break;
                        }
@@ -3011,8 +3064,13 @@
 
        WM_CORE_LOCK(sc);
 
-       if (sc->sc_core_stopping)
-               goto out;
+       if (sc->sc_core_stopping) {
+               WM_CORE_UNLOCK(sc);
+#ifndef WM_MPSAFE
+               splx(s);
+#endif
+               return;
+       }
 
        if (sc->sc_type >= WM_T_82542_2_1) {
                WM_EVCNT_ADD(&sc->sc_ev_rx_xon, CSR_READ(sc, WMREG_XONRXC));
@@ -3050,12 +3108,11 @@
        else
                wm_tbi_tick(sc);
 
+       WM_CORE_UNLOCK(sc);
+
+       wm_watchdog(ifp);
+
        callout_reset(&sc->sc_tick_ch, hz, wm_tick, sc);
-out:
-       WM_CORE_UNLOCK(sc);
-#ifndef WM_MPSAFE
-       splx(s);
-#endif
 }
 
 static int
@@ -4199,6 +4256,10 @@
        wm_phy_post_reset(sc);
 }
 
+/*
+ * Only used by WM_T_PCH_SPT which does not use multiqueue,
+ * so it is enough to check sc->sc_queue[0] only.
+ */
 static void
 wm_flush_desc_rings(struct wm_softc *sc)
 {
@@ -5939,6 +6000,7 @@
                struct wm_queue *wmq = &sc->sc_queue[qidx];
                struct wm_txqueue *txq = &wmq->wmq_txq;
                mutex_enter(txq->txq_lock);
+               txq->txq_watchdog = false; /* ensure watchdog disabled */


