Port-xen archive


Re: dom0 -> domU packet loss



> did you get any more details about this? Any patch?

The patch I'm currently using is below. It does prevent the drops in the
(Linux) domU, but I'm no longer able to test it on the one machine which
eventually slowed to a crawl with the earlier version of the patch. All
other machines are doing fine.

It's been difficult for me to determine whether this is the right fix.
I've not yet found documentation or discussion of how the frontend and
backend network drivers are supposed to interact. My reading of the Xen
interface headers is that feature-rx-notify tells the backend that the
frontend will send an event-channel notification whenever it posts new
receive buffers, so the backend can wait for that event instead of
polling the ring.

I tried out iperf. The interesting thing is that my results were
largely the same against a Xen domU and against real hardware: I can
saturate 100 Mbit/s Ethernet with TCP, but with UDP half the packets
are dropped somewhere.
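
Something like the following (from memory, so treat the exact flags as
illustrative iperf usage rather than my precise command line):

	# receiver, in the domU or on the hardware host
	iperf -s -u

	# sender: offer UDP at roughly 100 Mbit/s line rate
	iperf -c <receiver> -u -b 100M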

Increasing the net.inet.udp.recvspace and net.inet.udp.sendspace
sysctls helps.
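
For example, something like this (values illustrative only, not a tuned
recommendation):

	sysctl -w net.inet.udp.recvspace=262144
	sysctl -w net.inet.udp.sendspace=262144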

- Brian

--- sys/arch/xen/xen/if_xennet_xenbus.c.orig    2009-03-07 22:12:50.000000000 -0500
+++ sys/arch/xen/xen/if_xennet_xenbus.c 2009-04-17 16:06:32.000000000 -0400
@@ -460,6 +460,14 @@
                errmsg = "writing rx ring-ref";
                goto abort_transaction;
        }
+#ifdef FEATURERXNOTIFY
+       error = xenbus_printf(xbt, sc->sc_xbusd->xbusd_path,
+           "feature-rx-notify", "%u", 1);
+       if (error) {
+               errmsg = "writing feature-rx-notify";
+               goto abort_transaction;
+       }
+#endif
        error = xenbus_printf(xbt, sc->sc_xbusd->xbusd_path,
            "event-channel", "%u", sc->sc_evtchn);
        if (error) {
@@ -520,6 +528,9 @@
        struct xen_memory_reservation reservation;
        int s1, s2;
        paddr_t pfn;
+#ifdef FEATURERXNOTIFY
+       int notify;
+#endif
 
        s1 = splnet();
        for (i = 0; sc->sc_free_rxreql != 0; i++) {
@@ -576,7 +587,13 @@
                panic("xennet_alloc_rx_buffer: XENMEM_decrease_reservation");
        }
        sc->sc_rx_ring.req_prod_pvt = req_prod + i;
+#ifdef FEATURERXNOTIFY
+       RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&sc->sc_rx_ring, notify);
+       if (notify)
+               hypervisor_notify_via_evtchn(sc->sc_evtchn);
+#else
        RING_PUSH_REQUESTS(&sc->sc_rx_ring);
+#endif
 
        splx(s1);
        return;
@@ -744,6 +761,9 @@
        struct mbuf *m;
        void *pktp;
        int more_to_do;
+#ifdef FEATURERXNOTIFY
+       int notify;
+#endif
 
        if (sc->sc_backend_status != BEST_CONNECTED)
                return 1;
@@ -779,7 +799,13 @@
                            sc->sc_rx_ring.req_prod_pvt)->gref =
                                req->rxreq_gntref;
                        sc->sc_rx_ring.req_prod_pvt++;
+#ifdef FEATURERXNOTIFY
+                       RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(&sc->sc_rx_ring, notify);
+                       if (notify)
+                               hypervisor_notify_via_evtchn(sc->sc_evtchn);
+#else
                        RING_PUSH_REQUESTS(&sc->sc_rx_ring);
+#endif
                        continue;
                }
                req->rxreq_gntref = GRANT_INVALID_REF;
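
For anyone reading along: RING_PUSH_REQUESTS_AND_CHECK_NOTIFY comes from
Xen's public ring.h; as I understand it, it publishes the private request
producer index and then reports whether the other end actually asked to
be notified. A paraphrase from memory (see xen/include/public/io/ring.h
for the authoritative macro):

	/* Publish the requests queued at req_prod_pvt, then decide
	 * whether the backend needs an event-channel kick. */
	RING_IDX old_prod = ring->sring->req_prod;
	RING_IDX new_prod = ring->req_prod_pvt;
	xen_wmb();          /* requests visible before the index update */
	ring->sring->req_prod = new_prod;
	xen_mb();           /* index visible before we check req_event */
	/* Notify only if the backend's req_event falls within the batch
	 * of requests we just pushed. */
	notify = ((RING_IDX)(new_prod - ring->sring->req_event) <
	          (RING_IDX)(new_prod - old_prod));

So with feature-rx-notify the frontend only kicks the event channel when
the backend is actually waiting for more rx buffers, not on every refill.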

