tech-net archive


Re: wm(4) performance issues



On Sat, 8 Mar 2008 09:07:20 -0500
Thor Lancelot Simon <tls%rek.tjls.com@localhost> wrote:

> On Wed, Mar 05, 2008 at 03:46:06PM -0600, Jonathan A. Kollasch wrote:
> > Hi,
> > 
> > I recently picked up an Intel Pro/1000 PT Desktop wm(4).
> > (Because nfe(4) was rather unhappy for me. But that's another
> > story.)
> > 
> > I was surprised to find it has performance issues under NetBSD.
> > 
> > On an amd64 4.99.54 box (Socket 754, nForce4) I couldn't get it to
> > source or sink much more than 25 Mbyte/s.  On another instance of
> > the same model of motherboard running 4.99.31, 57 Mbyte/s was
> > obtainable.  Both of these
> 
> This is a single-stream test?  What are your send and receive socket
> buffer sizes at each end?
> 
> Tuning the driver (almost any driver, really) for 1Gbit/sec
> throughput with our tiny default socket buffer sizes requires an
> unacceptably high interrupt rate limit and CPU consumption.  With
> reasonable socket buffer sizes for gigabit networking, the driver
> seems to perform quite well for me (though I think Simon is going to
> check in some more adjustments to the interrupt timer code soon).
> 
> Thor
> 
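
[Editor's note: on NetBSD, the default socket buffer sizes Thor refers to can be raised via sysctl. A minimal sketch, assuming the stock net.inet.tcp tunables; the 256 KB value is purely illustrative, not a recommendation from this thread:]

```shell
# Raise the default TCP send/receive buffer sizes to 256 KB
# (illustrative value only; requires root)
sysctl -w net.inet.tcp.sendspace=262144
sysctl -w net.inet.tcp.recvspace=262144
```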

Performance issues are quite complex.  I ran a few tests with ttcp on
my home gigE network, on a variety of machines.  The results are
certainly not intuitive.  The machines differ widely in CPU speed; most
have some variety of wm.  For example, a 1.5 GHz AMD running 4.0rc4 can
receive from a 1.667 GHz AMD running 4.0 at 480 Mbps, but it can only
send at 230 Mbps.  Both have i82541PI chips with the IGP01E1000 PHY.

Talking to a dual-core, 2.2 GHz amd64-current laptop with an i82801H
chip, the faster of those two machines can send at 267 Mbps and receive
at 323 Mbps.  That makes it seem as if -current can't send as fast as
4.0.  However, that very same laptop can send at 670 Mbps to a fast
-current desktop with a bge card -- but it only receives from it at
311 Mbps.  (Both -current machines have tcp.sendbuf_auto and
tcp.recvbuf_auto set to 1.)
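
[Editor's note: for anyone reproducing these numbers, enabling the buffer auto-tuning sysctls and running a basic ttcp test looks roughly like this; flag details are a sketch from the ttcp(1) conventions, so check the manual page on your system:]

```shell
# Enable TCP send/receive buffer auto-tuning (NetBSD-current; requires root)
sysctl -w net.inet.tcp.sendbuf_auto=1
sysctl -w net.inet.tcp.recvbuf_auto=1

# On the receiving machine: -r receive, -s sink the data rather than
# writing it to stdout
ttcp -r -s

# On the sending machine: -t transmit, -s source a test pattern rather
# than reading stdin ('receiver' is a placeholder for the peer's hostname)
ttcp -t -s receiver
```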


                --Steve Bellovin, http://www.cs.columbia.edu/~smb

