Subject: pci/fxp performance quirks
To: None <port-alpha@netbsd.org>
From: Hal Murray <murray@pa.dec.com>
List: port-alpha
Date: 08/12/1999 23:28:43
> It could be, but we at pdq.com also tried fxp cards, and had
> the *same* problems.  It would then be true only if the fxp
> also played with the same parameters.

Speaking of fxp...  [I gave up on Tulips a while ago.]

I have two pairs of machines.  The first pair has 400 MHz Intel/Celeron 
CPUs.  The second pair has 600 MHz Alpha/EV56 CPUs (Miata). 

Each machine has a quad Ethernet card: a 21154 PCI-PCI bridge and Intel 
82558 Ethernet chips.  All are running vanilla NetBSD 1.4. 

If I run some quick tests that send a lot of network traffic, everything 
looks reasonable on the Intel boxes. 

On the Alphas, I see strange performance quirks.  It never hangs; 
it just sometimes (often, even) doesn't go as fast as I expect.  I see 
things like this in /var/log/messages: 

  Aug 12 19:12:47 mckinley /netbsd: fxp1: device timeout
  Aug 12 19:13:28 mckinley last message repeated 5 times
  Aug 12 19:13:43 mckinley last message repeated 2 times
  Aug 12 19:15:39 mckinley /netbsd: fxp2: device timeout
  Aug 12 19:17:50 mckinley /netbsd: fxp2: device timeout

An occasional delay of a second or three would explain the numbers 
my test program is printing. 
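
For concreteness, here is a minimal sketch of the kind of test I mean. 
This isn't my actual test program; the port number, chunk size, and 
total size below are placeholders.  The idea is just to blast data at 
a sink on the other machine and time each write, so a write that 
stalls for seconds instead of milliseconds stands out: 

  /*
   * Rough sketch of the test (names, port, and sizes are
   * placeholders): open a TCP connection to a sink on the other
   * machine, write fixed-size chunks, and complain whenever one
   * chunk takes suspiciously long.
   */
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <sys/time.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  #define CHUNK   (64 * 1024)     /* bytes per write() */
  #define NCHUNKS 1024            /* 64 MB total */

  int
  main(int argc, char **argv)
  {
      static char buf[CHUNK];
      struct sockaddr_in sin;
      struct timeval t0, t1;
      double secs;
      int s, i;

      if (argc != 2) {
          fprintf(stderr, "usage: %s receiver-ip\n", argv[0]);
          exit(1);
      }

      memset(&sin, 0, sizeof(sin));
      sin.sin_family = AF_INET;
      sin.sin_port = htons(5001);               /* placeholder port */
      sin.sin_addr.s_addr = inet_addr(argv[1]);

      if ((s = socket(AF_INET, SOCK_STREAM, 0)) < 0 ||
          connect(s, (struct sockaddr *)&sin, sizeof(sin)) < 0) {
          perror("connect");
          exit(1);
      }

      for (i = 0; i < NCHUNKS; i++) {
          gettimeofday(&t0, NULL);
          /* blocking TCP write; a short write is treated as an error */
          if (write(s, buf, CHUNK) != CHUNK) {
              perror("write");
              exit(1);
          }
          gettimeofday(&t1, NULL);
          secs = (t1.tv_sec - t0.tv_sec) +
              (t1.tv_usec - t0.tv_usec) / 1e6;
          if (secs > 0.5)                       /* flag long stalls */
              printf("chunk %d stalled for %.3f seconds\n", i, secs);
      }
      close(s);
      return 0;
  }

(The receiving end is just a trivial sink that accepts the connection 
and reads and discards everything; not shown.) 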

This happens much more often when actually transferring data in both 
directions at the same time (on a full-duplex link).  It does happen 
occasionally on a half-duplex link. 
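
To get traffic going in both directions at once, I just run the sender 
on both machines at the same time, each pointed at the other end's 
sink.  With a made-up name for the compiled sketch above and made-up 
addresses, that's roughly: 

  alpha1$ ./blast 192.168.1.2 &
  alpha2$ ./blast 192.168.1.1 &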

One difference that might be interesting is that the quad Ethernet 
cards on the Alphas are plugged into 64-bit PCI slots. 



I've been trying on and off for the past hour or three to provoke 
something similar on the Intel boxes.  I haven't been able to do 
it.  That's not to say there isn't a problem, just that it's either 
a lot harder to provoke or I don't know the secret yet.