Subject: Performance of Alteon/Tigon Gigabit Ethernet cards
To: None <tech-net@netbsd.org, port-alpha@netbsd.org>
From: Hal Murray <murray@pa.dec.com>
List: port-alpha
Date: 01/29/2001 22:49:49
I'm running my collection of network testing/bashing scripts.  Performance 
is "strange", but I can't put my finger on why yet. 

I'm running 1.5 on Miatas (600au) and XP1000s.

Are there any known problems/quirks?  Or parameters I should 
tweak?  Has anybody managed to get good results on this gear?

Any good/bad results with jumbo packets?


The symptoms are that sometimes things are reasonable and then, an 
hour later when I run the same script again, it doesn't work very 
well.

For example, a TCP request-response test with short data blocks takes 
either 200 microseconds or 10 milliseconds.  It works within reason 
for blocks over 20K bytes - throughput is 230 megabits/sec. 
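For reference, the request-response test is roughly of this shape (a hypothetical sketch, not the actual script - payload size, round count, and the loopback echo server are all assumptions):

```python
# Sketch of a short-block TCP request-response latency test.
# An echo server and a client time round trips of a small payload.
import socket
import threading
import time

PAYLOAD = b"x" * 64          # "short data block" (size is an assumption)
ROUNDS = 100

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def echo():
    conn, _ = srv.accept()
    while True:
        data = conn.recv(4096)
        if not data:
            break
        conn.sendall(data)   # bounce the request straight back
    conn.close()

threading.Thread(target=echo, daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # rule out Nagle delays

t0 = time.perf_counter()
for _ in range(ROUNDS):
    cli.sendall(PAYLOAD)
    got = b""
    while len(got) < len(PAYLOAD):
        got += cli.recv(4096)
rtt_us = (time.perf_counter() - t0) / ROUNDS * 1e6
print("average round trip: %.1f microseconds" % rtt_us)
cli.close()
```

A bimodal result (200 us vs. 10 ms) from a test like this would usually point at something stateful - delayed ACK/Nagle interaction, interrupt coalescing on the card, or a retransmit timer - rather than raw link speed.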

I'm guessing there is some internal mode/state that I'm getting into 
that's "bad".  Obviously, I don't know what it is yet.



One of my tests is a simple UDP-blast test that often overloads the 
receive side.  I see things like this in /var/log/messages:

  Jan 29 18:16:16 ruby /netbsd: WARNING: mclpool limit reached; increase NMBCLUSTERS
  Jan 29 18:16:16 ruby /netbsd: ti0: cluster allocation failed -- packet dropped!
  Jan 29 18:16:36 ruby last message repeated 867 times
  Jan 29 18:16:36 ruby /netbsd: fpa0: can't alloc cluster
  Jan 29 18:16:36 ruby /netbsd: ti0: cluster allocation failed -- packet dropped!
  Jan 29 18:16:49 ruby last message repeated 800 times
  Jan 29 18:17:09 ruby last message repeated 412 times
  Jan 29 18:17:21 ruby /netbsd: WARNING: mclpool limit reached; increase NMBCLUSTERS

That could be poisoning things.
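The "mclpool limit reached" warning means the kernel ran out of mbuf clusters. On NetBSD of this vintage the limit is a compile-time option, so the usual remedy is to rebuild the kernel with a larger NMBCLUSTERS in the config file (the value below is only an example - tune to taste):

```
# kernel configuration fragment (example value)
options         NMBCLUSTERS=4096
```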

I guess I'll try some tests without that mode.
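For concreteness, the UDP-blast mode is roughly this (a hypothetical sketch, not the actual script - payload size and send count are assumptions): send datagrams with no pacing at a receiver that never drains its buffer, and count how many actually arrived. The shortfall is the same kind of overload the cluster-allocation warnings reflect.

```python
# Sketch of a UDP blast: fire datagrams as fast as possible with no
# flow control, then count how many the unread receive socket buffered.
# The rest are silently dropped on the receive side.
import socket

PAYLOAD = b"x" * 1024
SENT = 10000

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))      # receiver that never drains its buffer
dst = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for _ in range(SENT):
    tx.sendto(PAYLOAD, dst)    # no ACKs, no pacing

rx.setblocking(False)
received = 0
try:
    while True:
        rx.recvfrom(2048)
        received += 1
except BlockingIOError:        # buffer drained; everything else was dropped
    pass
print("sent %d, received %d (rest dropped)" % (SENT, received))
tx.close()
rx.close()
```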


[Sorry for such a fuzzy question.]