Subject: Re: NetBSD in BSD Router / Firewall Testing
To: None <tls@rek.tjls.com>
From: Mike Tancsa <mike@sentex.net>
List: tech-net
Date: 11/30/2006 22:15:04
At 09:43 PM 11/30/2006, Thor Lancelot Simon wrote:
>On Thu, Nov 30, 2006 at 07:41:45PM -0500, Mike Tancsa wrote:
> > At 06:49 PM 11/30/2006, Thor Lancelot Simon wrote:
> >
> > > 1) The efficiency of the switch itself will differ in these
> > > configurations
> >
> > Why? The only thing being changed from test to test is the OS.
>
>Because the switch hardware does not forward packets at the same rate
>when it is inserting and removing VLAN tags as it does when it's not.
>The effect will be small, but measurable.
But the same impact will hurt *all* the OSes tested equally, not just
NetBSD. Besides, the switch is supposedly rated at 17Mpps. No doubt
there is a bit of vendor exaggeration, but I doubt they would stretch
the number by a factor of 10, and even if they did, 1.7Mpps is still
more than the 1Mpps-plus I can push through my RELENG_4 setup, so the
switch is not the bottleneck.
> > > 2) The difference in frame size will actually measurably
> > > impact the PPS.
> >
> > Frame size is always the same: a UDP packet with a 10-byte payload.
>
>No. The Ethernet packets with the VLAN tag on them are not, in fact,
I did both sets of tests. E.g., the line
RELENG_6 UP i386 FastFWD Polling
means that em0 was in the equivalent of port 0/4 and em1 in 0/5, with
port 0/4 set to
switchport access vlan 44
and port 0/5 set to
switchport access vlan 88
whereas the test
RELENG_4, FastFWD, vlan44 and vlan88 off single int, em1 Polling, HZ=2000
has a switch config of
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 44,88
switchport mode trunk
on port 5.
I tested NetBSD 3.1 with the bge NIC against the second (trunked)
switch config, so I don't see why you can't compare the results of
that to
HEAD, FastFWD, vlan44 and vlan88 off single int, em1 (Nov 24th sources)
RELENG_4, FastFWD, vlan44 and vlan88 off single int, em1 Polling, HZ=2000
HEAD, FastFWD, vlan44 and vlan88 off single int, bge0 (Nov 24th sources)
RELENG_6, FastFWD, INTR_FAST, vlan44 and vlan88 off single int, em1
which had the exact same switch config.
>the same size as those without it; and for a packet as small as a 10
>byte UDP packet, this will make quite a large difference if you actually
>have a host that can inject packets at anywhere near wire speed.
That's why I use at least two...
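To put rough numbers on the wire-speed point, here is my own
back-of-the-envelope header arithmetic (assumed sizes, not anything
measured in these tests):

```python
# Rough GigE packet-rate math for a 10-byte UDP payload. Assumptions:
# 14-byte Ethernet header + 20-byte IP + 8-byte UDP + 10-byte payload
# + 4-byte FCS = 56 bytes, padded to the 64-byte minimum frame; an
# 802.1Q tag adds 4 more bytes on the wire.
GIG_E_BPS = 1_000_000_000

def max_pps(frame_bytes):
    """Theoretical max packets/sec on gigabit Ethernet for a given
    frame size, counting the 8-byte preamble and 12-byte interframe
    gap that every frame costs on the wire."""
    return GIG_E_BPS / ((frame_bytes + 8 + 12) * 8)

untagged = max_pps(64)  # minimum frame, no VLAN tag
tagged = max_pps(68)    # same frame with an 802.1Q tag
print("untagged: %.0f pps" % untagged)  # ~1.49 Mpps
print("tagged:   %.0f pps" % tagged)    # ~1.42 Mpps
```

So tagging costs a few percent of peak PPS at minimum frame size --
measurable, but it would hit every OS in the trunked tests equally.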
> > generators are the same devices all the time. I am not using
> > different frame sizes for different setups to try and make something
> > look good and other things bad.
>
>I didn't say that you were, just to be clear. But that does not mean
>that running some tests with tagging turned on, and others not, is
>good benchmarking practice: you should run the exact same set of tests
>for all host configurations, because doing otherwise yields distorted
>results.
I did where I could. I am not saying to compare the trunking
performance of NetBSD to the non-trunking performance of FreeBSD; I am
comparing trunking to trunking and non-trunking to non-trunking. I did
the majority of my testing with the Intel PCIe dual-port card, which
NetBSD 3.1 does not support. Since I had some bge results, I ran the
bge tests in VLAN mode, and I don't see why you can't compare that to
VLAN mode on FreeBSD using the same bge card. It's the exact same
switch config and the same traffic generators for both sets of tests,
so I don't see why it's not a valid comparison.
> > >3) There is a problem with autonegotiation either on your switch, on the
> > > particular wm adapter you're using, or in NetBSD -- there's not quite
> > > enough data to tell which. But look at the number of input errors on
> > > the wm adapter in your test with NetBSD-current: it's 3 million. This
> > > alone is probably responsible for most of the performance difference
> >
> > .... Or the kernel just was not able to forward fast enough.
>
>No; that will simply not cause the device driver to report an input
>error, whereas your netstat output shows that it reported three *million*
>of them. Something is wrong at the link layer. It could be in the NetBSD
>driver for the Intel gigabit PHY, but there's not enough data in your
>report to be sure. FWIW, I work for a server load balancer vendor that
>ships a FreeBSD-based product, and I consequently do a lot of load testing.
>Even with tiny UDP packets, I get better forwarding performance from
>basically _every_ OS you tested than you seem to, which is why I think
>there's something that's not quite right with your test rig. I am just
>doing my best to point out the first things that come to mind when I look
>at the data you've put online.
Stock FreeBSD, or modified FreeBSD? With RELENG_4 I can push over
1Mpps. All of the test setups I used saw input errors when I tried to
push too many packets through the box. I really don't know much about
NetBSD, but it too will have some limit on how much it can forward.
Once that limit is hit, how does it report it? Does it silently drop
the packet, or does it show up as an input error?
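For what it's worth, on the BSDs the two failure modes usually land in
different counters. Roughly (sysctl names are from memory, so treat
these as guesses to verify on each system):

```shell
# Link-layer input errors (FIFO overruns, bad frames) show up in the
# Ierrs column per interface:
netstat -i

# IP-layer forwarding drops, when the input queue overflows, are a
# separate counter (names may vary by release):
sysctl net.inet.ip.ifq.drops          # NetBSD ipintrq drops
sysctl net.inet.ip.intr_queue_drops   # FreeBSD equivalent
```

If the drops are in the IP input queue rather than Ierrs, the kernel
simply couldn't keep up; millions of Ierrs point at the link layer.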
>I note that you snipped the text where I noted that because you're
>testing the wm card with mismatched kernel and ifconfig, you're not
>using its hardware checksum offload. That's one thing you should
>definitely fix, and if you don't have that turned on for other
>kernels you're testing, of course you should probably fix it there too.
It didn't seem to make much difference on FreeBSD (i.e., turning
hardware checksums on or off for routing performance), but I will see
if I can get the box rebuilt to sync the base with the kernel.
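For the record, these are the knobs I'd be toggling, assuming the
drivers support offload (option names from memory, so double-check the
ifconfig man page on each system):

```shell
# FreeBSD: enable receive/transmit checksum offload on em(4)
ifconfig em0 rxcsum txcsum

# NetBSD: per-protocol checksum offload flags on wm(4)
ifconfig wm0 ip4csum tcp4csum udp4csum

# Either system: see which offload options are actually enabled
ifconfig em0 | grep -i options
```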
---Mike