NetBSD-Users archive


Re: Weird network performance problem



On Mon, 20 Jan 2020 at 19:00, Michael van Elst <mlelstv%serpens.de@localhost> wrote:
>
> ci4ic4%gmail.com@localhost (Chavdar Ivanov) writes:
>
> >> > If I revert to 32768, I get back about the third of the speed.
>
>
> NetBSD has rather small buffers as default and the auto scaling code
> isn't as aggressive as the one in Linux or FreeBSD.
>
> If you disable auto scaling, then the configured space is fixed
> unless the program asks for a value itself (iperf3 option -w).
>
> If you enable auto scaling (the default), then the configured space
> is the minimum and you have net.inet.tcp.{send,recv}buf_{inc,max}
> to give (somewhat linear) increments and a maximum. A program that
> sets the buffer sizes itself automatically disables autoscaling
> for that buffer (so don't use iperf3 -w then).
>
> If you increase buffers you may also need to bump the limits
> kern.sbmax (maximum size for a socket buffer) and kern.mbuf.nmbclusters
> (system wide number of mbuf clusters of 2kbyte each).
>
> Apparently the W10 clients add something to network latency, which means
> that larger buffers are required to get best performance.
>
>
> I suggest for a regular 64bit PC you keep autoscaling, bump read and write
> minimum to 128k, increment to 256k and maximum to 2M+128k.  Also set sbmax
> to 2MB+128k and nmbclusters to >= 65536 (128MB).

That seems very sensible, and I discovered I already had it on one of
my other laptops... I had completely forgotten about that change.
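
For reference, Michael's suggestion corresponds roughly to the
following /etc/sysctl.conf fragment (my sketch of it; the sysctl names
are the ones he mentions, and 2M+128k = 2228224 bytes):

```
# /etc/sysctl.conf - sketch of the suggested tuning
net.inet.tcp.recvbuf_auto=1        # keep autoscaling on (the default)
net.inet.tcp.sendbuf_auto=1
net.inet.tcp.recvspace=131072      # 128k minimum (autoscaling floor)
net.inet.tcp.sendspace=131072
net.inet.tcp.recvbuf_inc=262144    # 256k increment
net.inet.tcp.sendbuf_inc=262144
net.inet.tcp.recvbuf_max=2228224   # 2M+128k maximum
net.inet.tcp.sendbuf_max=2228224
kern.sbmax=2228224                 # socket buffer limit must cover the maximum
kern.mbuf.nmbclusters=65536        # 65536 clusters of 2k each = 128MB
```

Note that a program calling setsockopt(SO_SNDBUF/SO_RCVBUF) itself
(e.g. iperf3 -w) disables autoscaling for that buffer, as Michael
says, so these settings only govern programs that leave the buffers
alone.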

Another data point, which could be of some use to somebody. Under
XenServer/XCP-NG I have always configured NetBSD guests to use Intel
e1000 device emulation rather than the default Realtek rtl8139. Today
I found that, at least under the latest XCP-NG (8.0), the rtl8139 is
roughly twice as fast:

$ iperf3 -c spare
...
[  7]   5.00-6.00   sec  9.22 MBytes  77.3 Mbits/sec    0    512 KBytes
[  7]   6.00-7.00   sec  9.00 MBytes  75.3 Mbits/sec    0    512 KBytes
[  7]   7.00-8.00   sec  9.46 MBytes  79.5 Mbits/sec    0    512 KBytes
[  7]   8.00-9.00   sec  9.27 MBytes  77.7 Mbits/sec    0    512 KBytes
[  7]   9.00-10.00  sec  9.09 MBytes  76.3 Mbits/sec    0    512 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  7]   0.00-10.00  sec  90.4 MBytes  75.8 Mbits/sec    0             sender
[  7]   0.00-10.00  sec  90.0 MBytes  75.5 Mbits/sec                  receiver

iperf Done.

#
# shut down the spare machine, change the NIC emulation from e1000 to rtl8139
#
# in both cases net.inet.tcp.[recv|send]space=131072
#
$ iperf3 -c spare
Connecting to host spare, port 5201
[  7] local 192.168.0.29 port 65296 connected to 192.168.0.10 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  7]   0.00-1.00   sec  15.9 MBytes   133 Mbits/sec    0    512 KBytes
[  7]   1.00-2.00   sec  19.8 MBytes   166 Mbits/sec    0    133 KBytes
[  7]   2.00-3.00   sec  21.2 MBytes   178 Mbits/sec    0    281 KBytes
[  7]   3.00-4.00   sec  20.7 MBytes   174 Mbits/sec    0    209 KBytes
[  7]   4.00-5.00   sec  19.5 MBytes   164 Mbits/sec    0    132 KBytes
[  7]   5.00-6.00   sec  20.7 MBytes   174 Mbits/sec    0    236 KBytes
....
#
# Unfortunately in both cases far from the performance of a FreeBSD
# guest on the same XCP-NG host - as follows:
#
$ iperf3 -c freenas
Connecting to host freenas, port 5201
[  5] local 192.168.0.29 port 65290 connected to 192.168.0.251 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   109 MBytes   912 Mbits/sec    0   4.00 MBytes
[  5]   1.00-2.00   sec   111 MBytes   933 Mbits/sec    0   4.00 MBytes
[  5]   2.00-3.00   sec   111 MBytes   932 Mbits/sec    0   4.00 MBytes
[  5]   3.00-4.00   sec   111 MBytes   933 Mbits/sec    0   4.00 MBytes
[  5]   4.00-5.00   sec   111 MBytes   933 Mbits/sec    0   4.00 MBytes

#
# In all above tests the client was a -current AMD64 physical system
# on the same segment as the XCP-NG host.
#
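
For scale, the buffer sizes discussed above line up with a simple
bandwidth-delay product estimate: the socket buffer has to cover
bandwidth x RTT, or TCP stalls waiting for ACKs. A quick sketch (the
1 Gbit/s and 16 ms figures are illustrative assumptions, not
measurements from these runs):

```shell
# Bandwidth-delay product: bytes in flight needed to keep the pipe full.
# Illustrative inputs: 1 Gbit/s link, 16 ms round-trip time.
bits_per_sec=1000000000
rtt_ms=16
bdp_bytes=$(( bits_per_sec / 8 * rtt_ms / 1000 ))
echo "BDP = ${bdp_bytes} bytes"   # 2000000 bytes, i.e. ~2 MB
```

which is about the 2M+128k maximum suggested above; a higher-latency
path (such as via the W10 clients mentioned earlier) needs
correspondingly more.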

The above comparison with FreeBSD is not entirely fair, though: the
NetBSD guests run pure HVM, while the FreeBSD one is also reported as
HVM but with optimised (paravirtualised) I/O, which is missing in the
NetBSD case - its network interfaces show up as xn0/xn1 in the system,
even though the FreeBSD guest's definition lists e1000.

Chavdar

....

>
> --
> --
>                                 Michael van Elst
> Internet: mlelstv%serpens.de@localhost
>                                 "A potential Snark may lurk in every tree."



-- 
----

