Network performance issues
Hi, I'm testing some networking code I've written that uses I/O
multiplexing with kqueue and poll. The test consists of the following:
- The client opens 100 simultaneous connections and sends N bytes of
  data on each connection.
- The server accepts the connections and replies with N bytes of data
  back to the client.
- When the client's send and receive counters both reach N bytes for a
  connection, it closes that connection.
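For reference, the test is roughly equivalent to the following sketch
using Python's selectors module (which sits on top of kqueue on NetBSD
and poll/epoll elsewhere). This is an illustration of the test shape,
not my actual code; the loopback address, the scaled-down connection
count, and the byte counters are assumptions for a self-contained demo.

```python
import selectors
import socket
import threading
import time

NUM_CONNS = 10   # scaled down from the 100 connections in the real test
N = 1000         # bytes sent and echoed per connection

def echo_server(listener):
    """Accept connections and echo back every byte received."""
    sel = selectors.DefaultSelector()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ)
    finished = 0
    echoed = {}                                 # per-connection byte counters
    while finished < NUM_CONNS:
        for key, _ in sel.select(timeout=5):
            sock = key.fileobj
            if sock is listener:
                conn, _ = listener.accept()
                conn.setblocking(False)
                sel.register(conn, selectors.EVENT_READ)
                echoed[conn] = 0
            else:
                data = sock.recv(4096)
                if data:
                    sock.sendall(data)          # echo the chunk back
                    echoed[sock] += len(data)
                if not data or echoed[sock] >= N:
                    sel.unregister(sock)
                    sock.close()
                    finished += 1

def run_client(port):
    """Open NUM_CONNS connections, send N bytes on each, then wait
    until N bytes have come back; close each finished connection."""
    sel = selectors.DefaultSelector()
    received = {}
    for _ in range(NUM_CONNS):
        s = socket.create_connection(("127.0.0.1", port))
        s.sendall(b"x" * N)                     # blocking send of the payload
        s.setblocking(False)
        sel.register(s, selectors.EVENT_READ)
        received[s] = 0
    remaining = NUM_CONNS
    while remaining:
        for key, _ in sel.select(timeout=5):
            s = key.fileobj
            data = s.recv(4096)
            if data:
                received[s] += len(data)
            if not data or received[s] >= N:
                sel.unregister(s)
                s.close()
                remaining -= 1
    return sum(received.values())

listener = socket.socket()
listener.bind(("127.0.0.1", 0))                 # any free port
listener.listen(128)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

start = time.time()
total = run_client(port)
print("received %d bytes total in %.2fs" % (total, time.time() - start))
```

The real test runs client and server on separate machines over the
switch, of course; over loopback this sketch finishes in well under a
second for either payload size.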
I'm seeing some performance issues with larger data segments:
- With 500 bytes sent and received on each of the 100 connections, the
  total time ranges from 0.07 to 1.54 seconds.
- With 1000 bytes on each connection, it takes 31.05 seconds.
The hardware is two fast x86 machines, both using hme0 network
interfaces, connected to a 100 Mbps Ethernet switch. The client is
running NetBSD 5.0.2, the server NetBSD 5.1_RC3.
I've gone through my code and can't see any problems, and the hardware
has plenty of bandwidth, so that shouldn't slow things down. What I
find strange is that going from 500-byte to 1000-byte data segments
increases the total wait time by so much; if I had bugs in my code,
surely they would cause the same issues with both 500- and 1000-byte
segments. I tested this with both kqueue and poll and got similar
results. Could this be a kernel issue? Which sysctl tunables could
influence this?