tech-net archive


Re: network queue lengths


David Young wrote:
I am concerned that the lengthy Tx queues/rings in NetBSD lead
to lengthy delays and unfairness in typical home/office gateway
applications.  What do you think?

Uff, for a moment I was hoping you wanted to write a new packet scheduler to replace the ancient ALTQ, which looks mostly unmaintained :)

If a NetBSD box is the Internet gateway for several 10-, 100-, or
1000-megabit clients, and the clients share a 1-, 2-, or 3-megabit
Internet pipe, it is easy for some outbound stream to fill both the
Tx ring (max 64 packets) and the output queues (max 256 packets) to
capacity with full-size (Ethernet MTU) packets.  Once the ring + queue
capacity is reached, every additional packet of outbound traffic that
the LAN offers will linger in the gateway between 1.3 and 3.8 seconds.

Now, suppose that we shorten the interface queue, or else we "shape"
traffic using ALTQ.  Outbound traffic nevertheless spends 1/4 to 3/4
second on the Tx ring, which may defeat ALTQ prioritization in some cases.

It's not very clear to me how you got these numbers. Transmitting 320 full-sized frames shouldn't take more than 50 ms over a 100 Mbit full-duplex Ethernet link. Are you talking about some other in-kernel delays?

This is getting a bit long, so I am going to hastily draw some
conclusions.  Please tell me if I am way off base:

1 in order for ALTQ to be really effective at controlling latency for
  delay-sensitive traffic, it has to feed a very short Tx ring

64 frames sounds reasonable to me; that's a maximum jitter of about 10 ms. Cisco had a default of 40, IIRC, for the output queue. But yes, it would be a good idea to have a global sysctl to decrease this value when the gateway carries traffic for jitter-sensitive applications. I say global because it's the easiest way to achieve this, and because I'm thinking of MMoIP, which is mostly bidirectional.

2 maximum queue/ring lengths in NetBSD are currently tuned for very
  high-speed networks; the maximums should adapt to hold down the expected
  delay while absorbing momentary overflows.

No idea for the moment how ALTQ interacts with ipintrq, but a packet scheduler should be able to re-order it.

