Subject: Re: Up-stream bandwidth shaping without resorting to linux/iptables?
To: Steven M. Bellovin <smb@cs.columbia.edu>
From: Greg A. Woods <woods@weird.com>
List: netbsd-users
Date: 02/05/2005 23:49:20
[ On Saturday, February 5, 2005 at 17:53:59 (-0500), Steven M. Bellovin wrote: ]
> Subject: Re: Up-stream bandwidth shaping without resorting to linux/iptables?
>
> Actually, at least for TCP and well-behaved UDP applications, that's
> not so. Dropped packets are interpreted by the sender as an indication
> of congestion, which will cause it to slow down.
Ah ha! Yes, of course. And dropping packets is one of the things
ALTQ can be good at! ;-)
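(Just to make explicit why a drop works as a signal: a well-behaved,
Reno-ish sender is supposed to react to each loss roughly like this
little python-ish sketch of my own -- an idealization, not any
particular stack's actual code:

    def update_cwnd(cwnd, ssthresh, loss_detected):
        # Per-RTT reaction of an idealized Reno-style sender, counted in
        # segments; fast retransmit/recovery and timeouts are ignored.
        if loss_detected:
            ssthresh = max(cwnd // 2, 2)    # multiplicative decrease
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                       # slow start: double each RTT
        else:
            cwnd += 1                       # congestion avoidance: +1/RTT
        return cwnd, ssthresh

i.e. every drop is supposed to cost the sender roughly half its sending
rate, which is exactly the lever a shaper that drops packets is pulling.)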
I haven't always been very trusting of a random sender's TCP congestion
control implementation though. Do you know offhand whether all the
more common implementations out there actually do the right thing or not?
Also, for multimedia applications using UDP, such as VoIP using IAX, the
majority of senders are never going to be "well behaved" by that
definition. There's a specific amount of data for them to send in every
time slot, and dropped packets are only going to affect the quality of
reception, not slow them down. I suppose in a really sophisticated
multimedia application the receiver might report that "congestion" of
some sort is causing quality problems, and the result might be
renegotiation to use more compression or to drop back to a
lower-quality data stream (perhaps if the user chooses to permit it),
but that's really not going to help much if the goal is to limit the
total incoming traffic of a given kind on a given link.
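For a concrete sense of scale, here's my own back-of-the-envelope
arithmetic, assuming G.711 at 20 ms packetization carried in IAX2
mini-frames (4-byte header); other codecs and framings will differ:

    payload  = 160           # bytes of G.711 audio per 20 ms frame
    overhead = 20 + 8 + 4    # IP + UDP + IAX2 mini-frame headers
    pps      = 1000 // 20    # 50 packets per second, each direction
    print((payload + overhead) * pps * 8)   # ~77 kbit/s per direction

and that ~77 kbit/s keeps right on coming whether the network drops 1%
of it or 30% of it -- the drops only make the call sound worse.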
This is where I think flow-based management could simply prevent too
many flows from being opened in the first place and thus help provide
better QoS for all those flows that do get established. With something
like IAX, though, an intermediate gateway would have to transparently
proxy every connection setup and do the right thing to effect a "system
busy" for some users when too many calls are attempted for the allocated
bandwidth. Unfortunately ALTQ can't do anything like that yet.
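The bookkeeping side of that would be trivial -- something like this
hypothetical admission check (made-up numbers; the hard part is the
transparent proxying of the call setup around it):

    ALLOCATED_BPS = 256000   # bandwidth set aside for voice on the link
    PER_CALL_BPS  = 80000    # nominal rate of one call, codec-dependent
    active_calls  = 0

    def try_admit_call():
        # Admit a new call only if its nominal rate still fits within the
        # allocation; otherwise the caller gets a "system busy".
        global active_calls
        if (active_calls + 1) * PER_CALL_BPS > ALLOCATED_BPS:
            return "busy"
        active_calls += 1
        return "admitted"

(and of course active_calls gets decremented again on hangup).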
> Another strategy (I
> suspect, though I haven't tried it) is to delay ACKs, since the sending
> TCP will use the ACK arrival rate to clock the sending rate.
Yes, that's something I've also considered as a potential flow
management technique. I know it worked "well" (so to speak) with at
least one other windowing protocol, kermit. I remember diagnosing a
problem once, long ago, where a kermit transfer was running far below
the available link bandwidth, and the problem was that the ACKs just
weren't getting out fast enough (due to processing overhead, IIRC) so
the transmission window was always full. In that case the
"solution" (to maximize throughput) was to increase the window size.
Unfortunately this kind of connection management isn't something that
ALTQ is currently of any help with either.... :-(
What happens in TCP if ACKs are dropped? Since they're cumulative I'd
guess most drops would just be covered by the next ACK to get through,
so, IIRC, that wouldn't work as well for rate control as simply delaying
the ACKs.
--
Greg A. Woods
H:+1 416 218-0098 W:+1 416 489-5852 x122 VE3TCP RoboHack <woods@robohack.ca>
Planix, Inc. <woods@planix.com> Secrets of the Weird <woods@weird.com>