Subject: Re: Melting down your network [Subject changed]
To: Jonathan Stone <jonathan@dsg.stanford.edu>
From: Christos Zoulas <christos@zoulas.com>
List: tech-kern
Date: 03/28/2005 22:04:47
On Mar 28,  6:54pm, jonathan@dsg.stanford.edu (Jonathan Stone) wrote:
-- Subject: Re: Melting down your network [Subject changed]

| >> But it does not curb congestion. I can keep spinning and sending.
| 
| >I think actually it does. Yes, you can keep spinning, but you will only be
| >sending if there is space in the interface queue to send. So you won't be
| >overloading the network, you will just be spinning your CPU. That's a
| >local DOS, not a network spam. :-)
| 
| Yep. You may eat a lot of local CPU by spinning, but you're not
| sustaining an unbounded overflow in the queue.

Same with sleeping. We are not making the situation worse, except that
we are not burning CPU in userland; we are sleeping in the kernel.
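
To make the contrast concrete, here is a userland sketch; 's' is
assumed to be a connected UDP socket, and the blocking variant
sleeping instead of returning ENOBUFS is the behaviour under
discussion, not necessarily what the tree does today:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>

    /* Non-blocking socket: the application burns CPU in userland,
     * retrying until the interface queue has room again. */
    ssize_t
    send_spinning(int s, const void *buf, size_t len)
    {
            ssize_t n;

            while ((n = send(s, buf, len, 0)) == -1 && errno == ENOBUFS)
                    continue;
            return n;
    }

    /* Blocking socket, proposed behaviour: the kernel sleeps on the
     * application's behalf until if_snd drains, so no spinning. */
    ssize_t
    send_sleeping(int s, const void *buf, size_t len)
    {
            return send(s, buf, len, 0);
    }

Either way the network sees the same bounded load; the only difference
is where the waiting happens.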

| >> | To guarantee stability, I believe a stronger push-back is required
| >> | than can be given by merely sleeping until the interface queue has
| >> | space: the classic multiplicative decrease part of AIMD.
| >>
| >> That is the job of a higher level protocol/api not send()
| >
| >Agreed.
| 
| Bill,
| 
| No, that's incorrect; I suspect you don't understand the issue
| (or don't see it the way a networking expert will see it).
| Here is the key point again:
| 
| An ill-behaved app can *always* attempt to overflow a queue. The queue
| under discussion here as a potential victim of overflow attacks is the
| per-interface if_snd queue.
| 
| Thus, the question under discussion is: what should we do under
| sustained overload (or attempts to sustain overload) of the if_snd
| queue?  Specifically, when an app using UDP (or other unreliable
| datagram protocols) uses non-blocking I/O to persistently overflow the
| if_snd queue?
| 
| The most correct answer is: we should drop packets.

You are not dropping packets. You are returning ENOBUFS to the application,
and you are giving it a chance to retry.
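
And if an application really wants the AIMD-style push-back Jonathan
is asking for, it can layer that on top of ENOBUFS itself. A rough
sketch; the pacing variable and its constants are made up for
illustration:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>
    #include <unistd.h>

    static useconds_t usec_gap;         /* hypothetical pacing knob */

    ssize_t
    send_aimd(int s, const void *buf, size_t len)
    {
            ssize_t n;

            for (;;) {
                    n = send(s, buf, len, 0);
                    if (n != -1) {
                            if (usec_gap > 0)
                                    usec_gap--;     /* additive increase */
                            return n;
                    }
                    if (errno != ENOBUFS)
                            return -1;
                    /* double the gap, halving the send rate:
                     * the multiplicative decrease */
                    usec_gap = usec_gap ? usec_gap * 2 : 1000;
                    usleep(usec_gap);
            }
    }

That is exactly the kind of policy that belongs above send(), in the
application or a library, not in the kernel's send path.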

| >I'm confused. Aren't we talking about one way of the kernel avoiding a
| >flood vs another? One way is to just drop; the packet isn't sent, so no
| >flooding. The other way is to wait, so the packet isn't sent until there's
| >room on the net, so no flooding. ??? So either way the kernel is stopping
| >the app from flooding the net - one way just tries harder to avoid a UDP
| >packet drop. ??
| 
| Bill, this is where I say: please go back and read the part about
| the multiplicative decrease in AIMD.  Dropping packets is a better
| solution than simply delaying the application just long enough for
| the queue to drain and then letting the ill-behaved app continue its
| denial-of-service behaviour.

Packets are not being dropped in this case... It is just a matter of
having the kernel sleep and return upon success, or having the
application spin and retry sending the same packet. If the application
wants to spin, it always can; indeed, spinning is the only way for it
to achieve the maximum send rate. But if the kernel slept, the
application would not have to spin.
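
For reference, the kernel-side difference is small. Here is a
simplified sketch of the if_snd hand-off; this is illustrative code,
not the actual NetBSD implementation, and the driver's dequeue path
would need a matching wakeup():

    #include <sys/param.h>
    #include <sys/errno.h>
    #include <sys/mbuf.h>
    #include <sys/proc.h>
    #include <net/if.h>

    int
    ifq_enqueue_or_sleep(struct ifqueue *ifq, struct mbuf *m, int nonblock)
    {
            int error;

            while (IF_QFULL(ifq)) {
                    if (nonblock) {
                            m_freem(m);
                            return ENOBUFS;         /* current behaviour */
                    }
                    /* alternative: sleep until the queue drains */
                    error = tsleep(ifq, PSOCK | PCATCH, "ifqfull", 0);
                    if (error != 0) {
                            m_freem(m);
                            return error;
                    }
            }
            IF_ENQUEUE(ifq, m);
            return 0;
    }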

christos