Subject: Re: Melting down your network [Subject changed]
To: Bill Studenmund <wrstuden@netbsd.org>
From: Jonathan Stone <jonathan@dsg.stanford.edu>
List: tech-kern
Date: 03/28/2005 18:54:53
I am going to reply to Bill here, because Christos is, to me, not yet
apparently quite up to speed in _this_ specific area.  I say nothing
at all about Christos' opinion in other areas: in all other areas,
Christos _always_ gives a well-informed opinion worth anyone's time to
consider. (And I mean that sincerely.)



In message <20050329021237.GF11361@netbsd.org>, Bill Studenmund writes:

>> But it does not curb congestion. I can keep spinning and sending.

>I think actually it does. Yes, you can keep spinning, but you will only be
>sending if there is space in the interface queue to send. So you won't be
>overloading the network, you will just be spinning your CPU. That's a
>local DOS, not a network spam. :-)

Yep. You may eat a lot of local CPU by spinning, but you're not
sustaining an unbounded overflow in the queue.


>> | To guarantee stability, I believe a stronger push-back is required
>> | than can be given by merely sleeping until the interface queue has
>> | space: the classic multiplicative decrease part of AIMD.
>>
>> That is the job of a higher level protocol/api not send()
>
>Agreed.

Bill,

No, that's incorrect; I suspect you don't understand the issue
(or don't see it the way a networking expert will see it).
Here is the key point again:

An ill-behaved app can *always* attempt to overflow a queue. The queue
under discussion here, as a potential victim of overflow attacks, is the
per-interface if_snd queue.

Thus, the question under discussion is: what should we do under
sustained overload (or attempts to sustain overload) of the if_snd
queue?  Specifically, when an app using UDP (or other unreliable
datagram protocols) uses non-blocking I/O to persistently overflow the
if_snd queue?

The most correct answer is: we should drop packets.


>I'm confused. Aren't we talking about one way of the kernel avoiding a
>flood vs another? One way is to just drop; the packet isn't sent, so no
>flooding. The other way is to wait, so the packet isn't sent until there's
>room on the net, so no flooding. ??? So either way the kernel is stopping
>the app from flooding the net - one way just tries harder to avoid a UDP
>packet drop. ??

Bill, this is where I say: please go back and read that part about the
multiplicative decrease part of AIMD.  Dropping packets is a better
solution than simply delaying the application just long enough for
the queue to drain, and then letting the misbehaving app continue its
denial-of-service behaviour.