Subject: Melting down your network
To: Emmanuel Dreyfus <manu@netbsd.org>
From: Jonathan Stone <jonathan@dsg.stanford.edu>
List: tech-kern
Date: 03/27/2005 13:34:45
The technically informed participants seem to have reached consensus
that there's nothing actively wrong with current send(2) semantics on
NetBSD.  (I note Thor would prefer to change the ENOBUFS semantics,
but I haven't seen him say that what we have now is actually incorrect.)
I have changed the Subject: line to accurately reflect the subject of
discussion, namely, Emmanuel's attempts to write a multicast application
which achieves a network meltdown via multicast, in pursuit of an
information-dissemination application.

I intend to propagate that Subject: change to all subsequent messages
which do not directly address shortcomings in the semantics of NetBSD's
send(2).


In message <1gu3e4v.15tzljv13c2ylmM%manu@netbsd.org>,
Emmanuel Dreyfus writes:


>> You're saying that one syscall takes enough time for an entire full
>> interface queue to drain?  How can you ever fill it up with sends then?


In that case (as others have commented several times): Emmanuel needs
a bigger send queue.  The timings on select() may be of some interest,
if they are reproducible; I think kqueue would be more interesting, as
I'd expect it to have lower overhead.
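
For scale (rough numbers, assuming an ordinary 1500-byte frame): at
100Mbit/s one frame takes about 120us on the wire, so even a few dozen
queued frames need several milliseconds to drain, while a send() call
costs on the order of microseconds.  That is exactly why a tight send
loop fills the queue so easily.

As for the knobs I'd reach for first, here is a sketch; the descriptor,
the buffer size, and the error handling are placeholder assumptions, not
a recommendation.  (Whether the socket buffer or the interface output
queue is the "send queue" that really needs growing is, of course, part
of the question.)

    #include <sys/types.h>
    #include <sys/event.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    /*
     * Illustrative only: enlarge the socket send buffer, then sleep on
     * kqueue until the socket reports writable space again, instead of
     * polling with select().  's' is an existing UDP socket.
     */
    int
    wait_until_writable(int s)
    {
        int sndbuf = 256 * 1024;    /* example size, not a tuned value */
        struct kevent kev;
        int kq, rv;

        (void)setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));

        if ((kq = kqueue()) == -1)
            return -1;
        EV_SET(&kev, s, EVFILT_WRITE, EV_ADD | EV_ONESHOT, 0, 0, 0);
        rv = kevent(kq, &kev, 1, &kev, 1, NULL);    /* blocks */
        close(kq);
        return (rv == 1) ? 0 : -1;
    }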

Emmanuel, can I check if I am following this correctly?

1.  You are attempting to write a multicast application for
dissemination of information.

2.  You want the application to send at wire rate.

3. You deliberately wish the application to be non-rate-adaptive:
fixed-rate, wire-speed.

4. You deliberately want the application to be non-congestion
responsive (technically subsumed in earlier points, but worth listing
in its own right).

5.  You want *reliable* dissemination of information: that is, your
application is expected to recover from packet drops. Or at least,
from some packet drops.

6.  You mention several times that the sender must detect loss between
its send() calls and the wire.  I see no mention of either ACKs or
NACKs (positive or negative acknowledgements), which leads me to infer
that the *only* loss-recovery mechanism you envisage is for the sending
application to detect drops between its send() call and the network,
and retransmit those locally-dropped packets.

7. If the inference in point #6 is correct, then you are implicitly
assuming that the network never drops packets, and the receivers never
drop packets: the *only* source of packet drop is at the sender.  (I
have tried to render that design as code just after this list.)
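
If I have that design right, it collapses to roughly the loop below.
The names, the packet size, and the lack of any pacing are mine, purely
to pin the inference down; this is a sketch of what I think you are
describing, not a proposal.

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <errno.h>
    #include <string.h>

    #define PKT_SIZE 1400    /* illustrative payload size, my choice */

    /*
     * Points 2-7 rendered as code: a non-adaptive sender that pushes
     * packets at the wire as fast as send() will take them, and whose
     * only loss recovery is retrying a send that failed locally with
     * ENOBUFS.  's' is assumed to be a connected UDP/multicast socket.
     */
    void
    blast(int s)
    {
        char pkt[PKT_SIZE];
        ssize_t n;

        memset(pkt, 0, sizeof(pkt));
        for (;;) {
            n = send(s, pkt, sizeof(pkt), 0);
            if (n == -1 && errno == ENOBUFS)
                continue;    /* point 6: the only drop the sender sees */
            if (n == -1)
                break;       /* anything else: give up */
            /*
             * Points 3 and 4: nothing from the network or from the
             * receivers ever slows this loop down.
             */
        }
    }

Nothing in that loop can ever learn about a drop downstream of the
local interface queue, which is the assumption I am querying in
point #7.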


Am I correct so far? Or even close?