Subject: Re: Linux ip code (fwd)
To: Greg Hudson <ghudson@MIT.EDU>
From: Ken Hornstein <kenh@cmf.nrl.navy.mil>
List: current-users
Date: 05/06/1995 01:05:35
>>>> o Broken UDP error handling.
>
>I think this refers to how BSD networking code handles ICMP port
>unreachable and other such messages in response to UDP packets. BSD
>will only report the error to the application (interrupting select()
>and handing an error return to the next UDP operation) if the UDP
>socket is connect()ed to the address where the port unreachable came
>from, whereas Linux will forward the error even if the socket is not
>connected. Mind you, this is somewhat useless, since it's impossible
>to tell *which* address you got the port unreachable for (you could
>imagine filling this in for a recvfrom(), I guess, but Linux doesn't
>appear to do this).
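To make that concrete, here's roughly what the difference looks like from the
application side. This is an untested sketch; the address and port are made up
and all the error checking is left out:

    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int
    main(void)
    {
            struct sockaddr_in sin;
            char buf[512];
            int s;

            memset(&sin, 0, sizeof(sin));
            sin.sin_family = AF_INET;
            sin.sin_addr.s_addr = inet_addr("10.1.1.1");    /* made-up host */
            sin.sin_port = htons(9999);             /* nothing listens here */

            s = socket(AF_INET, SOCK_DGRAM, 0);

            /*
             * Unconnected: the datagram goes out, the ICMP port unreachable
             * comes back, but the error is never delivered to this socket;
             * a recvfrom() here would just sit and block.
             */
            sendto(s, "ping", 4, 0, (struct sockaddr *)&sin, sizeof(sin));

            /*
             * Connected: the same exchange shows up as ECONNREFUSED on the
             * next operation on the socket (and wakes up a select()).
             */
            connect(s, (struct sockaddr *)&sin, sizeof(sin));
            send(s, "ping", 4, 0);
            if (recv(s, buf, sizeof(buf), 0) < 0 && errno == ECONNREFUSED)
                    printf("got the port unreachable as ECONNREFUSED\n");

            close(s);
            return 0;
    }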
I just looked at the bible (Stevens' "TCP/IP Illustrated, Volume 2"). For those
interested, check out pages 748-749. The two reasons given for this limitation
are:
- With unconnected sockets, the only way to demultiplex where the error goes
to is via the local port number. If multiple applications have this port
open, you end up having to send the error to all of them, which means
some processes might get an error when in fact none of the datagrams
they sent caused such an error.
- As Greg mentioned above, there's no way to tell exactly _what_ packet you
sent caused the error. You only know that you got some sort of error.
A small aside in the text admits this is a weakness of the sockets API.
So TCP/IP Illustrated Volume 2 says a conscious design decision was made: if
you want per-destination error handling, use multiple UDP sockets and connect()
each one of them.
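For what it's worth, here is the sort of thing the book is talking about: one
socket per server, each one connect()ed, so a port unreachable from one server
shows up only on that server's socket. Again an untested sketch with made-up
addresses, and "query" is obviously not a real DNS packet:

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    #define NSERVERS 3

    int
    main(void)
    {
            const char *servers[NSERVERS] =
                { "10.1.1.1", "10.1.1.2", "10.1.1.3" };
            struct sockaddr_in sin;
            struct timeval tv;
            fd_set fds;
            int s[NSERVERS], i, maxfd = -1;

            FD_ZERO(&fds);
            for (i = 0; i < NSERVERS; i++) {
                    memset(&sin, 0, sizeof(sin));
                    sin.sin_family = AF_INET;
                    sin.sin_addr.s_addr = inet_addr(servers[i]);
                    sin.sin_port = htons(53);

                    s[i] = socket(AF_INET, SOCK_DGRAM, 0);
                    connect(s[i], (struct sockaddr *)&sin, sizeof(sin));
                    send(s[i], "query", 5, 0);  /* stand-in for a real query */

                    FD_SET(s[i], &fds);
                    if (s[i] > maxfd)
                            maxfd = s[i];
            }

            tv.tv_sec = 5;
            tv.tv_usec = 0;

            /*
             * A pending error makes a connected socket readable, so the
             * select() wakes up and the recv() returns it -- and only on
             * the socket for the server that generated the ICMP.
             */
            if (select(maxfd + 1, &fds, NULL, NULL, &tv) > 0) {
                    for (i = 0; i < NSERVERS; i++) {
                            char buf[512];

                            if (!FD_ISSET(s[i], &fds))
                                    continue;
                            if (recv(s[i], buf, sizeof(buf), 0) < 0 &&
                                errno == ECONNREFUSED)
                                    printf("%s: port unreachable\n", servers[i]);
                            else
                                    printf("%s: got something back\n", servers[i]);
                    }
            }
            return 0;
    }

In real life you'd loop on the select() until you hear from everybody or the
timeout runs out, but the point is just that the error demultiplexes to the
right place.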
Judging from the code segment that Greg showed, it seems that the Linux
networking guy's idea of "BSD UDP error semantics" was that _nothing_ got
returned. That is very definitely broken, but it's not fair to blame BSD for
that. Personally, I think returning an error where there wasn't one is much
worse than not making errors available in all cases, but what the hell do
I know? :-)
I am amused by the comment that said (paraphrased) "BSD error semantics caused
DNS queries to dead nameservers to slow down", when there is code in the
BIND client code that specifically does a connect() on a UDP socket when only
one nameserver is being queried.
--Ken