Re: bad interaction between TCP delayed ack and RSTs
So, the summary so far appears to be this:
- It is okay for NetBSD to change TCP behaviors, but if others change
different behaviors and the two conflict, the other guy is the broken one.
- Firewalls that drop packets without sending RST are broken.
I am not aware of any TCP standard that says sending an ACK in response
to an RST, without tearing down the connection, is correct. What I see in
RFC 793 says the connection must be closed upon receiving an
acceptable RST in the window; see the end of section 3.4 and "Event
Processing", page 70. I couldn't find the mentioned NISCC advisory.
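For concreteness, the RFC 793 rule I'm pointing at can be sketched like
this (a simplified illustration in Python; the names are mine, and real
stacks must also handle 32-bit sequence-number wraparound, which I omit):

```python
def rst_in_window(seq, rcv_nxt, rcv_wnd):
    # RFC 793 acceptability test: an RST is acceptable if its sequence
    # number falls anywhere inside the receive window.
    # (Wraparound of the 32-bit sequence space is deliberately ignored.)
    return rcv_nxt <= seq < rcv_nxt + rcv_wnd

def process_rst(seq, rcv_nxt, rcv_wnd):
    # Per the "Event Processing" rules, in a synchronized state an
    # acceptable RST means the connection is torn down -- not ACKed.
    if rst_in_window(seq, rcv_nxt, rcv_wnd):
        return "close connection"
    return "drop segment"
```

Note that this in-window rule accepts rcv_wnd distinct sequence numbers,
as opposed to a strict exact-match rule, which accepts only one.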
The Linux behavior of sending RST when a socket is close()d without
reading all the data is described as a MAY in RFC 1122, section
4.2.2.13. A perfectly conforming TCP can thus receive data, send
data, close() without reading the data, and send a legitimate RST.
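To illustrate that behavior, here is a small self-contained sketch over
loopback (my own demo, not anyone's stack code): on Linux, close()ing a
socket while unread data sits in its receive buffer elicits an RST rather
than an orderly FIN, so the peer's next recv() fails with a reset.

```python
import socket
import time

# Listener on an ephemeral loopback port.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(srv.getsockname())
conn, _ = srv.accept()

cli.sendall(b"data the server never reads")
time.sleep(0.2)          # let the data land in the server's receive buffer
conn.close()             # unread data pending: the kernel sends RST, not FIN
time.sleep(0.2)

try:
    cli.recv(4096)
    outcome = "orderly close"
except ConnectionResetError:
    outcome = "reset"    # the legitimate RST described above

cli.close()
srv.close()
print(outcome)
```

On Linux (and the BSDs I'm aware of) this prints "reset"; a stack that
chooses not to exercise the RFC 1122 option would print "orderly close".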
I understand why NetBSD is filtering the RSTs, but in light of these
RFCs, claiming that NetBSD is not the broken one and the TCP sending
the RST is the broken one seems wrong. The thing is, I didn't come
here to have a silly argument about whose TCP stack is most like the
specification. I don't even care much about my application.
I *was* just trying to help NetBSD behave in the best possible way by
pointing out this problem.
I also find it difficult to believe that reducing the number of
sequence numbers at which a forged RST will fail from (2^32)-1 to
(2^32)-2 is a noteworthy weakening of resistance to such attacks. And
in my code, it even requires that there's currently a delayed ack
before "accepting" that second sequence number. I did think about
this; in my first message I explicitly mentioned that letting through
RSTs in the window would be a reduction in resistance, because with
window scaling that could actually start to add up.
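To make the arithmetic concrete, here is a hedged sketch of the kind of
acceptance check I mean (the names and structure are mine, not the actual
patch): the strict rule accepts exactly one of the 2^32 sequence numbers,
and the relaxed rule accepts a second value only while a delayed ACK is
pending.

```python
SEQ_SPACE = 2 ** 32

def rst_acceptable(seq, rcv_nxt, delack_pending, last_acked):
    # Strict rule: only an RST at exactly rcv_nxt is accepted.
    if seq == rcv_nxt:
        return True
    # Relaxed rule (illustrative): while an ACK is still delayed, also
    # accept an RST aimed at the sequence number the peer last saw
    # acknowledged.
    return delack_pending and seq == last_acked

# Forged-RST failure counts under the two rules:
strict_failures  = SEQ_SPACE - 1   # (2^32)-1 values are rejected
relaxed_failures = SEQ_SPACE - 2   # exactly one additional value is accepted
```

When no delayed ACK is pending, the relaxed rule degenerates to the
strict one, so the extra acceptable value exists only in that one state.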
As for the firewall, my personal observations are that there are an
awful lot of firewalls, "home networking" NAT boxes, whatever, out
there configured to *not* send RSTs when dropping packets, despite
what TCP wants. So, all told, I am no fan of Windows but you will not
convince me this is a "Windows is busted, end of story" situation.
At any rate, my problem appears not to be considered a bug.
Well... that's why I asked. For now I'll just add my solution to
my ever-growing set of private kernel and userland patches.
But this illuminating exchange will definitely inform my decisions if
I ever run a server more important than the current two-bit server
that a few friends and I use. Since I do not presume to "fix" all the
firewalls and Windows machines on the 'net, I would have to find a