tech-net archive

bad interaction between TCP delayed ack and RSTs



Hi,

I've recently been poking at a problem I've been having running a server
on NetBSD. It shows up with a Windows (XP) client, but I don't think
it is restricted to Windows TCP peers.

The client writes a few bytes of data to a TCP connection. No response
is expected from the server. Then the client exits. The server is a
basic select loop reading the data from the client. The problem is:
sometimes, when the client exits, NetBSD does not tear down the
connection. So from my server's point of view, the connection is still
open, and it's happily waiting for more data that's not coming.
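
(For concreteness, the server loop is essentially the following. This
is a sketch rather than my exact test program; the buffer size and
messages are made up.)

        /*
         * Sketch of the server side: select() on a connected socket
         * and read until EOF or error.
         */
        #include <sys/select.h>
        #include <sys/types.h>
        #include <errno.h>
        #include <stdio.h>
        #include <string.h>
        #include <unistd.h>

        static void
        serve(int fd)
        {
                char buf[512];
                fd_set rfds;
                ssize_t n;

                for (;;) {
                        FD_ZERO(&rfds);
                        FD_SET(fd, &rfds);
                        if (select(fd + 1, &rfds, NULL, NULL, NULL) == -1) {
                                perror("select");
                                return;
                        }
                        n = read(fd, buf, sizeof(buf));
                        if (n == 0) {
                                /* FIN from the peer: orderly close */
                                printf("peer closed\n");
                                return;
                        }
                        if (n == -1) {
                                /* an RST shows up here as ECONNRESET */
                                printf("read: %s\n", strerror(errno));
                                return;
                        }
                        printf("got %zd bytes\n", n);
                }
        }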

This is what is happening:

- The client sends some data.
- Delayed ack causes NetBSD to send no ack for this packet.
- The client exits (or closes the socket).
- Windows sends an RST/ACK to close the TCP connection (it does this a
  lot, if not most of the time -- I do not know when it uses a FIN).
- When the RST message arrives, NetBSD responds to the RST with an ACK
  and then drops the RST.
- The intention must be that the peer will respond to the ACK with
  another RST, at which point all the segments have been acked and the
  connection will be properly shut down.
- The *reality* is that the Windows firewall drops the post-RST ack.
  (If I disable Windows firewall, a second RST does arrive at the server
  side and everything is fine.)
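
Schematically, with made-up sequence numbers, the exchange is:

        client (Windows)                server (NetBSD)
        ----------------                ---------------
        data, seq=X, len=N      --->    ack delayed; last_ack_sent
                                        stays at X, rcv_nxt = X+N
        RST/ACK, seq=X+N        --->    seq != last_ack_sent, so:
                                <---    ACK (the RST is dropped)
        [firewall eats the ACK]
        (no second RST)                 connection stays ESTABLISHED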

The result is that NetBSD believes the connection is still open, and it
will never believe otherwise unless the connection is set up to time
out. And of course my server never gets an ECONNRESET and cannot act
appropriately as a result.

This seems to be the code that is giving me trouble:

        if (tiflags & TH_RST) {
                if (th->th_seq != tp->last_ack_sent)
                        goto dropafterack_ratelim;

(this is tcp_input.c revision 1.291, lines 2225-2227, from NetBSD 5.0).

I assume this code is intended as defense against RST spoofing? The log
message (revision 1.194) for this says: "respond to RST by ACK, as
suggested in NISCC recommendation". I can't find any recommendation like
this; advisory 236929 seems likely but it makes no such recommendations
(its primary recommendation seems to be to use IPsec!).

So I'm left assuming that the theory is that if it's an attack RST, the
connection will not be torn down, and if it's a genuine RST, the peer
will respond to the ACK with another RST. But here I have genuine RSTs,
in a situation that is not really all that strange, and the connection
isn't being torn down.

I just don't think it's right for a connection to stay open solely
because it happens to be in the middle of a delayed ack. (If there's
no delayed ack, everything is fine with the same firewall and peer.)
Does this sound like a bug to anyone else?

I thought about it for a while and it seems like in the case where
there's a delayed ack, the RST should be able to close down the
connection as if the ack hadn't been delayed in the first place. I
tested that out (more below) and it did fix my problem.

But upon further thought I'm not sure if that is even correct. What if
the client sent data and that packet was lost, and then the client
exits? In that case delayed ack isn't the problem, but it would still
leave the connection open on the server... Maybe any RST that arrived
within the window should be allowed to close the connection? That would
reduce the resistance to RST attacks a lot more than just allowing for
the delayed ack case, though.
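
For comparison, an in-window test would look something like this --
just a sketch using the SEQ_* comparison macros, which I have not
tried and am not proposing:

        if (tiflags & TH_RST) {
                /*
                 * Sketch only: accept any RST whose sequence number
                 * falls inside the receive window -- roughly the
                 * plain RFC 793 rule, and presumably exactly what
                 * the NISCC-era hardening was meant to avoid.
                 */
                if (SEQ_LT(th->th_seq, tp->rcv_nxt) ||
                    SEQ_GEQ(th->th_seq, tp->rcv_nxt + tp->rcv_wnd))
                        goto dropafterack_ratelim;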

I am somewhat surprised it's been over 5 years since this code was added
and I haven't been able to find any complaint about this! This problem
leaves my server using up resources indefinitely if nothing else. And it
continues to be annoying even if the server detects the situation by
other means and closes the socket, because the connection still hangs
around for a long time, forlornly sending FINs to Windows, which drops
them all.

Fortunately I am not trying to run a big server talking to lots of
clients that send data expecting no response and then RST the
connection shut, and apparently neither is anyone else. :-)


This is the code change I tried out:

        if (tiflags & TH_RST) {
                /*
                 * Also accept the RST if an ack is currently being
                 * delayed and the sequence number matches what the
                 * last ack would have been had it not been delayed.
                 */
                if (th->th_seq != tp->last_ack_sent
                    && !((tp->t_flags & TF_DELACK)
                         && th->th_seq == tp->rcv_nxt))
                        goto dropafterack_ratelim;

I'm not positive this is exactly correct because I am not familiar
with all the details of this code, but the idea is to drop the packet
only if the sequence number doesn't match the last ack *and* doesn't
match what the last ack would have been if everything had been acked.
(A spoofed RST would then need to hit one of two exact sequence
numbers instead of one, so this shouldn't weaken the anti-spoofing
defense much.)

I wrote simple programs to test this out: a server which selects on and
reads from a socket, and a client to run on Windows which sends 150
bytes of data and then immediately closes the socket. (I can't run the
client on NetBSD, because there a plain close() sends a FIN rather
than an RST.) With these programs it is
trivial to reproduce the problem. The above code *does* make my server
see the connection reset when it didn't before. (I have the code and
packet traces if they are needed.)
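
(For anyone who wants to reproduce this without a Windows box: I
believe a client can force an RST on close by setting SO_LINGER with a
zero timeout, which aborts the connection on most stacks, NetBSD
included. A sketch, not my actual client; the address and port are
made up:)

        /*
         * Sketch of a client that sends some data and then aborts
         * the connection: SO_LINGER with l_linger = 0 should make
         * close() send an RST rather than a FIN.
         */
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>
        #include <string.h>
        #include <unistd.h>

        int
        main(void)
        {
                struct sockaddr_in sin;
                struct linger l = { 1, 0 };  /* linger on, zero timeout */
                char buf[150];
                int s;

                memset(buf, 'x', sizeof(buf));
                memset(&sin, 0, sizeof(sin));
                sin.sin_family = AF_INET;
                sin.sin_port = htons(12345);
                sin.sin_addr.s_addr = inet_addr("192.0.2.1");

                if ((s = socket(AF_INET, SOCK_STREAM, 0)) == -1)
                        return 1;
                if (connect(s, (struct sockaddr *)&sin, sizeof(sin)) == -1)
                        return 1;
                write(s, buf, sizeof(buf));
                setsockopt(s, SOL_SOCKET, SO_LINGER, &l, sizeof(l));
                close(s);               /* RST instead of FIN */
                return 0;
        }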


A workaround for me is to set net.inet.tcp.ack_on_push=1: empirically
the packet carrying the data has the PSH flag set, but can we assume
that in the general case? If it's agreed that this is a bug that
should be fixed, I can put in a PR for it if necessary. Hopefully it
can then be fixed by someone who knows the code better than me; I'm
unsure, for example, whether a similar change should be done at lines
2099-2105...
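
(For concreteness, the workaround mentioned above is just:

        sysctl -w net.inet.tcp.ack_on_push=1

or the same net.inet.tcp.ack_on_push=1 line in /etc/sysctl.conf to
make it persistent.)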

Joanne

