Subject: Re: perhaps time to check our TCP against spec?
To: Erik E. Fair <email@example.com>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
Date: 04/06/1998 15:55:16
>Sorry I missed seeing you there. I'm glad to see that NetBSD is up
>on what's going on in the IETF.
It's not that simple.
NetBSD may be up with the letter of the relevant internet-drafts on
stretch acks, but (as I've demonstrated beyond a reasonable doubt) we
have real problems with the spirit behind them.
A big part of the reason people are beating up on stretch acks is that
they increase the burstiness of TCP. This is discussed at some length
in the recent TCP literature (Brakmo and Peterson's paper on bsd44 tcp
performance bugs; Fall and Floyd's SACK simulation paper; and
mentioned as `needing further work' in Hoe's SIGCOMM 96 paper).
Most Reno-based TCPs are, at worst, limited to one ack every three
segments. NetBSD now does seem to send the required number of acks.
Thanks to Jason for nailing that at last.
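To make the ack-counting concrete: here's a toy receiver model (not
NetBSD code, just a sketch I made up) showing how the ACK count for a
burst of in-order segments falls off as the "stretch" factor grows.

```python
# Toy receiver model: count ACKs generated for a run of in-order
# segments under different ACK policies. ack_every is the stretch
# factor: 1 = ACK every segment, 2 = standard delayed ACK (one ack
# per two segments), 3+ = stretch ACKs.
def acks_for_burst(num_segments, ack_every):
    acks = 0
    pending = 0  # segments received but not yet acknowledged
    for _ in range(num_segments):
        pending += 1
        if pending >= ack_every:
            acks += 1
            pending = 0
    if pending:  # leftover segments get acked off the delayed-ACK timer
        acks += 1
    return acks

for policy, n in [("every segment", 1),
                  ("delayed (1-in-2)", 2),
                  ("stretch (1-in-3)", 3)]:
    print(policy, acks_for_burst(12, n))
```

Fewer ACKs per window is exactly what makes the sender's transmissions
clump together, since each ACK clocks out a bigger chunk of data.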
However, the problem of poorly-timed ACKs causing huge bursts
(just as with stretched ACKs) still remains in NetBSD (as, again,
I have shown beyond a reasonable doubt). And as I
understand it, that _is_ the problem the IETF is trying to address.
In that respect we are *not* ``up on what's going on in the IETF''.
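The burstiness point is easy to see with another toy sketch (again my
own illustration, not anything from the stack): a Reno-style sender is
ACK-clocked, so its send times follow the ACK arrival times. Space the
ACKs and the sends are spaced; clump the ACKs and the whole next
window goes out back-to-back.

```python
# Toy model of ACK clocking: each arriving ACK clocks out
# packets_per_ack new segments at (essentially) the ACK's arrival
# time, so the send schedule mirrors the ACK schedule.
def send_times(ack_arrival_times, packets_per_ack=1):
    return [t for t in ack_arrival_times for _ in range(packets_per_ack)]

spaced  = send_times([10.0, 12.0, 14.0, 16.0])  # ACKs paced by the path
clumped = send_times([16.0, 16.0, 16.0, 16.0])  # ACKs arrive together
print(spaced)   # sends spread over 6 time units
print(clumped)  # entire window injected at t=16: one burst
```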
I am, BTW, getting really tired of this ``if I don't see it in front
of my nose, it doesn't exist'' attitude to performance problems in
NetBSD. The problems are there, they're real, they've already been
shown to various NetBSD developers in the past. Including, as it
happens, Jason Thorpe.
I was *trying* to ask Jason what was going on with this, since I'd
discussed this specific problem of bogus MTU calculations with him
before. If Jason forgot about that, then that's his bad.
Fortunately, we do have some developers who will actually discuss
areas where NetBSD has poor performance, rather than flaming and
pretending the problems don't exist. Kevin Lahey is fixing some of
these bugs already.
If anyone else wanted to look at why NetBSD's TCP stack exhibits this
convoying performance bug (the sender sending an entire window of data
packets, close together, which go to the receiver, which receives and
swallows the entire convoy before sending back _any_ ACKs), I'd be