Subject: Re: perhaps time to check our TCP against spec?
To: Jason Thorpe <thorpej@nas.nasa.gov>
From: Jonathan Stone <jonathan@DSG.Stanford.EDU>
List: tech-net
Date: 04/07/1998 00:33:55
>> No, not as long as the router's MTU is at least as big as the smaller
>> MTU of the end-stations in the connection.
>Ah yes, because the Ethernet-connected host is going to be bounded by
>ETHERMTU. In any case, the FDDI-connected host is still going to advertise
>FDDIMTU MSS to the peer. Sorry, post-meal brain-mellow while I digest
>the tasty mushrooms that were on my dinner salad.
Yep. Apology accepted, and a sincere thank you.
Yes, the FDDI-connected host is going to advertise its FDDIMTU-derived
MSS. But the existing, pre-PMTU ``standard behaviour'' is that each
host advertises an MSS that fits within the MTU of the interface it is
using for the connection, and neither host sends segments bigger than
the peer's advertised MSS. The Ethernet-connected host has the smaller
MTU, so both ends settle on the smaller MSS.
The initial MSS negotiation means this setup Just Works, with no
fragmentation. Same with Metricom radios.
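To make that concrete, here's a toy userland sketch of the pre-PMTU MSS
selection. It is not the actual sys/netinet code; the 4352-byte FDDI MTU,
1500-byte Ethernet MTU, and 40-byte header allowance are just the usual
values, used here for illustration.

#include <stdio.h>

#define IP_TCP_HDRS 40	/* 20-byte IP header + 20-byte TCP header */

static unsigned
mss_from_mtu(unsigned if_mtu)
{
	/* MSS advertised in our SYN: derived from our own interface's MTU. */
	return if_mtu - IP_TCP_HDRS;
}

static unsigned
effective_mss(unsigned local_if_mtu, unsigned peer_adv_mss)
{
	/* Segment size actually used: never exceed either side's limit. */
	unsigned local = mss_from_mtu(local_if_mtu);
	return local < peer_adv_mss ? local : peer_adv_mss;
}

int
main(void)
{
	unsigned fddi_mtu = 4352, ether_mtu = 1500;

	/* FDDI host advertises 4312, Ethernet host advertises 1460;
	 * both ends settle on 1460, so nothing needs fragmenting. */
	printf("FDDI host sends with MSS %u\n",
	    effective_mss(fddi_mtu, mss_from_mtu(ether_mtu)));
	printf("Ether host sends with MSS %u\n",
	    effective_mss(ether_mtu, mss_from_mtu(fddi_mtu)));
	return 0;
}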
When I use a laptop around Stanford, I use it with a Metricom radio,
and the STRIP driver. That "just works", and it relies on this behaviour.
It works with same-subnet hosts which have SUBNETSARELOCAL turned on
and which don't (and may never) do PMTU.
And the frequency-hopping behaviour of the Metricom radios means that
the overhead of doing fragmentation here is just unacceptable: it
halves peak throughput.
And the MosquitoNet group built a mobile-IP system which relies on
this same behaviour, again to avoid unnecessary fragmentation over
low-bandwidth links.
People *have* built systems which rely on the existing standard
(certainly de facto, per BSD; I think it's even a requirement, but it's
late and I might be wrong). That's a hard fact.
The in_maxmtu scheme will break such setups. It would break my own
setup. That's a hard fact.
And iirc, RFC1191 says hosts _MAY_ do this, not MUST.
In my book, the in_maxmtu connection-establishment semantics is a
reasonable option for people who do expect their peers will be doing
PMTU, and who want to enable it. I'm not saying it has to go away.
But (and this is not subject to debate;) it also breaks existing
practice in a way which _I_, personally, don't find acceptable.
I think the right decision here is to make the in_maxmtu behaviour a
configurable option. If it's turned on, we get the in_maxmtu behaviour.
If it's turned off, we get the previous behaviour, where the
initially-advertised MSS is clipped against the MTU of the interface
which the connection is using.
Personally, I think that, given that this changes the previous
(pre-in_maxmtu) default behaviour, the default should be `off' if
Path-MTU is off, and it should stay off until PMTU is ubiquitous.
At that point it can go on.
I haven't thought hard about what the default should be if PMTU
is on; it might be reasonable for the default setting of
in_maxmtu to be the same as the `startup' PMTU setting.
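Just to show the shape of what I mean, a minimal sketch; the knob and
function names are invented for illustration, not actual NetBSD code or
an existing sysctl:

static int tcp_adv_maxmtu = 0;	/* hypothetical knob; 0 = previous behaviour */
extern unsigned in_maxmtu;	/* largest MTU over all configured interfaces */

unsigned
syn_advertised_mss(unsigned route_if_mtu)
{
	/* Previous behaviour: clip to the MTU of the interface the
	 * connection is using.  in_maxmtu behaviour: assume the peer
	 * will do PMTU and offer our largest MTU instead. */
	unsigned mtu = tcp_adv_maxmtu ? in_maxmtu : route_if_mtu;
	return mtu - 40;	/* less 20-byte IP + 20-byte TCP headers */
}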
And please don't ask me about IPv6 ;).
[PPP stuff]
OK, maybe it does work well there. But it doesn't work for people
using Metricom radios on mobile machines that also have Ethernets.
The effects there are just horrible. And there are other setups
where in_maxmtu would fail in the same way, like a dialup
PPP connection from an intermittently-connected machine which is also
connected to an isolated LAN, say in someone's hotel room at a conference.
>> Looks to me like Jason's idea does in fact break some existing setups,
>> but Jason is now trying to claim that those setups are really broken
>> in the first place.
>Well, if they rely on a quirk of a particular implementation of TCP,
>then I would assert that THEY ARE!
But I don't think these are quirks. If it works on Linux and BSD and
Ultrix, I wouldn't call it a quirk. In the pre-PMTU world, this is
long-established de facto standard behaviour, if not RFC-level
`required' behaviour. (I'm tired and it's too late to go and check.)
Heck, if these were just quirks, I wouldn't kick up such a fuss!
>However, unless I missed something, I don't see how "my idea" actually
>breaks the A->Ether->Router->FDDI->B scenario....
The breakage is the fragmentation in the B->A direction. In the
pre-in_maxmtu world, that never happened. In the analogous topology
with wireless-to-Ethernet instead of Ethernet-to-FDDI, that's more than
enough breakage to convince me.
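Rough numbers, assuming B isn't doing PMTU and the usual 4352-byte FDDI
and 1500-byte Ethernet MTUs: once B believes it may send 4312-byte
segments, each one leaves B as a 4352-byte IP datagram, and the router
has to split its 4332 bytes of IP payload into three fragments
(1480 + 1480 + 1372) to fit the 1500-byte Ethernet toward A. Every
full-sized segment in the B->A direction turns into three packets on
the last hop.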