tech-kern archive


TCP vs urgent data [was Re: poll(): IN/OUT vs {RD,WR}NORM]



Should we maybe move this to tech-net?  It's no longer about poll().

>> I question whether it actually works except by accident; see RFC
>> 6093.
> I hadn't seen that one before,

Neither had I until Johnny Billquist mentioned it upthread.  (I tend to
share your reaction to the modern IETF, though I have additional
reasons.)

>> But the facility it provides is of little-to-no use.  I can't recall
>> anything other than TELNET that actually uses it,
> TELNET and those protocols based upon it (SMTP and FTP command at
> least).

FTP command, yes.  SMTP I'm moderately sure doesn't do TELNET in any
meaningful sense; for example, I'm fairly sure octet 0xff is not
special.  I find no mention of TELNET in 5321.

> SMTP has no actual use for urgent data, and never sends any, but FTP
> can in some circumstances I believe (very ancient unreliable memory).

Yes.  According to the spec, urgent data should be used when sending
an ABOR to abort a transfer in progress.  But, unlike TELNET's
specification that data is to be dropped while looking for IAC DM, the
urgent bit can be completely ignored by an FTP server that is capable
of paying attention to the control channel while a data transfer is in
progress.
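For concreteness, here is a minimal sketch of what that abort sequence
looks like on the wire, assuming Berkeley sockets; the function name
is illustrative, and the Telnet octet values are from RFCs 854/959.
On a BSD-style socket, MSG_OOB marks the last octet of the send as the
urgent one, so IAC DM is sent by itself so the mark lands on the DM:

```python
import socket

# Telnet command octets (RFC 854 / RFC 959).
IAC = bytes([255])  # Interpret As Command
IP = bytes([244])   # Interrupt Process
DM = bytes([242])   # Data Mark

def abor_sends():
    """Illustrative: the sequence of (payload, flags) pairs an FTP
    client would pass to send() on the control connection to abort a
    transfer: IAC IP in-band, then IAC DM with MSG_OOB so the DM
    octet carries the urgent mark, then the ABOR command itself."""
    return [
        (IAC + IP, 0),
        (IAC + DM, socket.MSG_OOB),
        (b"ABOR\r\n", 0),
    ]

def send_abor(sock):
    # Sketch only: replays the sequence above on a connected socket.
    for payload, flags in abor_sends():
        sock.send(payload, flags)
```

As noted above, a server that keeps reading the control channel during
a transfer can ignore the urgent mark entirely and still see the ABOR.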

>> then botched it further by pointing the urgent sequence number to
>> the wrong place,
> In fairness, when that was done, it wasn't clear it was wrong - that
> all long predated anyone even being aware that there were two
> different meanings in the TCP spec, people just used whichever of
> them was most convenient (in terms of how it was expressed, not which
> is easier to implement) and ignored the other completely.   That's
> why it took decades to get fixed - no-one knew that the spec was
> broken for a long time.

So...I guess next to nothing depended on it even then, or someone would
have noticed the interoperability failure sooner than decades later.

> Further, if used properly, it really doesn't matter much, the
> application is intended to recognise the urgent data by its content
> in the data stream, all the U bit (& urgent pointer) should be doing
> is giving it a boot up the read stream to suggest that it should
> consume more quickly than it otherwise would.

Right.  But...

> Whether that indication stops one byte earlier or later should not
> really matter.

That depends.  Consider TELNET, which is defined to drop data while
searching for IAC DM.  If the sender considers the urgent pointer to
point _after_ the last urgent octet but the receiver considers it to
point _to_ the last urgent octet, the receiver will get the IAC DM,
notice that the urgent pointer points past it, and continue reading
and dropping, looking for another IAC DM, dropping at least one data
octet the sender didn't expect.
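The off-by-one can be made concrete with a little arithmetic sketch
(the function name is illustrative; the two readings are the ones RFC
6093 describes).  Suppose a segment starts at sequence number 1000,
the DM sits at 1005, and the sender uses the "points past the end"
reading, so it sets the urgent pointer to 6:

```python
def last_urgent_seq(seg_seq, urg_ptr, ptr_is_past_end):
    """Illustrative: sequence number of the last urgent octet under
    the two readings of the urgent pointer field (cf. RFC 6093)."""
    mark = seg_seq + urg_ptr
    # "Past the end": the pointer names the octet after the urgent
    # data, so the last urgent octet is one earlier.
    return mark - 1 if ptr_is_past_end else mark

# Sender's view: the DM at 1005 is the last urgent octet.
assert last_urgent_seq(1000, 6, True) == 1005
# Receiver using the "points to" reading: urgent data extends to
# 1006, one octet past the DM, so it keeps dropping past it.
assert last_urgent_seq(1000, 6, False) == 1006
```

The one-octet disagreement is exactly the data octet the receiver
drops that the sender didn't expect it to.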

> The text in that RFC about multiple urgent sequences also misses that
> I think -

I thought that was probably there for clarity, spelling out what
logically follows from the rest.

> all that matters is that as long as there is urgent data coming, the
> application should be aware of that and modify its behaviour to read
> more rapidly than it otherwise might (if it never delays reading from
> the network, always receives & processes packets as soon as they
> arrive, which for example, systems which do remote end echo need to
> do) then it doesn't need to pay attention to the U bit at all.

Well, there are correctness issues in some cases.  For example, TELNET
is defined to drop data while searching for the IAC DM that makes up
part of a synch; ignoring the urgent bit means that dropping won't
happen.  (Does that matter in practice?  Probably not, especially
given how little TELNET is used outside walled gardens.  But it still
is a correctness issue.)
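The receiver-side dropping can be sketched as a pure function (name
illustrative; this simplifies real TELNET, which still has to
interpret other IAC commands while in urgent mode rather than
discarding them blindly):

```python
IAC, DM = 0xFF, 0xF2  # Telnet command octets (RFC 854)

def drop_until_dm(buf):
    """Illustrative synch handling: while urgent data is pending, a
    TELNET receiver discards input until it consumes IAC DM.  Return
    whatever follows the DM, or b"" if the DM hasn't arrived yet and
    the whole buffer was dropped."""
    i = buf.find(bytes([IAC, DM]))
    return buf[i + 2:] if i >= 0 else b""
```

A receiver that ignores the urgent bit never enters this mode, so the
stale data before the DM is delivered to the application instead of
being dropped; that is the correctness issue above.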

/~\ The ASCII				  Mouse
\ / Ribbon Campaign
 X  Against HTML		mouse%rodents-montreal.org@localhost
/ \ Email!	     7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B

