tech-kern archive
Re: poll(): IN/OUT vs {RD,WR}NORM
Date: Tue, 28 May 2024 11:03:02 +0200
From: Johnny Billquist <bqt%softjar.se@localhost>
Message-ID: <3853e930-4e77-4f6d-8a73-ec826a067b14%softjar.se@localhost>
| This is a bit offtopic, but anyway...
So it is, but anyway...
[Quoting Mouse:]
| > TCP's urgent pointer is well defined. It is not, however, an
| > out-of-band data stream,
That's correct.
| > However, the urgent pointer is close to useless in today's network, in
| > that there are few-to-no use cases that it is actually useful for.
That's probably correct too. It is, however, still used (and still works)
in telnet - though telnet is not a frequently used application any more.
[end Mouse quotes]
| It was always useless. The original design clearly had an idea that they
| wanted to get something, but it was never clear exactly what that
| something was, and even less clear how the urgent pointer would provide it.
That's incorrect. It is quite clear what was wanted, and aside from a
possible off-by-one in the original wording, it was quite clear how it
worked - and it did work.
The U bit in the header simply tells the receiver that there is some
data in the data stream (which is not sent out of band) that it probably
should see as soon as it can, and (perhaps - this depends upon the
application) that temporarily suspending any time-consuming processing of
the intervening data (such as passing commands to a shell to be executed)
would be a good idea until the "urgent" data has been processed.
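For concreteness, this is where those two things live in the header. A
hand-rolled sketch of the RFC 793 layout (real code would just use struct
tcphdr from <netinet/tcp.h>; the spelling of the names here is mine):

#include <stdint.h>

/*
 * The TCP header, RFC 793 layout, all fields in network byte order.
 * Only the last field, and one bit of "flags", matter here.
 */
struct tcp_header {
	uint16_t src_port;
	uint16_t dst_port;
	uint32_t seq;		/* sequence number of the first data byte */
	uint32_t ack;
	uint8_t  data_off;	/* upper 4 bits: header length in words */
	uint8_t  flags;		/* CWR ECE URG ACK PSH RST SYN FIN */
	uint16_t window;
	uint16_t checksum;
	uint16_t urg_ptr;	/* meaningful only when URG is set */
};

#define TCP_FLAG_URG	0x20	/* the "U bit" */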
The urgent pointer simply indicates how far into the data stream the
receiver needs to have processed to have encountered the urgent data. It
does not (and never did) "point to" the urgent data. [That's where the off
by one occurred: there were two references to it, one suggesting that the
urgent pointer would reference the final byte of what is considered urgent,
the other that it would reference one beyond that, that is, the first byte
beyond the urgent data. This was corrected in the Host Requirements RFCs
(RFC 1122, in 1989).] The actual data considered
as urgent could be any number of bytes leading up to that, depending upon
the application protocol. The application was expected to be able to
detect that, provided it actually saw it in the stream - the U bit (which
remains set in every packet carrying data that includes or precedes any of
the urgent data, and is cleared only once no such data remains) just allows
the receiver to know that something is coming which it might want to look
for - but it
is entirely up to the application protocol design to decide how it is to
be recognised, and what should be done because of it ("nothing" could be
a reasonable answer in some cases).
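To make the off-by-one concrete: the urgent pointer field is an unsigned
16-bit offset added to the segment's sequence number (modulo 2^32), and
the two readings differ by exactly one byte. A sketch, with function names
of my own invention:

#include <stdint.h>

/*
 * RFC 793's wording (the behaviour most stacks kept, per RFC 6093):
 * SEG.SEQ + SEG.UP points one past the urgent data, so the last
 * urgent byte is at SEG.SEQ + SEG.UP - 1.  Unsigned arithmetic gives
 * us the modulo-2^32 sequence space for free.
 */
static inline uint32_t
last_urgent_byte_793(uint32_t seq, uint16_t up)
{
	return seq + up - 1;
}

/*
 * The Host Requirements correction (RFC 1122, 4.2.2.4): SEG.SEQ +
 * SEG.UP is itself the sequence number of the last urgent byte.
 */
static inline uint32_t
last_urgent_byte_1122(uint32_t seq, uint16_t up)
{
	return seq + up;
}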
That is all very simple, and works very well, particularly on high
latency or lossy networks, as long as you're not expecting "urgent"
to mean "out of band" or "arrive quickly" or anything else like that.
It is (was) mostly used with telnet to handle things like interrupts,
where the telnet server would have received a command line, sent that
to the shell (command interpreter) to be processed, and is now waiting
for that to be complete before reading the next command - essentially
using the network, and the sender, as buffering so that it does not need
to grow indefinitely big buffers if the sender just keeps on sending
more and more.
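Reduced to its shape, that server pattern looks something like the sketch
below. read_line() and run_command() are hypothetical stand-ins; the point
is only that the socket is not read while waitpid() blocks, so TCP flow
control makes the sender (and the network) do the buffering:

#include <sys/types.h>
#include <sys/wait.h>

/* Hypothetical helpers: read one command line, start it running. */
extern ssize_t read_line(int fd, char *buf, size_t len);
extern pid_t run_command(const char *line);

static void
serve(int fd)
{
	char line[1024];

	for (;;) {
		if (read_line(fd, line, sizeof line) <= 0)
			break;
		pid_t child = run_command(line);
		/*
		 * Deliberately not reading fd here: unread input queues
		 * in the socket buffer, the window closes, and the
		 * sender backs up - the network is the buffer.
		 */
		waitpid(child, NULL, 0);
	}
}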
In this situation, if the sender tries to abort a command, when someone
or something realises that it will never finish by itself, then (given that
TCP has no out of band data, which vastly decreases its complexity, and
by so doing increases its reliability) there's no way for the sender to
communicate with the server to convey a "stop that now" message. And do
remember that all this was designed before unix networking existed (and
before the TCP spec was an RFC - for the original design you need to go back
to the IEN's) when operating systems didn't
work like unix does - it was possible that only one telnet connection
could be made to a destination host (not a TCP or telnet restriction, but
imposed by the OS not providing any kind of parallel processing or
multi-tasking), so simply connecting again and killing the errant process
wasn't necessarily possible. Character echo was often done by the
client, not by sending the echoed characters back from the server.
A very different world to the one we're used to.
The U bit (and the urgent pointer, which is just a necessary accessory,
not the principal feature) allowed this to be handled. When the client
had something that needed attention to send, it would send that as "urgent"
data. But that would just go in sequence with previously sent data (which
in the case of telnet, where the receive window doesn't often fill, was
probably already in the network somewhere) - however the U bit can be set
in the header of every packet transmitted, including retransmits of earlier
data, or even in an in-sequence packet carrying no data at all - and it
will be, with the sender sending a duplicate, or empty, packet if needed
to get that bit to the recipient. Once a packet with the U bit was
(properly) received,
even if that carried no new data, the server process would be notified
that urgent data was coming. At that point it would stop just waiting
for some previous command to complete (if that is what it had been doing)
and instead read, and scan, the data stream until it reached the point
where the urgent pointer says it has read enough. While doing that it is expected
to notice (using some application protocol mechanism - telnet has one)
that something like an interrupt has been sent, and when it sees that,
interrupt the (presumably) running process. The other data it read,
up to the urgent pointer, is typically simply discarded.
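On BSD-derived systems (NetBSD included) that receiving side maps onto
SIGURG, SO_OOBINLINE and sockatmark(). A minimal sketch of the "read,
scan, discard up to the mark" loop just described - not how any real
telnetd is written: error handling is trimmed and the scan ignores an
IAC sequence split across two reads:

#include <sys/types.h>
#include <sys/socket.h>

#include <fcntl.h>
#include <signal.h>
#include <unistd.h>

#define IAC 255			/* telnet "interpret as command" */
#define IP  244			/* telnet "interrupt process" */

static volatile sig_atomic_t urgent_pending;

static void
on_sigurg(int sig)
{
	(void)sig;
	urgent_pending = 1;	/* a segment with the U bit set arrived */
}

/* Ask for SIGURG, and keep the "urgent" byte inline in the stream. */
static void
setup_urgent(int fd)
{
	int on = 1;

	signal(SIGURG, on_sigurg);
	fcntl(fd, F_SETOWN, getpid());
	setsockopt(fd, SOL_SOCKET, SO_OOBINLINE, &on, sizeof on);
}

/*
 * Called from the main loop once urgent_pending is set: read (scan,
 * then discard) until the urgent mark, interrupting the running child
 * if a telnet IAC IP is seen on the way.
 */
static void
drain_to_mark(int fd, pid_t child)
{
	unsigned char buf[512];
	ssize_t n, i;

	while (sockatmark(fd) == 0) {
		if ((n = read(fd, buf, sizeof buf)) <= 0)
			return;
		for (i = 0; i + 1 < n; i++)
			if (buf[i] == IAC && buf[i + 1] == IP)
				kill(child, SIGINT);
	}
	(void)read(fd, buf, 1);	/* the byte at the mark itself */
}

The sending side is just send(fd, data, len, MSG_OOB): on these systems
that sets the U bit and marks the last byte of data as the "urgent" byte.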
If an application needs a mechanism like this, it works well. Still.
It just is not often needed any more, as networks are faster, parallel
connections are the norm, and servers rarely simply ignore the network
while waiting for work in progress to complete.
kre