Subject: Re: vi lossage is caused by TCP delayed acks
To: Matthias Drochner <email@example.com>
From: Bill Sommerfeld <firstname.lastname@example.org>
Date: 05/12/1998 07:59:07
Actually, this is the "intended" behavior of TCP for interactive
traffic; it was engineered to behave that way..
When you're doing char-at-a-time telnet with remote echoing,
a) you want to piggyback the ACK of your transmitted character
with the echo of that character..
b) on slow connections, you want to put as many characters as
possible into a single packet..
So, the sender doesn't transmit while it has a small amount of
unack'ed data outstanding; it waits for that ack first, which lets it
accumulate as many characters as possible into the next packet..
This minimises the number of packets sent, which improves performance
when you've got a slow link in the middle..
Several kluges come to mind; the best would be for the telnet/rlogin
client to dally for ~100ms after an ESC, just as vi does, before
sending off the whole escape sequence...
Another possible kludge would be for vi to send a NUL to `echo' the
escape, but if the rtt is more than ~2/3's of the escapetime value
you'll still lose..
You could tweak `vi' to use a different `escapetime' when it thinks
it's running over telnet or rlogin..
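For reference, assuming the vi in question is nvi (the vi shipped with
NetBSD at the time), escapetime is an ordinary option, measured, if I
recall the documentation correctly, in tenths of a second, so a user on
a slow link can also just raise it by hand:

```
:set escapetime=5    " wait ~0.5s after ESC before treating it as a lone escape
```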