Subject: Re: NFS problem.
To: Johnny Billquist <>
From: Steven M. Bellovin <>
List: current-users
Date: 12/10/2005 13:05:04
In message <>, Johnny Billquist writes:
>Steven M. Bellovin wrote:
>> In message <dne4vq$qbc$>, Michael van Elst writes:
>>> (Matthias Scheler) writes:
>>>>Yes, but it hasn't changed and never will. Large UDP packets are sent
>>>>as IP fragments. If you lose one of the IP fragments the whole UDP
>>>>packet is lost because there is no selective retransmit. When a machine
>>>>e.g. loses 5% of incoming packets at least one of the IP fragments
>>>>of a 32KB UDP packet will always get lost. Retries will not help because
>>>>another single lost packet will prevent the reception of the UDP packet.
>>>On the other hand, TCP isn't exactly fast with 5% packet loss either.
>> Right, but TCP adapts its sending rate to the level that avoids packet 
>> loss.
>Exactly how does it do that in this instance?
>We're talking about the fact that if we send back-to-back packets on the 
>net, we have a limit of (in this case I believe) 2 packets. All other 
>situations will work fine.
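The fragment-loss arithmetic quoted above is easy to check. With a
1500-byte MTU, a 32KB UDP datagram becomes roughly 23 IP fragments, and
at 5% per-fragment loss the whole datagram is lost on any given try
about 70% of the time — so "always" in the quote is an exaggeration,
but the retry math is still hopeless. A quick sketch (the MTU and loss
rate here are illustrative assumptions, not measurements from the
thread):

```python
# Probability that a large UDP datagram is lost when it must be sent
# as IP fragments and each fragment is independently dropped.
# MTU, datagram size, and loss rate are illustrative assumptions.
import math

def fragments(udp_payload, mtu=1500):
    # 20-byte IP header per fragment; 8-byte UDP header on the first.
    per_frag = mtu - 20                    # bytes of UDP data per fragment
    return math.ceil((udp_payload + 8) / per_frag)

def p_datagram_lost(udp_payload, loss=0.05, mtu=1500):
    # The datagram is lost if ANY fragment is lost -- no selective
    # retransmit at the IP layer.
    n = fragments(udp_payload, mtu)
    return 1 - (1 - loss) ** n

n = fragments(32 * 1024)                   # about 23 fragments
p = p_datagram_lost(32 * 1024)             # roughly 0.69 at 5% loss
print(n, round(p, 2))
```

Retrying doesn't help much: each retry of the full 32KB datagram faces
the same ~70% loss probability all over again.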

TCP's congestion control works by using the arrival of ACK packets to 
clock transmissions.  It learns, adaptively, how many packets it can 
send before it receives an ACK.  If back-to-back packets cause 
problems, the second one will be lost; TCP will thus learn that it 
needs to see the ACK for the first one before it sends the second.
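A deliberately crude round-based sketch of that adaptation (the
2-packet burst limit is an assumed stand-in for the local problem
being discussed, and real TCP clocks on individual ACKs rather than
whole rounds):

```python
# Toy model of TCP window adaptation: the sender keeps at most `cwnd`
# packets in flight.  The link drops anything beyond a back-to-back
# burst limit of 2 -- an assumption standing in for the local NIC/driver
# problem in the thread, not a claim about the poster's hardware.
BURST_LIMIT = 2

def link_delivers(burst):
    # Only the first BURST_LIMIT packets of a back-to-back burst survive.
    return burst[:BURST_LIMIT]

def simulate(rounds=20):
    cwnd = 1
    history = []
    for _ in range(rounds):
        burst = list(range(cwnd))        # send cwnd packets back to back
        acked = link_delivers(burst)
        if len(acked) < len(burst):
            cwnd = max(1, cwnd // 2)     # loss: multiplicative decrease
        else:
            cwnd += 1                    # all ACKed: additive increase
        history.append(cwnd)
    return history

# The window oscillates around the burst limit instead of blasting
# ever-larger back-to-back bursts.
print(simulate())
```

The point of the sketch is only that the sender converges on what the
path will carry; raw UDP NFS traffic has no such feedback loop.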

Put another way, TCP assumes that all packet loss is due to a congested 
spot that can't handle above a certain arrival rate.  It neither knows 
nor cares whether that spot is local or some router along the path.

Jonathan Stone can, I'm sure, explain this better than I can.

		--Steven M. Bellovin,