tech-net archive


Re: problems in TCP RTO calculation



On Sat, Mar 12, 2011 at 09:32:03AM -0500, Greg Troxel wrote:
> + *
> + * NetBSD now stores srtt in units of seconds, with 6 bits to the
> + * right of the radix point, in order to allow for more precision
> + * (15.6 ms instead of 62.5 ms).  TODO: document rttvar storage.

So, I'm wondering: this makes 15.6 ms the minimum value that can
be represented, given the odd <= 0 test you pointed out elsewhere in
the code.  The math doesn't appear to be able to cope with a 0 RTT
anyway, even if we could arrange for the value never to go negative
(how does it go negative in the first place?) while still allowing 0.
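
To make the quantization concrete, here is a rough sketch.  The names
and the conversion below are invented for illustration (they are not
the actual variables in our tcp code) and assume a 32-bit srtt with 6
fraction bits, per the quoted comment:

/*
 * Illustration only: invented names, not the real NetBSD tcp fields.
 * Assumed format: srtt is a 32-bit fixed-point count of seconds with
 * 6 bits to the right of the radix point.
 */
#include <stdio.h>
#include <stdint.h>

#define SRTT_SHIFT     6                        /* fraction bits */
#define SRTT_UNIT_USEC (1000000 >> SRTT_SHIFT)  /* 15625 us = 15.625 ms */

static int32_t
usec_to_srtt(int32_t usec)
{
        /* truncating; rounding would move the step but not remove it */
        return usec / SRTT_UNIT_USEC;
}

int
main(void)
{
        printf("12 ms sample -> %d srtt units\n", usec_to_srtt(12000));
        printf("20 ms sample -> %d srtt units\n", usec_to_srtt(20000));
        /*
         * The first prints 0: every RTT under one unit collapses to 0,
         * which a "<= 0" check cannot tell apart from no sample at all.
         */
        return 0;
}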

Typical LAN latencies today are on the order of 1/20 of the smallest
value we can represent.  Latencies on local, routed gigabit networks
(for example, Columbia's and NYU's campus networks) are about 1/10 of
the smallest value we can represent.

Latencies on metropolitan-area gigabit networks, even with some link
congestion (for example, from NetBSD at ISC to the Internet Archive at
300 Paul, or from Columbia to NYU), are also about 1/10 of the smallest
value we can represent.

Even regional networks (for example, Columbia to MIT) yield RTTs of
about 7 ms -- still only about half of the smallest value we can represent.
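
For reference, the arithmetic behind those fractions, using the rough
RTT figures quoted above (these are the approximate numbers from this
mail, not new measurements):

#include <stdio.h>

int
main(void)
{
        const double unit_ms = 1000.0 / 64.0;   /* one srtt unit = 15.625 ms */
        const struct {
                const char *path;
                double rtt_ms;                  /* rough figures from above */
        } ex[] = {
                { "LAN",            0.8 },
                { "campus / metro", 1.5 },
                { "regional",       7.0 },
        };
        int i;

        for (i = 0; i < 3; i++)
                printf("%-14s %4.1f ms = %.2f of one unit\n",
                    ex[i].path, ex[i].rtt_ms, ex[i].rtt_ms / unit_ms);
        /* about 0.05, 0.10 and 0.45 of a unit -- all quantize to zero */
        return 0;
}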

This suggests to me that with this representation, even with the bugs
fixed, in many, many cases of interest the measured RTT will have no
effect at all on our network stack's behavior.  Still more precision
is required.

Meanwhile, 27 bits to the left of the radix point is still room for a
*lot* more seconds than we'll ever see in a TCP RTT.
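
A back-of-the-envelope on the bit budget, assuming a signed 32-bit
field (which may not be exactly how we store it, but the conclusion
doesn't depend on the precise width): even after moving several more
bits to the right of the radix point, the integer part still covers
days of RTT.

#include <stdio.h>

int
main(void)
{
        int frac;

        for (frac = 6; frac <= 12; frac += 2) {
                double granularity_ms = 1000.0 / (1 << frac);
                /* assuming 31 value bits in a signed 32-bit field */
                double range_s = (double)(1u << (31 - frac));

                printf("%2d fraction bits: %6.3f ms granularity, "
                    "~%.0f s of integer range\n",
                    frac, granularity_ms, range_s);
        }
        /* 12 fraction bits still leave ~524288 s (about six days) */
        return 0;
}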

Do I misunderstand something here?

Thor

