

Odd TCP snd_cwnd behaviour, resulting in pathetic throughput?



I've recently been trying to get duplicity running for backups of my
main home server, out to the magical cloud (aws s3 to wasabi.com).

I have discovered that bandwidth oscillates in a sawtooth fashion:
it ramps up from near nothing until it saturates my uplink (somewhere
around 20-30 Mbit/s), then a single packet gets dropped and it falls
back to near nothing, and the cycle repeats, taking around a minute
each time. Looking at netstat -P for the PCB in question, I can see
snd_cwnd following the same pattern, which makes sense. I've flipped
between reno, newreno and cubic, and while they differ subtly, they
all have snd_cwnd dropping to near nothing after a single dropped
packet. I didn't think this was expected behaviour, especially with
SACK enabled.
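
To spell out what I mean by "expected behaviour", my mental model
here is RFC 5681: a loss detected by fast retransmit should only
roughly halve cwnd, and only a retransmission timeout should collapse
it to a single segment. A rough sketch of that distinction (the names
are illustrative, not the actual kernel code):

    #include <stdint.h>

    #define SMSS 1460u              /* sender MSS, assumed */

    struct cwnd_state {
            uint32_t snd_cwnd;      /* congestion window, bytes */
            uint32_t ssthresh;      /* slow start threshold, bytes */
    };

    /* Fast retransmit/recovery: cwnd should only roughly halve. */
    static void
    on_fast_retransmit(struct cwnd_state *s, uint32_t flight_size)
    {
            uint32_t half = flight_size / 2;

            s->ssthresh = half > 2 * SMSS ? half : 2 * SMSS;
            s->snd_cwnd = s->ssthresh + 3 * SMSS;   /* 3 dup ACKs */
    }

    /* RTO: cwnd collapses to one segment and we slow start again. */
    static void
    on_rto(struct cwnd_state *s, uint32_t flight_size)
    {
            uint32_t half = flight_size / 2;

            s->ssthresh = half > 2 * SMSS ? half : 2 * SMSS;
            s->snd_cwnd = SMSS;     /* the "loss window" */
    }

So a drop to near nothing on a single loss looks to me more like the
RTO path than like fast recovery.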

Reading the tcpdump output, the only odd thing I see is that the
duplicate ACK triggering the fast retransmit is repeated 70+ times.
But tracing other flows, that doesn't seem abnormal.
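
For reference, during fast recovery each further duplicate ACK should
inflate cwnd by one SMSS, and the ACK that finally covers new data
should deflate it back to ssthresh (roughly half), so 70+ dup ACKs on
their own shouldn't starve the sender. Continuing the same rough
sketch (same struct and SMSS as above):

    /* Extra dup ACK: a segment has left the network, inflate. */
    static void
    on_dup_ack_in_recovery(struct cwnd_state *s)
    {
            s->snd_cwnd += SMSS;
    }

    /* ACK covering new data ends recovery: deflate back to ~half. */
    static void
    on_recovery_ack(struct cwnd_state *s)
    {
            s->snd_cwnd = s->ssthresh;
    }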

It's worth noting that running their "speedtest" through Firefox on
the same machine is fine, and bandwidth is what I'd expect.

Is there anyone willing to take a look at a pcap and tell me what
I'm missing? i.e., cluebat, please?

fwiw, I do have npf and altq configured, but disabling altq doesn't
appear to change the behaviour.

fwiw#2, I briefly toyed with the idea of bringing BBR over from
FreeBSD, but I think we'd need more infrastructure for packet pacing
first. And while it might "fix" this, I think we're better off fixing
whatever is actually broken.
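
By "infrastructure for pacing" I mean per-packet send scheduling:
BBR wants each segment released at an interval derived from the
estimated bottleneck bandwidth times a pacing gain, rather than
bursting out whatever the window allows. Very roughly (again just a
sketch, not FreeBSD's code):

    #include <stdint.h>

    /* Nanoseconds to wait between segments at the given pace. */
    static uint64_t
    pacing_interval_ns(uint32_t mss, uint64_t bw_bytes_per_sec,
        double pacing_gain)
    {
            double rate = (double)bw_bytes_per_sec * pacing_gain;

            if (rate <= 0.0)
                    return 0;
            return (uint64_t)((double)mss * 1e9 / rate);
    }

Doing that in the stack presumably means some kind of high-resolution
per-connection send timer, which is the infrastructure I don't think
we have.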

Thanks,
-- 
Paul Ripke
"Great minds discuss ideas, average minds discuss events, small minds
 discuss people."
-- Disputed: Often attributed to Eleanor Roosevelt. 1948.

