IETF-SSH archive


Re: Sftp: performance enhancing changes



Bill Sommerfeld <sommerfeld%east.sun.com@localhost> writes:

o Ignoring channel window makes for a screaming
sftp transfer. (unacceptable, breaks multiplexing.)

It's only unacceptable if you're actually doing multiplexing.  If you're just
using SFTP as a secure FTP replacement (which is what many people seem to be
using it for), it's a simple, quick fix for the SFTP performance problems.

My understanding based on reading the earlier discussion is that this is a
red herring -- the fix (which doesn't involve violating the specs) is to
pipeline read/write requests and ensure the channel window is substantially
larger than the TCP window.


Two comments on this..

1. It doesn't involve violating the spec.  You just advertise a maximum-size
   window (for the SSH level) and read a file as a single large chunk rather
   than lots of little bits and pieces (at the SFTP level).  Neither violates
   the spec.

Well, perhaps I misunderstood what you were proposing then.

Ignoring the channel window during data send is certainly
a protocol violation.

However, if the remote side has given the client a huge
window (something the client has no control over), the client
is certainly free to use it.  And more: it should use it.

The limit on the size of a single SFTP read exists for good
reason (as discussed previously).  I don't think we can
remove it.

There are two different solutions, however, if you want to
achieve the same result:

1. Issue multiple reads, for some reasonable amount of data.
   Feel free to issue all the reads needed to fetch the entire
   file at once, if you want to.

   This requires no change in the protocol, and can be done
   with any previously documented version of the protocol.

2. Add support for multi-response reads to the protocol.  I.e.,
   allow the server to respond to your giant read request with
   multiple smaller sftp data packets.  That way the server
   doesn't have to commit to sending you a 40 gig sftp data
   packet only to have the last 39 gig disappear before it can
   send it, and be forced to send you 39 gig of zeros before
   it can tell you that an error occurred.
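A minimal sketch of option 1 (pipelined reads), with a stand-in in-memory
transport instead of a real SFTP connection; the chunk size and the
send_read/recv_reply callbacks are illustrative, not from any draft:

```python
import collections

CHUNK = 32 * 1024  # a reasonable per-read size, illustrative only

def pipelined_read(send_read, recv_reply, file_size):
    """Fetch file_size bytes by issuing every read before any reply."""
    offsets = list(range(0, file_size, CHUNK))
    for req_id, off in enumerate(offsets):
        send_read(req_id, off, CHUNK)        # fire off all requests up front
    parts = {}
    for _ in offsets:                        # then drain the replies,
        req_id, data = recv_reply()          # which may arrive in any order
        parts[req_id] = data
    return b"".join(parts[i] for i in range(len(offsets)))

# Demo against a fake "server" that queues replies to each request.
FILE = bytes(range(256)) * 1000              # 256,000-byte test file
queue = collections.deque()
fake_send = lambda rid, off, n: queue.append((rid, FILE[off:off + n]))
fake_recv = queue.popleft
assert pipelined_read(fake_send, fake_recv, len(FILE)) == FILE
```

The point is only that nothing in the protocol forces the client to wait
for each SSH_FXP_READ reply before sending the next request.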
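Option 2 could look something like the following hypothetical server-side
loop: answer one large read with a stream of smaller data packets plus a
final status, so an error partway through can be reported immediately.
The packet names and the read_chunk callback are assumptions for the
sketch, not anything in the current drafts:

```python
def multi_response_read(req_id, read_chunk, offset, length, max_pkt=32768):
    """Yield DATA packets covering the requested range, then a STATUS."""
    sent = 0
    while sent < length:
        data = read_chunk(offset + sent, min(max_pkt, length - sent))
        if not data:                 # EOF or I/O error short of the promise
            break
        yield ("DATA", req_id, data)
        sent += len(data)
    # Report how the read actually ended, instead of padding with zeros.
    yield ("STATUS", req_id, "OK" if sent == length else "SHORT")

# Example: a 100,000-byte "file" yields 4 DATA packets and one STATUS.
backing = b"x" * 100_000
packets = list(multi_response_read(7, lambda o, n: backing[o:o + n],
                                   0, len(backing)))
assert [p[0] for p in packets] == ["DATA"] * 4 + ["STATUS"]
```

A client that supports this extension would reassemble the DATA payloads
in order and treat a non-OK STATUS as a short read.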

2. Pipelining reads/writes and whatnot is a nice theoretical fix, but the fact
   that this problem has been around for what, five years now without anyone
   fixing it

Well, unless I'm mistaken, Markus said that OpenSSH had fixed
it and was now seeing throughput roughly equivalent to SCP.

The next version of VanDyke's software will also contain
optimizations of this nature.

- Joseph



