Re: pkg_add and remote packages
On Fri, Apr 04, 2008 at 06:26:50PM +0200, Hubert Feyrer wrote:
> On Fri, 4 Apr 2008, Joerg Sonnenberger wrote:
>> What do others think about this?
> I think we went over this before. The current code reuses the one
> connection to fetch one file after the other, and I wrote things that way
> to save the FTP server from opening many connections.
The current code first extracts the package to /var/tmp and checks for
missing dependencies afterwards. The proposed scheme is very similar:
the second step would fetch all the (compressed) packages to /var/tmp
and install them afterwards. I think I can integrate some code based on
the pkg_info implementation to peek into the package first and decide
what else is required (without changing most of the pkg_perform
implementation). That would change the space requirement from all
packages in a dependency chain to the size of the binary packages plus
the leaf.
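As a rough sketch of the fetch-then-install scheme described above (the helper names and the dependency table are hypothetical, not the actual pkg_add code): peek into each package's metadata for its direct dependencies, collect the full closure, download everything compressed first, then install one package at a time.

```python
# Sketch of the proposed two-phase pkg_add scheme (hypothetical names,
# not the real implementation).

def dependencies(pkg, dep_table):
    """Peek into a package's metadata and return its direct dependencies."""
    return dep_table.get(pkg, [])

def closure(leaf, dep_table):
    """Compute the full dependency closure of a leaf package."""
    seen, stack = set(), [leaf]
    while stack:
        pkg = stack.pop()
        if pkg in seen:
            continue
        seen.add(pkg)
        stack.extend(dependencies(pkg, dep_table))
    return seen

# Phase 1: fetch every compressed package in the closure to /var/tmp.
# Phase 2: install them one at a time, so at most one package is
# extracted at any moment (compressed sizes + one extracted leaf).
deps = {"leaf": ["libA", "libB"], "libA": ["libC"], "libB": [], "libC": []}
print(sorted(closure("leaf", deps)))  # → ['leaf', 'libA', 'libB', 'libC']
```

The point of the peek step is that the closure is known before anything is extracted, which is what bounds the disk usage.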
The current connection reuse is somewhat problematic, as it doesn't deal
well with multiple elements in PKG_PATH. The caching code in libfetch
would handle that part as long as it stays within one process and on the
same host. It should be possible to extend that to multiple hosts
without changing the API.
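To illustrate the extension mentioned above (this is only a toy model of the idea, not libfetch's actual internals or API): instead of caching a single connection, key the cache by host, so several PKG_PATH mirrors each reuse their own connection within one process.

```python
# Toy model of a per-host connection cache (illustrative only, not the
# libfetch implementation): connections are keyed by (scheme, host,
# port), so each distinct host is connected to exactly once.

class ConnectionCache:
    def __init__(self):
        self._conns = {}

    def get(self, scheme, host, port, connect):
        key = (scheme, host, port)
        if key not in self._conns:
            self._conns[key] = connect(scheme, host, port)  # open once
        return self._conns[key]

opened = []
def fake_connect(scheme, host, port):
    opened.append(host)          # record each real connection attempt
    return f"conn-{host}"

cache = ConnectionCache()
cache.get("ftp", "ftp.netbsd.org", 21, fake_connect)
cache.get("ftp", "ftp.netbsd.org", 21, fake_connect)      # reused, no new open
cache.get("ftp", "mirror.example.org", 21, fake_connect)  # second host
print(opened)  # → ['ftp.netbsd.org', 'mirror.example.org']
```

Since callers only ever ask the cache for a connection, moving from "one cached connection" to "one per host" would not need to change the caller-visible API, which is the point made above.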
> For the other two options that you mention, the ups and downs should be
> considered. I.e. if the dependency tree is not too deep, I'd accept using
> multiple connections.
The problem is knowing in advance how deep it will be. Last time I tried
to draw the tree, the maximum depth was 17.
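For a feel of what "depth" means here, the depth of a dependency tree is the longest chain from the requested package down to a leaf; a small sketch with made-up data (the depth of 17 above came from a real pkgsrc tree, not this example):

```python
def depth(pkg, deps):
    """Length of the longest dependency chain starting at pkg."""
    children = deps.get(pkg, [])
    if not children:
        return 1
    return 1 + max(depth(c, deps) for c in children)

# Hypothetical tree: app -> libX -> libZ is the longest chain.
deps = {"app": ["libX", "libY"], "libX": ["libZ"], "libY": [], "libZ": []}
print(depth("app", deps))  # → 3
```

With one connection per level of the chain, a depth of 17 would mean up to 17 simultaneous connections in the worst case, which is why knowing the depth in advance matters.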
> Or maybe implement both schemes, and let the user decide?
The current extract-from-FTP-to-/var/tmp behaviour is what makes
switching pkg_add to libarchive so complicated, so I would prefer not to
make this optional. That's why I am asking whether introducing a
temporary regression here is acceptable. Whether doing an in-place
streaming pkg_add from a remote site is a good idea can be decided
later.