tech-pkg archive


Re: Distributed bulk building for slow machines

I presented some ideas here
and in wip/distbb's /usr/pkg/share/doc/distbb/README.
> 1) A binary package tree on the main master server for any given
> architecture which starts empty and which is rsynced over ssh in both
> directions (either via cron or after the creation of a new package)

> 2) A list of priority packages which is made before a bulk (old
> fashioned or pbulk) build is started
What's the reason for building the priority packages and the rest in
two separate steps? They can all be built in parallel. See 2).

> 3) A method to use a fast machine to create a dependency tree which is
> used after the priority package list is finished
If by this you mean scanning the pkgsrc tree to gather information about
packages, then this step is also parallelized/distributed in distbb and
doesn't require fast machines at all.
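A distributed scan step of the kind just mentioned can be sketched like this. The chunking and parallelism are the point; scan_chunk is a stub standing in for the real per-package dependency extraction (something like "cd /usr/pkgsrc/$pkg && make show-depends" run on a worker), and distbb's actual mechanism differs in detail.

```shell
# Hypothetical sketch: split the package list into chunks and scan each
# chunk in parallel, the way a distributed scan step farms work out.
scan_chunk() {
    # stub for the real per-package dependency extraction on a worker
    while read -r pkg; do
        echo "$pkg: (dependencies would be listed here)"
    done < "$1" > "$1.out"
}

printf '%s\n' devel/gmake lang/perl5 www/curl net/rsync > pkglist.txt
split -l 2 pkglist.txt chunk.          # two packages per "worker"

for f in chunk.??; do
    scan_chunk "$f" &                  # one background job per chunk
done
wait
cat chunk.??.out > scan-results.txt
```

Because each chunk is independent, the slowest machine only delays its own chunk, which is why the scan doesn't need fast machines.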

> Essentially, a sandbox would be created on worker machines where sshd
> is run inside of the sandbox with ssh keys which give the master the
> ability to send commands and rsync files.

> The master would iterate through the package list and remotely run a
> "make package" for each, noting any failures, then possibly sync files
> afterwards.
There is no need to reinvent the wheel here: DistBB already implements
this. It also gives you tolerance of network failures, automatic task
[re]distribution, automatic periodic retrying of failed servers, and more.
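The master loop quoted above ("iterate through the package list, remotely run make package, note failures") can be sketched as below. This is an illustration only; the host, key path, and remote_build helper are made up, and here remote_build is stubbed so the sketch runs stand-alone. In a real setup it would be something like: ssh -i /etc/bulk/worker_key builder@worker "cd /usr/pkgsrc/$1 && make package".

```shell
# Hypothetical sketch of the master's loop: build each package remotely
# inside the worker's sandbox and record successes and failures.
remote_build() {
    # stub standing in for the ssh + "make package" call
    case "$1" in
        */broken-pkg) return 1 ;;    # simulate one build failure
        *)            return 0 ;;
    esac
}

printf '%s\n' devel/gmake lang/perl5 misc/broken-pkg > packages.txt
: > built.log
: > failed.log

while read -r pkgpath; do
    if remote_build "$pkgpath"; then
        echo "$pkgpath" >> built.log
    else
        echo "$pkgpath" >> failed.log
    fi
done < packages.txt
```

This is essentially the control flow DistBB already provides, plus the retry and redistribution logic mentioned above.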

> It is expected that there are so many packages which are dependencies
> of others that unless we had an incredibly large number of volunteer
> machines we'd never have two machines trying to build the same package
> at the same time.
Em-m-m-m. Just don't do this dirty trick. Try wip/distbb instead. It
can do everything you need for distributed (over the Internet) bulk
builds. I've been running distributed bulk builds on a local network
for more than 2 years.

> Therefore, I'm not too worried about the package
> selection logic because we can just use the results of a dependency
> tree from the old bulk build scripts.
See 3)

> This setup seems simple and straightforward, but if anyone has
> suggestions or would like to help,
I don't see any problem in setting up this kind of bulk build at all:
wip/distbb + rsync + ssh. So, yes, I can help configure it.

> Also, if any developers have fast VAXen (relatively speaking,
> obviously), m68k machines, sh3 (sh4; either endianness), StrongARM, or
> big endian MIPS which you'd like to volunteer, please let me know.
It makes sense to run hardware emulators for this zoo.

Best regards, Aleksey Cheusov.
