
Distributed bulk building for slow machines



Hi, all,

I'm interested in setting up a simple shared bulk build system for slower architectures that allows developers' systems anywhere on the Internet to participate.

My basic setup would be:

1) A binary package tree on the main master server for any given architecture, which starts empty and is rsynced over ssh in both directions (either via cron or after each new package is created); a sketch of this sync appears after this list

2) A list of priority packages, drawn up before a bulk build (old fashioned or pbulk) is started

3) A method for using a fast machine to create a dependency tree, which is consulted after the priority package list is finished
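
As a concrete illustration of the sync in step 1, here is a minimal Python sketch. The hostname, package-tree path, and key location are hypothetical placeholders, and the rsync flags are just one plausible way to keep either side from overwriting packages the other already built.

#!/usr/bin/env python3
"""Minimal sketch of the two-way package-tree sync in step 1.

The hostname, paths, and ssh key location below are illustrative
assumptions, not part of any existing setup.
"""
import subprocess

MASTER = "master.example.org"             # hypothetical master server
PKG_TREE = "/usr/pkgsrc/packages/All/"    # binary package tree on both ends
SSH = "ssh -i /etc/bulk-sync/id_ed25519"  # restricted key; an assumption

def sync(direction: str) -> None:
    """Run rsync over ssh in one direction; call with 'push' or 'pull'."""
    if direction == "push":
        src, dst = PKG_TREE, f"{MASTER}:{PKG_TREE}"
    else:
        src, dst = f"{MASTER}:{PKG_TREE}", PKG_TREE
    # --ignore-existing keeps either side from clobbering a package
    # the other side already built.
    subprocess.run(
        ["rsync", "-av", "--ignore-existing", "-e", SSH, src, dst],
        check=True,
    )

if __name__ == "__main__":
    sync("pull")   # pick up packages built elsewhere first
    sync("push")   # then publish anything built locally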

Essentially, a sandbox would be created on each worker machine, with sshd running inside the sandbox using ssh keys that give the master the ability to send commands and rsync files. The master would iterate through the package list, remotely run "make package" for each entry, note any failures, then possibly sync files afterwards.
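
To make the master's loop concrete, here is a rough Python sketch. The worker hostnames and pkgsrc path are assumptions, and the error handling is deliberately minimal; it only shows the shape of iterating the list, running "make package" over ssh, and noting failures.

#!/usr/bin/env python3
"""Sketch of the master's dispatch loop described above.

Worker names and the pkgsrc path are hypothetical; the only commands
assumed to exist are ssh and pkgsrc's make.
"""
import subprocess

WORKERS = ["vax1.example.org", "mac68k.example.org"]  # assumed volunteers
PKGSRC = "/usr/pkgsrc"  # pkgsrc tree inside each worker's sandbox

def build_on(worker: str, pkgpath: str) -> bool:
    """Run 'make package' for one pkgsrc path on a remote worker.

    Returns True on success so the master can note failures.
    """
    cmd = ["ssh", worker, f"cd {PKGSRC}/{pkgpath} && make package"]
    return subprocess.run(cmd).returncode == 0

def run(package_list: list[str]) -> list[str]:
    """Hand packages to workers round-robin; collect the failures."""
    failures = []
    for i, pkgpath in enumerate(package_list):
        worker = WORKERS[i % len(WORKERS)]
        if not build_on(worker, pkgpath):
            failures.append(pkgpath)
    return failures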

So many packages are dependencies of others that, unless we had an incredibly large number of volunteer machines, we'd never have two machines trying to build the same package at the same time. Therefore, I'm not too worried about the package selection logic: we can just use the results of a dependency tree from the old bulk build scripts.
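
For turning that dependency tree into a build order, something as simple as a topological sort would do. The sketch below assumes a hypothetical input format of "pkg: dep dep ..." lines; the actual output of the old bulk build scripts would differ.

#!/usr/bin/env python3
"""Sketch of deriving a build order from the dependency tree
computed on the fast machine.  The 'pkg: dep dep ...' line format
is an assumption made for illustration only.
"""
from graphlib import TopologicalSorter  # Python 3.9+

def build_order(tree_file: str) -> list[str]:
    """Read the dependency tree and emit packages so that every
    dependency appears before anything that needs it."""
    graph: dict[str, set[str]] = {}
    with open(tree_file) as f:
        for line in f:
            if not line.strip():
                continue
            pkg, _, deps = line.partition(":")
            graph[pkg.strip()] = set(deps.split())
    # static_order() yields each node only after all of its
    # predecessors, i.e. each package after its dependencies.
    return list(TopologicalSorter(graph).static_order())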

This setup seems simple and straightforward, but I'm completely open to suggestions or offers of help. Just please don't suggest tremendously greater complexity unless you're prepared to help ;)

Also, if any of you developers have fast VAXen (relatively speaking, obviously), m68k machines, sh3 or sh4 (either endianness), StrongARM, or big-endian MIPS systems you'd like to volunteer, please let me know.

Thanks!
John Klos

