Subject: Re: easy mechanism for parallel builds
To: None <firstname.lastname@example.org>
From: Lars Nordlund <email@example.com>
Date: 06/15/2005 18:11:32
On Wed, 15 Jun 2005 16:32:27 +0200
Geert Hendrickx <firstname.lastname@example.org> wrote:
> There has been a discussion about parallel build algorithms here a while
> ago. Someone proposed a mechanism to split up the pkg dependency tree
> in "independent" parts, and to distribute these independent trees across
> the participating machines. I think this could be done much easier.
Was that someone perhaps me? :-)
In any case.. I do not agree that what I proposed was extremely
hard. Obviously not, since I wrote the patch and spent a few days
thinking about it.
Finding a pkg dependency tree for a given pkg is already implemented in
the patch I posted. This is basically what is printed in 'make' syntax
on stdout when one types 'make parallel'.
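For illustration, that output has roughly the following shape; the
package names and paths are invented here, not actual output of the
patch:

```make
# Hypothetical fragment of the generated make(1) input: one target per
# package, depending on the targets of the packages it needs.
all: pkg-libfoo pkg-bar
pkg-libfoo:
	cd ../../devel/libfoo && ${MAKE} package
pkg-bar: pkg-libfoo
	cd ../../misc/bar && ${MAKE} package
```

Feeding such a file to 'make -j N' lets make's own dependency ordering
drive the parallelism.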
Spreading a build over several machines should be fairly simple with
help from the clusterit pkg. I have not tried this, since I have not
found suitable machines for the task yet. I have run it on a single
machine with two CPUs, getting a fairly good speedup when the number
of packages to build is more than 1-4.. (to avoid a single
I have some outstanding issues which I am aware of:
The first is that when PKGWILDCARD does not match PKGNAME, the check
whether a package is installed will fail. This causes installation
failures (and wasted build time) for that package. I posted about this
earlier, and some packages were corrected, so the problem is probably
smaller now.
The second issue is that when two packages want to download the same
DISTFILE, both will fail. I need to implement some kind of lock
mechanism around the distfiles to avoid having several ftp processes
write to the same file. This can of course be avoided by first making
sure that all distfiles are available, but that limits the usefulness
of the patch too much. I want to be able to use it out of the box, on
a fresh system with many CPUs in it, for example, and not just in a
bulk build situation.. I like building packages from source and do not
really want to use binary packages, myself.
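Something like the following sketch is what I have in mind; the
DISTDIR path, file name, and URL are placeholders for illustration,
not part of the posted patch:

```shell
#!/bin/sh
# Sketch of a lock around distfile downloads, so that two parallel
# builds never write to the same distfile at once.
DISTDIR=${DISTDIR:-/tmp/distfiles-demo}
mkdir -p "$DISTDIR"
file="foo-1.0.tar.gz"
lock="$DISTDIR/.$file.lock"

# mkdir(2) is atomic, so exactly one fetch process acquires the lock;
# the others sleep until the winner removes the lock directory again.
while ! mkdir "$lock" 2>/dev/null; do
    sleep 1
done
if [ ! -f "$DISTDIR/$file" ]; then
    # The real fetch would run here, e.g.
    #   ftp -o "$DISTDIR/$file" "ftp://ftp.example.org/pub/$file"
    touch "$DISTDIR/$file"      # stand-in for the actual download
fi
rmdir "$lock"
echo "fetched $file"
```

A lock directory is used instead of a lock file because mkdir either
creates it or fails, with no window in between.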
Over to your idea:
> The entire cluster should have write access to a central ftp package
> What do you think?
Implementable. But you need to re-create the dependency check which is
already built into make, or have some other mechanism for the nodes in
the cluster to pick packages in an efficient way. You will probably
also need some kind of timeout to ensure that, if one machine fails to
release the lock, it does not stall the entire cluster. Feels like a
lot of work? But, of course, I am influenced by my own idea, so I am
perhaps not the best reviewer of this.
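The timeout part could look roughly like this sketch: a node marks a
package as taken by creating a lock directory in a shared area, and
other nodes steal locks older than some limit. All names here (PKGDIR,
TIMEOUT, pkg-foo) are made up for illustration:

```shell
#!/bin/sh
# Sketch of a lock with a staleness timeout for the shared-repository
# idea: if the node holding a lock dies, the lock eventually expires
# and another node can take the package.
PKGDIR=${PKGDIR:-/tmp/cluster-demo}
TIMEOUT=600          # seconds after which a held lock is assumed stale
mkdir -p "$PKGDIR"
lock="$PKGDIR/pkg-foo.lock"

if ! mkdir "$lock" 2>/dev/null; then
    # Lock already held: steal it only if it looks stale, i.e. the
    # node that created it probably crashed without releasing it.
    now=$(date +%s)
    mtime=$(stat -f %m "$lock" 2>/dev/null || stat -c %Y "$lock")
    if [ $((now - mtime)) -gt "$TIMEOUT" ]; then
        rmdir "$lock" && mkdir "$lock"
    else
        echo "pkg-foo is taken, picking another package"
        exit 0
    fi
fi
echo "building pkg-foo"
# ... build the package and upload the result here ...
rmdir "$lock"
```

Note the stealing step itself still has a small race between rmdir and
mkdir; a real implementation would need to be more careful there.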