tech-toolchain archive


Re: make: dynamic scaling of maxJobs ?

In article <>,
Simon Gerraty  <> wrote:
>I'm playing with a patch to do this, and wondering if there is general
>interest in the idea.
>Basically, we do builds on biggish machines and compute an optimal
>maxJobs based on the number of CPUs, empirical testing, etc.
>That's all fine for a machine dedicated to a single build - where the
>goal is to maximally consume the machine.
>As soon as you have multiple developers sharing a build machine though,
>there is no static -j value that is "optimal".
>You either opt for the value computed above - and accept
>oversubscription when 3 builds are running at the same time...
>Or you opt for a lower number, and waste the machine when there's only
>1 build running.
>That problem holds even if you dynamically compute a good value at the
>start of the build (all the others may finish 5 minutes later).
>Rather than add a lot of complexity to make to address this, I decided
>to let it use an external tool.
>make -j /opt/bin/
>and make will run that to get an initial value, and re-run it
>occasionally to adjust.  
>Note: This only applies to the initial instance of make, the sub-makes
>get a normal -j value - being the maxJobTokens value in effect when they
>are started.  The theory being that the sub-makes don't run long enough
>to be a serious issue.
>There's an inherent assumption there - that you are not building via
>tree walks (see 
>but the above keeps things simple.
>A trivial maxJobs script computes ncpu and a factor to apply to it.
>E.g. if there are 3 builds running, use $ncpu / 3; if only one build
>is running, just use the normal factor determined above.
>This script can cache its result for a while (eg 2 minutes).
>You can make it as simple or complex as you like - make shouldn't care.
>Anyway that's the basic idea...
>It is probably only interesting if you have builds that take a long time
>on machines shared with other developers.
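A trivial maxJobs script along the lines described above might look
something like the following sketch (the function name, the use of
pgrep to count builds, and the ncpu-detection fallback are all
illustrative assumptions, not the author's actual script):

```shell
#!/bin/sh
# Sketch of a trivial maxJobs helper: divide ncpu by the number of
# concurrent builds, never going below 1.

compute_jobs() {
    # $1 = number of cpus, $2 = number of builds currently running
    ncpu=$1
    builds=$2
    [ "$builds" -lt 1 ] && builds=1
    jobs=$((ncpu / builds))
    [ "$jobs" -lt 1 ] && jobs=1
    echo "$jobs"
}

# Gather inputs: cpu count (BSD sysctl, falling back to nproc) and a
# crude count of concurrent top-level make processes.
ncpu=$( (sysctl -n hw.ncpu || nproc) 2>/dev/null )
builds=$(pgrep -x make | wc -l)
compute_jobs "$ncpu" "$builds"
```

A real version would also cache its result for a couple of minutes, as
noted above, so repeated invocations stay cheap.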

You could also base forking another job on the current machine load.
I.e. if the load > (factor x ncpu), you don't spawn new jobs until
one completes, bringing the load down (or until the load goes down
because an external job finished).
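That load-based check could be sketched roughly as below; the factor,
the use of the 1-minute load average, and the uptime parsing are
illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a load-based throttle: only allow a new job while the
# integer load average is below factor * ncpu.

# Return success (0) when it is OK to spawn another job.
can_spawn() {
    # $1 = integer load, $2 = ncpu, $3 = factor
    [ "$1" -lt $(($3 * $2)) ]
}

factor=1
ncpu=$( (sysctl -n hw.ncpu || nproc) 2>/dev/null )
# 1-minute load average, truncated to its integer part; the uptime
# label differs slightly between BSD ("load averages:") and Linux
# ("load average:"), hence the sed pattern.
load=$(uptime | sed 's/.*load average[s]*: *//' | cut -d. -f1 | tr -d ' ,')
if can_spawn "$load" "$ncpu" "$factor"; then
    echo "spawn"
else
    echo "wait"
fi
```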

