tech-toolchain archive


Re: make: dynamic scaling of maxJobs ?



Alan Barrett writes:
>I sometimes use wrapper scripts that run make -j $(choose_maxjobs.sh).
>Is it really worthwhile to add the complexity to make itself, instead
>of leaving it in a wrapper script?

As I said, that's what we've been doing for many years.
It works great for a machine dedicated to a single build.
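For concreteness, the wrapper approach Alan describes might look something like the minimal sketch below. The function name `choose_maxjobs` echoes the script name from his mail, but the body is an assumption: a CPU count times an oversubscription factor F (default 2 here, purely illustrative).

```shell
# choose_maxjobs: hypothetical sketch of a "pick -j for me" helper.
# Prints ncpu * F, where F is an assumed oversubscription factor.
choose_maxjobs() {
    f=${F:-2}                                  # tunable factor, assumed default 2
    # nproc on Linux, sysctl hw.ncpu on the BSDs; fall back to 1
    ncpu=$( { nproc || sysctl -n hw.ncpu; } 2>/dev/null )
    ncpu=${ncpu:-1}
    echo $((ncpu * f))
}
```

Typical use would be `make -j $(choose_maxjobs)`. Note the decision is made once, before the build starts, which is exactly the limitation discussed below.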

But it cannot work well when N developers are sharing a box with M
cores.

Say the optimal value for a single build is M * F.
The 1st build started gets -j M*F.
If the 2nd build started also gets -j M*F, you are now oversubscribed
and the two builds will take longer than 2x the time for a single build.
Add a 3rd build and it gets worse.

If you try to get clever, say the 1st build still gets -j M*F
and maybe the 2nd gets -j M*F/2 - that's clearly not fair.
If instead the 1st build gets -j M*F/2 and no other build is ever
started, you are wasting the machine.  In short, you cannot win playing
that game.

To date we have just put up with oversubscription.

A colleague just gave me the results of some test builds using the patch
I mentioned: 2, 4 and 8 concurrent builds on a machine with 32 cores.
In each case the N builds finished in just under N times the single-build
time.  That's about as good as you can hope for, and the implementation
is both simple and portable.
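To make the "dynamic" idea concrete - this is NOT the patch being discussed, which works inside make itself - a wrapper can only approximate it by sampling the load average once at startup, as in the sketch below. The function name and the uptime parsing are assumptions for illustration; the fundamental weakness remains that a wrapper decides -j once, while load keeps changing over the life of a build.

```shell
# load_aware_jobs: hypothetical sketch of load-aware -j selection.
# Subtracts the current 1-minute load average from the CPU count,
# so a build started on an already-busy box asks for less parallelism.
load_aware_jobs() {
    ncpu=$( { nproc || sysctl -n hw.ncpu; } 2>/dev/null )
    ncpu=${ncpu:-1}
    # 1-minute load average, truncated to an integer.  Parsing uptime(1)
    # output is fragile and locale-dependent; this handles the common
    # "load average:" (Linux) and "load averages:" (BSD) forms only.
    load=$(uptime | sed 's/.*load averages*: *//' | cut -d. -f1 | tr -d ' ,')
    free=$((ncpu - load))
    [ "$free" -lt 1 ] && free=1                # always allow at least one job
    echo "$free"
}
```

Even this cannot win the sharing game: a build that starts on an idle machine still claims most of the cores for its whole duration, which is why adjusting maxJobs inside make as load changes is attractive.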



