tech-toolchain archive


Re: make: dynamic scaling of maxJobs ?



matthew green <mrg%eterna.com.au@localhost> writes:

> load isn't going to be a great heuristic for this.  when a job is
> waiting on disk/nfs io, it is +1 to the load.  so "idle" things
> add to the load, especially in a build.
>
> i think you need to mix in actual cpu activity as well.  i
> have become convinced that an external tool, occasionally run,
> sounds like a good idea.  i think i'd use such a thing on my
> own build machine i often want to run multiple jobs on, without
> having to think about a script to run them etc.

I see your point, but I have a similar case: a build machine with
12 cpus.  Builds (of NetBSD) are run with -j12, which is fine for 1 or
2 concurrent builds.  With more parallel builds, the machine becomes
unresponsive, but that seems to be much more about disk queues getting
long than about cpu.  So I think using load average does make sense
from that viewpoint.
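To make the trade-off concrete, here is a minimal sketch of the kind of
occasionally-run external tool mrg describes: it picks a -j value from
the 1-minute load average.  The function name, the headroom heuristic,
and the default target are all my own invention for illustration, not
an existing tool.

```python
import os

def suggest_jobs(max_jobs=12, target_load=None):
    """Suggest a make -j value from the 1-minute load average.

    Hypothetical heuristic: aim to keep total load near target_load
    (defaulting to the cpu count), so the suggested job count shrinks
    as other work raises the load.  Never drop below 1 job.
    """
    if target_load is None:
        target_load = os.cpu_count() or max_jobs
    load1, _, _ = os.getloadavg()  # 1-minute load average
    headroom = int(target_load - load1) + 1
    return max(1, min(max_jobs, headroom))

if __name__ == "__main__":
    # One could run this from cron or a wrapper script and pass the
    # result to make -j.
    print(suggest_jobs())
```

As the quoted message notes, disk/nfs wait inflates the load average,
so on an io-bound build this would throttle jobs even when the cpus
are idle; mixing in actual cpu activity would need more than this.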

It's almost like each parallel build should take its fair share: if
you have 12 cpus there are 24 slots available, and each build can
register that it's doing something piggy, so that 24/N slots can be
used by each of the N builds.  (I haven't tried this.)
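The arithmetic of that untried fair-share scheme could be sketched as
follows.  Everything here (the function name, the 2-slots-per-cpu
ratio, the floor of 1) is a hypothetical reading of the paragraph
above, not an implementation that exists.

```python
def slots_per_build(ncpus, active_builds, slots_per_cpu=2):
    """Divide slots_per_cpu * ncpus job slots evenly among the
    builds that have registered as active, with a floor of 1 so a
    build never stalls entirely."""
    total_slots = ncpus * slots_per_cpu
    return max(1, total_slots // max(1, active_builds))
```

So on the 12-cpu machine: one registered build gets -j24, two get
-j12 each, and the per-build value keeps shrinking as more register.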



