NetBSD-Users archive


Re: make jobs on uniprocessor machines



Hi,

nia wrote:
> Has anyone benchmarked increasing numbers of make -jX on
> single-processor systems with lots of memory?
>
> Up until now, I've been operating under the assumption that
> -j2 helps by not making the process wait on the file system
> once the previous job is complete - but I wonder if that's
> true at all...

That is an interesting question. I think it depends on a lot of factors: what is being built, the RAM available, the compiler, the file system...

The old golden rule was to use N+1, where N is the number of CPUs/cores available.
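For reference, applying that rule on NetBSD can be as simple as something like this (a minimal sketch; hw.ncpu is the sysctl I'd use here, adjust if your system reports cores differently):

    # rough sketch: N+1 jobs, with N taken from hw.ncpu
    NCPU=$(sysctl -n hw.ncpu)
    make -j $((NCPU + 1))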

The OpenBSD folks said to restrict that to physical cores and not count HT threads, but in my experience that's wrong, given enough RAM.

I found that more jobs means more swap, so unless you have plenty of RAM and a very fast disk it ends up being worse, especially with today's toolchains: C++, Meson and the others that eat up every last bit of memory.
Worst of all is Rust.
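If you want to check whether a given -jX is pushing a build into swap, something crude like this run in another terminal is usually telling enough (just a sketch; swapctl(8) and vmstat(8) are the tools I'd reach for, and the exact output lines vary):

    # crude sketch: sample swap usage every 10 seconds while a build runs
    while sleep 10; do
        date
        swapctl -l                   # swap devices and how much is in use
        vmstat -s | grep -i swap     # cumulative swap-related counters
    done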

So I keep jobs equal to the number of CPUs, or in extreme cases even fewer for very big programs. For a mix like pkgsrc I find that the best bet... if you have one specific item that you build and rebuild, you may try timing it yourself.
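If you do time it, I mean something rough like this (just a sketch; the job counts are placeholders, and a clean rebuild each round keeps the comparison fair):

    # rough sketch: time a clean rebuild at different job counts
    for j in 1 2 3 4; do
        make clean > /dev/null
        echo "== -j $j =="
        time make -j $j > /dev/null
    done

For a whole pkgsrc run I just put the chosen number in MAKE_JOBS in mk.conf instead of passing -j by hand.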

In the old days, with gcc 2.95, I did find it convenient to use N+1!!

Riccardo

