Hi,
nia wrote:
> Has anyone benchmarked increasing numbers of make -jX on
> single-processor systems with lots of memory?
>
> Up until now, I've been operating under the assumption that
> -j2 helps by not making the process wait on the file system
> once the previous job is complete - but I wonder if that's
> true at all...
that is an interesting question. I think it depends on a lot of
factors: what is being built, the RAM available, the compilers, the
file system... and the CPU.
The old golden rule was to use N+1, where N is the number of CPUs/cores
available.
Yes. Even with lots of RAM, building in tmpfs, and no swap, the final install to disk bogs down (mutter rude things about cross-compile libraries), so the plus one at least still holds. I see Greg has pushed it higher; I found that going beyond +1 gave diminishing returns and an unresponsive desktop.
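For what it's worth, the N+1 rule is easy to express directly. A rough sketch, assuming a BSD-style sysctl that reports hw.ncpu, and pkgsrc's MAKE_JOBS variable (the value 5 is just an example for a 4-core box):

    # one-off build with N+1 jobs, where N comes from hw.ncpu
    make -j $(( $(sysctl -n hw.ncpu) + 1 ))

    # or, for pkgsrc, the rough equivalent in /etc/mk.conf
    MAKE_JOBS=      5       # e.g. 4 cores + 1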
The OpenBSD folks said to limit that to physical cores and not HT cores,
but in my experience that's wrong, given enough RAM.
Right. When fake cores first appeared, the rule was to disable them in the BIOS, partly because those cores were slow and partly because kernels were confused by them. The virtual cores are now faster, and kernels are starting to understand that not all cores are equal (thanks, ARM).
As for hard data: an SSD is a must. It makes otherwise unusable old laptops fast.
As an aside, if you're using VMs, keep them running (booting a VM can use two cores) and build in TMP (writing back to the host is painful).
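To make that concrete for pkgsrc, the work area can be pointed at a tmpfs with WRKOBJDIR. A hypothetical mk.conf fragment, assuming /tmp is tmpfs-backed inside the guest (the path is just an example):

    # keep pkgsrc work directories on tmpfs rather than the host-backed disk
    WRKOBJDIR=      /tmp/pkgsrc-work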
I found that more jobs mean more swap, so if you don't have plenty of
RAM and a very fast HDD, in the end it gets worse, especially with
today's compilers, C++, Meson and the others that suck up every last bit of memory.
Worst of all is Rust.
So I keep jobs = N CPUs, or in extreme cases even fewer for very big
programs. For a mix like pkgsrc I find that the best bet... if you have
one specific item you build and rebuild, you may try timing it yourself.
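If you do want to time it yourself, a crude loop like this is usually enough. A sketch only: run it in the source tree of the package you care about, and adjust the job counts to taste.

    # rough benchmark sketch: clean rebuild at several -j values
    for j in 1 2 3 4 5; do
        make clean > /dev/null
        echo "== -j$j =="
        time make -j $j > /dev/null
    done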
In the old days, with gcc 2.95, I found it convenient to use N+1!!
Riccardo